Encoding event timing with dedicated neural structures

Specialized brain areas for the encoding of time

If neural time can represent space, motion, and other qualities of perception, and if neural transmission delays do not seem to reflect subjective time, would it be relevant to consider a non-temporal code for the perception of event timing?
In particular, encoding time through the activity of a dedicated network could easily account for the amodal aspect of time, i.e., the ability of participants to estimate the temporal relations between events from distinct sensory modalities. While it is clear that there is no sensory area dedicated to time perception, the existence of brain structures dedicated to the encoding of time is still debated (Treisman et al., 1990; Harrington et al., 1998; Lewis and Miall, 2003, 2006; Coull et al., 2004; Buhusi and Meck, 2005; Ivry and Schlerf, 2008; Morillon et al., 2009; van Wassenhove, 2009; Wittmann, 2009, 2013).
Neuroimaging studies suggest the involvement of several brain structures in time perception: the cerebellum plays an important role in motor timing, coupled with the supplementary motor area (SMA) and motor cortex (Harrington et al., 1998; Ivry and Schlerf, 2008; Schwartze and Kotz, 2013). The basal ganglia and thalamus may provide metrics for absolute timing (Buhusi and Meck, 2005; Schwartze and Kotz, 2013). Parieto-frontal areas are involved in the conscious discrimination of temporal information (Coull et al., 2004; Nobre et al., 2007). The insula may register the physiological states of the subject to encode duration (Wittmann and Paulus, 2008; Craig, 2009; Wittmann, 2009, 2013). Finally, early sensory and multisensory areas are also activated when discriminating the relative timing or rate of external information (Dhamala et al., 2007; Noesselt et al., 2007; van Wassenhove and Nagarajan, 2007); in particular, the auditory cortex can be recruited even in the absence of sound (Coull et al., 2000). While each of these structures is specialized in the encoding of a particular aspect of time perception, evidence suggests that together they contribute to the final time percept (Lewis and Miall, 2003). It should be noted that many of the reported experiments capture encoding, emotional, and decisional aspects of timing reports, and may reflect multiple aspects of temporal processing such as duration, temporal prediction, temporal order, or synchrony perception. It is probable that these different facets of time perception recruit different areas and/or exploit different neural mechanisms. Here we review two dedicated mechanisms that could serve the encoding of relative event timing.

ENCODING EVENT TIMING WITH BRAIN OSCILLATIONS

As discussed in section 2, one main challenge for the brain is to recover the external timing of the world from its own dynamics. A first hypothesis states that the brain does not compensate for neural transmission delays from the sensors to sensory areas, and thus that perceived timing equates neural timing. However, several bodies of work have shown that neural transmission delays from stimulus onset to sensory areas are poor indicators of perceived timing. Furthermore, many perceptual effects suggest that the time experiencer compensates for its internal neural delays. It has therefore been suggested that perceived timing is encoded at a later stage of sensory processing via dedicated structures (Ivry and Schlerf, 2008). In this theoretical framework, timing compensation could operate through the convergence of sensory information onto specific networks specialized for the encoding of timing. Yet most proposals of dedicated networks for the encoding of time depict networks that act as “readers” of temporal information. Interestingly, this temporal information might be retrieved from the dynamics of non-dedicated brain areas, including sensory networks. This means that the brain feeds on its internal temporal fluctuations to recover time. This view is close to what William James suggested when stating that time perception could rely on “outward sensible series”, “heart beats” or “breathing”, but also on “the pulses of our attention, fragments of words or sentences that pass through our imagination”. Hence, subjective timing could rely on the timing of mental activity. From a neuroscientific point of view, this suggests that ongoing brain dynamics provide the temporal grounds for the inner representation of time. Here, we review evidence that, among indices of brain dynamics, neural oscillatory activity is a relevant candidate for providing the ongoing metrics of time that could be used for sensory processing.

Temporal binding through neural coherence

Neural activity at the neuron, network, or area level is characterized by well-described intrinsic periodic fluctuations that span multiple time scales (Buzsáki and Draguhn, 2004; Roopun et al., 2008; Wang, 2010). These neural oscillations are observed through Local Field Potential (LFP) recordings within the brain, or with electroencephalography (EEG) and magnetoencephalography (MEG), which record the electric and magnetic fields emanating from the scalp of the subject. Although brain rhythms range from 0.02 Hz to 600 Hz (Buzsáki and Draguhn, 2004), they are usually classified within distinct frequency bands: infra-slow oscillations concern rhythms below 1 Hz, delta oscillations correspond to rhythms between 1-3 Hz, theta oscillations span 3-8 Hz, alpha oscillations 8-13 Hz, beta oscillations 15-25 Hz, gamma oscillations 30-120 Hz, and ripples lie above 150 Hz.
Neural oscillations typically reflect the synchronous fluctuating activity of a neural ensemble (Varela and Lachaux, 2001; Buzsáki, 2004, 2010; Lakatos et al., 2005): the presence of oscillations in the signal suggests that the overall activity of the neurons in the network is grouped at certain periodic time points. Neural oscillations thus constitute a marker of temporal coherency in local networks. Crucially, following the Hebbian rule that “cells that fire together wire together”, it has been suggested that the networks encoding a common perceptual object, or “cell assembly” (Buzsáki, 2010), should fire within the same window of time. If oscillations modulate neural synchrony, then they should provide mechanistic means for neurons that process the same attribute to communicate (Fries, 2005; Sejnowski and Paulsen, 2006; Buzsáki, 2010). The Hebbian rule is not restricted to local processing; it also applies to the coordination of distant brain regions encoding the same object. Neural oscillations should therefore play a prominent role in the temporal binding of different sensory features (Pöppel et al., 1990; Engel et al., 1991c, 1999; Senkowski et al., 2008; Pöppel, 2009).

MEG data preprocessing

The Signal Space Separation (SSS) method was applied to decrease the impact of external noise (Taulu et al., 2003). SSS correction, head movement compensation, and bad channel rejection were done using the MaxFilter software (Elekta Neuromag). Signal-space projections (SSP) were computed by principal component analysis (PCA) using the Graph software (Elekta Neuromag) to correct for eye blinks and cardiac artifacts (Uusitalo and Ilmoniemi, 1997). Epochs were rejected when the amplitude on any gradiometer exceeded 4000e-13 T/m.
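
For illustration, the pipeline below is a minimal sketch of an equivalent preprocessing chain in MNE-Python; the original analysis used the proprietary MaxFilter and Graph tools, and the file name run01_raw.fif as well as the number of SSP components are placeholders.

import mne
from mne.preprocessing import maxwell_filter, compute_proj_ecg, compute_proj_eog

# Raw MEG recording (placeholder file name).
raw = mne.io.read_raw_fif("run01_raw.fif", preload=True)

# Signal Space Separation (SSS) to attenuate environmental noise.
raw_sss = maxwell_filter(raw)

# PCA-based SSP projectors for cardiac and ocular artifacts.
ecg_projs, _ = compute_proj_ecg(raw_sss, n_grad=2, n_mag=2, n_eeg=0)
eog_projs, _ = compute_proj_eog(raw_sss, n_grad=2, n_mag=2, n_eeg=0)
raw_sss.add_proj(ecg_projs + eog_projs)

# Epoch rejection criterion: gradiometer amplitude exceeding 4000e-13 T/m.
events = mne.find_events(raw_sss)
epochs = mne.Epochs(raw_sss, events, tmin=-0.4, tmax=0.6,
                    reject=dict(grad=4000e-13), preload=True)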

Structural MRI acquisition

Magnetic Resonance Imaging (MRI) was used to provide a high-resolution structural image of each individual’s brain. The anatomical MRI was recorded using a 3-T Siemens Trio MRI scanner. The parameters of the sequence were: voxel size 1.0 x 1.0 x 1.1 mm; acquisition time 466 s; repetition time TR = 2300 ms; echo time TE = 2.98 ms.

Anatomical MRI segmentation

Volumetric segmentation of the participants’ anatomical MRI and cortical surface reconstruction were performed with the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/) (Dale et al., 1999; Fischl and Dale, 2000). These procedures were used for group analysis with the MNE suite software (http://www.martinos.org/mne/). Individual current estimates were registered onto the FreeSurfer average brain for surface-based analysis and visualization.
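
As a sketch of the last step (group registration), the snippet below morphs an individual’s source estimate onto the FreeSurfer average brain with MNE-Python; the subject name, file names, and subjects directory are placeholders.

import mne

subjects_dir = "/path/to/freesurfer/subjects"   # placeholder FreeSurfer SUBJECTS_DIR
stc = mne.read_source_estimate("sub01_dspm")    # placeholder individual source estimate

# Morph the individual's current estimates onto fsaverage for group analysis.
morph = mne.compute_source_morph(stc, subject_from="sub01",
                                 subject_to="fsaverage",
                                 subjects_dir=subjects_dir)
stc_fsaverage = morph.apply(stc)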

Co-registration procedure (MEG-aMRI)

The co-registration of MEG data with the individual’s structural MRI was carried out by realigning the digitized fiducial points with MRI slices. Using mne_analyze within the MNE suite, digitized fiducial points were aligned manually with the multimodal markers on the automatically extracted scalp of the participant. To insure reliable co-registration, an iterative refinement procedure was then used to realign all digitized points (about 30 more supplementary points distributed on the scalp of the subject) with the individual’s scalp.
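
The sketch below shows how a comparable fiducial-plus-ICP co-registration could be scripted with the mne.coreg.Coregistration API of recent MNE-Python releases; the original work used the interactive mne_analyze tool, and the subject name, file names, and number of ICP iterations here are assumptions.

import mne
from mne.coreg import Coregistration

# Measurement info contains the digitized fiducials and extra scalp points.
info = mne.io.read_info("run01_raw.fif")        # placeholder file name
coreg = Coregistration(info, subject="sub01",
                       subjects_dir="/path/to/freesurfer/subjects")

# Initial alignment from the three digitized fiducials (nasion, LPA, RPA).
coreg.fit_fiducials()

# Iterative refinement (ICP) using the ~30 additional digitized scalp points.
coreg.fit_icp(n_iterations=20)

# Save the head-to-MRI transform for the forward model.
mne.write_trans("sub01-trans.fif", coreg.trans)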

MEG source reconstruction

Individual forward solutions for all source locations on the cortical sheet were computed using a 3-layer boundary element model (BEM) (Hämäläinen and Sarvas, 1989) constrained by the individual’s anatomical MRI. The cortical surfaces extracted with FreeSurfer were sub-sampled to about 5,120 equally spaced vertices per hemisphere. The noise covariance matrix for each individual was estimated from the raw empty-room MEG recording preceding the individual’s MEG acquisition. The forward solution, noise covariance, and source covariance matrices were used to calculate the dSPM estimates (Dale et al., 1999). The inverse computation was done using a loose orientation constraint (loose = 0.2, depth = 0.8) (Lin et al., 2006). The cortically constrained reconstructed sources were then morphed onto the FreeSurfer average brain for group-level statistical analysis, which was performed with MNE-Python (Gramfort et al., 2013a, 2013b).
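
A minimal sketch of this forward/inverse pipeline in MNE-Python follows; subject names, file names, and the "oct6" source spacing (roughly comparable to the ~5,120 vertices per hemisphere mentioned above) are assumptions.

import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse

subjects_dir = "/path/to/freesurfer/subjects"   # placeholder

# Cortical source space sub-sampled from the FreeSurfer surfaces.
src = mne.setup_source_space("sub01", spacing="oct6", subjects_dir=subjects_dir)

# 3-layer boundary element model constrained by the individual anatomy.
bem = mne.make_bem_solution(mne.make_bem_model("sub01", subjects_dir=subjects_dir))

# Forward solution using the MEG-MRI co-registration transform.
info = mne.io.read_info("run01_raw.fif")
fwd = mne.make_forward_solution(info, trans="sub01-trans.fif", src=src,
                                bem=bem, meg=True, eeg=False)

# Noise covariance from the empty-room recording preceding the session.
noise_cov = mne.compute_raw_covariance(
    mne.io.read_raw_fif("empty_room_raw.fif", preload=True))

# Inverse operator with loose orientation and depth weighting; dSPM estimates.
inv = make_inverse_operator(info, fwd, noise_cov, loose=0.2, depth=0.8)
evoked = mne.read_evokeds("sub01-ave.fif")[0]   # placeholder evoked response
stc = apply_inverse(evoked, inv, method="dSPM")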

Event-related fields and source reconstruction

Event-related fields (ERF) were computed by averaging 15 trials at the beginning and at the end of a lag-adaptation block. Data were pooled across the 8 lag-adaptation blocks for each asynchrony condition (S, A200V, V200A). Auditory ERFs were locked to the sound onset; visual ERFs were locked to the onset of the visual stimulus. Data were segmented into epochs of 1 s (400 ms pre- and 600 ms post-stimulus onset). Baseline correction was applied using the first 200 ms of the epoch (-400 to -200 ms pre-stimulus). The inverse solver used to localize the sources was then applied to the averaged normed evoked data. This normalization procedure was used to alleviate source cancellation when averaging sources within a label of interest and across subjects (Gross et al., 2013). Comparisons of evoked responses between conditions were computed using a non-parametric permutation test, with correction for multiple comparisons performed at the cluster level using Student’s t-tests computed at each time sample as the base statistic (Maris and Oostenveld, 2007). Only temporal clusters with a corrected p-value ≤ 0.05 are reported.
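
The following sketch illustrates the epoching, averaging, and cluster-permutation steps with MNE-Python; the file name, event code, number of subjects, and the simulated difference data are placeholders used only to make the example self-contained.

import numpy as np
import mne
from mne.stats import permutation_cluster_1samp_test

raw = mne.io.read_raw_fif("run01_raw_sss.fif", preload=True)    # placeholder file
events = mne.find_events(raw)

# 1-s epochs (-400 to +600 ms) with a -400 to -200 ms baseline.
epochs = mne.Epochs(raw, events, event_id={"A200V": 1},         # hypothetical event code
                    tmin=-0.4, tmax=0.6, baseline=(-0.4, -0.2), preload=True)

# ERFs from the first and last 15 trials of the adaptation blocks.
erf_begin = epochs[:15].average()
erf_end = epochs[-15:].average()

# Cluster-level permutation statistics on per-subject difference waveforms;
# `diffs` (n_subjects x n_times) is simulated here for illustration.
diffs = np.random.randn(12, len(erf_begin.times))
t_obs, clusters, cluster_pv, _ = permutation_cluster_1samp_test(diffs, n_permutations=1000)
significant_clusters = [c for c, p in zip(clusters, cluster_pv) if p <= 0.05]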

Power spectrum analysis

Low-frequency components in the frequency spectra could originate either from neural entrainment to the 1 Hz stimulation or from noise with a 1/f power spectral density. To substantiate a neural entrainment peak at 1 Hz, the 1/f component was removed by subtracting, at each frequency bin fo, the mean power of the four neighboring frequency bins [fo - 0.14 Hz; fo - 0.07 Hz; fo + 0.07 Hz; fo + 0.14 Hz] (Nozaradan et al., 2011).
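
A minimal sketch of this correction, assuming a power spectrum psd sampled every 0.07 Hz (so that the +/-0.07 Hz and +/-0.14 Hz neighbors lie 1 and 2 bins away); the spectrum itself is simulated for illustration.

import numpy as np

def subtract_neighbors(psd, step=1):
    """At each bin, subtract the mean power of the bins located at
    +/-step and +/-2*step to remove the smooth 1/f background."""
    corrected = np.full_like(psd, np.nan)
    for i in range(2 * step, len(psd) - 2 * step):
        neighbors = psd[[i - 2 * step, i - step, i + step, i + 2 * step]]
        corrected[i] = psd[i] - neighbors.mean()
    return corrected

# Simulated 1/f-like spectrum with an entrained peak at 1 Hz (0.07 Hz resolution).
freqs = np.arange(0.07, 5.0, 0.07)
psd = 1.0 / freqs + 0.5 * (np.abs(freqs - 1.0) < 0.035)
corrected = subtract_neighbors(psd)
print(corrected[np.argmin(np.abs(freqs - 1.0))])   # residual peak at 1 Hz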

Table of contents:

Chapter 1: Introduction
1.1. The inner representation of time
1.1.1. What is a time experience?
1.1.2. How does the time experiencer sense time?
1.1.3. How is event timing encoded?
1.2. Encoding event timing through neural latency
1.2.1. Brain access to external timing
1.2.2. Temporal binding
1.2.3. Temporal plasticity
1.3. Encoding event timing with dedicated neural structures
1.3.1. Specialized brain areas for the encoding of time
1.3.2. Delay-tuned cells
1.3.3. Brain clocks
1.4. Encoding event timing with brain oscillations
1.4.1. Temporal binding through neural coherence
1.4.2. Oscillations affect perceptual temporal sampling
1.4.3. Neural phase codes of events succession
1.4.4. Phase tagging of perceived event timing?
Chapter 2: Phase encoding of explicit timing
2.1. Introduction
2.1.1. Motivation
2.1.2. Experiment
2.1.3. Summary of the results
2.2. Article
2.3. Phase coding without entrainment
2.3.1. Motivation
2.3.2. Experiment
2.3.3. Results
2.3.4. Discussion
Chapter 3: Low resolution of multisensory temporal binding
3.1. Introduction
3.1.1. Motivation
3.1.2. Experiment
3.1.3. Summary of the results
3.2. Article
Chapter 4: Phase encoding of implicit timing
4.1. Introduction
4.1.1. Motivation
4.1.2. Experiment
4.2. Article
Chapter 5: General discussion
5.1. Summary of the findings
5.2. Two roles for the phase of slow-oscillations in EEG/MEG recordings?
5.2.1. Absolute and relative phase
5.2.2. Temporal integration and segregation
5.3. Multi- and ar-rhythmic brain mechanisms of event timing
5.3.1. Multiplexing of temporal information with multiple brain oscillators
5.3.2. Entrainment vs. arrhythmic stimulation
5.4. Conclusion
References 
