Temporal Dynamics of Semantic Representations
To study the timing of mental processes and representations, behavioral chronometric measures can be used. Traditionally, reaction times across experimental conditions are considered a proxy for the duration and sequencing of cognitive operations. Regarding semantic processing, for instance, priming experiments have suggested that processing unfolds in two phases: first, linguistic co-occurrences determine the content of the representation; then, around 200 ms, grounded perceptual simulation intervenes (Ostarek and Vigliocco, 2016). Later, while reviewing the available behavioral methods, I will explore the semantic priming paradigm in more detail (Chap. 2.1.3). Moreover, I will present the results of our own priming experiment investigating how automatically the different components of semantics are activated (Chap. 3.4).
So far, in my overview of the neural substrate of semantic knowledge, I have focused on the topographical organization of such a system. However, the content of a representation in a given region might change dramatically over a short period, with different dimensions/features being activated at different time points: for instance, one could hypothesize that visual areas are involved in processing perceptual characteristics of the stimuli (e.g., word length) at T1, while at T2 they are replaying visual conceptual properties (e.g., whether the words refer to something red). Broca’s area, for instance, appears to code, in rapid succession, for lexical, grammatical, and phonological features (Sahin et al., 2009). Moreover, different areas might be involved in this dynamic representation at different points in time: for instance, one could argue that during a given task, information coming from visual areas (T1) is read out by higher-order cognitive areas in the temporal lobe (T2), which later provide input for complex computations happening in the frontal lobe (T3).
The neuroimaging techniques of choice when one is interested in fine-grained temporal dynamics are electroencephalography (EEG) and magnetoencephalography (MEG) (see Chap. 2.3). Overall, during reading, brain activation unfolds from occipital areas towards the anterior temporal pole (Marinkovic et al., 2003; Pammer, 2009). Similarly, listening elicits activity first in primary auditory areas and subsequently in supramodal temporal areas, including the anterior temporal pole (Marinkovic et al., 2003; Salmelin, 2007). In both cases, the physical features of the stimuli are resolved within the first 200 ms in modality-specific areas (i.e., primary visual areas for written words, primary auditory areas for spoken words), and processing then converges in anterior temporal and inferior frontal cortices around 400 ms (Marinkovic et al., 2003).
During the first 200 ms, the analysis of visual-orthographic features, starting in primary visual cortex, spreads in a feed-forward wave along the inferior occipital gyrus and the fusiform gyrus (Tarkiainen, 1999; Pammer et al., 2004). Likewise, the acoustic-phonetic analysis of spoken words takes place within the first 100 ms (N100) in non-primary auditory cortex (Kuriki and Murase, 1989; Parviainen et al., 2005). The language-specific phonetic and phonological analysis takes place in inferior frontal cortex and the angular/supramarginal gyri within the first 100-350 ms, when the mismatch negativity denotes access to phonological categories (Näätänen et al., 1997). Finally, between 200 and 500 ms, activity in superior and inferior temporal cortices, along with inferior frontal cortex, denotes lexical-semantic processing (Kutas and Hillyard, 1980; Helenius et al., 2002). Fine-grained features of semantic processing have been explored by studies investigating event-related potentials (or fields; ERP/ERF) following semantically charged stimuli such as sentences and single words. It is traditionally accepted that post-lexical semantic processes (i.e., those taking place after the meaning of the word has been retrieved) are reflected in late ERP and ERF components (Holcomb and Neville, 1990). Nevertheless, lexical effects (i.e., lexicality, word frequency, and word regularity) can be detected as early as 200 ms after stimulus onset (Sereno et al., 1998).
One of the most studied ERPs linked with semantic processing is the N400: a negative (N) deflection of the signal that starts around 300 ms and peaks around 400 ms (Kutas and Hillyard, 1980). It has been associated with the presentation (either auditorily or visually) of words generating semantic violations, such as socks in the following context: “I like my coffee with cream and socks” (Lau et al., 2008). Numerous factors have been shown to influence the shape of the N400, including:
the degree of anomaly (e.g., in the example above, socks instead of sugar).
the predictability (e.g., in the example above, honey instead of sugar: both are semantically valid, but one is very unlikely).
the number of semantic features shared (e.g., in the example above, salt instead of sugar would produce a smaller N400 than socks).
Format and Implementation of Semantic Representations
Having reviewed the current views on when and where what we know is stored and retrieved in the brain, one challenge remains: what is the nature of semantic representations? At issue here are both the format of the representation and its underlying neural code. To understand the concept of format, let’s compare a bitmap and a vector graphic. They might have the same content (the what), but they diverge in their formats (a collection of pixels and a collection of Bézier curves, respectively). Similarly, in the case of semantic representations, there are two competing views. On the one hand, there are those claiming that conceptual knowledge is stored in an abstract, amodal, propositional format (Mahon and Caramazza, 2008).
On the other hand, there are those conjecturing that a modality-specific, analogical format is necessary and sufficient to represent semantic information (Barsalou, 2010). These are only the extremes of a continuum, with the moderate positions occupied by those who suspend their judgment (a true ἐποχή, epokhē): as we will see, the question turns out to be an extremely ill-posed problem (Martin, 2015).
Relation between Geometry, Format and Neural Code
I have previously stated that the content of semantic representations is concepts and, in a reductionist view, the meaning of words. The term representational geometry can be used to describe the organization of such content: it refers to the relationships (i.e., distances) between items (e.g., words) conceptualized as points in a multidimensional space. For instance, the bi-dimensional space described by the visual features of color and shape places elongated orange-ish items (e.g., carrot) closer to elongated yellow-ish items (e.g., banana) and further from round yellow-ish items (e.g., lemon). The geometry, and thus the distances, would be different in a space dominated by conceptual taxonomic dimensions (e.g., lemon and banana would cluster together, being fruit, and would be far apart from carrot, a vegetable). Thanks to this higher-order layer, representations stemming from different sources can be compared (Kriegeskorte and Kievit, 2013): cognitive geometries derived from behavioral data, predicted similarities as estimated by computational models, and neural representational spaces resulting from neuroimaging observations.
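The carrot/banana/lemon example can be made concrete with a minimal sketch. The feature values below are invented purely for illustration (they are not empirical norms); the point is only that the same three items yield different distance patterns, and hence different geometries, depending on which dimensions span the space.

```python
from math import dist

# Hypothetical feature values, chosen purely for illustration.
# Visual space: (elongation, yellowness)
visual = {"carrot": (0.9, 0.4), "banana": (0.9, 0.9), "lemon": (0.2, 0.9)}
# Taxonomic space: (fruit-ness, vegetable-ness)
taxonomic = {"carrot": (0.0, 1.0), "banana": (1.0, 0.0), "lemon": (1.0, 0.0)}

def distance(space, a, b):
    """Euclidean distance between two items in a given feature space."""
    return dist(space[a], space[b])

# In the visual space, carrot sits closer to banana than lemon does...
print(distance(visual, "carrot", "banana"))     # 0.5
print(distance(visual, "lemon", "banana"))      # ~0.7
# ...but the taxonomic space reverses the neighborhood:
print(distance(taxonomic, "carrot", "banana"))  # ~1.41
print(distance(taxonomic, "lemon", "banana"))   # 0.0
```

Comparing such pairwise-distance matrices across sources (behavior, models, brain activity) is exactly the move that representational similarity approaches formalize.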
A different problem is that of the representational format. As illustrated by the example of bitmap and vector images, given identical content, what tells one format apart from another is the set of operations that can be performed on the content. For instance, only vector drawings can be scaled without loss of quality. The quest for appropriate descriptors of the format of neural representations (not only semantic ones) is still open.
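The scaling contrast can be spelled out with two toy formats invented here for illustration: a "vector" line stored as endpoint coordinates and a "bitmap" line stored as a pixel grid. The content (a diagonal line) is the same, but the operations each format affords differ: coordinates scale exactly, while pixels can only be replicated.

```python
# Toy formats, invented for illustration: the same diagonal line stored as
# endpoints (vector format) and as a 2x2 pixel grid (bitmap format).
vector_line = [(0.0, 0.0), (1.0, 1.0)]
bitmap_line = [[1, 0], [0, 1]]

def scale_vector(points, k):
    """Vector format supports exact scaling: just multiply coordinates."""
    return [(x * k, y * k) for x, y in points]

def scale_bitmap(grid, k):
    """Bitmap format can only replicate pixels (nearest neighbour):
    no new detail is created, existing blocks just get bigger."""
    return [[px for px in row for _ in range(k)] for row in grid for _ in range(k)]

print(scale_vector(vector_line, 3))  # [(0.0, 0.0), (3.0, 3.0)] -- lossless
print(scale_bitmap(bitmap_line, 3))  # 6x6 blocky grid -- same content, coarser edges
```

The analogy to neural representations is that identifying a format amounts to identifying which transformations the code supports natively, not just what information it carries.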
Functional Magnetic Resonance Imaging (fMRI)
Remember the broken television I introduced at the beginning of the chapter? If you were to bring it to an expert technician, alongside your precious information on what’s going on (i.e., your diagnosis) and your ideas on what it means (i.e., your functional hypothesis), they would be able to test (some of) them. Looking inside the apparatus, it would be possible to better understand the relation between the software installed by your cable company (i.e., cognitive functions) and the hardware that operates underneath it (i.e., the neural substrate). Different methods have been developed to investigate the brain (see Fig. 25), all offering different advantages and disadvantages. Ideally, one would like to have: (1) whole-brain spatial coverage, as most cognitive processes likely recruit more than one cortical area; (2) high temporal resolution, as the dynamics of mental processes are indisputably rapid; (3) high spatial resolution, as functional information in many brain areas has been shown to be coded at a very fine-grained neurobiological level; (4) minimal invasiveness, so as to respect the human beings volunteering for the experiment, guaranteeing maximal comfort and protection from any possible (even indirect) harm; and (5) maximal causal inference, as most techniques will only show a correlation between a given cognitive process and activity in a brain area, without allowing any kind of causal inference. Thus, cognitive neuroscientists working with neuroimaging techniques always face a trade-off in a four-dimensional space: time × space (in terms of both resolution and coverage) × causality × invasiveness (see Fig. 25). In the case of the malfunctioning television, if you wanted to know exactly which small component (e.g., a portion of a cable, a tiny chip) is broken, your focus would be on spatial resolution. In neuroimaging, this corresponds to choosing functional magnetic resonance imaging (fMRI).
Compared to other neuroimaging techniques, fMRI has good-to-great spatial resolution and rather poor temporal resolution. The spatial resolution is affected by the strength of the magnetic field (i.e., whether the magnet is 7, 3, or 1.5 Tesla) as well as by the sequence used to acquire brain volumes (e.g., whether it is possible to acquire multiple slices at the same time, with so-called multiband sequences). The temporal resolution is limited by the nature of the signal measured (i.e., it depends on blood flow, see below) and by the choice of parameters for the acquisition sequence (e.g., the repetition time, or TR).
Table of Contents
INTRODUCTION: ORGANIZATION AND CONTRIBUTIONS OF THE THESIS
1. THE SYMBOLS THAT MADE US WHAT WE ARE
2. OUTLINE OF THE THESIS
3. AUTHOR’S PUBLICATIONS
4. RÉSUMÉ EN FRANÇAIS
5. RIASSUNTO IN ITALIANO
CHAPTER 1: A MULTIDIMENSIONAL REVIEW OF THE LITERATURE
1. NEURO-COGNITIVE INTRODUCTION TO REPRESENTATIONS
1.1 Cognitive Representations
1.2 Neural Representations
2. THE CONTENT OF SEMANTIC REPRESENTATIONS?
2.1 Etymology and Philosophy
2.2 Linguistics and logic
2.3 Computer Science and Artificial Intelligence
2.4 Psychology and Neuropsychology
2.5 Dimensions and Geometries
3. ORGANIZATION AND LOCALIZATION OF SEMANTIC REPRESENTATIONS
3.1 Cognitive and Computational Models
3.2 Clinical Evidence
3.3 Neuroimaging Evidence
3.4 Grounded Cognition
3.5 Latest Developments
3.6 Open Questions and Future Directions
4. TEMPORAL DYNAMICS OF SEMANTIC REPRESENTATIONS
4.1 Temporal Representation
4.2 Spectral Representation
4.3 Long Range: Context and Experiences
5. FORMAT AND IMPLEMENTATION OF SEMANTIC REPRESENTATIONS
5.1 Relation between Geometry, Format and Neural Code
5.3 Skeptical Epoché
CHAPTER 2: INVESTIGATING COGNITIVE AND NEURAL REPRESENTATIONS
1. BEHAVIOR TO LOOK INTO COGNITION
1.1 Semantic Distance Judgment
1.2 Semantic Feature Listing
1.3 Priming Paradigm
2. FUNCTIONAL MAGNETIC RESONANCE IMAGING (FMRI)
2.3 Standard Univariate Analyses
3. MAGNETOENCEPHALOGRAPHY (MEG)
3.3 Standard Univariate Analyses
4. MULTIVARIATE ANALYSES OF NEUROIMAGING DATA
4.1 Resolutions of Representational Codes
4.2 Pattern Decoding
4.3 Pattern Geometry
4.4 Pattern Encoding
CHAPTER 3: BEHAVIORAL EVIDENCE OF MULTIVARIATE SEMANTIC REPRESENTATIONS
1. STUDY 1
1.1 Stimuli Selection
1.2 Stimuli Psycholinguistic Validation
1.3 Stimuli Psychological Validation
2. STUDY 2
2.1 Stimuli Selection
2.2 Stimuli Psycholinguistic Validation
2.3 Stimuli Psychological Validation
3. ONE SPACE, MANY METRICS?
3.1 Distance Judgment vs Features Listing
3.2 Comparison with a Linguistic Database
4. A SPACE TO PRIME
4.1 Semantic Priming
4.2 Perceptual Priming
CHAPTER 4: TOPOGRAPHICAL FEATURES OF SEMANTIC DIMENSIONS
1.1 The Topography of Word Reading in the Brain
1.2 Cognitive and Neural Semantic Geometries
1.3 Neural Correlates of Semantic Representations
1.4 Present Study Hypotheses
2. MATERIALS AND METHODS
2.3 Testing Procedures
2.4 MRI and fMRI Protocols
2.5 Data Pre-Processing and First Level Model
2.6 Regions of Interest
2.7 Univariate Analyses
2.8 Multivariate Pattern Analyses
2.9 Additional Analyses
3.1 Physical Dimension: number of letters
3.2 Perceptual–Semantic Dimension: implied real word size
3.3 Conceptual–Semantic Dimensions: semantic category and cluster
3.4 Controls on Low Level Physical Dimensions
3.5 Interaction between Semantic Dimensions and ROIs
3.6 Standard Pearson Correlation RSA
3.7 Lateralization of the Effects
CHAPTER 5: TEMPORAL FEATURES OF SEMANTIC DIMENSIONS
1.1 The Temporal Dynamics of Word Reading
1.2 The Temporal Dynamics of Accessing Semantic Features
1.3 Present Study Hypothesis
2. MATERIALS AND METHODS
2.3 Testing Procedures
2.4 MEG Protocol
2.5 MRI Protocol and Source Reconstruction
2.6 MEG Data Pre-Processing
2.7 Univariate Analyses
2.8 Multivariate Analyses
3.1 Spatio-Temporal Dynamics of Word Processing: basic effects
3.2 Inter-Trial Phase Coherence
3.3 Spectral Power
3.5 MVPA Results
CHAPTER 6: CONCLUSIONS AND PERSPECTIVES
1. MAIN EMPIRICAL RESULTS
1.1 Motor-Perceptual vs Conceptual Dimensions
1.2 Topographical Dissociations
1.3 Temporal Dynamics
2. IMPLICATIONS FOR THE NEURO-COGNITIVE REPRESENTATION OF WORD MEANING
2.1 What is the Content of Semantic Representations?
2.2 Where are Semantic Representations Encoded in the Brain?
2.3 What are the Temporal Dynamics of Semantic Representations?
2.4 How are Semantic Representations Implemented in the Brain?
3. GENERAL DISCUSSION
3.1 Interpretable Features, Evolutionary Domains or Latent Dimensions?
3.4 Dissociated, yet Interacting
3.5 Early, yet Superfluous?
3.6 Effect of Context and Experience
4. GENERAL PERSPECTIVES
4.1 Clinical Relevance of Features Dissociation
4.2 Role of Hub(s) and Spokes
4.3 Dynamic Emergence of Semantic Representations
APPENDIX: VERBA VOLANT, SCRIPTA MANENT
1. SUPPLEMENTARY MATERIALS
1.1 Behavioral Experiments
1.2 fMRI Experiment
1.3 MEG Experiment
3. DOS AND DON'TS