

CAPTURING DECISION SIGNALS IN REAL TIME

As we have seen in Chapter 1, research on the conscious experience of volition addresses an intriguing contradiction. On the one hand, a growing corpus of studies using imaging methods has identified brain signals predicting the content of a forthcoming decision (see Section 1.3). On the other hand, cognitive psychological studies on introspective abilities have shown that people are differentially aware of the different aspects of their impending decision. While they can prospectively access and control their intentions to act, they are remarkably unaware of their impending decision (see Section 1.2). This apparent contradiction has usually been explained in terms of an illusory introspective process underlying the conscious experience of our intentions. Nevertheless, recent innovative studies have revealed genuine metacognitive access to internal motor preparatory signals as the action unfolds. Yet when retrospectively monitoring their intentions, people’s introspective content is dominated by the decision they took and the perceptual feedback they received (Schultze-Kraft et al., 2020). These studies depict the subjective feeling of intending to act (i.e. the when and whether aspects of intention) as a metacognitive process that integrates both preparatory signals and retrospective elements in a hierarchical organization. Coupled with a real-time approach, this theoretical framework makes it possible to inquire « whether and how motor preparation informs the conscious experience of intention » (Pares Pujolras, 2019).
In the present work, we investigate the conditions under which people are aware of the content of their decisions. As mentioned in Chapter 1, the decision content (i.e. the alternative chosen by the decision maker) could be encoded at different levels of the decision process. First, people might be prospectively aware of their intention to choose a given alternative by accessing their action selection process. Yet in the physical world, different decisions always correspond to different motor actions. Therefore, people might retrospectively infer the option they unconsciously chose by monitoring the motor signals executing this choice.
In this chapter, we describe how our experimental approach, based on a BCI decoding covert, feature-based selective attention, can help us tackle this issue. In a nutshell, our BCI setup allows people to make the decision to preferentially attend one of two items on the computer screen. These items are presented at the same location to prevent spatial cues and gaze monitoring from contributing to the inference process about decision content. In Section 2.1, we detail the characteristics of covert, feature-based selective visual attention that led us to specifically target this cognitive process for our BCI. As a direct decoding approach of selective attention might be challenging, we present in Section 2.2 the steady-state visual evoked potential method, which allows us to measure the consequences of attention allocation on the visual system. Finally, in Section 2.3, we describe the theoretical concepts and the mathematical procedure underlying our BCI setup.

selective visual attention in the brain

Although the cognitive paradigm emphasized the analogy between brain and computer (Searle, 1990b; Searle, 1990a), brain computation has to minimize the number of spikes needed to transmit a given signal (Barlow, 1961). Indeed, neural activity, and more precisely neural firing rates, account for the largest part of the brain’s metabolic cost (Attwell and Laughlin, 2001). As we have seen in Chapter 1, one way to tackle complex environments at reduced cost is to adopt a predictive coding approach, where only the difference between the brain’s prediction and the actual sensory inputs travels up the cortical hierarchy. In addition, structural solutions such as sparse coding have been described in the brain (Chalk, Marre, and Tkačik, 2018; Perez-Orive et al., 2002; Vinje and Gallant, 2000; Harris and Mrsic-Flogel, 2013). Indeed, the sparseness of neural coding ranges on a continuum from dense, highly redundant representations to local representations encoded in a single neuron (Barlow, 1969; Quiroga et al., 2005). Recent studies reveal that different coding schemes meet different computational requirements (Raymond and Medina, 2018), such as predicting the future or encoding past events.
Beyond its account within predictive coding theory (Friston, 2012; Hohwy, 2013; Clark, 2015), attention can be primarily described as a mechanism contributing to efficient coding. Indeed, the metabolic cost of neural spiking imposes a drastic limitation on the number of neurons that can be simultaneously activated (Lennie, 2003). Thus, attention operates as a mechanism of selective attribution of computational resources. From a bottom-up perspective, visual stimuli compete for access to computational resources such that neurons representing different features of the visual scene engage in competitive interactions, resulting in the suppression of the non-selected features (Desimone and Duncan, 1995; Carrasco, 2011). From a top-down perspective, attention selectively allocates the available computational resources to the attended features to the detriment of the unattended ones (Carrasco, 2011).
Although both perspectives reflect the limitation of the brain’s computational ability, they also illustrate different aspects of the selective pressure exerted on the brain by its environment. In fact, the bottom-up perspective corresponds to the need for the brain to keep up with environmental dynamics, while the top-down perspective reflects the necessity for agents to comply with their current goals. In line with this disjunction, neuroimaging studies have described three networks underlying different aspects of the selective attention mechanism (Posner and Petersen, 1990).

Neuro-anatomy of visual selective attentional process

According to Posner and Petersen, 1990, the attentional system fulfills three main functions that have been associated with three distinct brain networks (Posner et al., 1988; Posner and Petersen, 1990; Carrasco, 2011; Petersen and Posner, 2012):
• The orienting system prioritizes sensory inputs by selecting a modality or location, either by « foveating » a specific stimulus or by covertly attending to it. Thereby, computation of the attended stimulus is improved. The orienting function has been associated with posterior brain areas including the superior parietal lobe, the temporal parietal junction and the frontal eye fields.
• The executive control system regroups the top-down executive networks involved in conflict monitoring and top-down control. Anatomically, it has been associated with an anterior attentional system, which involves the anterior cingulate and the lateral prefrontal cortex.
• The alerting system maintains a certain degree of sensitivity or arousal to incoming stimuli. This modulation of general excitability is controlled by a network spanning the frontal and parietal regions of the right hemisphere.
On top of this neuro-anatomical description, psychophysics and cognitive psychology studies have classified selective attention along three main axes (Moore and Zirnsak, 2017). Although these forms of attention might be similar at a certain mechanistic level (e.g. Hohwy, 2013), it can be relevant to separate them experimentally as they can have different effects on the rest of cognition.
• Spatial versus Feature-based attention Attention can be directed toward a spatial location of the environment (e.g. a stimulus in the right visual field) or toward a specific feature (e.g. a blue stimulus versus a red stimulus).
• Overt versus Covert attention Covert attention selectively processes a stimulus in the absence of any orienting movement toward it. It is distinguished from overt attention, a selective process accompanied by an orientation (e.g. of the gaze) toward the selected stimulus location.
• Top-down versus Bottom-up attention Attention can be an endogenously generated process, reflecting motivation, preferences or strategy. Alternatively, attention can be an exogenously driven process whose selection is based solely on the salience of stimuli.
As we mentioned before, we address in the present work whether and how decision processes inform participants about their choices. Therefore, participants have to make decisions in a situation in which we can control or measure all the cues that could potentially notify them of their choice. To address this issue, participants make a choice which consists in preferentially attending one of two overlapping items presented at the center of the screen. This decision involves top-down, feature-based covert attention and prevents cues such as eye movements or spatial location from granting participants information about their choice.

Feature-based covert attention (FBCA)

FBCA has been shown to increase the visually driven firing rates encoding the attended stimulus along the whole visual pathway, beginning as early as the thalamic relay of the visual stream, namely the dorsal lateral geniculate nucleus (Saenz, Buracas, and Boynton, 2002). FBCA not only increases the strength of spiking activity but also the information those spikes convey, at both the single-neuron and population scales. In addition, attention could be linked with increased synchrony among the neurons and neural populations coding for the attended stimulus (Moore and Zirnsak, 2017).
From a behavioral perspective, knowing in advance a feature of the target stimulus improves participants’ detection performance. Moreover, such pre-cueing can affect low-level sensitivity such as motion detection or direction discrimination. Indeed, when cued with the direction of moving dots, participants are more sensitive in motion detection. Importantly, these improvements remain effective in the presence of distractors. This could indicate that FBCA specifically boosts the behaviorally relevant psychophysical channel (Carrasco, 2011) and thereby reflects the cue-driven decision process.
In the three examples shown here, participants were asked to attend either the red or the blue dots while their gaze remained fixed on the central cue. The data presented here are extracted from our pilot study, inspired by the work of Müller et al., 2006.

Selective attention and decision

As described by predictive coding theory, attention filters behaviorally relevant information by modulating its expected precision (see Section 1.2). Indeed, attention favors the processing of relevant information and allows the attenuation of distractors (Kinchla, 1992; Carrasco, 2011). Furthermore, motivation and reward alter attention allocation (Bourgeois, Chelazzi, and Vuilleumier, 2016). Drift diffusion models of decision posit that decisions are based on evidence accumulation operating during gaze fixation (Krajbich and Rangel, 2011). Yet attention should not be reduced to a passive information sampling process, as it rather plays an active role in decision. Attention is not solely guided by top-down preferences but also reflects bottom-up processes and interactions with working memory. Attention is known to impact value integration and confidence during decision (Kunar et al., 2017; Kurtz et al., 2017). Thereby, decision emerges from an interaction between preferences, external stimulation, attentional and memory processes (for a detailed review of the influence of attention on decision, see Orquin and Loose, 2013).
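To give a concrete flavor of the evidence accumulation mechanism mentioned above, the following Python sketch simulates a gaze-weighted drift diffusion trial. It is only a toy illustration: the parameter values and the fixed gaze alternation schedule are assumptions made for the example, not those of Krajbich and Rangel, 2011.

```python
import numpy as np

rng = np.random.default_rng(0)

def addm_trial(v_left, v_right, theta=0.3, drift=0.002, noise=0.02,
               bound=1.0, fix_dur=300, max_steps=20000):
    """One gaze-weighted drift diffusion trial (illustrative parameters).

    Evidence for 'left minus right' accumulates each millisecond; the value of
    the currently unattended item is discounted by theta (0 <= theta <= 1).
    Returns the chosen side and the decision time in steps (ms).
    """
    x, attended = 0.0, "right"
    for t in range(max_steps):
        if t % fix_dur == 0:                    # alternate gaze every fix_dur ms
            attended = "left" if attended == "right" else "right"
        w_left = 1.0 if attended == "left" else theta
        w_right = 1.0 if attended == "right" else theta
        x += drift * (w_left * v_left - w_right * v_right) + rng.normal(0, noise)
        if abs(x) >= bound:
            return ("left" if x > 0 else "right"), t
    return ("left" if x > 0 else "right"), max_steps

choices = [addm_trial(v_left=3, v_right=2)[0] for _ in range(500)]
print("P(choose left) =", choices.count("left") / len(choices))
```

In such a simulation, the item that receives more gaze time tends to be chosen more often even for equal values, which is the sense in which attention actively shapes, rather than merely samples, the decision.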
Recent neuroimaging studies have begun to dissect the complex interconnection of higher-order cognitive functions involved in decision processes. Indeed, the relationship between the executive control of attention and the decision-based processing of information has been associated with interconnected sub-regions of the prefrontal cortex (PFC) (Hunt et al., 2018). These sub-regions are thought to concomitantly process information and guide its sampling through attention shifts. In line with those results, sub-regions of the PFC reflecting categorical decisions about ambiguous stimuli also drive the modulation of visual attention (Roy, Buschman, and Miller, 2014).
Altogether, these studies show the tight relationship between selective attention and decision processes. Attention does not merely provide an indication of decision-related top-down guidance in information sampling. Instead, it constitutes a core element of the decision process and interacts with working memory and valuation processes through parallel prefrontal networks.

measuring the consequence of selective attention

As we have seen, selective attention recruits an ensemble of neural networks spreading through the entire cortex. It is noteworthy that many of the studies that identified these attention-related networks rely on fMRI measures and non-human primate studies. These methods achieve good spatial resolution and provide access to deeper cortical layers, at the cost of poor temporal resolution or invasive procedures. Brain computer interfaces, on the other hand, require fast and reliable decoding of a neural feature. Since frontal areas are highly integrated regions dealing with a wide range of functions, decoding the attentional process directly might prove unreliable. We thus opted for a method assessing the consequences of selective attention on an evoked visual feature: the steady-state visual evoked potential.

The steady-state visual evoked potential (ssVEP)

Visual evoked potentials are stereotypical responses to transient visual stimulation, usually assessed through EEG or MEG (magnetoencephalography). If the stimulation is periodically modulated as a function of time, the evoked response obtained by averaging over trials time-locked to the stimulation will be a visual evoked potential of small amplitude termed « steady-state » visual evoked potential (ssVEP) (Adrian and Matthews, 1934; Regan, 1966). Therefore, it is possible to define ssVEP as follows:
ssVEPs are evoked responses induced by flickering visual stimuli. ssVEPs are periodic, with a fundamental frequency corresponding to the visual stimulation frequency. The spectrum of ssVEP is characterized by peaks at the stimulation frequencies and related harmonics and is stable over time. (Vialatte et al., 2010).
The brain areas generating ssVEP depend on the frequency of stimulation: for high frequencies (>10-12 Hz), ssVEP sources are localized mostly in V1, whereas ssVEP evoked at lower frequencies are likely to originate from pre-cortical structures including the retina and the lateral geniculate nucleus (Vialatte et al., 2010). If a consensus about the complexity of ssVEP localization has yet to be found, it appears that V1 exhibits the strongest signals across a wide range of stimulation frequencies and visual features (Herrmann, 2001; Vialatte et al., 2010; Norcia et al., 2015). On the mechanistic side, ssVEP have been associated with the non-linear propagation of visual information through the visual pathway. From this non-linearity stems the presence of harmonics in the ssVEP spectrum, even in the absence of those harmonics in the visual stimulation (Labecki et al., 2016). Furthermore, inter-modulation frequencies are found in the ssVEP spectrum when two or more visual stimulations at distinct frequencies are concomitantly presented to participants.
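As an illustration of this spectral signature, and not of the actual analysis pipeline used in this work, the short Python sketch below simulates one EEG channel containing a 12 Hz tagged response with a weaker second harmonic buried in noise, computes its amplitude spectrum, and reads out the peaks at the fundamental frequency and its harmonics. All numerical values are arbitrary examples.

```python
import numpy as np

fs, dur, f_stim = 500, 10.0, 12.0        # sampling rate (Hz), duration (s), tag (Hz)
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(1)

# Toy ssVEP: response at the stimulation frequency plus a weaker 2nd harmonic,
# buried in broadband noise (amplitudes are arbitrary).
eeg = (1.0 * np.sin(2 * np.pi * f_stim * t)
       + 0.4 * np.sin(2 * np.pi * 2 * f_stim * t)
       + rng.normal(0, 2.0, t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for f in (f_stim, 2 * f_stim, 3 * f_stim):
    idx = int(np.argmin(np.abs(freqs - f)))
    # Simple SNR estimate: peak amplitude over the mean of neighbouring bins.
    neighbours = np.r_[spectrum[idx - 6:idx - 1], spectrum[idx + 2:idx + 7]]
    print(f"{f:5.1f} Hz  amplitude={spectrum[idx]:.3f}  "
          f"SNR={spectrum[idx] / neighbours.mean():.1f}")
```

In this toy example, only the fundamental and second harmonic rise above the noise floor, which is the narrow-band structure that makes ssVEP attractive for frequency-tagging designs.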
ssVEP have been found to be less sensitive to artifacts such as eye movements and blinks, but also to electromyographic noise contamination, compared with regular transient visual evoked potentials. Since the late 60’s, ssVEP have been applied to the study of a wide range of phenomena spanning both clinical research and cognitive science. In the next section, we focus on the relationship between selective visual attention and ssVEP.

Visual selective attention modulates ssVEP

Although ssVEPs have primarily been viewed as a tool to study the early cortical stages of sensory processing, they have since been successfully used to explore higher-order processes. As the measure they provide is directly attributable to one specific visual stimulus on the screen and presents a high signal-to-noise ratio (SNR), ssVEPs are particularly suited to the study of attention (Norcia et al., 2015). In a pioneering study, participants were presented with two spatially separated letters flickering at distinct frequencies: attending to one letter increased the ssVEP amplitude at that letter’s oscillation frequency (Morgan, Hansen, and Hillyard, 1996).
ssVEP can also be used when two stimuli are presented at the same spatial location. Feature-based selective attention was first addressed by Pei, Pettet, and Norcia, 2002 in a study where the horizontal and vertical bars of 16 crosses oscillated at different frequencies. The study showed that attention modulated neural processing in a spatially non-specific manner: the ssVEP amplitude was larger at the frequency corresponding to the attended orientation. Dissociating further the contributions of spatial and feature-based selective attention, Müller et al., 2006 used a stimulus consisting of two superimposed random dot kinematograms of distinct colors. The study confirmed that covert feature-based selective attention selectively increases the ssVEP amplitude of the attended item. Notably, this increase was found to be maximal over the occipital areas V1-V3. Follow-up studies dissected the temporal dynamics of the ssVEP modulation by selective attention. Cues were displayed on the screen asking participants to preferentially attend one colored group of dots. The enhancement of the ssVEP amplitude occurs around 220 ms after cue appearance and is followed about 140 ms later by a decrease in the ssVEP amplitude related to the unattended dots (Andersen and Müller, 2010).
If ssVEP can reliably track the identity of the currently attended feature, whether they can provide information on the quantity of allocated attentional resources is less clear. Clinical studies established that schizophrenic patients have reliably smaller ssVEP during passive exposure but show hyper-activation compared to control subjects when actively attending to a stimulation in the 1-30 Hz range. Schizophrenic patients are known to suffer from attentional deficits from the early stages of the disease (McGhie and Chapman, 1961). Such deficits include a global enhancement of background noise, leading to difficulty in selectively allocating attention to relevant parts of the environment. Thus, compensatory over-allocation of attention is often observed. This compensation effect echoes Bayesian accounts of schizophrenia, according to which schizophrenic patients might fail to attenuate the sensory precision of incoming perceptual inputs, leading to a compensatory increase in the precision of high-level prior beliefs (Fletcher and Frith, 2009). Similarly, children who have experienced febrile seizure episodes show superior control over working memory, distraction avoidance and attention (Chang et al., 2000; Ku et al., 2014). In parallel, their ssVEP amplitude is larger than that of age-matched controls across the whole frequency range (Vialatte et al., 2010). Altogether, those results link quantitative variations in attention allocation with variations in ssVEP amplitude. Therefore, an ssVEP-based decoding approach should be able not only to infer the attended object on the screen but also to track the strength of allocated attention.
Although their use has mostly been circumscribed to the study of visual information processing and attention, ssVEP have recently been employed in paradigms addressing other integrated cognitive mechanisms.


ssVEP to investigate cognition

The effect of attention on ssVEP amplitude predicts behavioral output. For example, in Andersen and Müller, 2010, the lag between the cue indicating the color of the target dots and the ssVEP amplitude enhancement was strongly correlated with the response time to detect a transient coherent movement in the target dots. Similarly, performance in an orientation discrimination task was predicted by the orientation-selective ssVEP response (Garcia, Srinivasan, and Serences, 2013).
Recent studies have used ssVEP to differentially tag different levels of hierarchical visual perception (Gordon et al., 2017; Gordon et al., 2019). They combined semantic wavelet-induced frequency tagging (SWIFT) (Koenig-Robert and VanRullen, 2013) with classical ssVEP (Norcia et al., 2015) to disentangle bottom-up from top-down processing. Indeed, ssVEP predominantly tag low-level sensory inputs in the visual hierarchy, while the SWIFT method has been shown to reflect high-level top-down processing. The degree of integration between top-down predictions and bottom-up sensory signals is tracked through the inter-modulation components of the EEG spectrum (frequencies corresponding to linear combinations of the SWIFT and ssVEP frequencies). They showed that the participation of top-down predictions decreased as the reliability of sensory inputs increased, lending support to predictive coding theory.
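For illustration, these inter-modulation components are simply sums and differences of small integer multiples of the two tagging frequencies. The snippet below enumerates them for two hypothetical tagging frequencies; the values are examples, not those used in the cited studies.

```python
from itertools import product

f_ssvep, f_swift = 10.0, 1.2   # hypothetical tagging frequencies (Hz)

# Inter-modulation components n*f_ssvep + m*f_swift for small non-zero n, m.
intermod = sorted({round(abs(n * f_ssvep + m * f_swift), 3)
                   for n, m in product(range(-2, 3), repeat=2)
                   if n != 0 and m != 0})
print(intermod)   # [7.6, 8.8, 11.2, 12.4, 17.6, 18.8, 21.2, 22.4] Hz
```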
In summary, ssVEP analysis provides an indirect measure of several aspects of visual selective attention. Indeed, ssVEP grant access to qualitative aspects of selective attention such as the location of the attentional spotlight or the identity of the preferentially attended object. Furthermore, ssVEP also give insight into quantitative aspects of the attentional process, such as the strength of allocated attention or its timing. Recent studies have demonstrated that appropriate stimulation allows the use of ssVEP to investigate a wide range of cognitive mechanisms. In the next section, we describe how we implemented a BCI exploiting ssVEP to continuously track selective attention and thereby the decisions of our participants.

brain computer interface (bci): toward a real-time measure of selective attention

To continuously track the content of our participants’ decisions, we used a BCI combining a stimulus reconstruction approach with sweep ssVEP (Regan, 1973; Ales et al., 2012). Stimulus reconstruction has been successfully used to decode the attended stimulation in auditory (O’Sullivan et al., 2015) and visual paradigms (Sprague, Saproo, and Serences, 2015). Although this method achieves a good accuracy, decoding attention toward natural stimuli requires a fair amount of data. To continuously track attention, we followed the most recent developments in the field of BCI (Rezeika et al., 2018; Chen et al., 2015), which apply stimulus reconstruction to a simple oscillation pattern. We describe our stimulation in detail in the next chapter. In the following section, we expose the BCI’s general concept for decoding participants’ decision content.

Linear stimulus reconstruction and individual model creation

As we have previously seen, selective attention specifically alters the processing of the attended stimulus. One way to infer which stimulus among many is attended is to assess these processing modulations via a stimulus-reconstruction approach. This method attempts to reconstruct an estimate of the input by combining the neural signals recorded during stimulus presentation according to a participant-specific model. After the reconstruction step, the reconstructed signal is correlated with each stimulus oscillation pattern, and the correlation coefficients can be compared to infer the attended stimulus (O’Sullivan et al., 2015).
The reconstructed stimulus is a linear combination of the neural signals received from the electrodes (in the case of an EEG-based approach) according to a participant-specific model. To build the individual model, we rely on labeled trials (i.e. trials where the target stimulus is known). We detail the method in the methodological section of Chapter 3. In a nutshell, the model minimizes the difference between, on the one hand, the linear combination of the EEG electrodes and, on the other hand, the target stimulus oscillation pattern.
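The sketch below gives a rough Python illustration of this idea, not the exact pipeline detailed in Chapter 3: a ridge-regularized least-squares model is fitted on labeled trials to map the EEG channels onto the known target pattern, and unseen data are then decoded by correlating the reconstructed signal with the two candidate oscillation patterns. The variable names, the regularization term and the omission of time-lagged channel copies are simplifying assumptions made for the example.

```python
import numpy as np

def fit_decoder(eeg_train, target_train, reg=1e-3):
    """Backward model: weights w minimizing ||eeg_train @ w - target_train||^2.

    eeg_train    : (n_samples, n_channels) EEG from labeled trials
    target_train : (n_samples,) oscillation pattern of the known attended stimulus
    reg          : ridge term (assumed value, for numerical stability)
    """
    X, y = eeg_train, target_train
    n_channels = X.shape[1]
    return np.linalg.solve(X.T @ X + reg * np.eye(n_channels), X.T @ y)

def decode_attended(eeg_test, w, pattern_a, pattern_b):
    """Reconstruct the stimulus and pick the candidate pattern it correlates with most."""
    reconstruction = eeg_test @ w
    r_a = np.corrcoef(reconstruction, pattern_a)[0, 1]
    r_b = np.corrcoef(reconstruction, pattern_b)[0, 1]
    return ("A" if r_a > r_b else "B"), (r_a, r_b)
```

In practice, backward models of this kind often include several time-lagged copies of each channel to absorb the latency of the evoked response; the logic of fitting on labeled trials and comparing correlations remains the same.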
Applying this method to detect the attended audio stream in a cocktail-party situation results in robust accuracy at the single-trial level (O’Sullivan et al., 2015). It includes all the scalp information and thus encompasses top-down and bottom-up contributions to the processing of the attended stimulus. Furthermore, as the model tends to attribute very low weights to irrelevant information, it does not necessitate advanced filtering methods and can be conducted without removing blink, muscle or electrical artifacts, or even without average-referencing the data.
However, studies achieving a good (>80%) classification accuracy rely on a large amount of data. For example, O’Sullivan et al., 2015 classify 1 minute of auditory data per trial. To gain a better insight into the temporal dynamics of participants’ decisions, one needs a finer temporal grain in the decoding approach. In line with innovative work in the domain of BCI-based spellers (Chen et al., 2015), we drastically reduced the required amount of data to 3 seconds by applying the stimulus reconstruction approach to a simple modulation pattern. This procedure allows us to modulate the feedback according to participants’ decisions to attend one stimulus among two within 5-second-long trials.

The BCI loop

Brain Computer Interfaces are a recent neuro-technology whose development is giving rise to a flourishing field of research, in both its engineering and fundamental aspects. Although their first description can be traced back to the 70’s (Vidal, 1973; Vidal, 1977), BCI setups long remained mere proofs of concept, and concrete applications for the general public only recently began to be extensively developed (McFarland and Wolpaw, 2017). BCI setups have been developed with just about every existing neuro-imaging method (invasive or not), connecting vision but also the sensory-motor system (Wolpaw et al., 1991), audition (Klobassa et al., 2009) and even taste (Canna et al., 2019). In the present work, we focus on EEG-based visual BCI.
Among their several applications, the main effort has been put into restoring communication and control in paralyzed patients (McFarland et al., 2017). Derived from this line of research, BCI-based spellers allow people to write by merely attending to a virtual keyboard (Rezeika et al., 2018). Other domains of application, such as video game control, are progressively emerging, building on the methodological breakthroughs of their predecessors (Kerous, Skola, and Liarokapis, 2018).
Particularly proactive in this domain, the engineering literature usually defines four modular subsystems in a BCI (Wolpaw and Wolpaw, 2012; McFarland and Wolpaw, 2017):
• Stimulation: here visual, the stimulation elicits an appropriate neural pattern to be decoded. In Appendix A, we detail our method for presenting participants with two equivalent choice options with distinguishable patterns.
• Signal recording: the neuro-imaging method used to continuously extract brain signals. In the present work, we used 64-electrode electroencephalography.
• Signal processing: this subsystem regroups the processing of the raw brain signals along with the extraction of the features relevant for the BCI-mediated interaction. We describe the detailed procedure in the Methodology section of Chapter 3. Our goal is to maintain a robust and accurate classification while capturing the dynamics of selective attention allocation. We thus applied the stimulus reconstruction approach on a sliding window of 3 seconds. Every 250 milliseconds, we computed the reconstructed stimulus and correlated it with the oscillation patterns of our two potential targets (a schematic sketch of this loop is given after this list).
• Interaction: finally, the fourth subsystem specifies the system operation. It notably entails how, from the relevant features extracted by the signal processing subsystem (in our case, the correlations between the reconstructed signal and the displayed stimulus patterns), the BCI decides which item was attended. Thereby, the BCI adapts the feedback to the participant’s neural signals.
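The following Python sketch ties these subsystems together into the real-time loop referenced above. It is a schematic illustration under explicit assumptions: the sampling rate and the helper functions read_samples, make_patterns and update_feedback are hypothetical placeholders, and the decoder weights are assumed to come from a fit such as the fit_decoder sketch of the previous section; only the 3-second window and the 250 ms update step follow the values given above.

```python
import numpy as np
from collections import deque

FS = 500                      # assumed EEG sampling rate (Hz)
WINDOW = 3 * FS               # 3-second sliding window
STEP = int(0.250 * FS)        # new decoding output every 250 ms

def bci_loop(read_samples, w, make_patterns, update_feedback, trial_s=5.0):
    """Schematic real-time loop over one trial (all helpers are hypothetical).

    read_samples(n)       : blocking read of n new EEG samples, shape (n, n_channels)
    w                     : decoder weights from a previously fitted backward model
    make_patterns(t0, t1) : oscillation patterns of the two items over samples [t0, t1)
    update_feedback(...)  : callback adapting the display to the decoded item
    """
    buffer = deque(maxlen=WINDOW)                 # ring buffer of EEG samples
    n_read = 0
    while n_read < trial_s * FS:
        buffer.extend(read_samples(STEP))         # signal recording
        n_read += STEP
        if len(buffer) < WINDOW:
            continue                              # wait until the window is full
        reconstruction = np.asarray(buffer) @ w   # signal processing
        pattern_a, pattern_b = make_patterns(n_read - WINDOW, n_read)
        r_a = np.corrcoef(reconstruction, pattern_a)[0, 1]
        r_b = np.corrcoef(reconstruction, pattern_b)[0, 1]
        update_feedback("A" if r_a > r_b else "B", r_a, r_b)  # interaction
```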
Below, we illustrate our implementation of the BCI subsystems, allowing us to investigate the simple decision of preferentially attending one of two items overlapping at the screen center.
The four subsystems of a BCI are detailed in Section 2.3.2. Top panel) Stimulation. Left panel) Signal recording. Bottom panel) Signal processing. Right panel) Interaction implementation.

Summary

To conclude, attention is an omnipresent cognitive mechanism intermingled with virtually all other brain mechanisms. Especially relevant for the present work, participants’ decisions influence covert feature-based attention allocation. Therefore, to track decisions in time, we chose to rely on a BCI approach to decode the selective attention allocated to one of two overlapping items. We present in Appendix A the visual stimulation we developed for our BCI, allowing participants to perform a voluntary covert decision. Although such picking choices lack the motivational aspect that characterizes real-life decisions, they are thought to rely on similar mechanisms. Furthermore, a non-ecological approach allows us to specifically target the proximal aspects of decision, while neither long-term planning nor motor preparation can inform participants’ awareness. Thus, we can address whether and how the representations available at this specific level are susceptible to enter consciousness.

Table of contents:

i introduction 
1 theoretical introduction 
1.1 Rethinking introspective illusion
1.1.1 A hindrance for early psychological science
1.1.2 An object of study per se
1.2 From illusion to integrative process: beyond the introspection accuracy debate
1.2.1 Illusion: the default mode of self-knowledge?
1.2.2 Beyond illusion: integrative perspective of introspection
1.3 Monitoring and controlling decisions
1.3.1 Reframing decision
1.3.1.1 Theoretical account
1.3.1.2 Neural implementation
1.3.2 Introspect decision timing and execution
1.3.3 Introspecting the decision content
2 capturing decision signals in real time
2.1 Selective visual attention in the brain
2.1.1 Neuro-anatomy of visual selective attentional process
2.1.2 Feature-based covert attention (FBCA)
2.1.3 Selective attention and decision
2.2 Measuring the consequence of selective attention
2.2.1 The steady-state visual evoked potential (ssVEP)
2.2.2 Visual selective attention modulates ssVEP
2.2.3 ssVEP to investigate cognition
2.3 Brain computer interface (BCI): toward a real-time measure of selective attention
2.3.1 Linear stimulus reconstruction and individual model creation
2.3.2 The BCI loop
2.3.3 Summary
2.3.4 Empirical contribution overview
ii experimental contributions 
3 imbalanced exogenous and endogenous contribution to introspection lead to illusion and associated metacognitive failures
3.1 Synopsis
3.2 Introduction
3.3 Results
3.3.1 Impact of internal decision evidence and external cues on introspection
3.3.2 Metacognitive failures:
3.3.3 Reliability of internal decision evidence
3.4 Discussion
3.4.1 Conclusion
3.5 Methods
3.5.1 Participants
3.5.2 Visual stimulation
3.5.3 EEG recording
3.5.4 Brain Computer Interface
3.5.5 Experimental procedure
3.5.6 Data Processing
3.5.7 Statistical Analysis
4 partial awareness during voluntary endogenous decision
4.1 Synopsis
4.2 Introduction
4.3 Results
4.4 Discussion
4.5 Methods
4.5.1 Participants
4.5.2 EEG recording
4.5.3 Visual Stimulation
4.5.4 Brain Computer Interface
4.5.5 Experimental procedure
4.5.6 Data Processing
4.5.7 Statistical analysis
iii discussion 
5 general discussion 
5.1 Summary of results
5.2 Theoretical implications
5.2.1 Implication for awareness of decision
5.2.2 Implication for interpretation of introspection and Choice Blindness
5.2.3 Implication for allocation of attention
5.3 Methodological consideration and limits
5.3.1 Real time decoding and introspection
5.3.2 Probing complex cognitive mechanisms
5.4 Theoretical limits and Future directions
5.4.1 Concomitant measure of endogenous and exogenous cues
5.4.2 Probing the limits of awareness of decision content
5.4.3 Neural noise and awareness
5.4.4 Toward a metacognitive prosthesis
5.5 Final Conclusion
iv appendix 
a methodological foreword: the visual stimulation
a.1 Specifications
a.2 Creation procedure
a.2.1 Background (see Figure 16)
a.2.2 Animation and phase modulation (see Figure 17)
a.2.3 Single frame creation (see Figure 18)
b extended data for chapter 3
b.1 Results
b.1.1 Effect of internal decision evidence on introspective accuracy in the absence of feedback
b.1.2 Distinguishing accurate introspection from confabulation.
b.1.3 Relationship between confidence and consistency of internal decision evidence following informative feedback cues
b.1.4 Controlling for late changes of decision
b.1.4.1 Exogenous influence on confidence
b.1.4.2 Internal decision evidence consistency modulate the impact of external cues on confidence
b.2 Figures
b.3 Methods and Tables
c extended data for chapter 4
c.1 Results
c.1.1 Number of random versus determined reports
c.1.2 AAS level does not reflect a successful decision phase
c.2 Methods and Tables
bibliography 
