Table of contents
1 Human Decision-Making and Prefrontal Function
1.1 Prefrontal cortex subserves central executive function
1.1.1 Early insights into prefrontal cortex functions: the contribution of lesion studies
1.1.2 Prefrontal cortex neuroanatomy, cytoarchitecture and neurophysiology
1.1.3 Prefrontal cortex in non-human primates and other species
1.1.4 Prefrontal cortex development and evolution across the lifespan
1.1.5 Neuropsychiatric diseases involving prefrontal cortex dysfunction
1.2 Functional and anatomical organization: main theories of prefrontal cortex function
1.2.1 Lateral prefrontal cortex and hierarchical cognitive control
1.2.2 Ventromedial prefrontal cortex and orbitofrontal cortex
1.2.3 Dorsomedial prefrontal cortex and cingulate cortex
1.2.4 Frontopolar cortex
1.2.5 Conclusion
2 Affective values in human decision-making
2.1 Affective values: psychological and theoretical aspects
2.1.1 The notion of affective value, based on rewards and punishments
2.1.2 Expected utility theory and prospect theory
2.1.3 Rewards drive learning: Pavlovian and instrumental conditioning
2.1.4 Reinforcement learning computational models
2.1.5 Model-based and model-free reinforcement learning
2.2 Affective values: cerebral aspects
2.2.1 Basal ganglia
2.2.1.1 Subcortical basal ganglia anatomy
2.2.1.2 Electrophysiology and pharmacology studies show reward prediction error in dopamine neurons
2.2.1.3 The contribution of neuroimaging studies
2.2.2 Medial prefrontal cortex
2.2.2.1 Neural correlates of pain and punishments
2.2.3 Conclusion
3 Inferences in human decision-making
3.1 Inferential processes: psychological and theoretical aspects
3.1.1 Probabilistic models of learning and reasoning
3.1.2 Bayesian inference
3.1.3 Possible limits of the Bayesian approach
3.1.4 Application of Bayesian inference models to learning and decision-making
3.2 Inferential processes: cerebral aspects
3.2.1 Model-based neuroimaging
3.2.2 A role for vmPFC in inference
3.2.3 The medial PFC functional architecture in decision-making
4 Research question
5 Protocol A: Decorrelating affective value from outcome information
5.1 Experiment 1
5.1.1 Experimental design
5.1.2 Randomization
5.1.3 Experiment presentation
5.1.4 Participants
5.1.5 Statistical analysis
5.1.6 Computational modeling
5.1.6.1 Reinforcement learning model
5.1.6.2 Bayesian inference model
5.1.6.3 Bayesian inference model with online learning
5.1.6.4 Decay model
5.1.6.5 Mixed model
5.1.6.6 Action selection
5.1.7 Fitting procedure
5.1.7.1 Model selection
5.1.7.2 Quantitative measures
5.1.7.3 Qualitative measures
5.2 Experiment 2
5.2.1 Experimental design
5.2.2 Randomization, Experiment presentation and Participants
5.2.3 Statistical analysis and modeling
5.3 Experiment 3
5.3.1 Experimental design
5.3.2 Statistical analysis and modeling
6 Protocol A: Results and Discussion
6.1 Experiment 1
6.1.1 Experiment 1: Behavioral Results
6.1.2 Experiment 1: Modeling Results
6.1.3 Experiment 1: Discussion
6.2 Experiment 2
6.2.1 Experiment 2: Behavioral Results
6.2.2 Experiment 2: Modeling Results
6.2.3 Experiment 2: Discussion
6.3 Experiment 3
6.3.1 Experiment 3: Behavioral Results
6.3.2 Experiment 3: Modeling Results
6.3.3 Experiment 3: Discussion
6.4 Experiments 1, 2 and 3: Conclusion
7 Protocol B: Integration of beliefs and affective values in decision-making
7.1 Probabilistic reversal-learning task
7.1.1 Paradigm
7.1.2 Design and Randomization
7.1.3 Trial Structure and Jittering
7.1.4 Participants
7.1.5 Training
7.1.6 Debriefing
7.2 Behavioral Analyses
7.2.1 Learning Curves
7.2.2 Logistic Regressions
7.3 Computational Modeling
7.3.1 First class of models
7.3.2 Second class of models
7.3.3 Third class of models
7.3.4 Action selection
7.3.5 Fitting procedure
7.3.6 Model selection
7.3.6.1 Quantitative measures
7.3.6.2 Qualitative measures
7.4 Neuroimaging
7.4.1 fMRI acquisition
7.4.2 fMRI pre-processing
7.4.3 fMRI: Model-based approach
7.4.4 fMRI: General Linear Model
7.4.4.1 GLM1: Decision Values
7.4.4.2 GLM2: Dissociation belief system/affective values system
7.4.4.3 GLM3: Further dissociation within each system
7.4.5 Regions of Interest (ROI)
7.4.6 3D Bins analysis
8 Protocol B: Results
8.1 Behavior
8.1.1 Learning curves
8.1.2 Rewards
8.1.3 Logistic regressions
8.1.4 Stay/Switch trials
8.1.5 Reaction times
8.2 Modeling
8.2.1 First class of models
8.2.2 Second class of models
8.2.3 Third class of models
8.2.4 Model selection
8.2.5 Best-fitting mixed model parameters
8.2.6 Informational Values
8.2.7 Conclusion
8.3 Neuroimaging
8.3.1 GLM1: Decision Values
8.3.1.1 Choice-dependent effects
8.3.1.2 Choice-independent effects
8.3.2 GLM2: Dissociation belief system/affective values system
8.3.2.1 Choice-dependent effects
8.3.2.2 Choice-independent effects
8.3.3 Replication of results in an independent analysis
8.3.4 GLM3: Further dissociation within each system
8.3.4.1 Dissociation within the affective values system: Reinforcement values (historical) vs. Affective values of proposed rewards (current trial)
8.3.4.2 Dissociation within the Bayesian system: Prior belief (historical) vs. Informational values (current trial)
8.3.5 Activations at feedback time
9 General Discussion
9.1 Modeling
9.1.1 Distortions
9.1.1.1 Differences with prospect theory
9.1.1.2 Sub-optimality and efficient coding
9.1.2 Predominance of the belief system
9.1.3 Interaction between the belief system and the affective value system: a hierarchy?
9.1.4 Difference with model-based/model-free reinforcement learning
9.1.5 Prefrontal cortex: a not yet optimized system?
9.1.6 Beliefs, affective values, and stability of representations
9.2 Imaging results: understanding the role of vmPFC and MCC
9.2.1 vmPFC and reliability signals
9.2.2 vmPFC and the default mode network
9.2.3 vmPFC and the notion of value
9.2.4 vmPFC and the notion of confidence
9.2.5 MCC and the affective values representation
9.3 General Conclusion
A Informal debriefing for the first series of behavioral experiments
B Instructions for fMRI experiment
C Informal debriefing following the last fMRI session
D Generative model of the fMRI task
D.1 Task Description
D.2 Bayesian inference, known parameters
D.2.1 Inference
D.2.2 Action selection
D.2.3 Contributions to action selection
D.3 Reinforcement learning
D.3.1 Inference
D.3.2 Action selection
D.3.3 Contributions to action selection
Bibliography
List of Figures
List of Tables



