
## Group analysis with the GLM for fMRI data

In fMRI experiments, usually the same experiment is performed by multiple participants.

Analysing data from a group of participants, often referred to as group-level analysis, aims at identifying significant effects that are shared across the population. However, the datasets recorded from the participants each consist of multiple images, and each image in turn contains a large number of voxels. For this reason, a hierarchical two-level GLM is implemented.

The two-level GLM is expressed as follows:

$$Y_{v,s} = X_s \beta_{v,s} + \varepsilon_{v,s} \quad \text{(first level)}, \qquad Y_v = X_g \beta_g^v + \eta_v \quad \text{(second level)},$$

where $Y_{v,s}$ is the BOLD signal recorded from voxel $v$ in subject $s$, and $Y_v$ is the concatenation of the first-level outputs across subjects. The first level is the subject level, during which the GLM is fitted independently for each subject. After the parameter $\hat{\beta}_{v,s}$ is estimated for voxel $v$ of subject $s$, a contrast $c$ is applied to $\hat{\beta}_{v,s}$ to compute the output of the first level, $c_{v,s} = c\hat{\beta}_{v,s}$. Then $c_{v,s}$ is carried forward to the second level for group-level analysis. In the second level, the GLM is performed across participants. For a group of $S$ subjects, $Y_v$ is a column vector consisting of the $S$ subject-specific contrasts computed at voxel $v$, i.e. $Y_v = [c_{v,1}, \ldots, c_{v,S}]^T$. By introducing a group-level design matrix $X_g$ that contains only one regressor, i.e. $X_g = [1, \ldots, 1]^T$, the second-level GLM estimates a mean for the group of subjects: $\beta_g^v$ represents the mean contrast across subjects at voxel $v$. The null hypothesis is defined as $H_0: \beta_g^v = 0$, which is assessed through a one-sample t-test. After a t value and its associated p-value have been assigned to each voxel of the brain, a correction for the multiple comparisons problem is performed to detect the regions where the contrast $c$ is significantly non-null across subjects. Besides the t-test, non-parametric strategies are also used as the statistical model: for instance, a permutation-based approach [103] that exchanges the labels of the observations at the group level builds an empirical null distribution from which the significance level is derived.
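The two-level pipeline can be sketched numerically. The snippet below is a minimal illustration on synthetic data, not the thesis's actual pipeline: the boxcar task regressor, effect size, and dimensions are all assumptions made for the example. It estimates first-level betas by ordinary least squares, applies a contrast per subject, then runs the second-level one-sample t-test (design $X_g = [1,\ldots,1]^T$) voxel by voxel.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_scans, n_voxels = 12, 100, 50

# Hypothetical first-level design: one boxcar task regressor plus an intercept.
task = (np.arange(n_scans) % 20 < 10).astype(float)
X = np.column_stack([task, np.ones(n_scans)])  # first-level design matrix X_s
c = np.array([1.0, 0.0])                       # contrast on the task effect

# First level: fit the GLM independently per subject, then apply the contrast.
contrasts = np.empty((n_subjects, n_voxels))
for s in range(n_subjects):
    # Synthetic BOLD: a 0.5 task effect buried in unit-variance noise.
    Y = 0.5 * task[:, None] + rng.normal(0.0, 1.0, (n_scans, n_voxels))
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)   # beta has shape (2, n_voxels)
    contrasts[s] = c @ beta                        # c_{v,s} for every voxel v

# Second level: Xg = [1, ..., 1]^T reduces to a one-sample t-test per voxel.
mean = contrasts.mean(axis=0)
sem = contrasts.std(axis=0, ddof=1) / np.sqrt(n_subjects)
t_values = mean / sem
```

With a genuine task effect in every voxel, the resulting t map is strongly positive; in a real analysis these t values would then be converted to p-values and corrected for multiple comparisons.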

### Multi-Voxel Pattern Analysis for fMRI data

Multi-voxel pattern analysis (MVPA) has gained increasing popularity for analyzing neuroimaging data. By switching from a univariate to a multivariate approach, MVPA focuses on the information contained in distributed patterns of activation across voxels ([77]; [74]; [57]), rather than in individual voxels. In general, MVPA uses supervised learning, which consists in learning a decision function $f$ that maps an input space $\mathcal{X}$ to a target space $\mathcal{Y}$. We denote a dataset composed of $N$ labeled examples as $D = \{(x_n, y_n)\}_{n=1}^{N}$, where $x_n \in \mathcal{X}$ and $y_n \in \mathcal{Y}$. $x_n$ is a spatial pattern recording activations across voxels at a specific time, e.g. values estimated from the GLM, and $y_n$ is the corresponding label representing the experimental condition or the category of stimulus. The function learned from $\{(x_n, y_n)\}_{n=1}^{N}$ is used to predict $y$ for unseen samples. When the value of $y$ is discrete, such as $y \in \{1, \ldots, C\}$, this procedure is known as classification; when the value of $y$ is continuous, it is known as regression. After a decoder has been learned, its performance is usually evaluated with a metric that reflects its generalization power on unseen data. Since fMRI datasets consist of a limited number of observations, a strategy named cross-validation [34] is used to obtain a robust evaluation of a decoder. Cross-validation requires splitting the dataset into subsets and repeatedly learning a decoder from several of them, through the following steps: i) data from one subset form the test set while data from all other subsets form the training set; ii) a decoder is learned from the training set; iii) the decoder makes predictions for the data in the test set. The predictions from all test subsets are then combined to measure the performance of the decoder. To assess whether the performance of a classifier is significantly different from the chance level, a non-parametric statistical test [103] is commonly used.
This strategy assumes that if there is no difference between the experimental conditions, shuffling the labels of the examples does not affect the performance of a classifier. Under this assumption, the labels are shuffled multiple times, and an accuracy is computed each time. The accuracies obtained from the shuffled labels form an empirical null distribution, from which the p-value of the accuracy obtained with the true labels is computed. If the p-value falls below a given threshold, e.g. p < 0.05, the accuracy computed from the true labels is significantly different from the chance level, which in turn indicates that the spatial patterns carry information distinguishing the experimental conditions.
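The cross-validation loop and the permutation test described above can be sketched as follows. This is an illustrative toy, not the thesis's setup: the nearest-centroid decoder, the synthetic two-condition patterns, and the number of permutations are all assumptions chosen to keep the example short.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_voxels = 40, 30

# Synthetic patterns for two conditions; condition 1 carries a mean shift.
y = np.repeat([0, 1], n_samples // 2)
X = rng.normal(0.0, 1.0, (n_samples, n_voxels)) + 0.8 * y[:, None]

def cv_accuracy(X, y, n_folds=5):
    """K-fold cross-validation with a nearest-centroid decoder."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    correct = 0
    for test in folds:                       # i) one subset is the test set
        train = np.setdiff1d(idx, test)
        # ii) "learn" the decoder: one centroid per class on the training set
        centroids = np.stack(
            [X[train][y[train] == k].mean(axis=0) for k in (0, 1)])
        # iii) predict the test set: nearest centroid wins
        d = np.linalg.norm(X[test][:, None, :] - centroids[None], axis=2)
        correct += np.sum(d.argmin(axis=1) == y[test])
    return correct / len(y)                  # predictions pooled over folds

acc = cv_accuracy(X, y)

# Permutation test: re-run the full cross-validation with shuffled labels
# to build the empirical null distribution of the accuracy.
null = np.array([cv_accuracy(X, rng.permutation(y)) for _ in range(200)])
p_value = (np.sum(null >= acc) + 1) / (len(null) + 1)
```

Note that the labels are shuffled *before* the whole cross-validation is repeated, so the null distribution reflects exactly the same procedure as the true-label accuracy.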

#### Multivariate Group analysis for fMRI

When provided with a group of subjects that performed the same experiment, multivariate group analysis aims at detecting the consistency of the individual measurements across the population. Most multivariate group analysis strategies consist in the following steps: i) performing cross-validation on each single-subject dataset to obtain one measurement per subject; ii) aggregating the individual measurements at the second level; iii) detecting the consistency of the individual measurements with a statistical test.

Similar to group analysis with the GLM, multivariate group analysis is also capable of generating a statistical map that indicates the regions correlating with the tasks. The procedure usually relies on a searchlight sphere to produce accuracy maps; the commonly used method consists in: i) sliding a searchlight sphere over each single subject's dataset to obtain one accuracy map per subject; ii) using a statistical model, e.g. the t statistic, to assess the consistency of the accuracies across subjects, voxel by voxel; iii) correcting for the multiple comparisons problem to generate a thresholded statistical map.
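Steps ii) and iii) above can be sketched on top of precomputed per-subject accuracy maps (step i is assumed done). The data below are synthetic and the Bonferroni correction is just one simple choice of multiple-comparisons correction made for the example; the chance level of 0.5 assumes a balanced two-class problem.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_voxels = 15, 200

# Hypothetical per-subject searchlight accuracy maps:
# voxels 50-99 carry information, the rest sit at the chance level (0.5).
acc_maps = rng.normal(0.5, 0.05, (n_subjects, n_voxels))
acc_maps[:, 50:100] += 0.15

# Step ii: one-sample t-test against chance, voxel by voxel.
chance = 0.5
diff = acc_maps - chance
t_map = diff.mean(axis=0) / (diff.std(axis=0, ddof=1) / np.sqrt(n_subjects))

# Step iii: Bonferroni-corrected one-sided threshold across voxels,
# yielding the thresholded statistical map.
t_crit = stats.t.ppf(1 - 0.05 / n_voxels, df=n_subjects - 1)
significant = t_map > t_crit
```

The informative voxels survive the corrected threshold while the chance-level voxels are rejected, mirroring how a thresholded group map highlights task-related regions.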

**Aim and scope of the thesis**

This thesis focuses on group-level analysis of functional neuroimaging data, which aims at identifying invariant traits in the data within the population. While group-level analyses with classical univariate techniques, such as the GLM, have been studied extensively, group-level strategies based on multivariate machine learning methods remain an open prospect. Multivariate group analysis is important from both the neuroscientific and the methodological perspective. It allows understanding the functional organization of the brain in healthy subjects and its dysfunctions in pathological populations. In addition, because multivariate group analysis exploits multiple machine learning techniques, it has important fields of application such as human-computer interaction, automated diagnosis of brain disorders, etc. This thesis therefore targets multivariate group analysis.

The thesis is organized as follows. Because of the differences in brain shape and size between subjects, it is challenging to detect common effects across subjects. In the literature, the standard multivariate group analysis strategy is hierarchical, combining the performances of within-subject models in a second-level analysis. The alternative strategy, inter-subject pattern analysis, works directly at the group level by using, e.g., a leave-one-subject-out cross-validation. It is natural to ask how the effects detected by these two strategies differ and why such differences exist. We therefore provide a thorough comparison of these two multivariate group-level schemes, using both a large number of artificial datasets and two real fMRI datasets.

**Table of contents :**

Abstract

Résumé

Acknowledgement

Bref Résumé

**1 Functional Neuroimaging: a primer**

1.1 Functional neuroimaging: introduction

1.1.1 Electroencephalography and magnetoencephalography

1.1.2 Functional magnetic resonance imaging

1.2 fMRI data analysis

1.3 Univariate techniques

1.3.1 The General Linear Model

1.3.2 Group analysis with the GLM for fMRI data

1.4 Multivariate techniques

1.4.1 Multi-Voxel Pattern Analysis for fMRI data

1.4.2 Multivariate Group analysis for fMRI

1.5 Aim and scope of the thesis

**2 Inter-subject pattern analysis: a straightforward and powerful scheme for group-level MVPA**

2.1 Introduction

2.2 Methods

2.2.1 Group-MVPA (G-MVPA)

2.2.2 Inter-Subject Pattern Analysis (ISPA)

2.2.3 Artificial data

2.2.4 fMRI data

2.2.5 fMRI data analysis

2.2.6 Classifiers, Statistical inference and performance evaluation

2.3 Results

2.3.1 Results for artificial datasets

2.3.2 Results for fMRI datasets

2.4 Discussion

2.4.1 G-MVPA and ISPA provide different results

2.4.2 ISPA: larger training sets improve detection power

2.4.3 ISPA offers straightforward interpretation

2.4.4 G-MVPA is more sensitive to idiosyncrasies

2.4.5 A computational perspective

2.5 Conclusion

2.6 Supplementary materials

2.6.1 Influence of the number of subjects and number of samples per subject

2.6.2 Influence of the within-subject covariance

2.6.3 Influence of spatial smoothing on the results obtained on the real fMRI datasets

2.6.4 Bias and variance analyses for G-MVPA and ISPA

2.6.5 Assessing false positive rates in G-MVPA and ISPA

**3 Inter-subject pattern analysis for functional neuroimaging: a formalization and a review**

3.1 Introduction

3.2 Formalization of inter-subject pattern analysis

3.2.1 Data description

3.2.2 Cross-validation, training and test set

3.2.3 A multi-source transductive transfer problem

3.2.4 Relationships with similar problems

3.3 Searching the literature for ISPA papers

3.4 Inter-subject pattern analysis: a review

3.4.1 Common space construction

3.4.2 Feature construction or/and feature selection

3.4.3 Predictive models/machine learning algorithms

3.4.4 Performance evaluation

3.4.5 Statistical assessment

3.5 Discussion

3.5.1 Handling inter-subject variability in the training set

3.5.2 Explaining the transductive nature of ISPA

3.5.3 Transferring from the training to test set

3.5.4 The gap between methods and applications in ISPA

3.5.5 Statistical models for ISPA

3.6 Conclusion

**4 On the well-foundedness of the multi-source transductive transfer formalization of ISPA**

4.1 Introduction

4.2 Experimental setting

4.2.1 Experimental data

4.2.2 General experimental setting

4.3 Study 1: multi-source transductive setting vs. classical machine learning setting

4.3.1 Experimental setting

4.3.2 Results

4.4 Study 2: Generalizing to new data vs. new individuals

4.4.1 Experimental setting

4.4.2 Results

4.5 Study 3: Number of subjects vs. sample size in the training set

4.5.1 Experimental setting

4.5.2 Results

4.6 Study 4: Performing ISPA with multi-source transductive transfer strategies

4.6.1 Methods

4.6.2 Experiment setting

4.6.3 Results

4.7 Discussion

4.7.1 The multi-source transductive transfer setting is valuable

4.7.2 Guidelines and suggestions for ISPA studies

4.7.3 The dimensionality of data affects generalization performance

4.7.4 Subspace alignment and stacked generalization provide different results

4.8 Conclusion

**5 Multivariate group-level analysis using Lp distance-based optimal transport**

5.1 Introduction

5.2 Background of optimal transport

5.2.1 Wasserstein barycenter of probability measures

5.2.2 Barycenter of unnormalized measures

5.3 Methods

5.3.1 TLp distance

5.3.2 Barycenter with the TLp distance

5.3.3 Artificial fMRI data

5.3.4 Real fMRI data

5.3.5 Experimental setting

5.3.6 Statistical assessment

5.4 Results

5.4.1 Results on artificial data

5.4.2 Results on real fMRI data

5.5 Discussion

5.5.1 Influence of the noise on the optimal transport methods

5.5.2 From artificial to real fMRI data

5.5.3 Criterion for choosing parameters

5.5.4 KBCM and TLp-BI

5.6 Conclusion

**6 Conclusion**