Auditory-visual stimulation and sensitivity to dog phobia

Multisensory processing of affective stimuli

The number of studies investigating multisensory affective processing has increased over the past decades, spurred by the explosion of research activity in multisensory processing. If combining different sensory information about the surrounding environment leads to a better apprehension of external events, it could be especially beneficial where affective events are concerned: correctly identifying and reacting to affective events should enhance the chances of survival. The investigation of how emotional signals coming from different sensory modalities are combined is only beginning; it started by examining emotional perception, i.e. the processes of identification and recognition of the affective states expressed by emotional stimuli. Studies have mostly focused on auditory-visual affective processing and have mainly used natural pairs of faces and voices conveying affective information (Klasen, Chen, & Mathiak, 2012). The following sections provide an overview of the current understanding of auditory-visual affective processing.

Auditory-visual integration of affective cues

Just as spatial cues from different sensory modalities can be integrated to produce a unified percept of an event's location, it has been demonstrated that different sensory emotional cues can be combined to identify the affective state expressed by a stimulus.
In a study from 2000, De Gelder and Vroomen presented their participants with pairs of faces and voices in which the affective state expressed by the face and by the tone of voice differed to varying degrees. The visual stimuli were pictures of faces drawn from a morphed continuum between extreme happiness and extreme sadness (see Figure 3.3 for an example of such a continuum), and the auditory stimuli were sentences pronounced in a sad or happy tone. For each pair of visual and auditory stimuli, presented synchronously, participants had to attend only to the face and indicate as quickly as possible whether they perceived the person as feeling happy or sad.
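As an illustration of how such a continuum can be constructed, the sketch below blends two face images by simple linear interpolation. The file names and the number of morph steps are assumptions for illustration; studies such as De Gelder and Vroomen's typically rely on dedicated morphing software that also warps facial geometry rather than merely blending pixel intensities.

```python
# Minimal sketch: a happy-sad morph continuum by linear interpolation
# (alpha blending) between two pre-aligned face images of equal size.
# File names and the number of steps are illustrative assumptions.
import numpy as np
from PIL import Image

happy = np.asarray(Image.open("face_happy.png"), dtype=float)
sad = np.asarray(Image.open("face_sad.png"), dtype=float)

n_steps = 7  # e.g., 7 morph levels from fully happy to fully sad
for i in range(n_steps):
    alpha = i / (n_steps - 1)              # 0.0 = happy, 1.0 = sad
    morph = (1 - alpha) * happy + alpha * sad
    Image.fromarray(morph.astype(np.uint8)).save(f"morph_{i}.png")
```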

Neural consequences of auditory-visual affective stimulation

Studies have mainly used natural pairs of faces and voices conveying affective information to investigate auditory-visual processing of affective stimuli. With such complex stimuli, semantic congruence is an important factor for the integration of the two sensory signals (Laurienti et al., 2004). Accordingly, two approaches have been used to examine the neural consequences of the auditory-visual combination of affective information: (1) contrasting the neural response to auditory-visual affective stimulation with the responses to the corresponding unimodal components, and (2) contrasting the neural response to emotionally congruent auditory-visual stimulation with emotionally incongruent bimodal stimulation (see Figure 3.4).
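The logic of these two contrasts can be made concrete with a small sketch on hypothetical per-trial response amplitudes (e.g., BOLD betas or ERP amplitudes) for a single region of interest. The simulated values, variable names, and the choice of the max criterion are assumptions for illustration; super-additivity (AV > A + V) is another criterion used in the literature.

```python
# Illustrative sketch of the two contrast logics described above, on
# hypothetical (simulated) per-trial response amplitudes for one region.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
av_congruent = rng.normal(1.2, 0.3, 40)    # bimodal, emotionally congruent
av_incongruent = rng.normal(0.9, 0.3, 40)  # bimodal, emotionally incongruent
audio_only = rng.normal(0.7, 0.3, 40)
visual_only = rng.normal(0.8, 0.3, 40)

# (1) Bimodal vs. best unimodal response ("max criterion"): compare the
# bimodal response against the unimodal condition with the larger mean.
best_unimodal = audio_only if audio_only.mean() > visual_only.mean() else visual_only
print(stats.ttest_ind(av_congruent, best_unimodal))

# (2) Emotionally congruent vs. incongruent bimodal stimulation.
print(stats.ttest_ind(av_congruent, av_incongruent))
```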

The time-course of auditory-visual integration of affective cues

Electrophysiological techniques such as magnetoencephalography (MEG) and electroencephalography (EEG) offer high temporal resolution and make it possible to monitor the time-course of multisensory affective processing in the brain. Electrophysiological studies with auditory-visual affective stimuli have provided strong evidence of an early integration of multisensory information in the processing of affect.
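One common analysis behind such evidence is the additive-model comparison, in which the evoked response to bimodal stimulation is contrasted with the sum of the unimodal evoked responses; deviations [AV - (A + V)] at early latencies are taken as signs of early integration. The sketch below illustrates the computation on placeholder data; a real analysis would use baseline-corrected ERPs and appropriate statistics.

```python
# Sketch of the additive-model comparison used in EEG/MEG multisensory
# research. The ERP arrays are random stand-ins for real averaged data.
import numpy as np

fs = 500                              # sampling rate (Hz), an assumption
t = np.arange(-0.1, 0.5, 1 / fs)      # epoch from -100 ms to 500 ms

rng = np.random.default_rng(1)
erp_av = rng.normal(0, 1, (60, t.size)).mean(axis=0)  # bimodal ERP
erp_a = rng.normal(0, 1, (60, t.size)).mean(axis=0)   # auditory-only ERP
erp_v = rng.normal(0, 1, (60, t.size)).mean(axis=0)   # visual-only ERP

interaction = erp_av - (erp_a + erp_v)  # additive-model residual

# Inspect an early post-stimulus window (e.g., 100-200 ms).
early = (t >= 0.10) & (t <= 0.20)
print("mean early AV - (A+V) deviation:", interaction[early].mean())
```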

Consequences of auditory-visual affective stimulation on emotion identification

It is now well established that redundant affective cues from the auditory and visual modalities facilitate the identification of the affective state conveyed by bimodal events. This effect of the combination of auditory and visual cues on the identification of events' emotional significance has been demonstrated with categorization tasks, in which participants had to choose which of two or more affective states corresponded to the one conveyed by a stimulus.

Faster identification of events’ emotional significance

The categorization of an affective state expressed by a person is faster with redundant auditory-visual affective stimulation than with facial expression or vocal tone alone (Collignon et al., 2008; Li et al., 2013; Pourtois et al., 2005). Conversely, a semantic emotional incongruence between the affective information conveyed by the face and the voice slows the response (De Gelder et al., 1999; De Gelder & Vroomen, 2000; Dolan et al., 2001; Föcker et al., 2011). These results demonstrate that a redundant signal effect occurs not only in the detection and recognition of neutral objects, but also in the identification of an emotional state. Moreover, the fact that emotionally incongruent information slows categorization shows that the gain in speed is linked to the integration of affective semantic cues delivered by the visual and auditory modalities. A study by Collignon and colleagues further supported the involvement of multisensory integration in the faster behavioral responses observed with multisensory affective stimuli. Their participants had to categorize the affective state of a person based either on the bimodal presentation of voices and dynamic faces or on unimodal stimulations. Participants' reaction times showed a redundant signal effect that violated the race model (Collignon et al., 2008), meaning that the speed of their responses in the bimodal conditions can only be explained by an integration of the auditory and visual cues. Furthermore, the gain in categorization speed obtained when both the facial expression and the vocal tone were presented was greater when the unimodal affective cues were noisy. This is consistent with the inverse effectiveness principle, a key principle of multisensory integration (see Chapter 1).
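The race model test mentioned above can be sketched as follows: under Miller's race model inequality, the cumulative distribution of bimodal reaction times may never exceed the sum of the unimodal cumulative distributions at any latency; where it does, integration (coactivation) rather than statistical facilitation is inferred. The reaction time samples below are hypothetical placeholders.

```python
# Minimal sketch of Miller's race model inequality test:
# F_AV(t) <= F_A(t) + F_V(t) for every latency t.
import numpy as np

def ecdf(rts, grid):
    """Empirical cumulative distribution of reaction times on a grid."""
    return np.array([(rts <= t).mean() for t in grid])

rng = np.random.default_rng(2)
rt_av = rng.normal(420, 50, 200)   # bimodal reaction times (ms), placeholder
rt_a = rng.normal(500, 60, 200)    # auditory-only, placeholder
rt_v = rng.normal(490, 60, 200)    # visual-only, placeholder

grid = np.linspace(300, 700, 81)
bound = np.minimum(ecdf(rt_a, grid) + ecdf(rt_v, grid), 1.0)
violation = ecdf(rt_av, grid) - bound

# Positive values at fast latencies indicate a race model violation,
# i.e., bimodal responses faster than any parallel race of unimodal
# processes could produce.
print("max violation:", violation.max())
```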

Table of contents:

INTRODUCTION
1. Virtual reality
2. Multisensory processing
2.1. Multisensory integration
2.2. Neural consequences of multisensory stimulation
2.2.1. Superior Colliculus responses to multisensory stimuli
2.2.2. Cortical responses to multisensory stimuli
2.3. Behavioral consequences of multisensory stimulation
3. Multisensory processing of affective stimulation
3.1. Affect
3.2. Multisensory processing of affective stimuli
3.2.1. Auditory-visual integration of affective cues
3.2.2. Neural consequences of auditory-visual affective stimulation
3.2.3. Consequences of auditory-visual affective stimulation on emotion identification
3.2.4. Consequences of auditory-visual affective stimulation on the induction of an affective state
4. Space and Affect
4.1. The space around us
4.2. Affective events in the space around us
4.2.1. Processing of affective events located at close distances from the body
4.2.2. Modulation of peri-personal space boundaries in the presence of affective events
EXPERIMENTAL CONTRIBUTIONS
5. General methodology
5.1. Stimuli
5.2. Participants
5.3. Methodology specific to the studies in virtual reality
6. Auditory-visual stimulation and sensitivity to dog phobia
6.1. Description and main findings of the study
6.2. Paper A
7. Auditory-tactile stimulation and sensitivity to dog phobia
7.1. Description and main findings of the study
7.2. Paper B
8. Auditory-visual stimulation and sensitivity to crowd phobia
8.1. Introduction
8.2. Sensitivity to Crowd Phobia
8.2.1. Development of the Crowd Phobia Questionnaire (CP-Q)
8.2.2. Selection of participants for the experimental navigation in virtual reality
8.3. Virtual environment containing crowds
8.3.1. Virtual reality setup
8.3.2. Virtual environment containing crowds
8.4. Experimental navigation in virtual reality
8.4.1. Methods
8.4.2. Results
8.4.3. Discussion
8.4.4. Conclusion
9. General Discussion
ANNEXES
Dog Phobia Questionnaire (French version)
Crowd Phobia Questionnaire (French version)
Cybersickness Questionnaire (French version)
Presence Questionnaire From the I-group (French version)
Liebowitz Social Anxiety Scale (French version)
REFERENCES
