Design of Mixed Reality Training for Automated Cars drivers


Introduction

In the last section of the previous chapter, we explained why prior training is crucial to enable safe interaction between the driver and the automated car. Without training, drivers might interact with the system in the wrong way and thus put themselves and other road users at risk. The few research works focused on this topic have highlighted the inadequacy of traditional training (e.g. the owner's manual) and the necessity of more elaborate training.
Operating a traditional car is already a complex activity that includes tasks categorized in three levels [McCall et al., 2018]: operational level tasks, which include low-level interactions (e.g. pressing a pedal, steering); tactical level tasks, which represent more complex manoeuvres (e.g. obstacle avoidance); and strategic level tasks, which consist of longer-term objectives (e.g. navigation planning). Although these tasks may sound very complex, the corresponding skills are acquired by humans with practice in driving schools and improved with everyday experience.
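This three-level taxonomy can be sketched as a simple enumeration; the following Python snippet is purely illustrative, and the task names are hypothetical identifiers for the examples given in the text:

```python
from enum import Enum

class TaskLevel(Enum):
    """The three levels of driving tasks described by McCall et al. [2018]."""
    OPERATIONAL = 1  # low-level interactions: pressing a pedal, steering
    TACTICAL = 2     # more complex manoeuvres: obstacle avoidance
    STRATEGIC = 3    # longer-term objectives: navigation planning

# Illustrative mapping of example tasks to levels (not an exhaustive taxonomy).
EXAMPLE_TASKS = {
    "press_brake_pedal": TaskLevel.OPERATIONAL,
    "steer": TaskLevel.OPERATIONAL,
    "avoid_obstacle": TaskLevel.TACTICAL,
    "plan_route": TaskLevel.STRATEGIC,
}

def level_of(task: str) -> TaskLevel:
    """Look up the level of one of the example tasks."""
    return EXAMPLE_TASKS[task]
```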
When it comes to automated cars, the driver needs, besides these skills, to acquire additional knowledge related to the automated system (its capabilities and limits), their role (the activities they are allowed, not allowed and required to perform) and some (motor) skills related to the driving task, the interaction with the vehicle equipment (HMI) and the take-over. We will analyze the training requirements in more detail in Chapter 4. While a subset of this novel information can be acquired out of the car, the process of familiarization with the automated system (which moreover would mainly operate on highways), its equipment and the take-over requires actual driving. However, familiarizing while driving in real traffic has several drawbacks. First, it would be unsafe and dangerous for the drivers themselves and for other road users. Second, it would be hard to generalize or diversify the driving scenario, since it would depend on real traffic situations. Last, it is demanding in terms of time, cost and availability of staff and trainers.
For these reasons, it is necessary to explore alternatives to immerse the driver in risk-free driving environments. One of the possibilities is given by environments based on Mixed Reality.
With its possibility of real-time and pseudo-natural interaction, Mixed Reality represents a potential solution to provide an immersive environment where drivers can be trained and familiarized with the car in complete safety. The use of this technology alters the user's perception and transforms the way in which activities are carried out.
Designing a learning environment in mixed reality requires making choices in terms of visualization of the information, manipulation techniques to modify the environment and pedagogical content.
In this chapter, to meet our goal, we define what a MR system should provide in terms of immersion by analyzing its visualization and manipulation characteristics, and we examine good practices to avoid possible conflicts that a user may experience due to the altered perception of the environment. Subsequently, we present how MR has been used for training purposes and how users can be evaluated in terms of their ability to transfer skills to the real environment.

The Mixed Reality spectrum

Over the past decades, the interactions between humans and computers, humans and the environment, and computers and the environment were well distinguished and independently defined. In recent times, technological progress in perception, processing and computer vision has unlocked possibilities to understand the user's surroundings and to enhance human-computer interaction with inputs coming from the environment itself (e.g. user location and tracking, object recognition, spatial mapping, and so on).
Mixed Reality is the result of enhancing the user's perception of the real world by combining human inputs, computer-generated content and the surrounding environment.
Figure 3.1: Venn diagram of Mixed Reality. Image from microsoft.com
The term “Mixed Reality” was introduced by Milgram and Kishino [1994]: “the most straightforward way to view a Mixed Reality environment is one in which real world and virtual world objects are presented together within a single display, that is, anywhere between the extrema of the virtuality continuum”. Mixed Reality, therefore, can be considered an umbrella term which refers to a variety of hardware and software combinations used to create Virtual Reality (VR) and Augmented Reality (AR) applications.
Figure 3.2: The Mixed Reality spectrum. Image from microsoft.com
At the left extreme of the Mixed Reality spectrum (Figure 3.2), there is the real environment consisting solely of real objects and with no computer-generated elements; at the opposite extreme, we have fully Virtual Environments consisting solely of virtual objects, in which no elements from the real environment are present.
Mixed Reality, thus, is everything in between: it can either augment the real world with virtual features or augment the virtual world with real features [Wagner et al., 2009]; in other words, it blends the physical and the virtual worlds by anchoring digital content to real-world objects. Users can interact with virtual objects as they would with real ones, and these virtual objects respond to user inputs and to changes in the real environment as well. This is made possible by advanced sensors and processing algorithms that allow for real-time scanning of the environment, digital reconstruction and accurate object tracking. Increasing the amount of reality or virtuality means moving toward purely real or purely virtual experiences, respectively. Where systems fall on the spectrum tends to be defined by how much interaction and awareness there is between the digital and physical environments. In our analysis, we will consider Virtual Reality and Augmented Reality technologies at the extremes of the Mixed Reality continuum.
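The anchoring of digital content to real-world objects mentioned above can be sketched, in a highly simplified form, as a per-frame pose update driven by the tracker. The names (`Pose`, `VirtualObject`, `update_anchored_object`) are hypothetical, and poses are reduced to 3-D translations, ignoring orientation:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    """A position in 3-D space (orientation omitted for simplicity)."""
    x: float
    y: float
    z: float

@dataclass
class VirtualObject:
    name: str
    offset: Pose        # placement relative to the real-world anchor
    pose: Pose = None   # world pose, filled in each frame

def update_anchored_object(obj: VirtualObject, anchor_pose: Pose) -> None:
    """Re-place the virtual object whenever the tracker reports a new
    pose for the real object it is anchored to."""
    obj.pose = Pose(anchor_pose.x + obj.offset.x,
                    anchor_pose.y + obj.offset.y,
                    anchor_pose.z + obj.offset.z)

# One frame: the tracker reports the real object at (1.0, 0.0, 2.0),
# so the virtual label floats 0.2 units above it.
label = VirtualObject("warning_label", offset=Pose(0.0, 0.2, 0.0))
update_anchored_object(label, Pose(1.0, 0.0, 2.0))
```

In a real MR runtime this update runs every frame, so the virtual content follows the tracked object as the user or the object moves.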
Virtual Reality is a technology which takes users out of the real world and entirely immerses them in a digital, computer-generated environment. According to Arnaldi et al. [2018], the objective of VR is “to allow the user to virtually execute a task while believing that they are executing it in the real world”. Similarly, LaValle [2016] defines VR as follows: “Inducing targeted behavior (i.e. the VR experience) in an organism (i.e. the individual who is living the experience) by using artificial sensory stimulation (i.e. the way in which one or more senses of the organism become hijacked), while the organism has little or no awareness of the interference (i.e. to what extent the organism believes that the virtual world is the actual one)”.
Augmented Reality is a technology that overlays digital content on real-world objects. AR keeps the real world central to the experience and enhances it with information related to the environment.
Moving from AR to VR on the MR continuum implies that some of the real elements of the world are replaced by their virtual counterparts. This has important implications for:
• the extent to which the user is able to see the real environment;
• the possibility for the user to perform motor actions within the real environment; and in turn on
• the coherence between the training environment and the environment in which the acquired skills will be applied.
The objective of our research is to explore the Mixed Reality continuum to identify adequate combination(s) of immersion in order to maximize the transfer of the skills acquired in the training environment to the real situation.

Immersion in Mixed Reality

Around the term immersion, researchers have given different definitions and interpretations over the years, in particular in the context of Virtual Reality. Slater gives the following definition of immersion: “the more that a system delivers displays (in all sensory modalities) and tracking that preserves fidelity in relation to their equivalent real-world sensory modalities, the more that it is immersive” [Slater, 2003]. Immersion, thus, refers to what the system delivers from an objective point of view and to technological characteristics that can be objectively assessed. Similarly, for Ragan et al. [2010], immersion is a function of the simulator's technology rather than of the user's experience in the virtual environment.
For Mestre et al. [2006], immersion is achieved by removing as many real-world sensations as possible and substituting them with the corresponding virtual ones. Bailenson et al. [2008] identify unobtrusive tracking and the minimization of real-world sensory information as two of the components of immersion. This concept has been subsequently broadened by Bowman and McMahan [2007], who suggest that “immersion is not all or nothing, [. . . ] but rather a multidimensional continuum”, and thus “we should not consider immersion as a single construct, but rather as the combination of many components”. In fact, when virtual counterparts substitute the real thing, several of the user's sensory channels (visual, auditory, tactile, etc.) are stimulated in different ways.
Immersion is also considered responsible (though not exclusively) for the extent to which a user can experience the feeling of “being in” or “existing in” the VE in which they are immersed. We refer to the user's subjective and context-dependent response to a virtual experience with the term “presence”, which is defined as the psychological, perceptual and cognitive consequence of immersion [Mestre et al., 2006]. However, an analysis of presence is out of the scope of this thesis.
A consideration that emerges when extending the concept of immersion from Virtual Reality to the entire Mixed Reality spectrum is that while in pure VR it is possible and adequate to classify systems in terms of ordinal immersion (VR system A is more immersive than VR system B because it delivers displays and tracking that better preserve fidelity in relation to the real world [Slater, 2003]), in MR, systems should be classified according to a nominal scale of immersion. In fact, a mixed reality system is strongly coupled with the real environment in which it is used and relies on it to deliver (in all sensory channels) effective mixed reality experiences.
We thus define immersion in Mixed Reality as the set of objective technological characteristics that produce artificial stimuli in all the sensory channels. Immersion has an effect on how tasks are performed in the virtual environment; one reason for this is that information may be presented incongruently across sensory channels. In this work, we focus on the question of coherence between the visualization and the manipulation spaces (the two components which, in our case, are the most significant descriptors of immersion) and on the possible sensorimotor incoherences that these components may generate.
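The view of immersion as a combination of components rather than a single scalar can be illustrated with a small sketch: each system is described by a profile of nominal and quantitative components, and systems are compared componentwise instead of on one ordinal scale. The component names and the two example systems below are hypothetical illustrations, not measurements from this thesis:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ImmersionProfile:
    """Immersion as a combination of components [Bowman and McMahan, 2007],
    not a single scalar. The chosen components are illustrative."""
    display: str            # e.g. "HMD", "monitor", "projection"
    field_of_view_deg: int  # horizontal field of view of the display
    tracking: str           # e.g. "6-DoF head tracking", "none"
    sees_real_body: bool    # whether the user's own body remains visible

# Two hypothetical systems: neither dominates the other on every
# component, so no single ordinal ranking captures their immersion.
hmd_vr = ImmersionProfile("HMD", 110, "6-DoF head tracking", False)
fixed_sim = ImmersionProfile("monitor", 60, "none", True)
```

Comparing the two profiles shows the trade-off: the head-mounted display offers a wider field of view and tracking, but blocks the view of the user's own body, which the monitor-based simulator preserves.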


Visualization space

In order to display the virtual content, different graphic rendering devices can be used, such as traditional screens, projection-based displays and head-mounted displays (Figure 3.7). The decisions taken at the visualization-space level influence the visual cues and, consequently, the way in which users are able to perceive and observe the environment.
According to Milgram and Kishino [1994], there are six classes of MR display environments:
1. monitor-based video displays (“window-on-the-world”)
2. same as 1, but using HMDs
3. HMDs equipped with see-through capability with which computer-generated graphics can be optically superimposed (i.e. optical see-through)
4. same as 3, but using video viewing of the external world (i.e. video see-through)
5. completely graphic display environments to which video “reality” is added
6. completely graphic environment in which real physical objects in the user’s environment play a role in the computer-generated scene.
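The six classes above can be captured in a small enumeration; this is merely an illustrative encoding of Milgram and Kishino's taxonomy, and the grouping helper is a hypothetical addition:

```python
from enum import Enum

class MRDisplayClass(Enum):
    """Milgram and Kishino's [1994] six classes of MR display environments."""
    MONITOR_BASED = 1        # "window-on-the-world" video displays
    HMD_VIDEO = 2            # same as 1, but using HMDs
    OPTICAL_SEE_THROUGH = 3  # graphics optically superimposed on the world
    VIDEO_SEE_THROUGH = 4    # same as 3, with video viewing of the world
    GRAPHIC_PLUS_VIDEO = 5   # graphic environments with video "reality" added
    GRAPHIC_PLUS_REAL = 6    # graphic environments where real objects play a role

def is_see_through(c: MRDisplayClass) -> bool:
    """See-through classes (3 and 4) keep the real world directly in view."""
    return c in (MRDisplayClass.OPTICAL_SEE_THROUGH,
                 MRDisplayClass.VIDEO_SEE_THROUGH)
```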
The technical characteristics and the nature itself of the visualization device may have an impact on:
• depth perception – the characteristics of the device affect its ability to reproduce monocular depth cues (e.g. motion parallax, texture gradient) and binocular depth cues (e.g. binocular parallax)
• the possibility to display the virtual environment at a 1:1 scale
• the possibility for the user to see their own body – the use of Head-Mounted Displays may block the view of the real world, preventing users from seeing their own body and depriving them of visual proprioception. This should be taken into account if the user's interaction is considered important.

Table of contents:

1 Introduction
1.1 Context of the thesis
1.2 Motivations
1.3 Objectives of the Thesis
1.4 Structure of the document
I Context and theoretical background
2 Driving Automation
2.1 Introduction
2.1.1 Perceiving the road environment
2.1.2 Levels of Driving Automation
2.2 Level 3 – Conditionally Automated Vehicles
2.2.1 Human Factors in Autonomous Driving
2.2.2 Situation Awareness: the Out-Of-The-Loop problem
2.2.3 Transition of Control
2.2.4 Evaluation of Take-Over
2.2.5 Factors influencing the take-over
2.3 The need for training
2.3.1 Current state of driver’s training in automated vehicles
2.4 Concluding remarks
3 Design of Mixed Reality Training for Automated Cars drivers
3.1 Introduction
3.2 The Mixed Reality spectrum
3.2.1 Immersion in Mixed Reality
3.2.2 Sensorimotor Incoherences
3.2.3 Stimulation vs Information correspondence
3.2.4 The need for light training systems
3.3 Training and Learning in Mixed Reality
3.3.1 Immersive Training
3.3.2 Mixed Reality Environments for Learning
3.3.3 Training Evaluation and Transfer of Training
3.4 Concluding remarks
II Training design, experimental platforms and user studies
4 Experimentation platform
4.1 Introduction
4.2 The characteristics of the target automated system
4.3 Proposed training design
4.3.1 Rasmussen’s SRK model
4.3.2 Using the SRK model for transition of control
4.3.3 SRK training requirements
4.4 The Driving and Onboard Activity Simulator (DOAS)
4.4.1 Vehicles and roads
4.4.2 On-road Driving Scenarios
4.4.3 Non-Driving Related Tasks
4.5 Empirical validation of the VR manipulation interfaces
4.5.1 Realistic interaction
4.5.2 6-DoF controller based interaction
4.5.3 Non-Driving Related Task
4.5.4 Experimental design
4.5.5 Results
4.6 The Light Virtual Reality Training System
4.6.1 The hardware
4.6.2 The Virtual Learning Environment
4.6.3 Preventive measures for Simulator Sickness
4.7 Concluding remarks
5 The role of immersion in VR-based training
5.1 Introduction
5.2 Immersion characteristics of the systems
5.2.1 Light Virtual Reality system
5.2.2 Fixed-base driving simulator
5.3 Experimental design
5.3.1 Training
5.3.2 The test drive
5.3.3 Measures
5.3.4 Procedures
5.4 Results
5.4.1 Self-report measures
5.4.2 Performance measures
5.4.3 Stress and confidence during autonomous driving
5.5 Discussion
5.5.1 Self-reported measures
5.5.2 Simulator Sickness
5.5.3 Objective and performance measures
5.5.4 About learning-by-driving
5.6 Concluding remarks
6 On-Road Evaluation of VR and AR training
6.1 Introduction and motivations
6.2 Characteristics of Immersion of the VR and AR systems
6.2.1 The Light Augmented Reality Training System
6.2.2 The Light Virtual Reality Training System
6.3 Wizard of Oz for Autonomous Driving
6.4 Experimental Design
6.4.1 The training
6.4.2 The target vehicle
6.4.3 The Wizard of Oz setup
6.4.4 The test drive
6.4.5 Participants
6.5 Results
6.5.1 Objective measures
6.5.2 Self-reported measures
6.6 Discussion
6.6.1 Comparison with the previous user study
6.7 Concluding remarks
7 Discussion and Conclusion
7.1 Introduction
7.2 Immersion
7.3 Training Content
7.4 Training Evaluation
7.5 Open questions and perspectives
7.6 Epilogue
Appendices
A Scientific Production
B Questionnaires
B.1 The role of immersion in VR-based training
B.2 On-Road Evaluation of VR and AR training
C Résumé en Français
8 Bibliography
