Virtual environment is a term often used to describe a computer-generated synthetic space, similar to virtual world. More strictly speaking, however, as Ellis (1991b) explained, an environment is the theater of human activity, so virtual environment is a human-centered notion: it does not refer only to the digital data that forms an artificial world. For example, according to Fox et al. (2009): “A virtual environment is a digital space in which a user’s movements are tracked and his or her surroundings rendered, or digitally composed and displayed to the senses, in accordance with those movements.” This definition emphasizes that a virtual environment is a virtual space that reacts to the user’s movements and changes accordingly.
Ellis (1991b) identified three parts that form a virtual environment: content, geometry and dynamics. The content is often organized as a scene graph with a clear hierarchical structure and interrelationships between objects. Each object possesses a state vector containing properties such as position, orientation, velocity and color. The geometry is a description of an environmental field of action that has dimensionality, metrics and extent, and the dynamics of an environment are the rules of interaction among its contents describing their behavior, such as physical laws (gravity, object collision, etc.). More details on the description of virtual environments can be found in the book by Ellis (1991a).
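The content component described above can be illustrated with a minimal sketch of a scene-graph node carrying a state vector. The class and field names below are illustrative assumptions, not part of Ellis's formalism:

```python
from dataclasses import dataclass, field

# A minimal sketch of the "content" component: a scene-graph node whose
# state vector holds position, orientation, velocity and color, and whose
# children express the hierarchical structure between objects.
@dataclass
class SceneNode:
    name: str
    position: tuple = (0.0, 0.0, 0.0)
    orientation: tuple = (0.0, 0.0, 0.0, 1.0)  # quaternion (x, y, z, w)
    velocity: tuple = (0.0, 0.0, 0.0)
    color: tuple = (1.0, 1.0, 1.0)
    children: list = field(default_factory=list)

    def add(self, child: "SceneNode") -> "SceneNode":
        """Attach a child node and return it, so calls can be chained."""
        self.children.append(child)
        return child

    def traverse(self, depth=0):
        """Depth-first walk over the hierarchy, yielding (depth, node)."""
        yield depth, self
        for c in self.children:
            yield from c.traverse(depth + 1)

# Usage: a room containing a table that carries a cup.
root = SceneNode("room")
table = root.add(SceneNode("table", position=(1.0, 0.0, 2.0)))
cup = table.add(SceneNode("cup", position=(0.0, 0.8, 0.0), color=(0.9, 0.1, 0.1)))
print([n.name for _, n in root.traverse()])  # → ['room', 'table', 'cup']
```

Traversing the graph in depth order is what a renderer would do to accumulate parent-to-child transforms.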
In virtual reality applications, the diversity of origins of the worlds represented in the virtual environment makes virtual reality more than a simple copy of the “reality” that we live in. As summarized by Fuchs et al. (2011), the virtual world can be a simulation of certain aspects of the real world, but also a completely symbolic or imaginary world. The visual representation of objects can vary depending on our needs, and spatial, temporal and physical laws may or may not apply in the virtual world. For example, we can visualize and interact with the structure of molecules or the flow of fluids at different scales (Férey et al., 2009; Bryson, 1996), we can change the speed of light to better visualize and understand relativistic phenomena (Doat et al., 2011), or the entire world can simply be a figment of the imagination of an artist or a science-fiction writer.
Human in Virtual Environments
As discussed before, virtual reality is a human-centered research domain. Technical advances concerning the modeling of the virtual world and the design of behavioral interfaces all serve to provide the human in the virtual environment with a better feeling of “being in another world” (presence). Understanding human activity and sensation at the perceptual and cognitive levels is therefore key to the success of a virtual reality system.
The interaction between the user and the 3D virtual world begins once he/she is inside the virtual environment, and these human activities can be divided into basic behaviors named Virtual Behavioral Primitives (VBP) (Fuchs et al., 2011). VBPs can be grouped into five categories:
• observing the virtual world.
• moving/navigating in the virtual world.
• acting on the virtual world.
• communicating with others.
• application control.
In the list above, except for application control, these activities are very similar to what people do in the real world. Observation is the first action triggered once we enter a virtual environment, and it is relatively passive: the interactive part is that when the user turns his/her head or makes ocular movements (captured with eye tracking), the system adapts the visual and audio rendering accordingly. Moving, or navigation, in the virtual world is also a basic interaction that lets the user accomplish various tasks such as object searching or transporting, wayfinding, sightseeing, etc. Besides natural walking, many virtual navigation metaphors have been developed to provide more efficient virtual viewpoint control under specific conditions (more details are presented in Chapter 2). “Acting”, a vaguer notion, is further broken down into object selection, manipulation and symbolic input by Bowman et al. (2004). The way we interact with virtual objects is still quite different from the way we interact with real objects due to technical limitations (e.g., the lack of detailed and robust haptic feedback), so many interaction metaphors have been proposed to enable other forms of interaction (Hand, 1997).
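The viewpoint adaptation mentioned above, where the rendering follows the user's tracked head, can be sketched in its simplest form: the virtual camera pose is the user's virtual position composed with the physically tracked head offset. The function name and the additive composition are illustrative assumptions; real systems compose full 6-DOF transforms:

```python
# Hypothetical sketch of head-tracked viewpoint control: the rendered
# camera position follows the tracked head offset within the workspace,
# so leaning physically shifts the virtual viewpoint accordingly.
def camera_position(vehicle_pos, head_offset):
    """Virtual camera = virtual vehicle position + tracked head offset,
    composed per axis (a simplification: rotation is ignored here)."""
    return tuple(v + h for v, h in zip(vehicle_pos, head_offset))

# The user leans 0.2 m to the right inside the tracked workspace:
print(camera_position((10.0, 1.7, 5.0), (0.2, 0.0, 0.0)))  # → (10.2, 1.7, 5.0)
```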
Computer Supported Collaboration
The development of communication technology changes the way people work together: for example, the telephone and the telegraph made the exchange of information much faster than paper-based media. Later on, with the boom of the Internet, emails, forums and video conferences created a tighter link among remote collaborators. However, networked collaboration still faces many difficulties in terms of communication and coordination. For example, having a group of coders contribute to the same project remains a non-trivial job despite versioning tools like SVN or Git.
In 1984, Irene Greif of MIT and Paul M. Cashman of Digital Equipment Corporation organized a workshop attended by people from various domains interested in using technology to support people in their work. In this workshop the term Computer Supported Cooperative Work (CSCW) was first mentioned (Grudin, 1994), and it has since become a research domain attracting worldwide interest. CSCW studies how computer systems could enable a group of people connected by a network to work together efficiently, or as stated by Carstensen and Schmidt (1999): “how collaborative activities and their coordination can be supported by means of computer systems”. More details about the history, state of the art and research issues of CSCW can be found in the book of Beaudouin-Lafon (1999).
Characteristics of Collaborative Work
We can learn many things from observing people working and collaborating in the real world to gain insight into the nature of collaborative work (Churchill and Snowdon, 1998). Here are some general points summarizing the characteristics of collaborative work:
Collaboration is human-centered The number and profile of the people involved in the task define the general organization of the collaboration. For example, small groups are likely to work together in real time with flexible schedules, communicate informally and share information at minimal cost, while large groups need much more effort for coordination and more adapted communication technology. The profile of the collaborators also matters: if one person is far more experienced than the others, it is better to adopt a leader-follower mode in which everyone works around this leader. However, if working capacity is homogeneously distributed and, especially, each person has his/her own expertise, it is better to share responsibility for the task equally.
Collaboration is task-oriented The nature of the task defines the spatial and temporal constraints of the collaboration; in other words, people sometimes need to collaborate because of spatial or temporal issues, and the task cannot be done without combining the efforts of different actors. For example, tasks like painting a wall or repairing a car together require people to be co-located, while other tasks, like operating a live TV show or holding a video conference, require synchronous actions. Moreover, complex tasks often consist of different components with inherent logical links, and the task can only be accomplished by following a certain procedure.
In some collaborative tasks people play similar roles, e.g., two workers moving a heavy object (the piano movers’ problem) or two pilots controlling the landing of a plane. In other tasks collaborators can have distinct roles with different associated competences (Pouliquen-Lardy et al., 2014), e.g., the relationship between trainer and trainee, or between field operator and online assistant.

Awareness is the base of coordination Awareness is one’s knowledge of task-related activities, especially the activities of others. As stated by Dourish and Bellotti (1992), awareness is an “understanding of the activities of others, which provides a context for your own activity”. This “context” allows an individual to evaluate his/her actions with respect to group goals and work progress. In addition, awareness may also refer to knowledge of the state of a task-related object (e.g., the history and current state of the shared document in a group editing task) and of the working atmosphere (whether there is an emergency, whether people are under pressure, etc.).
Immersive Collaborative Virtual Environment
As stated in the previous section, CVEs have developed as a convergence of research interests within the CSCW and VR communities. On the one hand, most existing virtual reality systems (e.g., CAVEs or HMD-based systems) can offer a high level of multi-sensory immersion for a single user (except the multi-user systems that we will discuss in Chapter 2), so it is interesting to connect these (remote) immersive systems through CVE technology to enable group immersion. On the other hand, with the recent progress of telecommunication technology and computer science (computer graphics, high-performance computing, etc.), many low-level issues of CVEs are already solved or, at least, are no longer the bottleneck for CVE usage. However, desktop-based CVEs with traditional human-computer interfaces still have limited support for social human communication compared to face-to-face interaction in the real world, and the lack of sensory feedback makes it difficult for users to feel they are “being in the virtual world” and impairs their task performance (Narayan et al., 2005; Nam et al., 2008). It is therefore important to introduce behavioral interfaces for CVEs to convey more social cues for human communication and to improve the level of immersion.
As a consequence, more immersive CVEs have been implemented with various hardware and software solutions. Taking the “perception, decision and action” loop shown in Figure 1.2, we can apply it in a multi-user situation and extend it to illustrate the conceptual framework of immersive CVEs (Figure 1.9). In this shared virtual world, the users’ “perception, decision and action” loops are interconnected, as one user’s action can affect other users’ perception of the virtual world.
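The interconnection of the loops can be sketched as follows: each user perceives the shared world state, decides, and acts, and every action mutates the state that all users perceive at the next step. The class and method names are illustrative assumptions, not part of the framework in Figure 1.9:

```python
# Minimal sketch of interconnected "perception, decision, action" loops
# in a shared virtual world: one user's action changes the state that
# the other users perceive on their next loop iteration.
class SharedWorld:
    def __init__(self):
        self.state = {"counter": 0}   # stand-in for the shared scene state

class User:
    def __init__(self, name):
        self.name = name

    def perceive(self, world):
        return dict(world.state)      # each user samples the shared state

    def decide(self, percept):
        return ("increment", 1)       # trivial stand-in for a decision

    def act(self, world, action):
        kind, amount = action
        if kind == "increment":
            # Mutating the shared state is what couples the users' loops.
            world.state["counter"] += amount

world = SharedWorld()
users = [User("A"), User("B")]
for step in range(3):                 # three loop iterations per user
    for u in users:
        u.act(world, u.decide(u.perceive(world)))
print(world.state["counter"])  # → 6
```

In a real CVE the shared state would be replicated over the network, which is precisely where the consistency and awareness issues discussed earlier arise.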
SSQ Pre-assessment and Post-assessment
The questionnaire used for cybersickness evaluation is the Simulator Sickness Questionnaire (SSQ) (Kennedy et al., 1993), administered before and after exposure to the virtual environment. The SSQ is a 16-item measure in which participants rate symptoms on a scale from 0 to 3 (0 = none, 3 = severe). Three types of symptoms are assessed: oculomotor dysfunction (O) (eyestrain, blurred vision, difficulty focusing), disorientation (D) (difficulty concentrating, confusion, apathy), and nausea (N) (including vomiting). The unit scores (O, D, N) are weighted scores. The SSQ is a widely used measure of simulator sickness that has been shown to be a valid measure of this construct in VR research (Cobb et al., 1999).
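The weighting mentioned above can be sketched as follows. In the Kennedy et al. (1993) scoring procedure, each subscale's raw score (the sum of its 0-3 symptom ratings) is multiplied by a fixed weight, and the total severity pools the three raw sums; the weights below are the published ones, but this is a simplified sketch (the full procedure also maps each of the 16 symptoms to its subscales) and should be verified against the original paper:

```python
# Hedged sketch of SSQ subscale weighting (Kennedy et al., 1993).
# Inputs are the raw sums of the 0-3 ratings for each symptom cluster.
def ssq_scores(nausea_raw, oculomotor_raw, disorientation_raw):
    return {
        "N": round(nausea_raw * 9.54, 2),          # nausea unit score
        "O": round(oculomotor_raw * 7.58, 2),      # oculomotor unit score
        "D": round(disorientation_raw * 13.92, 2), # disorientation unit score
        # Total severity pools the three raw sums before weighting.
        "TS": round((nausea_raw + oculomotor_raw + disorientation_raw) * 3.74, 2),
    }

print(ssq_scores(2, 3, 1))
# → {'N': 19.08, 'O': 22.74, 'D': 13.92, 'TS': 22.44}
```

Comparing pre- and post-exposure scores then gives the change in symptom severity attributable to the exposure.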
Table of contents:
1 Background Study
1.2 Virtual Reality
1.2.2 Virtual Environment
1.2.3 Behavioral Interfaces
1.2.3.1 Visual Interface
1.2.3.2 Audio Interface
1.2.3.3 Tracking Interface
1.2.3.4 Haptic Interface
1.2.3.5 Other Interfaces
1.2.4 Human in Virtual Environments
1.2.4.1 3D Interaction
1.2.4.2 Workspace Management
1.3 Computer Supported Collaboration
1.3.1 Characteristics of Collaborative Work
1.3.3 Collaborative Virtual Environment
1.3.3.1 Issues and Challenges
1.4 Immersive Collaborative Virtual Environment
1.4.1 User Representation
1.4.2 User Communication
1.4.3 User Interaction
1.4.4 From Presence to Copresence
2 Co-located Immersive Collaboration
2.2 Multi-user Immersive Display
2.2.2 Visual Distortion
2.2.3 View Separation
2.2.3.1 Image Separation
2.2.3.2 Spatial Separation
2.2.3.3 Other Methods
2.3 Spatial Organization
2.3.2 Spatial Consistency
2.3.3 Collaborative Mode
2.3.3.1 Consistent Mode
2.3.3.2 Individual Mode
2.3.3.3 Mode Switching
2.4 Virtual Navigation
2.4.1.1 Position Control
2.4.1.2 Rate Control
2.4.1.3 Mixed Control
2.4.2 Virtual Vehicle
2.4.3 Navigation in Multi-user IVE
2.5 Addressed Issues
2.5.1 Perceptual Conflicts
2.5.2 User Cohabitation
3 Experiment on Perceptual Conflicts
3.2 Experimental Aims and Expectations
3.3 Experimental Design
3.3.3 Independent Variables
3.3.3.1 Delocalization Angle
3.3.3.2 Instruction Type
3.3.4.1 Performance Measures
3.3.4.2 SSQ Pre-assessment and Post-assessment
3.3.4.3 Subjective Measures
3.5.1 Collaborator Choice
3.5.1.1 Global Analyses
3.5.1.2 Inter-group Analysis
3.5.3 Subjective Data
3.5.3.1 Global Analyses
3.5.3.2 Inter-group Analyses
4 Study on User Cohabitation
4.2 Approach – Altered Human Joystick
4.2.1 Basic Model
4.2.1.1 Altered Transfer Function
4.2.1.2 Adaptive Neutral Reference Frame
4.3 Evaluation Framework
4.3.1.1 Navigation Performance Metrics
4.3.1.2 User Cohabitation Capacity Metrics
4.3.1.3 Subjective Questionnaires
4.3.2 Pre-test Condition
4.4 Collaborative Case Study
4.4.1 Experimental Design
4.5 Follow-up Experiments
4.5.2 Experiment T
4.5.2.1 Experimental Design
4.5.3 Experiment R
4.5.3.1 Experimental Design
4.5.4 Experiment TR
4.5.4.1 Experimental Design
5 Dynamic Navigation Model
5.3.1 Transfer Function
5.3.2 Physical Workspace constraints
5.3.2.1 Potential Field
5.3.2.2 Command Configuration
5.3.3 Virtual World constraints
5.3.3.1 Interaction with Virtual World
5.3.3.2 Interaction between Vehicles
5.4.1 Control Process
5.4.2 Data Structure
5.5.1 Generic Control Process
5.5.2 Flexible Metaphor Integration
5.5.3 Mode Switching
5.5.4 Adaptive Parameter Settings
Appendix A Acronyms
Appendix B Test Platform – EVE
Appendix C Dual-Presence Questionnaire
Appendix D Cohabitation Questionnaire
Appendix E Kinematic State Update
Appendix F Publications By the Author
Appendix G Synthèse en Français