Study on User Cohabitation 


Multi-user Immersive Display

Immersive displays constitute a main component of immersive virtual environments, as the visual sense plays a predominant role in the user's perception process. As mentioned in the previous chapter, immersive displays differ from traditional displays in their large field of view, high resolution, and stereoscopic images associated with viewpoint-tracking technology.


Stereoscopy produces the illusion of depth by presenting a pair of 2D images showing the two perspectives that the eyes naturally have in binocular vision. The brain then merges the two images into a single percept to achieve stereopsis (Blake and Sekuler, 2006). Besides stereoscopy, other cues (e.g. object occlusion, linear perspective) also help to determine relative distances and depth in a perceived scene, but stereoscopy remains the most effective cue for providing users with instant depth perception.
To achieve stereoscopic vision, we first generate a pair of images according to the user's head position and orientation, as well as the interpupillary distance (IPD) (Dodgson, 2004). We then need to deliver the two images separately to each eye. Existing separation methods can be classified into three categories:
• Head-mounted displays: two small screens can be placed in front of the eyes, or a single screen can display two images side by side, so that each eye sees only its intended image (Figure 1.3e);
• Auto-stereoscopic screens: auto-stereoscopic displays separate the images at the screen level, using lenticular lenses or a parallax barrier to ensure that each eye of the user sees different pixel columns corresponding to the two images (Perlin et al., 2000);
• Eyeglasses separation: images can be separated passively by colorimetric differentiation or polarized glasses, or actively by rapidly alternating shuttered glasses.

Each type of stereoscopic display has its advantages and drawbacks. HMDs are the most straightforward way to provide two different images to the eyes and allow full visual immersion of the user (the real world is completely blocked from the user's eyes, except for see-through HMDs (Schmalstieg et al., 2002)). However, technical limitations such as limited resolution and field of view increase visual fatigue and have restrained the wide use of HMDs. Unlike HMDs, which are heavy to wear, eyeglass-based technology combined with large immersive projection displays (e.g. image walls, CAVEs, domes) gives users a more comfortable stereo experience. Auto-stereoscopic screens do not require users to wear any glasses or headgear, but have a limited working range: users must stay at predefined positions (at a certain distance from the screen) to perceive stereo images correctly. Recent technical developments have largely improved the usability of all three types of stereoscopic display. HMDs are now lighter, less expensive, and more compact, with better resolution and a larger field of view. Improvements in immersive projection technology (IPT) (Bullinger et al., 1997) have made projection-based systems with eyeglasses a widespread solution in both academia and industry; they are especially useful for creating large immersive virtual environments for group immersion by combining multiple projectors, as in CAVEs. Auto-stereoscopic screens combined with user tracking now allow the tracked user to pass through different viewing zones without discontinuity (Kooima et al., 2010), but still require a complex hardware and software setup. HMDs and projection-based systems have thus become the major platforms supporting immersive or partially immersive experiences.
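The image-generation step described above — deriving one virtual camera per eye from the tracked head pose and the IPD — can be sketched as follows. This is a minimal illustration, not code from any of the systems cited here; the function name and the 65 mm default IPD are our own assumptions.

```python
import numpy as np

def eye_positions(head_pos, head_rot, ipd=0.065):
    # head_pos: (3,) tracked head position in metres (world coordinates).
    # head_rot: (3, 3) rotation matrix; column 0 is assumed to be the
    #           head's local "right" axis expressed in world coordinates.
    # ipd: interpupillary distance in metres (~65 mm is a common average).
    right_axis = head_rot[:, 0]
    offset = (ipd / 2.0) * right_axis
    # Left eye sits half an IPD to the head's left, right eye to its right.
    return head_pos - offset, head_pos + offset

# Head at 1.7 m height, identity orientation:
left, right = eye_positions(np.array([0.0, 1.7, 0.0]), np.eye(3))
# left -> [-0.0325, 1.7, 0.0], right -> [0.0325, 1.7, 0.0]
```

Each returned position then serves as the optical center for one of the two rendered views, which the separation methods above deliver to the corresponding eye.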

Visual Distortion

Stereoscopic images are generated from a single location – the center of projection (CoP) (Banks et al., 2009). A user properly perceives the displayed 3D content on a projection screen only when viewing it from the CoP. In an immersive virtual environment, the user's viewpoint is therefore captured by head tracking and serves as the CoP, so that appropriate images are generated in real time as the user moves physically.
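For a fixed projection screen, using the tracked viewpoint as the CoP typically amounts to recomputing an off-axis (asymmetric) view frustum every frame from the eye position and the screen's corners. The sketch below illustrates one common formulation of that computation; the function name and the corner-based screen description are our own assumptions, not a specific system's API.

```python
import numpy as np

def off_axis_frustum(eye, screen_ll, screen_lr, screen_ul, near=0.1):
    # Frustum extents (l, r, b, t) at distance `near`, for a planar screen
    # given by its lower-left, lower-right and upper-left corners (metres,
    # world coordinates), viewed from the tracked eye position `eye`.
    vr = screen_lr - screen_ll
    vr = vr / np.linalg.norm(vr)          # screen "right" axis
    vu = screen_ul - screen_ll
    vu = vu / np.linalg.norm(vu)          # screen "up" axis
    vn = np.cross(vr, vu)                 # screen normal, toward the viewer
    va = screen_ll - eye                  # eye -> lower-left corner
    vb = screen_lr - eye                  # eye -> lower-right corner
    vc = screen_ul - eye                  # eye -> upper-left corner
    d = -np.dot(vn, va)                   # eye-to-screen-plane distance
    # Project the corner vectors onto the screen axes and scale to `near`.
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    return l, r, b, t

# Eye centered in front of a 2 m x 2 m screen: symmetric frustum.
eye = np.array([0.0, 0.0, 1.0])
ll = np.array([-1.0, -1.0, 0.0])
lr = np.array([1.0, -1.0, 0.0])
ul = np.array([-1.0, 1.0, 0.0])
print(off_axis_frustum(eye, ll, lr, ul, near=0.5))
# -> (-0.5, 0.5, -0.5, 0.5)
```

As the tracked eye moves off-center, the returned extents become asymmetric, which is exactly what keeps the displayed 3D content perspective-correct for that viewpoint.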
While HMDs provide personal stereoscopic images for a single user, projection-based systems are often shared by a group of people for team work, as shown in Figure 2.1. In such a multi-user situation, only the tracked user receives correct stereo images from the CoP; the other co-located users (followers) share the stereo images intended for the tracked user and participate passively in the collaborative task (Bayon et al., 2006). When followers are close enough to the tracked user, they obtain a relatively faithful representation of the virtual environment, but as their distance increases, the displacement from the CoP results in increasingly inappropriate stereo cues and a distorted perception of the virtual space.

View Separation

To better support co-located use of immersive virtual environments, we need to provide perspective-correct (distortion-free) individual stereoscopic views for each user. One direct way is to use personal displays such as HMDs (Salzmann and Froehlich, 2008) or see-through HMDs (Schmalstieg et al., 2002); another is to adapt existing immersive displays that already support group immersion. Bolas et al. (2004) categorize solutions for displaying multiple images in a common area:
• Spatial barriers use the display’s physical configuration and user placement to block users from seeing each other’s view.
• Optical filtering involves systems that filter viewpoints using light’s electromagnetic properties, such as polarization or wavelength.
• Optical routing uses the angle-sensitive optical characteristics of certain materials to direct or occlude images based on the user’s position.
• Time multiplexing solutions use time-sequenced light and shutters to determine which user sees an image at a given point in time.

Apart from spatial barriers, these are the same technologies used to separate the images for the left and right eye of a single user, as presented in section 2.2.1. Multi-user systems are often built by mixing solutions from these categories; the following sections describe existing systems that provide independent stereo images for different users inside immersive or semi-immersive environments.


Image Separation

Typically, we can create separate image channels for different users by adding shutters and/or optical filters in front of the projectors, combined with synchronized counterparts in front of the users' eyes. The two-user Responsive Workbench developed by Agrawala et al. (1997) relies purely on a time-multiplexing method that displays four different images in sequence on a CRT projector at 144 Hz, so that each eye of each user views the virtual scene at 36 Hz (Figure 2.3a). This workbench was the first demonstration of a two-user stereoscopic system, but time multiplexing largely reduces the projection time for each eye (low brightness), and users suffer from image flicker and crosstalk. Blom et al. (2002) then extended this active-shuttering method to multi-screen systems such as CAVEs. Froehlich et al. (2004) further studied active-shuttering technology by testing two kinds of shutters on the projector side over a range of shuttering frequencies (Figure 2.3b). Mechanical shuttering delivers higher brightness and less crosstalk, but does not extend as easily to more than two users as liquid crystal (LC) shutters do, because of the required rotation speed and size of the disc. They tested LC shutters from 140 Hz to 400 Hz and found that users did not perceive flicker above a refresh rate of 200 Hz, but that frequencies higher than 320 Hz resulted in very dark images.
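The refresh-rate and brightness budget of such time-multiplexed systems follows from simple arithmetic: with n users, 2n images share the projector's refresh cycles, and each eye's shutter is open for 1/(2n) of the time. A small sketch (the function names are ours, for illustration only):

```python
def per_eye_rate(projector_hz, n_users):
    # 2 * n_users time-sequenced images share the projector's cycles.
    return projector_hz / (2 * n_users)

def brightness_fraction(n_users):
    # Each eye's shutter is open for 1 / (2 * n_users) of the time,
    # which bounds the brightness each eye can receive.
    return 1.0 / (2 * n_users)

# Agrawala et al.'s two-user workbench: 144 Hz projector, 2 users.
print(per_eye_rate(144, 2))        # -> 36.0 Hz per eye
print(brightness_fraction(2))      # -> 0.25 of the projector's light
```

This makes the scaling problem explicit: every additional user halves neither metric gracefully — a third user at 144 Hz would leave each eye only 24 Hz and one sixth of the light, which is why higher-frequency shutters were investigated.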

Table of contents:

1 Background Study
1.1 Introduction
1.2 Virtual Reality
1.2.1 Definition
1.2.2 Virtual Environment
1.2.3 Behavioral Interfaces
  Visual Interface
  Audio Interface
  Tracking Interface
  Haptic Interface
  Other Interfaces
1.2.4 Human in Virtual Environments
  3D Interaction
  Presence
  Cybersickness
  Workspace Management
1.2.5 Summary
1.3 Computer Supported Collaboration
1.3.1 Characteristics of Collaborative Work
1.3.2 Groupware
1.3.3 Collaborative Virtual Environment
  Definition
  Issues and Challenges
1.3.4 Summary
1.4 Immersive Collaborative Virtual Environment
1.4.1 User Representation
1.4.2 User Communication
1.4.3 User Interaction
1.4.4 From Presence to Copresence
1.4.5 Summary
1.5 Conclusion
2 Co-located Immersive Collaboration
2.1 Introduction
2.2 Multi-user Immersive Display
2.2.1 Stereoscopy
2.2.2 Visual Distortion
2.2.3 View Separation
  Image Separation
  Spatial Separation
  Other Methods
2.2.4 Summary
2.3 Spatial Organization
2.3.1 Stage
2.3.2 Spatial Consistency
2.3.3 Collaborative Mode
  Consistent Mode
  Individual Mode
  Mode Switching
2.4 Virtual Navigation
2.4.1 Taxonomy
  Position Control
  Rate Control
  Mixed Control
2.4.2 Virtual Vehicle
2.4.3 Navigation in Multi-user IVE
2.5 Addressed Issues
2.5.1 Perceptual Conflicts
2.5.2 User Cohabitation
2.6 Approach
2.7 Conclusion
3 Experiment on Perceptual Conflicts
3.1 Introduction
3.2 Experimental Aims and Expectations
3.3 Experimental Design
3.3.1 Task
3.3.2 Participants
3.3.3 Independent Variables
  Delocalization Angle
  Instruction Type
3.3.4 Procedure
3.3.5 Measures
  Performance Measures
  SSQ Pre-assessment and Post-assessment
  Subjective Measures
3.4 Hypotheses
3.5 Results
3.5.1 Collaborator Choice
3.5.2 Performance
  Global Analyses
  Inter-group Analysis
3.5.3 Subjective Data
  Global Analyses
  Inter-group Analyses
3.6 Discussion
3.7 Conclusion
4 Study on User Cohabitation
4.1 Introduction
4.2 Approach – Altered Human Joystick
4.2.1 Basic Model
4.2.2 Alterations
  Altered Transfer Function
  Adaptive Neutral Reference Frame
4.2.3 Combinations
4.3 Evaluation Framework
4.3.1 Measures
  Navigation Performance Metrics
  User Cohabitation Capacity Metrics
  Subjective Questionnaires
4.3.2 Pre-test Condition
4.4 Collaborative Case Study
4.4.1 Experimental Design
4.4.2 Results
4.4.3 Discussion
4.5 Follow-up Experiments
4.5.1 Participants
4.5.2 Experiment T
  Experimental Design
  Results
4.5.3 Experiment R
  Experimental Design
  Results
4.5.4 Experiment TR
  Experimental Design
  Results
4.5.5 Discussion
4.6 Conclusion
5 Dynamic Navigation Model
5.1 Introduction
5.2 Motivation
5.3 Concepts
5.3.1 Transfer Function
5.3.2 Physical Workspace Constraints
  Potential Field Solutions
  Command Configuration
5.3.3 Virtual World Constraints
  Interaction with Virtual World
  Interaction between Vehicles
5.4 Implementation
5.4.1 Control Process
5.4.2 Data Structure
5.4.3 Components
5.5 Discussion
5.5.1 Generic Control Process
5.5.2 Flexible Metaphor Integration
5.5.3 Mode Switching
5.5.4 Adaptive Parameter Settings
5.5.5 Summary
5.6 Conclusion
Appendix A Acronyms

