CHAPTER 2: PHOTO-REALISM IN VIRTUAL ENVIRONMENTS
What is Photo-realism?
Photo-realism refers to the construction of computer images that accurately simulate not only geometry but also the physics of materials and light. The goal in this study, for both static images and moving scenes in virtual reality applications, is to achieve, as efficiently as possible, sufficient accuracy to create photo-realistic images.
At present, the nearest computer graphics has come to “photo-realism” in “real-time” requires very powerful workstations, such as those from Silicon Graphics (SGI®). The main additional requirement of VR is that displays be shown in three dimensions using stereoscopic vision. This requires still greater computing power, since double the information is being processed at any given point in time. Photo-realism also requires that images be displayed at a frame-refresh rate suitable for the human eye. Furthermore, while moving through the “virtual world”, both the size and the perspective view of the “virtual buildings” should change; if not, “display mismatch” will make the experience seem less than believable (McMillan, 1994).
For many years now computer scientists have tried to make objects look more realistic on computer displays. Realism here means generating computer images that are indistinguishable from real photographs. This realism is possible through the use of local and global illumination methods. The basis for a physically correct calculation of the illumination is the accuracy of the scene description: both the geometry and the description of the surfaces and light sources.
Lighting Algorithms
Lighting (global illumination) in a scene can be categorized into view-independent (diffuse) and view-dependent (non-diffuse) components.
View-independent illumination accounts for all effects of light that are not dependent on the viewer’s position. This illumination depends only on the configuration of the geometry and the lights, and so may be precomputed and stored as a dense mesh with per-vertex colors, or as texture maps rendered using traditional shading hardware (Cohen et al., 1993; Diefenbach, 1996).
View-dependent illumination accounts for specular and glossy directional-biased lighting. These effects depend on the viewpoint and are difficult to pre-compute and render accurately.
There are several techniques used for photo-realistic visualization of 3D models, many of which incorporate, in real-time, effects such as lighting and reflection that are commonly possible only in frame-by-frame rendering. These are discussed below:
Flat Shading
This is the most basic shading technique. A single shading color is determined per polygon, and the entire polygon is either rendered with this color, or the color is added to the texture on the polygon. This technique tends to emphasize the flat nature of the polygons in the scene.
Gouraud Shading (Vertex Lighting)
In 3D graphics, the most traditional way of implementing polygonal lighting for real-time VR applications is to take the light and color information from each vertex and linearly interpolate the lighting and color of the pixels between the vertices. Each vertex of a scene contains information about its location, its color, and its orientation relative to a light source. ‘Lighting’ here means making an actual solid-looking object out of a wire frame. This is a fixed shading method: once the vertex colors have been calculated, they will not vary with time-of-day changes such as a moving sun angle or changing sun intensity. It relies only on the RGB values assigned to each vertex, and the polygon’s color varies between these RGB values. Most VR systems, including the Leeds Advanced Driving Simulator, and arcade games such as SEGA®’s ‘Virtua Fighter’, use Gouraud shading extensively.
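The two steps described above (a per-vertex lighting calculation, then linear interpolation between the vertices) can be sketched as follows. This is a minimal illustration of the interpolation idea, not any particular engine's implementation; the helper names are invented, normals are assumed unit-length, and a simple Lambert diffuse term stands in for the full vertex-lighting calculation.

```python
# Sketch of Gouraud shading: light once per vertex, then blend across
# the triangle with barycentric weights. Names here are illustrative.

def lambert_vertex_color(normal, light_dir, base_rgb):
    """Per-vertex diffuse (Lambert) term: max(0, n.l) scales the base color."""
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(c * n_dot_l for c in base_rgb)

def gouraud_interpolate(c0, c1, c2, b0, b1, b2):
    """Color at a pixel = barycentric blend of the three vertex colors."""
    return tuple(b0 * x + b1 * y + b2 * z for x, y, z in zip(c0, c1, c2))

# Example: a red triangle lit by a light along +z.
light = (0.0, 0.0, 1.0)
c0 = lambert_vertex_color((0.0, 0.0, 1.0), light, (1.0, 0.0, 0.0))  # faces light
c1 = lambert_vertex_color((0.0, 1.0, 0.0), light, (1.0, 0.0, 0.0))  # edge-on: dark
c2 = lambert_vertex_color((0.0, 0.0, 1.0), light, (1.0, 0.0, 0.0))
centre = gouraud_interpolate(c0, c1, c2, 1/3, 1/3, 1/3)  # blend at the centroid
```

Note that the interior color is purely a blend of the three vertex results, which is exactly why a highlight brighter than any vertex can never appear inside the polygon.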
This approach has several disadvantages. It poses serious difficulties in achieving detailed lighting without creating a large number of polygons: the lighting can look perfectly realistic as long as an object consists of enough polygons or vertices, but many vertices put a high strain on the ‘transform’ stage of the 3D rendering process, and on the bus, since each vertex has to be sent to the graphics chip. Moreover, this lighting is not perspective-correct, and it lacks correct diffuse-specular simulation and photo-realism. For example, it is impossible to have a highlight in the middle of a large polygon, because the technique is restricted to the diffuse component of the illumination model.
If used correctly, this technique makes objects look round. It cannot, however, be used convincingly when there are multiple light sources in the scene. Another disadvantage is that when the polygon moves or rotates, a static light source does not produce static lighting on the polygon; the shading also varies according to the viewing angle.
To summarize, ‘Vertex Lighting’ or ‘Gouraud Shading’ is a good and valid technique for objects that consist of many polygons, but it is not good enough for objects that consist of only a few polygons. As NVIDIA® puts it, “vertex lighting is of limited use in some scenarios because its base unit is the triangle” (Pabst, 2000).
Phong Shading
Phong shading is one of the most realistic techniques for dynamic lighting. It overcomes the limitation of Gouraud shading by incorporating specular reflection into the scheme. A texture is attached to every light source and projected onto every polygon, using the normals of each polygon vertex as an index into the lightmap. This way, highlights can occur in the middle of a polygon. The lightmap is also fully configurable: it can be dithered, smooth, or very sharp in the center. In addition, the intensity transition from one polygon to the next is smoother. The primary objective is efficiency of computation rather than accurate physical simulation. It is very hard, however, to create directed spotlights with this technique, or to have multiple lights affect a single polygon.
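The reason a highlight can appear mid-polygon is that the specular term is evaluated with a per-pixel (interpolated) normal rather than once per vertex. A minimal sketch of the classic Phong reflection model follows; the coefficient values (kd, ks, shininess) are illustrative choices, not values from the text.

```python
import math

# Sketch of the Phong reflection model evaluated per pixel.
# The specular term uses the reflection vector r = 2(n.l)n - l.

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(normal, light_dir, view_dir, kd=0.6, ks=0.4, shininess=32):
    """Diffuse + specular intensity for one point on a surface."""
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    n_dot_l = max(0.0, dot(n, l))                      # Lambert diffuse term
    r = tuple(2.0 * n_dot_l * nc - lc for nc, lc in zip(n, l))
    spec = max(0.0, dot(r, v)) ** shininess            # tight specular lobe
    return kd * n_dot_l + ks * spec

# Light and viewer both along the normal: full diffuse plus full specular.
i = phong_intensity((0, 0, 1), (0, 0, 1), (0, 0, 1))
```

The `shininess` exponent controls how sharp the highlight is, which corresponds to the configurable sharpness of the lightmap described above.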
With dynamic light sources, their sphere of illumination is projected into the world and their dynamic light contributions are added to the appropriate lightmaps, which are used to rebuild the affected surfaces.
Local Illumination Algorithms
Local illumination algorithms describe only how individual surfaces reflect or transmit light (Lightscape 3.2 User’s Guide), and these are usually applied in real-time 3D graphics shading. Given a description of the light arriving at a surface from specific light sources, these mathematical algorithms predict the intensity, spectral character (color) and distribution of the light leaving the surface.
Global Illumination Algorithms
For production of more accurate results it is important to take into account not only the light sources themselves, but also how all the surfaces and objects in the environment interact with the light and how this is transferred between the model surfaces. Such rendering algorithms are called global illumination algorithms and include radiosity and raytracing.
The radiosity method is a theoretically rigorous method that provides a solution to diffuse interaction within a closed environment. One interesting aspect of a radiosity solution for a scene is its view-independent nature. This is an important result because, during a walk-through, when the user changes their field of view (FOV) to focus on another view, there is no need to re-compute the form factors and solve the linear radiosity equations again; it is only necessary to render the new FOV from the pre-computed radiosity solution. The solution is based on subdividing the environment into discrete patches, or elements, over which the light intensity is constant. Radiosity is most successful in dealing with man-made environments such as the interiors of large architectural structures, and has certainly produced the most impressive computer graphics images to date (Watt et al., 1992).
Using software like Lightscape, you can render your lighted model using radiosity techniques. Radiosity computes the illumination of a surface from light shining directly from a source and from indirect light reflected off other surfaces in the environment, achieving very realistic lighting of diffuse-surfaced environments. Unlike view-dependent rendering algorithms such as raytracing, radiosity is view-independent: once the lighting of each surface has been calculated, a participant can navigate through a very “realistic” virtual environment in real-time. Using Lightscape, you can add or import point lights, linear lights, and area lights to your model, as well as day-lighting, spotlights, or even customized lamps.
Once the materials have been defined and lights added to the model, Lightscape can be used to process a radiosity solution. This means that the light emitted from the sources in the file is calculated, both directly and indirectly via inter-reflections between surfaces in the model. Upon completion of the radiosity solution, the virtual environment is simply a geometric mesh, colored appropriately to simulate the light it has received. The radiosity solution contains no light sources itself; when the file is rendered in real-time, parameters in the file tell the simulation software to use the colors of the mesh and not to add or render the effects of additional light sources. Thus, only with radiosity solutions is there no explicit light source in the virtual environment. Radiosity works best with indirect lighting and diffuse surfaces.
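The inter-reflection computation described above can be sketched as repeatedly gathering light between patches until the values settle, i.e. iterating the radiosity equation B_i = E_i + ρ_i Σ_j F_ij B_j. This is a generic illustration of the method, not Lightscape's actual solver (which uses progressive refinement); the two-patch scene and its form factors are invented purely for illustration.

```python
# Gathering-style radiosity iteration: B_i = E_i + rho_i * sum_j F_ij * B_j.
# emission:     light each patch emits on its own (E_i)
# reflectance:  fraction of incoming light a patch re-radiates (rho_i)
# form_factors: F[i][j], fraction of patch i's view occupied by patch j

def solve_radiosity(emission, reflectance, form_factors, iterations=50):
    b = list(emission)  # initial guess: emitted light only
    for _ in range(iterations):
        b = [emission[i] + reflectance[i] *
             sum(form_factors[i][j] * b[j] for j in range(len(b)))
             for i in range(len(b))]
    return b

# Patch 0 is a light source; patch 1 is lit only by bounced light.
emission     = [1.0, 0.0]
reflectance  = [0.5, 0.8]
form_factors = [[0.0, 0.2],
                [0.6, 0.0]]
radiosity = solve_radiosity(emission, reflectance, form_factors)
```

Because the converged patch values no longer reference the light sources, the result is exactly the kind of pre-lit colored mesh described above: view-independent, and navigable in real-time without any run-time lighting.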
Raytracing is a method that allows us to create stunning photo-realistic images on a computer. It excels at calculating reflections, refractions and shadows. The first stage of creating a raytraced image is to “describe” what it is that you want to depict in your picture. You may do this using an interactive modeling system, like a CAD package, or by creating a text file that has a programming language-like syntax to describe the elements. Either way, you will be specifying what objects are in your imaginary world, what shape they are, where they are, what color and texture they have and where the light sources are to illuminate them.
Having done all of this, the scene description is fed into the raytracer. The main drawback of raytracing is that it is often very slow and processor-intensive. The software mathematically models light rays as they bounce around the virtual world, reflecting and refracting, until they end up in the lens of your imaginary camera. This can involve many millions of floating-point calculations and takes time. Raytracing an image can take anything from a few seconds to many days. It is a long process, but the results can make it all worthwhile.
This method, however, omits the most important aspect of creating a truly photo-realistic image: diffuse inter-reflections such as ‘color-bleeding’ effects (e.g. colored light bouncing off blue tiles onto the lens in the picture below should result in a blue tint). Raytracing works best with direct lighting and highly reflective materials.
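At the heart of the ray-bouncing process described above is a ray-object intersection test, performed millions of times per image. A minimal sketch for a sphere follows, solving the quadratic |o + t·d − c|² = r² for the ray parameter t; the scene values are invented for illustration.

```python
import math

# Core raytracing primitive: where (if anywhere) does a ray hit a sphere?
# origin/direction define the ray; centre/radius define the sphere.

def ray_sphere_hit(origin, direction, centre, radius):
    """Return the smallest positive hit distance t, or None on a miss."""
    oc = tuple(o - c for o, c in zip(origin, centre))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # the ray misses the sphere entirely
    sqrt_disc = math.sqrt(disc)
    for t in ((-b - sqrt_disc) / (2.0 * a), (-b + sqrt_disc) / (2.0 * a)):
        if t > 0.0:
            return t
    return None  # both intersections lie behind the ray origin

# A ray fired along +z hits a unit sphere centred 5 units away at t = 4.
t = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
miss = ray_sphere_hit((0, 0, 0), (0, 1, 0), (0, 0, 5), 1.0)
```

A full raytracer repeats this test against every object for the camera ray, then recursively for reflection, refraction and shadow rays, which is precisely why the floating-point cost grows so quickly.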