Central Catadioptric Omnidirectional Cameras 


This chapter is dedicated to central catadioptric cameras. First, we present a brief overview of catadioptric cameras through their history. Then, we describe how to link the image on the sensor to the light rays emitted by a region of space. In this part, we show that wide-angle sensors impose a different view of the world through spherical perspective projection. We also present the unified projection model, which uses spherical perspective projection. We focus on this model because it offers a compromise between genericity and over-parameterisation, and its parameters are easily identifiable; therefore, we use it throughout this work to carry out all the experiments and to provide reliable results. Finally, we analyse the state of the art of existing calibration methods which exploit prior knowledge about the scene, such as the presence of calibration patterns, lines, points or sphere images.

Overview

The concept of the catadioptric camera already appeared in the work of René Descartes in 1637 in the Discours de la Méthode [DS37]. He showed that refractive as well as reflective ovals (conical lenses and mirrors) focus light into a single point. The idea was later rephrased by Feynman et al. in 1963 [RPFS63]. The first catadioptric cameras were composed of lenses rotating around a given axis (swing lens cameras). Because the camera was stationary, the acquired field of view was limited to between 120 and 150 degrees. Rotating cameras, created shortly after, did not have this limitation and made it possible to create 360 degree views of the environment. However, if the images can be processed, as is done regularly with computers nowadays, large fields of view can be captured without any moving parts. Two main approaches have been used: (1) a mirror system constructed from several planar mirrors, where a camera is assigned to each mirror plane; (2) a single camera observing a convex or concave mirror.

Rees [Ree70] was the first to patent the combination of a perspective camera and a convex mirror (in this case a hyperbolic mirror). In his US patent, he described how to capture omnidirectional images that can be transformed into correct perspective views (no parallax). Jeffrey R. Charles designed a mirror system for his Single Lens Reflex (SLR) camera and proposed a darkroom process to transform a panoramic image into a cylindrical projection [Cha76]. It was much later, in the 90s, that omnidirectional vision became an active research topic in computer and robot vision. Chahl and Srinivasan designed a convex mirror with regard to imaging quality [CS97]. Geb proposed an imaging sensor with a spherical mirror for teleoperational navigation on a mobile vehicle [Geb98]. Kawanishi et al. proposed an omnidirectional sensor covering the whole viewing sphere [TKY98]: they designed a hexagonal mirror assembled with six cameras, and two such catadioptric cameras were symmetrically connected back to back to observe the whole surroundings. Yagi et al. used a conic-shaped mirror; vertical edges were extracted from the panoramic image and, in cooperation with an acoustic sensor, the trajectory of a mobile robot was recovered [YYY95]. Yamazawa et al. detected obstacles using a catadioptric sensor with a hyperbolic mirror [YY00]. Under the assumption of planar motion, they computed the distance between the sensor and the obstacle; their method was based on a scale-invariant image transformation. In [DSR96], the authors proposed a catadioptric system with a double-lobed mirror, whose shape was not precisely defined. Hamit [Ham97] described various approaches to obtaining omnidirectional images using different types of mirrors. Nayar et al. presented several prototypes of catadioptric cameras using a parabolic mirror in combination with orthographic cameras [Nay97]. Hicks and Bajcsy presented a family of reflective surfaces that provide a wide field of view while preserving the geometry of a plane perpendicular to their axis of symmetry [HB99]; their mirror design can give a normal camera a bird's eye view of its surroundings. In the last decade, many other systems have been designed and continue to be, driven by new applications, technological opportunities and research results. For instance, Jean-François Layerle [LSEM08] proposed the design of a new catadioptric sensor using two different mirror shapes for simultaneous tracking of the driver's face and the road scene.
The authors showed how the mirror design allows the perception of relevant information in the vehicle: a panoramic view of the environment inside and outside, and a sufficient resolution of the driver's face for gaze tracking.

Image Formation

By image formation we understand the formation of a digital image of the surrounding scene through an optical process (including mirrors) and a digitisation process. This is the basic step in using vision sensors, but it is not a straightforward task: sensors are not perfect and we need to model small design errors.
Let I be an image of the world obtained through an optical device. We will consider I to be a two-dimensional finite array containing intensity values (irradiance). I can be seen as a function:
\[
I : \Omega \subset \mathbb{R}^2 \longrightarrow \mathbb{R}^+, \qquad (u, v) \longmapsto I(u, v)
\tag{2.1}
\]
The irradiance at an image point p = (u, v) is due to the energy emitted from a region of space determined by the optical properties of the device. In the case of a central device with a unique viewpoint, the direction of the energy source is represented by a projective ray (a half-line) whose initial point is the optical center (or focus) of the device (noted C in Figures 2.1 and 2.2).
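To make the notation concrete, the following minimal Python sketch (the language, array sizes and row/column layout are our own illustrative assumptions, not part of the original text) treats a grey-level image as a finite sampling of the function I over Ω:

```python
import numpy as np

# A digital image is a finite 2D array of non-negative irradiance values,
# i.e. a sampled version of the function I : Omega ⊂ R^2 -> R^+.
I = np.random.rand(480, 640)  # placeholder for an acquired grey-level image

def irradiance(u, v):
    """Evaluate I at the image point p = (u, v), with Omega = [0, 640) x [0, 480)."""
    return I[int(v), int(u)]  # row index = v, column index = u

print(irradiance(320.0, 240.0))  # irradiance near the image center
```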

Projection Models

As stated in the previous chapter, catadioptric omnidirectional cameras are sensors composed of a perspective camera pointed upwards at the vertex of a convex mirror. The image formation of this kind of sensor is therefore described by the well-known perspective camera model, complemented by a non-linear function due to the mirror standing between the world scene and the camera. Let us present how the planar perspective projection model and the spherical perspective projection can be related to model the image formation of such a sensor.

Planar perspective projection

The most common way to model a single camera is to use the well-known perspective projection model [Fau93, HZ00]. The standard perspective camera model maps all scene points X (3D points) lying on a line passing through the optical center of the camera to a single image point p. Figure 2.1 depicts the standard perspective projection model. Let the optical center of the device define the camera reference frame, so the point X becomes XC = (X, Y, Z)⊤ and its projection proceeds as follows:
1. The point XC = (X, Y, Z)⊤ is projected to the normalised plane πm by the following equation:
\[
m = (x, y, 1)^\top = \frac{1}{Z}\,(X, Y, Z)^\top = \left(\frac{X}{Z}, \frac{Y}{Z}, 1\right)^\top
\]
2. Let α1 = ku f be the horizontal focal length in pixels and α2 = kv f the vertical focal length in pixels, where f is the focal length and ku and kv are the scaling factors for row and column pixels (camera pixels are not necessarily square). Let s be the skew factor (representing the non-orthogonality between rows and columns of the sensor cells) and (u0, v0) the principal point (which is typically not at (width/2, height/2) of the image). Then the projection of m in homogeneous coordinates to the image plane πp is obtained linearly by:
\[
p = (u, v, 1)^\top = \mathbf{P} X_C = \mathbf{K}\, m = \begin{pmatrix} \alpha_1 & s & u_0 \\ 0 & \alpha_2 & v_0 \\ 0 & 0 & 1 \end{pmatrix} m
\]
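As a concrete illustration of the two projection steps above, here is a minimal Python sketch; the numeric values of α1, α2, s, u0 and v0 are placeholders chosen for the example, not parameters from this work:

```python
import numpy as np

def project_pinhole(X_c, K):
    """Project a point X_c = (X, Y, Z) in the camera frame to pixel coordinates.

    Step 1: perspective division onto the normalised plane pi_m.
    Step 2: linear mapping to the image plane pi_p through the intrinsic matrix K.
    """
    X, Y, Z = X_c
    assert Z > 0, "the planar model only handles points in front of the camera"
    m = np.array([X / Z, Y / Z, 1.0])  # m = (x, y, 1)^T
    return K @ m                       # p = (u, v, 1)^T

# Placeholder intrinsics: rows are (alpha_1, s, u_0), (0, alpha_2, v_0), (0, 0, 1).
K = np.array([[800.0,   2.0, 320.0],
              [  0.0, 810.0, 240.0],
              [  0.0,   0.0,   1.0]])

print(project_pinhole((0.1, -0.2, 2.0), K))  # homogeneous pixel (u, v, 1)
```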
This model is not suited to fields of view greater than 180 degrees. Real omnidirectional cameras project points in front of the camera and points behind the camera to two different image points. The representation of image points by lines assigns to a single image point p all the points on a ray passing through the optical center, whether in front of or behind the camera; thus, all points on the ray project to the same point in the projection plane. This only allows representing scene points lying in a half-space whose boundary contains the image plane. The perspective model is therefore sufficient for directional cameras, which cannot simultaneously see the two half-spaces separated by the plane containing the optical center, but it cannot be used for omnidirectional cameras.


Spherical perspective projection

The spherical perspective projection model maps all scene points X onto the unit sphere $S^2 = \{X_s \in \mathbb{R}^3 \mid \|X_s\| = 1\}$. The constraint that the scene points are in front of the camera can be imposed even for a field of view greater than 180 degrees: the scale factor λ relating a point on the sphere to the 3D point must be positive, i.e. for every scene point X there exist $X_s \in S^2$ and $\lambda > 0$ such that $X = \lambda X_s$.
From the unit sphere, we can then apply the projection function, noted Π, which depends on the intrinsic parameters of the sensor: $\Pi : \Upsilon \subset S^2 \rightarrow \Omega \subset \mathbb{R}^2$. Π is not defined on all of $S^2$ because we wish Π to be bijective, which cannot be the case between $S^2$ and $\mathbb{R}^2$ as they do not share the same topology. If Π is bijective, $\Pi^{-1}$ relates points of the image plane to their projective rays.
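A minimal Python sketch of this lifting onto the sphere (our own illustration; Π itself is left abstract here, since it depends on the particular sensor):

```python
import numpy as np

def lift_to_sphere(X):
    """Map a scene point X in R^3, X != 0, to X_s on the unit sphere S^2.

    X = lambda * X_s with lambda = ||X|| > 0, so points in front of and behind
    the camera lift to distinct (antipodal) sphere points, unlike in the
    planar perspective model where both fall on the same projective ray.
    """
    lam = np.linalg.norm(X)
    assert lam > 0, "the optical center itself has no projection"
    return X / lam

print(lift_to_sphere(np.array([1.0, 2.0, -3.0])))  # a point behind the camera
```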

Omnidirectional projection

Recall the standard projection model p = PX, where p is the projection of the world point X and P is the projection matrix, which comprises the intrinsic parameters and the projection of the world point to a normalised plane [Fau93, HZ00]. For omnidirectional cameras one has p = PF(X), where F is the function introduced by the reflection of the light rays on the mirror. This function depends on the mirror shape and is in general non-linear. Stated simply, it determines the point of the mirror where the ray coming from the 3D point X is reflected and directed towards the camera's optical center. Depending on the particular camera and mirror setup, the light rays incident to the mirror surface may all intersect at a virtual point; in this case the system is referred to as having a single projection center. A single projection center is a desirable property, as it permits the generation of geometrically correct perspective images from the pictures captured by the omnidirectional camera. This is possible because, under the single viewpoint constraint, every pixel in the sensed image measures the irradiance of the light passing through the viewpoint in one particular direction. When the geometry of the omnidirectional camera is known, that is, when the camera is calibrated, one can precompute this direction for each pixel. Therefore, the irradiance value measured by each pixel can be mapped onto a plane at any distance from the viewpoint to form a planar perspective image, as the sketch below illustrates.
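The following Python sketch illustrates this remapping idea; `omni_project` stands for the calibrated forward projection (viewing direction through the single viewpoint to omnidirectional pixel) and is assumed given, while the virtual camera intrinsics `K_virt` are placeholders of our choosing:

```python
import numpy as np

def make_perspective_view(omni_image, omni_project, K_virt, out_shape):
    """Resample a calibrated omnidirectional image into a planar perspective view.

    omni_project(ray) -> (u, v): the calibrated mapping from a viewing direction
    (unit vector through the single viewpoint) to a pixel of the omni image.
    K_virt: intrinsic matrix of the virtual pinhole camera (placeholder).
    """
    h, w = out_shape
    K_inv = np.linalg.inv(K_virt)
    out = np.zeros((h, w))
    for v in range(h):
        for u in range(w):
            ray = K_inv @ np.array([u, v, 1.0])  # back-project the virtual pixel
            ray /= np.linalg.norm(ray)           # direction through the viewpoint
            us, vs = omni_project(ray)           # where the omni camera saw that ray
            iu, iv = int(round(us)), int(round(vs))
            if 0 <= iv < omni_image.shape[0] and 0 <= iu < omni_image.shape[1]:
                out[v, u] = omni_image[iv, iu]   # nearest-neighbour irradiance lookup
    return out
```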
Baker and Nayar derived the complete class of central catadioptric sensors that have a single viewpoint under the assumption of the pinhole camera model. They identified four possible configurations of camera and mirror that are non-degenerate and two configurations that are degenerate. Figure 2.3 depicts the four non-degenerate configurations, combining an orthographic camera with a parabolic mirror, or a perspective camera with a hyperbolic, elliptical or planar mirror. The degenerate configurations (spherical mirror and conical mirror) cannot be used to construct cameras with a single effective viewpoint. Indeed, a sphere can be seen as the limit of an ellipse when the two focal points coincide; thus, to obtain the single viewpoint property, we would need to place the camera at the center of the sphere, and we would then only see the camera itself. By putting the camera in another position, we obtain a caustic. Similarly, a conical mirror is an interesting limit case of the pinhole camera model: the single viewpoint constraint imposes that the cone be situated in front of the camera with its vertex at the focal point.

Unified Projection Model

A unifying theory for all central catadioptric systems was proposed by Geyer and Daniilidis [GD00, Gey03]. They proved that every catadioptric (parabolic, hyperbolic, elliptical) and standard perspective projection is isomorphic to a projective mapping from a sphere (centered at the effective viewpoint) to a plane, with the projection center on the perpendicular to the plane. Figure 2.3 shows the entire class of central catadioptric sensors. Table 2.1 details the equations of the 3D surfaces and the relation between the standard (a, b) parameterisation and the (p, d) parameters used in Barreto's model [Bar03].
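For concreteness, here is a minimal Python sketch of this sphere-then-plane mapping in the single-parameter form commonly used in the literature (Barreto [Bar03], among others), where the mirror parameter, usually noted ξ, places the projection center above the sphere center (ξ = 1 for a parabolic mirror, ξ = 0 reduces to the pinhole model); the intrinsic values below are placeholders:

```python
import numpy as np

def unified_project(X, xi, K):
    """Unified central catadioptric projection of a 3D point X.

    1. Lift X onto the unit sphere centered at the effective viewpoint.
    2. Reproject from a center shifted by xi along the sphere axis onto the
       normalised plane, then apply the intrinsic matrix K.
    xi = 1 corresponds to a parabolic mirror, xi = 0 to a plain pinhole camera.
    """
    x, y, z = X / np.linalg.norm(X)              # X_s on the unit sphere S^2
    assert z + xi > 0, "point outside the model's valid field of view"
    m = np.array([x / (z + xi), y / (z + xi), 1.0])  # shifted perspective step
    return K @ m                                  # homogeneous pixel (u, v, 1)

K = np.array([[800.0,   0.0, 512.0],              # placeholder intrinsics
              [  0.0, 800.0, 384.0],
              [  0.0,   0.0,   1.0]])
print(unified_project(np.array([0.5, 0.2, 1.0]), xi=1.0, K=K))
```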

