Spherical model: the camera in a fixed environment


Spherical model: notations

The generic model chosen in the following (Parts I and II) is a spherical model: the projection surface is the unit sphere S2 of three-dimensional space, and the projection center is its center, denoted by C(t) as previously. The coordinates of C(t) are expressed in the reference frame RW. The orientation of the camera is given by the quaternion q(t): any vector ς in the camera frame Rcam corresponds to the vector q ς q⁻¹ in the reference frame RW, using the identification of vectors with imaginary quaternions (see Appendix 1.2). A pixel is labeled by the unit vector η in the camera frame: η belongs to the sphere S2 and receives the brightness y(t, η). Thus, at each time t, the image produced by the camera is described by the scalar field S2 ∋ η ↦ y(t, η) ∈ R. The notations of this model are represented in Figure 1.3. In this model, no particular axis is preferred to another, as is the case in the pinhole model, where the optical axis, orthogonal to the image plane, partially defines the orientation and direction of the camera reference frame.
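The frame change q ς q⁻¹ above can be sketched numerically. The following is an illustrative snippet (not from the thesis), using the common (w, x, y, z) convention for unit quaternions, so that q⁻¹ is the conjugate:

```python
import numpy as np

def quat_mul(p, q):
    # Hamilton product of quaternions given as (w, x, y, z).
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def rotate(q, v):
    # Identify the vector v with the imaginary quaternion (0, v) and
    # conjugate: v_world = q * v * q^{-1} (q is unit, so q^{-1} = q*).
    q_conj = np.array([q[0], -q[1], -q[2], -q[3]])
    v_quat = np.array([0.0, *v])
    return quat_mul(quat_mul(q, v_quat), q_conj)[1:]

# A rotation of pi/2 about the z-axis maps e_x to e_y.
theta = np.pi / 2
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
v_world = rotate(q, [1.0, 0.0, 0.0])  # close to [0, 1, 0]
```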

Motion equations


The motion of the camera is given through the linear and angular velocities v(t) and ω(t), expressed in the camera reference frame Rcam. Linear and angular velocities are related to the time derivatives of position and orientation through the motion equations (see Appendix C and [Radix, 1980]).
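The motion equations themselves are not reproduced in this excerpt; under the quaternion convention of Section 1.2, the standard rigid-body kinematics read (a reconstruction, to be checked against Appendix C):

```latex
\frac{d}{dt} C(t) = q(t)\, v(t)\, q(t)^{-1},
\qquad
\frac{d}{dt} q(t) = \frac{1}{2}\, q(t)\, \omega(t),
```

where v(t) and ω(t) are identified with imaginary quaternions, so that the linear velocity is rotated into the reference frame RW and the quaternion derivative is driven by the body-frame angular velocity.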

The model adopted for the scene is based on three main assumptions:
– (H1): the scene is a closed, C1 and convex surface Σ of R3, diffeomorphic to S2;
– (H2): the scene is a Lambertian surface;
– (H3): the scene is static.

Formulation as a minimization problem

In this chapter, variational methods and their application to optical flow estimation are described. The most well-known of these estimation techniques, the Horn and Schunck method and the TV-L1 method, are detailed. Such methods can be adapted to depth estimation by minimizing our specific optical flow constraint (1.16), in which motion appears explicitly, instead of the classical optical flow constraint: the problem is then well-posed in the sense that, at each pixel of the image, one scalar equation is associated with the only unknown of the problem, the depth. These methods were presented in [Zarrouati et al., 2012a, Zarrouati et al., 2012b, Zarrouati et al., 2012c].

Variational methods applied to optical flow estimation

Optical flow

In [Horn and Schunck, 1981], Horn and Schunck described a method to compute the optical flow, defined as « the distribution of apparent velocities of movement of brightness patterns in an image ». The entire method is based on the assumption that the light intensity emitted by a specific object depends neither on time nor on the direction of observation (assumptions (H2) and (H3) of Section 1.3.1, and the resulting equation (1.12)). Provided that the same physical point

Variational methods applied to inverse problems

An important field of image processing is devoted to image restoration: the goal is to improve the quality of a given damaged image. The term « damage » covers a wide range of effects, from additive noise to missing pixel data or blur. The usual formulation of restoration is the following: one aims to reconstruct the ideal input $y_0$ given an output $y$ (the observed image), related by
$$y = K y_0 + \nu,$$
where $\nu$ designates noise and $K$ is a linear operator that describes the « damaging » effect. For example, to restore
– a noisy image: K is simply the identity (noise is already taken into account in ν);
– a blurred image: K is the convolution with a Gaussian kernel;
– an image where pixel data are missing: K is the pointwise multiplication by the characteristic function 1S of the set S of remaining pixels.
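The three damaging operators above can be sketched as follows. This is an illustrative snippet (not from the thesis), treating a grayscale image as a 2-D array and implementing the Gaussian convolution separably with NumPy:

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable convolution with a truncated, normalized 1-D Gaussian
    # kernel (edges handled crudely by zero-padding, for simplicity).
    radius = int(3 * sigma)
    xs = np.arange(-radius, radius + 1)
    k = np.exp(-xs**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

rng = np.random.default_rng(0)
y0 = rng.random((32, 32))                   # ideal image
nu = 0.01 * rng.standard_normal((32, 32))   # additive noise

# K = identity: only additive noise.
y_noisy = y0 + nu

# K = convolution with a Gaussian kernel: blur.
y_blurred = gaussian_blur(y0, sigma=2.0) + nu

# K = multiplication by the indicator 1_S of the observed pixels.
mask = rng.random((32, 32)) > 0.3           # S: pixels that survived
y_masked = mask * y0 + nu
```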
Variational methods (specifically variational minimization) apply to such inverse problems, the solution being defined as the minimizer of the functional
$$\|y - K y_0\|^2 + \alpha^2 J(y_0),$$
where the first term enforces fidelity to the data and $J$ is a regularizing prior weighted by $\alpha$.
The convergence of this numerical resolution method was proven in [Mitiche and Mansouri, 2004]. Three parameters have a direct impact on the speed of convergence and on the precision: the regularization parameter α, the number of iterations of the Jacobi scheme, and the initial values of the two flow components V_HS,1 and V_HS,2 at the beginning of this iteration step. Specifically, α should neither be too small, in order to filter the noise amplified by the differentiation filters applied to y, nor too large, in order to keep V_HS close to V when V ≠ 0.
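The Jacobi iteration mentioned above can be sketched as follows. This is a minimal illustration of the Horn and Schunck scheme, not the implementation used in the thesis: gradients are taken on the first image only and the neighbour average uses a simple 4-point kernel rather than the weighted average of the original paper.

```python
import numpy as np

def horn_schunck(I1, I2, alpha=0.01, n_iter=200):
    """Minimal Horn and Schunck optical flow, Jacobi iteration sketch."""
    Ix = np.gradient(I1, axis=1)   # spatial gradients (per-pixel units)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                   # temporal derivative
    u = np.zeros_like(I1)          # flow initialized at zero
    v = np.zeros_like(I1)

    def avg(f):
        # 4-neighbour average (periodic boundary via np.roll).
        return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                       + np.roll(f, 1, 1) + np.roll(f, -1, 1))

    for _ in range(n_iter):
        u_bar, v_bar = avg(u), avg(v)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

# A horizontal ramp shifted right by one pixel: away from the wrap-around
# seam, the recovered horizontal flow u is close to +1 pixel.
x = np.linspace(0.0, 1.0, 64)
I1 = np.tile(x, (64, 1))
I2 = np.roll(I1, 1, axis=1)
u, v = horn_schunck(I1, I2, alpha=0.01, n_iter=200)
```

Too small an α makes `den` noise-dominated, while too large an α over-weights the smoothing term and drags the flow toward zero, which is the trade-off discussed above.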


Variational method with total variation prior

In their seminal article [Rudin et al., 1992], Rudin, Osher and Fatemi proposed for image denoising the total variation prior, defined as the L1 norm of the image gradient. Its main advantage resides in the fact that edges are better preserved: the smoothing/regularizing effect related to diffusion in the heat equation is attenuated. This prior extends variational methods to non-smooth functions and is well adapted to piecewise constant images (cartoon images). It can, however, sometimes lead to a staircase effect. The main issue with the total variation prior comes from its non-differentiability at 0. A solution to this problem consists in the following approximation
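The approximation itself is cut off in this excerpt; a common choice in the literature (an assumption here, not necessarily the one used in the thesis) smooths the gradient norm with a small parameter ε:

```latex
J_\varepsilon(y) \;=\; \int_\Omega \sqrt{\,|\nabla y|^2 + \varepsilon^2\,}\; d\Omega,
```

which is differentiable everywhere and recovers the total variation prior as ε → 0.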

Reconstruction of the depth

In this chapter, we show how our invariant geometrical flow constraint (1.17) can be put to use in the design of asymptotic non-linear observers: either optical flow or depth measurements, along with the camera dynamics, can be used as observer inputs to incrementally refine the depth estimation. The convergence of these observers, which relies on geometric assumptions concerning the environment and the camera dynamics, is addressed. These observers and the proofs of convergence were presented in [Zarrouati et al., 2012a, Zarrouati et al., 2012c].

Table of contents:

Introduction
Acronyms
I From motion to depth 
1 Camera models and motion equations 
1.1 Camera models
1.1.1 Pinhole model
1.1.2 Other models
1.2 Spherical model: notations
1.3 Motion equations
1.3.1 Spherical model: the camera in a fixed environment
1.3.2 Pinhole-adapted model
2 Formulation as a minimization problem 
2.1 Variational methods applied to optical flow estimation
2.1.1 Optical flow
2.1.2 Variational methods applied to inverse problems
2.1.3 Variational method with Sobolev prior
2.1.4 Variational method with total variation prior
2.2 Variational methods applied to depth estimation
2.2.1 Estimation of depth with Sobolev priors
2.2.2 Estimation of depth by total variation minimization
3 Reconstruction of the depth 
3.1 Asymptotic observer based on optical flow measures
3.2 Asymptotic observer based on depth estimation
4 Implementations, simulations and experiments 
4.1 Simulations
4.1.1 Sequence of synthetic images and method of comparison
4.1.2 Implementation of the depth estimation based on optical flow measures
4.1.3 Implementation of the asymptotic observer based on rough depth estimation
4.2 Experiment on real data
II From depth to motion 
5 Estimation of biases on linear and angular velocities
5.1 Statement of the problem
5.2 Observability of the reference system
5.2.1 Characterization of the observability
5.2.2 Scenes with non trivial stationary motion
5.3 The asymptotic observer
5.3.1 A Lyapunov based observer
5.3.2 Decrease of the Lyapunov function
5.3.3 Existence and uniqueness
5.3.4 Regularity and bounds
5.3.5 Continuity
5.3.6 Convergence
5.4 Practical implementation, simulations and experimental results
5.4.1 Adaptation to a spherical cap
5.4.2 The observer in pinhole coordinates
5.4.3 Simulations
5.4.4 Experiments on real data
5.5 Proof of the well posedness of the observer
5.5.1 Local solutions
5.5.2 Bounds on solutions
5.5.3 Global solutions
5.6 Continuity of the flow
6 The geometrical flow method for velocity estimation 
6.1 The geometrical flow as a conservation law
6.2 Implementation of the method for a pinhole camera model
6.3 Implementation of the method for a pinhole camera model including radial distortion
6.4 Experiments on real data
7 Estimation of biases on angular velocity and acceleration 
7.1 Problem formulation
7.2 Observability
7.3 Observer design
7.4 Asymptotic convergence
7.5 Exponential convergence
7.6 Robustness to noise
7.7 Implementation on real data
III Simultaneous tracking and mapping for augmented reality 
8 Architecture of the original algorithm 
8.1 Overview of the algorithm
8.2 Map initialization
8.3 Pose estimation (tracking)
8.4 Mapping
8.4.1 Keyframe insertion
8.4.2 Bundle adjustment
8.5 Relocalisation
9 Fusion of magneto-inertial, visual and depth measurements 
9.1 Sensors and synchronization
9.1.1 Depth sensor
9.1.2 Magneto-inertial sensors
9.1.3 Synchronization
9.2 Map initialization
9.3 Mapping
9.3.1 Keyframe insertion
9.3.2 Depth data in bundle adjustment
9.4 Pose estimation
9.5 Tracking quality and relocalization
9.5.1 Assessing the visual tracking quality
9.5.2 Relocalization procedure
9.6 Localization and augmented reality
9.6.1 Attitude estimation
9.6.2 Position estimation
10 Experimental results 
10.1 Experimental conditions
10.2 Linear velocity, position and attitude estimation
10.2.1 Translation along the lateral direction
10.2.2 Exploration of an entire room
10.3 Cartography
Conclusion 
Appendix

