Color formation process in digital camera


Manual selection of masks

These approaches require the user to manually select the occluded areas in the images [CPT03] or in the video frames [PSB05]; the pixel information in these areas is then discarded. When background information is missing, the missing parts are usually reconstructed by inpainting methods, which find their sources within a single frame or the whole video.
The search process in many recent inpainting approaches relies on patch similarity, which accounts for both the structural and the textural consistency of the surrounding region. The missing background is completed either gradually from the border to the center [CPT04, WR06, YWW+14, WLP+14], or globally by iteratively minimizing an energy function [Kom06, KT07, NAF+13, LAGM17]. However, the performance of these methods is limited when the gaps to be completed are large and contain varying textures. Artifacts such as disconnected edges, over-smoothing, and blurring may appear in the results [GLM14, Tha15].
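To make the notion of patch similarity concrete, here is a minimal sketch, not the implementation of any of the cited methods: the function names are illustrative, and the sum of squared differences (SSD) is only one of several metrics used in practice. It scores candidate source patches against a target location and keeps the best one:

```python
import numpy as np

def patch_ssd(image, p, q, half):
    """Sum of squared differences between two square patches of side
    2*half+1 centered at p and q (both assumed fully inside the image)."""
    (py, px), (qy, qx) = p, q
    a = image[py - half:py + half + 1, px - half:px + half + 1].astype(np.float64)
    b = image[qy - half:qy + half + 1, qx - half:qx + half + 1].astype(np.float64)
    return np.sum((a - b) ** 2)

def best_source_patch(image, target, candidates, half=4):
    """Return the candidate center whose patch best matches the target patch."""
    return min(candidates, key=lambda c: patch_ssd(image, target, c, half))
```

A real exemplar-based method must additionally ignore the masked pixels of the target patch when computing the distance, and decide in which order the hole is filled.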

Special photographic technology

This section covers methods that rely on special photographic hardware. They distinguish occluders from the background by their distance to the camera, which can be measured by varying the focal length or by stereo estimation. However, all these designs depend on stringent experimental conditions.
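As an illustration of the stereo-based distance cue, the following minimal OpenCV sketch (not a method from the surveyed papers; the file names and the disparity threshold are assumptions, and the input pair is assumed to be rectified) segments near objects, i.e. candidate occluders, by thresholding the disparity map:

```python
import cv2

# Hypothetical rectified stereo pair; occluders are closer to the camera,
# hence they produce larger disparity values.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
# StereoBM returns fixed-point disparities scaled by 16.
disparity = stereo.compute(left, right).astype("float32") / 16.0

# A rough near/far segmentation; the threshold is scene dependent.
occlusion_mask = disparity > 20.0
```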

Chosen pixels and their invariant features by SIFT

Scale-Invariant Feature Transform (SIFT) is a widely used image matching algorithm introduced by D. Lowe in 1999 [Low99]. It is strongly robust to the scene deformations that images undergo under slight changes of viewpoint, which is exactly our working assumption. Although numerous detectors such as SURF [BTVG06] and ASIFT [MHB+10] have been proposed since then, experience shows that SIFT is fast and accurate enough in our case. We therefore apply SIFT to establish the geometric relationship between images. A general description of the algorithm follows.
Keypoints in an image
Not all pixels of an image are suitable for feature matching. The shape of an object may undergo various deformations such as translation, rotation, and scaling. Keypoints are chosen so as to be discriminative and as robust as possible to such changes, including translations, rotations, and blur.
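A minimal sketch of this step using OpenCV's SIFT implementation (the file names are placeholders, and the 0.75 ratio threshold is the conventional value from Lowe's paper, not necessarily the one used in this work):

```python
import cv2
import numpy as np

# Hypothetical file names; any two views of the same scene would do.
img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test: keep a match only if it is clearly better
# than the second-best candidate.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# A homography (reviewed in the next subsection) can then be fitted
# to the matched pixel coordinates, with RANSAC rejecting outliers.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
```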

A short review of projective geometry

The explanation starts with an extension of space. Euclidean space is not a convenient geometric frame for representing the real world as seen by a camera. Its drawback lies in the lack of a definition at infinity, which makes it impossible to explain why parallel lines, such as railway tracks, may intersect at the horizon in an image.
Therefore, a more general representation, projective space, is introduced in computer vision. It is an extension of Euclidean space with an additional definition of points at infinity. Under the projective assumption, there is no distinction between parallelism and intersection, since parallel elements (for example, lines in a 2D space) meet at infinity. As a result, we are able to map every point of the real world to a pixel in the image without worrying about undefined cases. Based on this space, we can derive the geometric properties of the pinhole camera model. It belongs to the category of central projection, where the camera center corresponds to the center of projection in Definition 3.1.


Table of contents:

1 Introduction 
1.1 Challenges
1.2 Overview
2 Related Work 
2.1 Manual selection of masks
2.2 Special photographic technology
2.2.1 Multi-focus
2.2.2 Light field equipment
2.3 Automatic evaluation
2.3.1 Mask detection
2.3.2 Background detection
I Image Alignment 
3 Geometrical Alignment 
3.1 Extraction of matched pixels
3.1.1 Chosen pixels and their invariant features by SIFT
3.1.2 Matching pixels between images
3.2 Homography
3.2.1 A short review of projective geometry
3.2.2 Sample selection and model decision
3.2.3 Reconstruction and interpolation
3.3 Experiments
4 Photometric Adjustment 
4.1 Experimental assumptions
4.2 Color formation process in digital camera
4.2.1 From luminance to raw
4.2.2 From raw to RGB
4.2.3 Color adjustments in RGB space
4.2.4 A short resume
4.3 Color alignment methods
4.3.1 Problem model
4.3.2 Propositions of method
4.3.3 Sample selection
4.4 Experiments
4.4.1 Algorithm details
4.4.2 Estimation
II Fusion Methods 
5 Median Filtering 
5.1 Median and median filter
5.1.1 Median
5.1.2 Median filter
5.2 Possible choices of vector median
5.2.1 Definitions of vector median
5.2.2 Performance prediction
5.3 Experiments
6 Meaningful Clique 
6.1 Feature of background pixels
6.2 Graph and clique
6.3 Clique based algorithm
6.3.1 Dense clique and search
6.3.2 Meaningful clique and iteration
6.4 Experiments
6.4.1 Simulated sequences
6.4.2 Real sequences
7 Jitter Blur Correction 
7.1 Error sources
7.1.1 Problem description
7.1.2 Aberrations
7.1.3 Our judgment
7.2 Existing tools
7.2.1 Lens distortion correction
7.2.2 Patch match
7.3 Combination method
7.3.1 General description
7.3.2 Pipeline
7.3.3 Parameter selection
7.4 Experiments
7.4.1 Attempt on the pipeline
7.4.2 Attempt on the iteration of pipeline
III Experiments 
8 Experimental Performances 
8.1 Data set
8.2 Summary of parameters
8.3 Geometrical alignment
8.4 Photometric correction
8.5 Image fusion
8.6 Combination method
8.7 Conclusion
9 Conclusion 
Source Code
Bibliography

