Robustness of plasmonic filters to process variations and Front-End fabs limitations


Case of back-side illumination

The image sensor market increasingly demands higher-resolution sensors at a competitive price, which implies a high fabrication throughput. Yet, with the constant shrinking of pixel size, less light is captured in each pixel, which deteriorates the performance of the sensors, especially in low-light conditions. The most widespread structure, presented in Figure 1.3, is known as front-side illumination (FSI). We have seen that in this case the fill factor of the active area is small (below 50 %) because the metal wiring and the transistors located between the filters and the photodiodes take up space and hinder light propagation.
An innovative alternative, called back-side illumination (BSI), was thus introduced by Sony in 2006 [10]. The principle of this structure is to place the photodiodes directly under the filters and to wire the transistors beneath the photodiodes, where they do not hinder light propagation. A description of the BSI principle is given in Figure 1.8. In practice, BSI and FSI sensors share the same elements, but FSI sensors are fabricated sequentially on the front of the silicon wafer, hence their name. Conversely, in the BSI case, after the photodiodes and then the metal interconnections have been realized, the wafer is flipped and thinned so that the filters and the microlenses can be fabricated directly above the photodiodes, on the back side of the wafer. This Sony development was made on SOI and demonstrated an increase in the quantum efficiency of the sensor (e.g. a 38 % increase at 540 nm) as well as a significant improvement of the angular response compared with the conventional FSI solution, since the interconnections are no longer in the optical path [11]. In 2007, STMicroelectronics and CEA Leti-MINATEC also demonstrated a BSI solution on SOI with a high level of miniaturization (1.45 µm pixel size) and a very low noise level [12]. In our case, BSI structures could provide interesting performance improvements, but they are somewhat more expensive to fabricate.

Image reconstruction

After the electrical charges have been collected and transferred, the corresponding voltage at each pixel's output (CMOS) or each pixel column's output (CCD) is digitally sampled. This analog-to-digital conversion allows for easier processing of the captured image and can be performed directly on the chip with its logic elements. The first processing step applied to the raw image is the offset correction, which consists in removing the spatial noise by subtracting a black image, obtained with optically shielded pixels, from each color plane. Afterwards, three main processing steps lead from the collected signal to the final image: color interpolation, white balance and color correction. Additional steps can be added for image correction and for displaying the image on a screen.
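The offset correction described above can be sketched as follows (a minimal NumPy illustration, not the on-chip implementation; the black level measured on the shielded pixels is assumed here to be a single value per color plane):

```python
import numpy as np

def offset_correction(raw, black_level):
    """Subtract the black reference (mean signal of the optically
    shielded pixels) from a raw color plane, clipping at zero so
    noise cannot produce negative signal values."""
    corrected = raw.astype(np.int32) - black_level
    return np.clip(corrected, 0, None).astype(raw.dtype)

frame = np.array([[70, 80], [75, 90]], dtype=np.uint16)
print(offset_correction(frame, black_level=64))  # -> [[ 6 16] [11 26]]
```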

Color interpolation

Since each pixel only detects one color, the sensor signal does not contain any color information and the raw image is thus monochrome (blue, green and red pixels each return a single value related to their respective number of collected photons, which leads to greyscale information: white for a high signal level and black for a theoretically null signal). As we have seen with the trichromatic invariance, a color is defined by its combination of the three primary colors. In order to display a colored image on a screen, each pixel thus has to contain not one but three values, corresponding to its red, green and blue components. For example, a red pixel only contains its red component, but lacks the blue and green information needed to reproduce the actual color of the scene.
The missing information is determined by the color interpolation. This algorithm interpolates the color information of the closest neighboring pixels, as shown in Figure 1.9. The most common method for a Bayer matrix is bilinear interpolation, which simply computes each missing component as the mean of the corresponding components of the surrounding pixels. Each color is generally encoded in 8 bits, which corresponds to 256 shades (the human eye can distinguish about 250 shades). Each pixel is thus encoded in 24 bits after the color interpolation step (8 bits per primary color), which allows the restitution of more than 16 million colors.
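Bilinear interpolation of a missing color plane can be sketched as below (a simplified NumPy illustration for the green plane of a Bayer mosaic only; for brevity, edge pixels wrap around via `np.roll`, which a real pipeline would handle explicitly):

```python
import numpy as np

def bilinear_demosaic_green(raw, green_mask):
    """Fill in the green plane of a Bayer raw image: keep the measured
    value at green pixels, and at red/blue pixels take the mean of the
    four green neighbors (bilinear interpolation)."""
    green = np.where(green_mask, raw, 0).astype(float)
    weight = green_mask.astype(float)
    nbr_sum = np.zeros_like(green)
    nbr_cnt = np.zeros_like(green)
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        nbr_sum += np.roll(np.roll(green, dy, axis=0), dx, axis=1)
        nbr_cnt += np.roll(np.roll(weight, dy, axis=0), dx, axis=1)
    interp = np.divide(nbr_sum, nbr_cnt,
                       out=np.zeros_like(green), where=nbr_cnt > 0)
    return np.where(green_mask, raw, interp)

# GRBG-style checkerboard of green pixels, all sensing the value 100:
mask = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                 [1, 0, 1, 0], [0, 1, 0, 1]], dtype=bool)
raw = np.where(mask, 100, 0)
print(bilinear_demosaic_green(raw, mask)[1, 2])  # -> 100.0
```

The same neighbor-averaging is applied to the red and blue planes, with masks and neighborhoods adapted to their sparser Bayer positions.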

White balance

The term “light source” does not provide any quantifiable information regarding the spectral power distribution illuminating the scene (i.e. the amount of energy received at each wavelength). For colorimetric considerations, we rather speak of an illuminant, which refers to a mathematical spectral power distribution instead of solely the nature of the source (sun, lamp, etc.) (Figure 1.10). An illuminant may represent an existing light source or only a theoretical spectral power distribution. The ‘Commission Internationale de l’Eclairage’ (CIE) defined a set of reference illuminants that will be used later in this thesis. Since no illuminant has a constant spectral distribution, the spectrum of the illuminant illuminating the scene has a substantial impact on the way colors are sensed by the pixels. Furthermore, the sensor does not have the same sensitivity for each color. To correct these effects, we apply a correction called the white balance, whose purpose is to ensure that a white object in the scene will indeed appear white after correction. The correction is performed with a diagonal matrix, given in Eq. 1.4.
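Since the white-balance matrix of Eq. 1.4 is diagonal, the correction reduces to one multiplicative gain per channel. A minimal sketch, with illustrative gain values referenced to the green channel (the numbers are assumptions for illustration, not taken from the thesis):

```python
import numpy as np

def white_balance(rgb, gains):
    """Apply a diagonal white-balance correction: each channel is
    scaled by its own gain so that a white patch comes out with
    equal R, G and B values."""
    return np.asarray(rgb) * np.asarray(gains)

# A white patch sensed as (180, 200, 160) under the current illuminant
# is mapped back to equal channels by gains referenced to green:
patch = np.array([180.0, 200.0, 160.0])
gains = 200.0 / patch
print(white_balance(patch, gains))  # all three channels equal 200 (up to rounding)
```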

Color correction

The filters are not perfect, and a portion of photons with undesired wavelengths can still reach the photodiode of a given pixel during exposure (spectral cross-talk). Photons can also reach neighboring photodiodes because of reflections and diffractions occurring along the optical path (optical cross-talk). Similarly, after the photo-conversion, some electrons can diffuse towards neighboring photodiodes (electrical cross-talk). These phenomena induce additional color errors in the image restitution: for example, red pixels would still capture photons even if there were no red in the scene. To compensate for these effects, a color correction is applied to the three color planes with a color correction matrix (CCM), given in Eq. 1.5.
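Applying a CCM as in Eq. 1.5 amounts to a 3×3 matrix product per pixel. The sketch below uses hypothetical matrix values chosen only to illustrate the structure: the negative off-diagonal terms subtract the cross-talk leaking between color planes, and each row sums to one so that grey levels are preserved:

```python
import numpy as np

def color_correction(rgb, ccm):
    """Apply a 3x3 color correction matrix to an RGB triplet
    (or an N x 3 array of pixels): corrected = CCM @ rgb."""
    return np.asarray(rgb) @ np.asarray(ccm).T

# Hypothetical CCM values for illustration (rows sum to 1):
ccm = np.array([[ 1.6, -0.4, -0.2],
                [-0.3,  1.5, -0.2],
                [-0.1, -0.4,  1.5]])
grey = np.array([100.0, 100.0, 100.0])
print(color_correction(grey, ccm))  # grey stays grey (approximately [100, 100, 100])
```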

Main figures of merit

We provide below the definitions of the main electrical and optical parameters of an image sensor that will be used throughout this thesis to quantify the performance of the filtering structures presented later. The reader can refer to the thesis of Arnaud Tournier [4] for further information on the various noise sources in image sensors, and to the thesis of Clemence Jamin [9] for an extensive treatment of optical performance and photometry.
Dark current: the current generated by the sensor in the absence of incident light. It is produced by charge generation in different areas of a photodiode or a MOS capacitor: thermal energy absorbed by the material can create electron-hole pairs that are collected and added to the signal. The dark current is very sensitive to morphological defects of the silicon and to temperature. It is generally given in e−/s.
Quantum efficiency (QE): the ratio of the number of collected electrons to the total number of incident photons. The contributions that decrease the QE of a pixel are the fill factor, the reflection or absorption of light along the optical path, and the recombination of photo-generated electrons.
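As a simplified model (an assumption for illustration, not the full physics), the QE can be seen as the product of the loss terms listed above:

```python
def quantum_efficiency(fill_factor, optical_transmission, collection_efficiency):
    """Simplified decomposition of QE as a product of loss terms:
    the geometrical fill factor, the fraction of light surviving
    reflection/absorption in the optical path, and the fraction of
    photo-generated electrons collected before recombining."""
    return fill_factor * optical_transmission * collection_efficiency

# FSI-like example: 45 % fill factor, 80 % optical transmission,
# 90 % collection efficiency -> QE of roughly 0.32
print(quantum_efficiency(0.45, 0.80, 0.90))
```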
Market trends

The market for ambient light sensors (ALS) has experienced significant growth since 2007, notably thanks to their first integration in smartphones with the first iPhone. The global ALS revenue is about to reach 1 000 M$ within the next two years (Figure 1.12(a)). In the chart, a distinction is made between conventional ALS and what we will call here RGB-ALS. Conventional ALS are the sensors we will focus on throughout this study; we will see in the following sub-sections that they consist of a single “green” photodiode, whereas RGB-ALS combine red, green, blue and potentially infrared photodiodes. The purpose of RGB-ALS is the same as that of conventional ALS, but they also allow for the white balance of displays: the use of three color photodiodes brings more information on the color spectrum of the illuminant, and the sensor’s output can thus be used to adjust the display’s backlighting to the color temperature of the ambient light. Figure 1.12(a) shows that RGB-ALS tend to grow at the expense of conventional ALS. It is likely, however, that the market shares of the two types of sensors will quickly reach a steady state, since RGB-ALS tend to be used in mid- to high-end devices, whereas conventional ALS remain a cheaper and sufficient solution for low- to mid-end handsets. The chart takes into account both single ALS and combined ALS solutions, i.e. ALS combined with one or two additional functions (proximity sensor, etc.) on the same chip (see multi-sensing in Figure 1.1).
As previously mentioned, the use of ALS has skyrocketed thanks to their integration in smartphones: nowadays, wireless communication accounts for 85 % of the global ALS revenue (Figure 1.12(b)). Consumer electronics have also shown interest in ALS for laptops, tablets, PDAs, TVs, monitors, cameras and even high-end white goods such as washing machines or fridges.


Table of contents :

1 Introduction to CMOS imaging applications and spectral filtering
1.1 CMOS image sensors
1.1.1 Principle of image sensing
1.1.2 Structure of a CMOS image sensor
1.1.2.1 Microlenses
1.1.2.2 Color filters
1.1.2.3 Photodiodes: the photoelectric effect
1.1.2.4 Case of back-side illumination
1.1.3 Image reconstruction
1.1.3.1 Color interpolation
1.1.3.2 White balance
1.1.3.3 Color correction
1.1.4 Main figures of merit
1.2 Ambient Light Sensors
1.2.1 Background
1.2.2 Market trends
1.2.3 Principle
1.2.4 Structure and integration
1.3 Review on spectral filtering solutions
1.3.1 Organic resists
1.3.2 Interference filters
1.3.3 Diffractive filters
1.4 The plasmonic alternative
1.4.1 Electromagnetic waves propagation
1.4.2 Optical properties of metals
1.4.3 Surface plasmon resonances
1.4.3.1 Surface plasmon polaritons
1.4.3.2 SPP excitation conditions
1.4.3.3 Plasmons properties in the visible range
1.4.3.4 Localized plasmon resonances
1.5 Thesis objectives: the challenges of plasmonic filtering
2 CMOS integration of nanostructures for plasmonic filtering
2.1 Method for optical simulations: RCWA
2.1.1 Numerical methods for periodic structures
2.1.2 The RCWA principle
2.1.3 Information about the code used
2.2 Performance evaluation tools
2.2.1 Ambient Light Sensors
2.2.1.1 The global ALS sensitivity
2.2.1.2 The performance criteria
2.2.2 RGB image sensors
2.3 Plasmonic structures for spectral filtering
2.3.1 Metal-Insulator-Metal structures
2.3.1.1 Generalities on MIM arrays
2.3.1.2 Material for the dielectric core
2.3.1.3 Structures investigation
2.3.1.4 Angular stability
2.3.2 Metallic patch arrays
2.3.2.1 Introduction to metallic patch arrays
2.3.2.2 Square patches
2.3.2.3 Cruciform patches
2.3.3 Hole arrays in a metallic layer
2.3.3.1 Extraordinary Optical Transmission in subwavelength hole arrays
2.3.3.2 Impact of the material properties
2.3.3.3 Square arrays of holes
2.3.3.4 Hexagonal arrays of holes
2.3.3.5 RGB filters
2.3.4 Conclusion on plasmonic structures
2.4 Optimal integration solution for plasmonic ALS
2.4.1 Integration level of plasmonic filters
2.4.2 Location of the metallic layer
2.4.3 Distance of the filters to the passivation layers
2.5 Conclusion on Chapter 2
3 Evaluation of plasmonic filters for Ambient Light Sensors
3.A.1 Review on the impact of geometrical parameters
3.A.1.1 Metal thickness hm
3.A.1.2 Effect of the a/P ratio
3.A.1.3 Shape factor b/a
3.A.1.4 Intercrosses distance d
3.A.2 Illustration with color filters
3.A.2.1 RGB filters selection
3.A.2.2 Color errors and illumination conditions
3.A.3 Conclusion on optimal design rules
3.B.1 Positioning of plasmonic filters
3.B.1.1 Evaluation of plasmonic filters performance
3.B.1.2 Angular tolerance
3.B.1.3 Other potential ALS structures
3.B.2 Noble metals alternative
3.B.2.1 Silver-based filters
3.B.2.2 Gold-based filters
3.B.3 Conclusion on ALS performances
3.B.4 Conclusion on Chapter 3
4 Robustness of plasmonic filters to process variations and Front-End fabs limitations
4.1 Identification of the process limitations
4.1.1 Simplified process route
4.1.2 Expected hard points
4.2 Materials-related dispersions
4.2.1 Thicknesses and refractive index dispersions
4.2.2 Metal oxidation
4.3 Lithography inaccuracies
4.3.1 Rounding of the crosses
4.3.2 Impact of rounded corners on angular stability
4.3.3 Variations of the apertures dimensions
4.4 Filter nanostructuration
4.4.1 RCWA modeling of etching slopes
4.4.2 Impact of sloped profiles
4.5 Holes filling
4.5.1 Importance of the dielectric surrounding
4.5.2 Impact of air gaps during holes filling
4.6 Conclusion on Chapter 4
5 Realization and characterization of ALS plasmonic filters
5.1 Experimental conditions
5.1.1 Integration constraints and limitations
5.1.2 Process route description
5.1.3 Designs selection and Electron-Beam Lithography bases
5.2 Development of a wafer-level integration
5.2.1 Materials and interfaces quality
5.2.2 Electron-beam lithography and corrections
5.2.3 Aluminum nanostructuration
5.2.4 Holes filling
5.2.5 Validation of the bonding on glass wafers
5.3 Evaluation of the fabricated ALS filters
5.3.1 Responses and performance at normal incidence
5.3.2 Angular tolerance of the filters
5.4 Conclusion on Chapter 5

