Boulder Characterization from optimized DTMs 


Comet 67P/Churyumov-Gerasimenko

67P is a short-period comet belonging to the Jupiter family of comets. It is a regular visitor to the inner Solar System, with an orbital period of 6.5 years (Lamy et al., 2007). It was discovered by Klim Ivanovich Churyumov and Svetlana Ivanovna Gerasimenko at the Alma-Ata Observatory, in present-day Kazakhstan, in 1969.
Figure 2: 67P/Churyumov-Gerasimenko. Image Credit: ESA/Rosetta/NAVCAM

Jorda et al. (2016) performed a high-resolution reconstruction of the comet's surface using the stereophotoclinometry (SPC) method developed by Gaskell et al. (2008a) to obtain a model of the comet and characterize its properties. From the bilobed shape model, the volume of the comet was estimated to be 18.8 ± 0.3 km³, with a mean radius of 1.743 ± 0.007 km and a density of 532 ± 7 kg m⁻³. The derived density also gave insight into the very high porosity of the aggregated material, about 70–75%, suggesting the presence of large-scale voids in the internal structure of the nucleus.
Using light curves from observations since 2003, Lamy et al. (2007) deduced that the spin period of the comet varies between 12.4 and 12.7 hours. Later, Mottola et al. (2014) used another set of light curves, from the OSIRIS instrument, obtained during the comet's perihelion passage in 2009. These observations indicated a reduction in the spin period from 12.7 to 12.4 hours. The value then increased, reaching a maximum of 12.43 hours in May 2015, started dropping rapidly just before the perihelion in August 2015, and finally converged post-perihelion to 12.07 hours in December 2015 (Jorda et al., 2016). This variation (Fig. 3) is explained by the thermal model of Keller et al. (2015). It shows a clear trend: the comet's spin period decreases by about 0.3 hours with each perihelion passage, which can be explained by the torque created by the recoil force of the outgassing process.
As seen above, the properties of 67P (summarized in Table 2) have been thoroughly characterized through various studies. However, to identify and understand in depth the processes responsible for the activity on the nucleus, there is a need to study locations where surface changes take place and to explain those changes. The purpose of this study is precisely that. The aim was to study a local section of the surface of 67P, the Khonsu region in the southern/equatorial area, where a large boulder was observed to have moved between images taken before and after perihelion (Fig. 4).
The following sections describe the work done in this study. Section 2 describes the process of 3D reconstruction that was essential to characterize the aforementioned boulder and study the changes. In section 3, the method used to characterize the boulder and its various properties is explained. These measurements were then used to study the dynamic scenario that potentially resulted in the observed changes, described in section 4. Finally, in section 5 the results of this study are presented and discussed, with final conclusions in section 6. We pose the question: "Is it realistic to explain the displacement of the boulder by the outgassing of an ice pocket at its surface?"

3D reconstruction

3D reconstruction methods have been extensively used to obtain accurate models of comets and asteroids and study their properties since the Vega and Giotto spacecraft acquired the first images of comet 1P/Halley. Different methods have been developed and implemented over the years; today they can be separated into two categories: stereophotogrammetry (SPG) and stereophotoclinometry (SPC).
SPG involves extracting the three-dimensional coordinates of a point using multiple image patches taken from different locations. It uses the line of sight to triangulate the coordinates and has been established to produce accurate models, since it deals only with the geometry. It can also work between images of the same location that are geometrically different (through transformations like rotation, translation, etc.). Conversely, SPC, a method developed by Gaskell et al. (2008b), uses the relative intensities of pixels and reference heights from stereo to construct a map representing the elevation of the surface. It uses several images of an area taken from different points of view and under different illumination conditions to perform the reconstruction. This method is restricted by the illumination parameters and the positions of the light source and the observer. In unfavorable illumination conditions, SPC tends to smooth out small topographical features, and areas in shadow are reconstructed poorly. Additionally, a method recently developed by Capanna et al. (2013), known as Multi-resolution PhotoClinometry by Deformation (MPCD) (section 2.1.), combines features of both SPG and SPC. This is the method used in this study to reconstruct the local 3D models.
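The core idea behind photoclinometry is that the brightness of a pixel constrains the local surface slope. A minimal sketch, assuming a simple Lambertian reflectance model (real SPC pipelines use more realistic photometric models; the function name and albedo parameter here are illustrative, not from the original):

```python
import numpy as np

def lambert_intensity(normal, sun_dir, albedo=1.0):
    """Predicted pixel intensity for a facet under a Lambertian model:
    I = albedo * cos(incidence angle between normal and Sun direction)."""
    n = np.asarray(normal, float) / np.linalg.norm(normal)
    s = np.asarray(sun_dir, float) / np.linalg.norm(sun_dir)
    return albedo * max(np.dot(n, s), 0.0)  # shadowed facets return 0

# A facet tilted away from the Sun appears darker than one facing it:
flat   = lambert_intensity([0, 0, 1],     [0, 0, 1])  # -> 1.0
tilted = lambert_intensity([0, 0.6, 0.8], [0, 0, 1])  # -> 0.8
```

Inverting this relation (recovering slopes from observed intensities) is what makes SPC sensitive to the illumination geometry mentioned above.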

MPCD technique

This method works by taking an input 3D mesh (section 2.1.1.) with triangular faces (Fig. 5) and running it through a non-linear optimization process (section 2.1.3.) with the aim of minimizing the chi-square differences between the model and the observed images (section 2.1.2.).
Synthetic images are created from the mesh, and the mesh is deformed until the difference between the observed and synthetic images is minimized. This deformation employs a "full multigrid" multiresolution scheme (Fig. 6), wherein the resolution of the mesh and the pixel scale of the images change through the optimization process. From a resolution level 'R+1', the scheme descends through each intermediate level to the lowest resolution and then moves back up to 'R+1' through the intermediate levels. It then moves up another level to 'R+2', repeating the process with each iteration. Hence the name Multi-resolution PhotoClinometry by Deformation.
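One plausible reading of this visiting order can be sketched as a short schedule generator (the function name and the exact ordering convention are assumptions; Capanna et al. (2013) define the actual scheme):

```python
def fmg_schedule(finest):
    """Full-multigrid visiting order for resolution levels
    1 (coarsest) .. finest: each time a new finest level k is
    reached, descend to the coarsest level and climb back up
    before moving on to level k+1."""
    order = [1]
    for k in range(2, finest + 1):
        order.append(k)                     # reach new finest level k
        order += list(range(k - 1, 0, -1))  # descend to the coarsest
        order += list(range(2, k + 1))      # climb back up to k
    return order

# fmg_schedule(3) -> [1, 2, 1, 2, 3, 2, 1, 2, 3]
```

Each visit to a level corresponds to one round of mesh deformation at that mesh resolution and image pixel scale.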
The MPCD method (displayed in Fig. 7) works in the following steps:
1. The chi-square differences between the synthetic and observed images are computed using a maximum likelihood estimation.
2. A deformation is applied to the mesh in order to minimize the differences.
3. Optimization: the chosen algorithm (section 2.1.3.) maximizes the likelihood.
4. The multiresolution scheme is applied.
Input Model

The MPCD method makes use of a 3D triangular mesh as an input model. This is a low-resolution model that can be obtained from other methods. The 3D mesh of the global shape of the nucleus of 67P was obtained from a colleague, R. Gaskell. Since this study focuses only on a local area, the first step is to extract this portion of the mesh as a square digital terrain model (DTM), which will be referred to as a 'maplet' (Fig. 8).
The extraction of the DTM was done using the tools available in the software "MeshLab" (Cignoni and Ranzuglia, 2011), together with another software package called "ReMESH" (Attene and Falcidieno, 2006) to create uniform subdivisions of the facets in the mesh. The extracted DTM is subject to adjustments based on the number of high-resolution images selected in the next step. The image selection (section 2.1.2.) happens in such a way that any images which do not contain the entire DTM area are rejected. Hence, a balance must be struck between the size of the DTM and the number of high-resolution images, which can lead to the gain or loss of a level of resolution.

Image Selection

As discussed above, the MPCD reconstruction process requires both an input model and observed images. To acquire the images showing the area, a software tool called 'SUMXSEL' was used. The software takes the extracted DTM as an input, which provides the Sun-comet and comet-spacecraft vectors as well as the rotation matrix from the camera frame to the comet frame for the OSIRIS images. SUMXSEL then uses this information to compute the emission, phase and incidence angles of all the facets of the DTM for each image. The favorable ranges of these angles for the reconstruction process are set beforehand in a parameter file, along with the start and end dates between which the images taken are considered. As the software goes through the images falling within the time range, it selects those that fulfil all the conditions and creates a mask of the DTM area on the image (Fig. 9).
The selection process can return more than 500 images out of the roughly 30000 available OSIRIS narrow-angle and wide-angle camera (NAC and WAC) images, depending on the conditions and the DTM area. These first-selection images can be projected on a plot of incidence angle versus emission angle (Fig. 10), where they are represented by the red dots.
As can be seen, many of these images lie very close together. These are images that were taken by the spacecraft within small time intervals and therefore have very similar geometric conditions. Since such images do not contribute any additional information to the reconstruction process and only increase the computation time, the software rejects images with very similar geometric conditions and keeps the ones with a higher resolution (Fig. 11). This second step returns a maximum of 40 images. The final selected images are then sorted by the software into different levels based on their resolution.
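The geometric filtering described above can be sketched as follows; the threshold values and the function names are hypothetical stand-ins for what the SUMXSEL parameter file would specify:

```python
import numpy as np

def facet_angles(normal, sun_dir, cam_dir):
    """Incidence, emission and phase angles (degrees) for one facet,
    given vectors toward the Sun and the camera."""
    n = normal / np.linalg.norm(normal)
    s = sun_dir / np.linalg.norm(sun_dir)
    c = cam_dir / np.linalg.norm(cam_dir)
    inc = np.degrees(np.arccos(np.clip(np.dot(n, s), -1, 1)))
    emi = np.degrees(np.arccos(np.clip(np.dot(n, c), -1, 1)))
    pha = np.degrees(np.arccos(np.clip(np.dot(s, c), -1, 1)))
    return inc, emi, pha

def keep_image(inc, emi, pha, inc_max=70, emi_max=70, pha_range=(10, 120)):
    """Accept an image only if the angles fall in the favorable ranges
    (illustrative thresholds, set in a parameter file in practice)."""
    return inc < inc_max and emi < emi_max and pha_range[0] < pha < pha_range[1]
```

In the real pipeline these angles are evaluated per facet for every candidate image, and near-duplicate geometries are additionally pruned in favor of the higher-resolution image.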
The reconstruction process depends strongly on this image-selection step, since a good distribution of geometric conditions ensures that the area has been observed from various points of view and that more information is available for a better reconstruction. However, this is not always the case: depending on the area of interest, the extraction of the DTM, or the OSIRIS images available, the final selection from SUMXSEL might not be very promising. Such a case is shown in Fig. 12.
Once this automated step is complete, a final manual inspection is needed. This step removes any images that made it through the final selection of the SUMXSEL software but possess instrumental errors or post-processing issues, such as:
Shadows cast on the area to be reconstructed by external topography (Fig. 14(a))
Blurring of the image
Saturation of the image (Fig. 14(b))
In some cases an image might be valuable because of its high resolution, making it essential for a good reconstruction, yet contain a bright spot due to specular reflection that would introduce artifacts into the final reconstruction. This can be dealt with by creating a mask that removes the bright spot (Fig. 15), so that all information except the bright spot is taken into account during the MPCD reconstruction.
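Such a mask can be built with a simple intensity threshold; a minimal sketch (the threshold fraction is an assumption, not a value from the original):

```python
import numpy as np

def bright_spot_mask(image, threshold=0.95):
    """Boolean mask: True for valid pixels, False where the image is
    saturated or specular.  Masked-out pixels are simply excluded from
    the chi-square sum instead of discarding the whole image."""
    return image < threshold * image.max()

img = np.array([[0.2, 0.3],
                [0.9, 1.0]])
mask = bright_spot_mask(img)  # only the saturated pixel (1.0) is excluded
```

This preserves the high-resolution information in the rest of the frame while preventing the bright spot from pulling the optimization toward spurious topography.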
This step of manual selection must be done with caution for two reasons:
To ensure that there are at least 12–15 images for the MPCD algorithm to reconstruct the model accurately.
To ensure no fewer than 3 images at the last resolution level, as these images most affect the accuracy of the model.
After this step, a final list of images (Fig. 16) is obtained, free of any features that may hinder the reconstruction process. As seen below, images that were either covered in shadow from external topography or saturated have been removed.


Optimization

Once the image selection is done, we can move on to the optimization. For this step, we used the computing cluster at LAM, which allows the algorithm to run in parallel and reduces the computation time.
The optimization process uses a maximum likelihood estimation to minimize the difference between the synthetic images (created from the 3D mesh) and the observed images. The function F that is minimized can be written as:

F(p_k) = Σ_i Σ_j [O_ij − S_ij(p_k)]²

F is the non-linear maximum likelihood function of the free parameters (p_k), and it quantifies the differences between the observed images (O) and the synthetic images (S). These differences are used to apply the deformation of the mesh by shifting the vertices along their normals. To ensure that the deformations produce accurate topographic features and do not introduce artifacts, it is important that the deformation of one vertex does not directly influence the deformation of its surrounding vertices. For this reason, a smoothness parameter (SC) is applied during the maximum likelihood estimation, which forces the slopes of the surrounding facets to remain close to each other. The smoothness parameter is computed for each vertex from the normals of the facets which share that vertex.
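One way such a slope-coherence term could look is sketched below; the exact functional form used in MPCD is defined in Capanna et al. (2013), so the penalty here (deviation of facet normals from their mean direction) is an illustrative assumption:

```python
import numpy as np

def smoothness_penalty(facet_normals, shared):
    """Penalty for one vertex: how far the unit normals of the facets
    sharing that vertex (indices in 'shared') deviate from their mean
    direction.  Zero when all facets are coplanar."""
    n = facet_normals[shared]
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    mean = n.mean(axis=0)
    mean = mean / np.linalg.norm(mean)
    # sum over facets of (1 - cos(angle to the mean normal))
    return float(np.sum(1.0 - n @ mean))
```

A term like this, summed over all vertices and weighted by SC, discourages a single vertex from moving independently of its neighbors.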
Based on these factors, the deformation of the 3D mesh is performed by changing the height of each vertex along a normal vector, calculated for each vertex as the average of the normals of the facets sharing that vertex. The optimization process applies a non-linear method based on a quasi-Newton large-scale minimization algorithm, L-BFGS-B (Morales et al., 2011). Although other methods are available, Capanna et al. (2013) found L-BFGS-B to be the best method for minimizing the function. One caveat of this method is that it will only converge to the global minimum if the function being minimized is convex.
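The overall structure of such a minimization can be illustrated with SciPy's L-BFGS-B implementation on a toy stand-in for the MPCD objective (the observed "image", the curvature penalty and its weight are all invented for the sketch; the real objective renders synthetic images from the mesh at each step):

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: recover vertex "heights" h that reproduce an observed
# profile, with a quadratic smoothness (curvature) term standing in
# for the SC regularization.
h_true = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
observed = h_true  # in MPCD this would be observed pixel intensities

def objective(h, sc=0.1):
    chi2 = np.sum((observed - h) ** 2)          # data misfit
    smooth = sc * np.sum(np.diff(h, 2) ** 2)    # curvature penalty
    return chi2 + smooth

res = minimize(objective, x0=np.zeros(5), method="L-BFGS-B")
```

Because this toy objective is convex, L-BFGS-B converges to its global minimum; the multiresolution scheme described next is what makes the method robust when the real objective is not.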
Finally, the multiresolution scheme (Fig. 6) is applied. This helps overcome a key obstacle: the function may not always be convex, in which case the optimization converges only to a local minimum rather than the global one. The scheme allows the minimization algorithm to escape local minima by first using the low-resolution images to recreate the larger features and then moving to higher resolutions to bring out finer features, eventually reaching the global minimum.
Taking all the above factors into consideration, the optimization process is applied to the DTM. After each optimization iteration, an image simulator tool called OASIS (Optimized Astrophysical Simulator for Imaging Systems), developed by Jorda et al. (2010), is used to generate a set of synthetic images from the shape model. This complete MPCD optimization process was applied to the section of the comet containing the boulder shown in Fig. 8. The site was reconstructed for two different time periods, one before perihelion and one after (Fig. 17), which allows us to study the evolution.
Figure 17: Top left: DTM of pre-perihelion boulder location, Top right: DTM of post-perihelion boulder location, Bottom left: Reconstructed DTM (Pre-perihelion), Bottom right: Reconstructed DTM (Post-perihelion), Boulder position marked with arrow, original location marked with X.
We noticed that the first reconstruction, with a smoothness parameter of 750, was creating high-frequency artifacts (which look like folds or waves on the surface) in the output model. To reduce these, the smoothness parameter had to be changed, which would also change the chi-square value. To constrain the optimum value of the smoothness parameter, for which the chi-square value would be lowest, we tried the reconstruction with several values, tracking the variation of the chi-square value with the smoothness parameter (Fig. 18).
It can be seen from the plot that for a smoothness of ~4000 we obtain the best final chi-square value. The reconstruction of the DTM was performed again with this optimum smoothness value (Fig. 19(b)), which was then kept constant for the reconstruction of the other DTMs. The reconstructed DTMs are presented in Table 3.
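The parameter sweep amounts to evaluating the final chi-square for each candidate smoothness and keeping the argmin; a minimal sketch, where the chi-square curve is a made-up stand-in for the expensive reconstructions behind Fig. 18:

```python
import numpy as np

def best_smoothness(values, chi2_of):
    """Run (or look up) the final chi-square for each candidate
    smoothness value and return the one with the lowest chi-square."""
    chi2 = [chi2_of(sc) for sc in values]
    return values[int(np.argmin(chi2))]

# Toy chi-square curve with a minimum near 4000 (illustrative only;
# the real values come from full MPCD reconstructions).
curve = lambda sc: (np.log10(sc) - np.log10(4000)) ** 2 + 1.0
sweep = [750, 1500, 3000, 4000, 6000, 12000]
best = best_smoothness(sweep, curve)  # -> 4000
```

In practice each evaluation is a full reconstruction run, so the sweep is kept coarse.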
We see that the reconstruction with the optimum smoothness value produces far fewer artifacts and gives a much better chi-square value. However, because of the area contained in the DTM itself, the reconstruction was still not ideal. In particular, because of the presence of the big rock (jokingly called the 'hotdog' rock), the MPCD algorithm could not refine the smaller features of the boulder accurately. To tackle this, the DTM was further modified to contain only the area where the boulder was situated, and the reconstruction was repeated for these DTMs (Fig. 20).

Table of contents :

1. Introduction 
1.1. Rosetta
1.2. Comet 67P/Churyumov-Gerasimenko 
2. 3D reconstruction
2.1. MPCD technique
2.1.1. Input Model
2.1.2. Image Selection
2.1.3. Optimization
3. Boulder Characterization from optimized DTMs 
3.1. Position Estimation
3.2. Dimension Estimation
3.3. Volume Estimation from DEM
4. Dynamic Models 
4.1. Light received by local area
4.2. Surface Temperature
4.3. Water production rate
4.4. Non-gravitational acceleration
5. Results
5.1. Scenario 1 – Periodic Bursts
5.1.1. Movement of boulder
5.2. Scenario 2 – Single Outburst
5.2.1. Movement of boulder
5.3. Minimum surface fraction required
5.4. Summary
6. Conclusions
7. Bibliography
