The COSMIC-DANCe project
The COSMIC DANCe project (DANCe standing for Dynamical Analysis of Nearby ClustErs) started as a survey to map nearby (< 500 pc), young (< 500 Myr) associations and open clusters (Bouy et al. 2013). With wide-field, ground-based images we can detect objects several orders of magnitude deeper than Gaia and study the least massive objects, down to a few Jupiter masses. Additionally, the stars in the youngest star-forming regions are still embedded in their molecular cloud, where the extinction is high and Gaia, which operates in the visible, is mostly blind. In contrast, the DANCe survey combines images in the visible and the IR, the latter being especially suited for regions with a significant amount of extinction. Figure 1.7 shows a composite image of the central part of IC 4665, where a large number of faint sources fall below the Gaia detection limit but are clearly detected in the DANCe survey.
The DANCe survey analyses a massive amount of wide-field, ground-based images (several thousands, depending on the region) in the optical and IR to provide catalogues of proper motions and multi-filter photometry for millions of sources. The coverage of the DANCe survey is far from uniform, with a spatially uneven pattern which also depends on the photometric filter (see e.g. Fig. 4.1 and Fig. 1 from Bouy et al. 2013). In consequence, the magnitude limit and completeness in each band strongly depend on the region under study and the amount of data available. Something similar happens with the proper motions and, in this case, it is of the utmost importance to have images with a long time baseline to achieve good precision. In both cases (photometry and proper motions) the selection function is very complex and we did not attempt to characterise it. The typical uncertainties in proper motions in DANCe are of ≲ 1 mas yr⁻¹.
The main goal of the DANCe project is to study the mass function down to the least massive objects. Besides the observational challenges related to the detection of the faintest objects, the membership classification is a major difficulty and is a classical example of a causality dilemma: we need to find, at the same time, the members that define a cluster and the cluster properties. A first attempt to solve this issue was presented by Sarro et al. (2014). Combining this new algorithm with the DANCe catalogue for the Pleiades, the authors increased the number of known members by 40%. Nevertheless, the algorithm presented two major drawbacks. First, the sources with missing data (i.e. sources which are not observed in all the photometric bands) could not be used to model the cluster, although a final membership probability was eventually computed for them. This can introduce biases in the model since, typically, the missing data are not randomly distributed but are more frequent among the faint sources. Second, the desired degree of completeness or contamination is a free parameter which has to be set at the beginning by the user. Nevertheless, Sarro et al. (2014) showed that the results obtained were not significantly affected by that parameter. To overcome these difficulties, Olivares et al. (2018) developed a method based on hierarchical models. In this framework, the observational uncertainties, correlations, and missing data are better modelled. Besides, it is a fully Bayesian model which provides posterior distributions for the parameters of the model (e.g. the luminosity function) and no longer needs a fixed completeness/contamination rate. With this new algorithm, also applied to the same observations of the Pleiades, the authors found 10% new members. The main drawback is that this method is significantly more computationally expensive.
Compilation of wide-field images
We made an effort to compile the most complete dataset in each of the regions we study. We aimed to have a large spatial coverage, so that we can also study the outskirts of the association, and a large time coverage, to achieve the best precision possible in proper motions. For that, we combined our own observations with all the images found in various public archives.
New wide-field images
The DANCe project started in 2011 (Bouy et al. 2011) and, since then, observing proposals have been accepted at several observatories:
– The Dark Energy Camera (DECam) mounted on the 4 m Blanco telescope at the Cerro Tololo Inter-American Observatory (CTIO).
– The Wide Field Camera (WFC) mounted on the Isaac Newton Telescope (INT).
– The MegaCam from the Canada-France-Hawaii Telescope (CFHT).
– The Wide-field InfraRed Camera (WIRCam) from the CFHT.
– NEWFIRM, mounted on the 4 m Mayall telescope at the Kitt Peak National Observatory (KPNO). This telescope is a twin of the Blanco telescope.
– The SuprimeCam from Subaru.
– The Hyper Suprime-Cam (HSC) from Subaru.
All the observations were carried out in dithering mode, with sequences of overlapping exposures at slightly shifted positions on the sky. This technique facilitates the elimination of cosmic rays and other random sources of noise present in the images. It is also fundamental to derive an accurate distortion map for the instrument, using the overlapping plate method described in Bouy et al. (2013). I personally participated in or led the observations at the telescope during the nights of 28–30 December 2017 with DECam at CTIO, and 15–20 June 2019 and 5–11 November 2019 with the WFC at the INT.
Probability threshold from synthetic data
Once the model has converged, all the sources in the input catalogue have a membership probability. To select members and estimate the mass function, we thus need to define a membership probability threshold. The intuitive threshold at 50% could be highly non-optimal in terms of contamination and completeness. To better assess the completeness and contamination rate as a function of membership probability, we generated a synthetic dataset from the model learnt with the observed data. Therefore, it has similar properties to the observed data (e.g. missing values, frequency of members, uncertainties). As a consequence, the results derived from the synthetic dataset are only valid for the representation space used and the learnt model. We refer to Olivares et al. (2019) for the details on how this synthetic dataset is generated.
We used this synthetic dataset to analyse the goodness of our classification and to choose the optimum probability threshold, popt, used for the final classification, based on the contamination and completeness rates. The optimum threshold of course depends on the scientific goal behind the membership analysis. In our case, and in order to study the mass function, we are interested in reaching a compromise between the contamination and the completeness. To this end, we chose as popt the value that minimises the distance to the perfect classifier (DST). This distance is defined in terms of the contamination rate (CR) and the true positive rate (TPR), which in turn depend on the confusion matrix: true positives (TPs), false positives (FPs), false negatives (FNs), and true negatives (TNs). These indices are defined as follows: CR = FP / (FP + TP), TPR = TP / (TP + FN), and DST = sqrt(CR² + (1 − TPR)²).
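As an illustration, the selection of popt from a labelled synthetic set can be sketched as follows. This is a minimal sketch, not the published pipeline: the function names and the probability grid are ours.

```python
import math

def distance_to_perfect(true_labels, probs, threshold):
    """Distance to the perfect classifier (CR = 0, TPR = 1) for a
    given membership probability threshold."""
    tp = sum(1 for t, p in zip(true_labels, probs) if t and p >= threshold)
    fp = sum(1 for t, p in zip(true_labels, probs) if not t and p >= threshold)
    fn = sum(1 for t, p in zip(true_labels, probs) if t and p < threshold)
    cr = fp / (fp + tp) if (fp + tp) else 0.0          # contamination rate
    tpr = tp / (tp + fn) if (tp + fn) else 0.0         # true positive rate
    return math.sqrt(cr**2 + (1.0 - tpr)**2)

def optimal_threshold(true_labels, probs, grid=None):
    """Threshold p_opt minimising the distance to the perfect classifier,
    searched over a simple grid of candidate thresholds."""
    grid = grid or [i / 100 for i in range(1, 100)]
    return min(grid, key=lambda thr: distance_to_perfect(true_labels, probs, thr))
```

For a well-separated synthetic set (members with high probabilities, contaminants with low ones) the minimum DST reaches zero and any threshold between the two groups is optimal; with realistic overlap, the chosen popt trades contamination against completeness.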
Towards the mass function
In this section, we describe our strategy to obtain the mass function from the observables. First, we obtained absolute magnitudes from the apparent magnitudes and a distance estimate. Then, we inferred the luminosity and mass from evolutionary models. Finally, we used individual masses to derive the mass distribution of the region.
From apparent to absolute magnitudes
The conversion from apparent (m) to absolute (M) magnitudes involves the distance (d) and extinction (Am) towards each source: M = m − 5 log10(d[pc]) + 5 − Am (2.3).
The term μ = 5 log10(d[pc]) − 5 is referred to as the distance modulus and accounts for the difference in brightness (measured in units of magnitude) caused by the distance of the source. Since individual measurements of the extinction are not available for all the sources, we included it as a free parameter in the next step, where we use the absolute magnitudes to infer the luminosity and mass of each source.
Individual parallax measurements are now available for many stars thanks to Gaia. In theory, the distance can be derived in a very straightforward way by simply inverting the parallax. However, in practice, this can lead to important biases when the uncertainties in the parallax are large (typically when greater than 10%). Following the recommendations of Luri et al. (2018), we used a Bayesian approach to convert parallaxes to distances. We used the Kalkayotl code (Olivares et al. 2020), which performs a Bayesian probabilistic inference to compute posterior probability distributions for the distance of each member. I participated in the validation of this code.
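The bias introduced by naively inverting noisy parallaxes can be demonstrated with a short simulation. The numbers below are illustrative, not from the thesis: a star at 100 pc (parallax 10 mas) observed with a 20% parallax uncertainty. Because 1/x is convex, the average of the inverted noisy parallaxes overestimates the true distance (Jensen's inequality).

```python
import random
import statistics

random.seed(42)
d_true = 100.0                       # true distance in pc (illustrative)
plx_true = 1000.0 / d_true           # true parallax in mas (10 mas)
sigma = 0.2 * plx_true               # 20% parallax uncertainty

# Draw noisy parallax measurements and invert each one naively.
samples = [random.gauss(plx_true, sigma) for _ in range(100_000)]
naive_distances = [1000.0 / p for p in samples if p > 0]

mean_naive = statistics.fmean(naive_distances)
# To second order, the mean naive distance is biased high by a factor
# (1 + sigma^2 / plx_true^2), i.e. roughly 104 pc instead of 100 pc.
print(f"mean naive distance: {mean_naive:.1f} pc")
```

At 20% fractional uncertainty the bias is already at the percent level and grows quickly, which is why a Bayesian treatment such as Kalkayotl is preferred beyond ~10%.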
To compute the absolute magnitude of each source and properly estimate the corresponding uncertainties, we sampled the apparent magnitude with a Gaussian centred at the observed magnitude and with a standard deviation equal to the uncertainty. Then, each sample was converted to an absolute magnitude by drawing a distance from the posterior distribution obtained with Kalkayotl and applying Equation 2.3. For the sources in the DANCe catalogue, beyond the limit of sensitivity of Gaia and without a parallax measurement, we sampled the distance from the cluster distance distribution obtained with all the Gaia members.
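The Monte Carlo propagation described above can be sketched as follows. This is our own illustrative sketch: extinction is omitted for brevity, and the list of distance samples stands in for draws from the Kalkayotl posterior (or from the cluster distance distribution for sources without a parallax).

```python
import math
import random

def absolute_magnitude(m, d_pc, A=0.0):
    """Equation 2.3: M = m - 5 log10(d[pc]) + 5 - A."""
    return m - 5.0 * math.log10(d_pc) + 5.0 - A

def sample_absolute_magnitude(m_obs, m_err, distance_samples, n=10_000):
    """Propagate the apparent-magnitude uncertainty (Gaussian) and the
    distance posterior (empirical samples) into absolute-magnitude samples."""
    return [
        absolute_magnitude(random.gauss(m_obs, m_err),
                           random.choice(distance_samples))
        for _ in range(n)
    ]
```

For m = 12.0 at exactly d = 100 pc, Equation 2.3 gives M = 7.0; with realistic magnitude and distance uncertainties, the spread of the returned samples directly provides the uncertainty on M.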
Table of contents:
1.1 Star formation in the solar neighbourhood
1.1.1 The star formation process
1.1.2 Formation of substellar objects
1.1.3 The initial mass function as a proxy for star formation
1.2 Astrometric and photometric surveys
1.2.1 The Gaia mission
1.2.2 The COSMIC-DANCe project
1.3 Motivation and goals of the thesis
1.4 Thesis outline
ii origin of the initial mass function
2.1 The DANCe catalogue
2.1.1 Compilation of wide-field images
2.1.2 Astrometric solution
2.1.3 Photometric solution
2.2 Membership analysis
2.2.1 Field model
2.2.2 Cluster model
2.2.3 Probability threshold from synthetic data
2.3 Towards the mass function
2.3.1 From apparent to absolute magnitudes
2.3.2 From absolute magnitudes to luminosity and mass
2.3.3 From individual masses to a mass distribution
3 imf at 30 myr: ic 4665
3.2.1 The Gaia catalogue
3.2.2 The DANCe catalogue
3.3 Membership analysis
3.3.1 Parameters of the membership algorithm
3.3.2 Internal validation
3.3.3 External validation
3.4.2 Apparent magnitude distribution
3.4.3 Present-day system mass function
3.4.4 Spatial distribution
4 imf at 1–10 myr: upper scorpius and ρ ophiuchus
4.2.1 The Hipparcos catalogue
4.2.2 The Gaia catalogue
4.2.3 The DANCe catalogue
4.3 Membership analysis
4.3.1 Parameters of the membership algorithm
4.3.2 Internal validation
4.3.3 External validation
4.4.1 Structure in the 6D phase space
4.4.3 Apparent magnitude distribution
4.4.4 Luminosity and present-day system mass function
4.4.5 Constraints on the formation of free-floating planets
iii properties of stellar groups
5 dynamical ages
5.2 Data and sample selection
5.2.1 Proper motions and parallaxes
5.2.2 Radial velocities
5.2.3 Kinematic sample selection
5.2.4 Bona fide β Pic sample
5.3 Traceback analysis
5.3.1 Towards a dynamical age estimate
5.3.2 Signs of substructure at birth time
5.3.3 Effect of the Galactic potential
6 debris discs
6.2.1 Photometric database
6.2.2 Photometry filtering
6.3 Infrared excess detection
6.3.1 MIPS 24 μm data
6.3.2 IRAC 3.6–8.0 μm data
6.3.3 WISE 3.4–22 μm data
6.4 Spectral energy distribution
6.5 Candidates of hosting a debris disc
iv conclusions and perspectives
7 summary, conclusions, and perspectives
7.1 Summary and conclusions
7.2 Future perspectives
a dynamical age of β pic
a.1 Cross-match with Gaia DR2
a.2 Kinematically discarded sources
b.1 IC 4665
b.2 USC and ρ Oph
c additional tables
d résumé substantiel