The Observational Input
The last decade has seen the emergence of a clear cosmological context within which extragalactic astronomy develops. This was made possible both by observational breakthroughs and by the rise of numerical techniques that allow astrophysicists to perform numerical experiments and test their theories.
Observationally, three major breakthroughs have shaped our modern understanding of structure formation in the universe: the observations of the Cosmic Microwave Background, the large surveys of galaxy populations, and the detection of weak gravitational lensing.
The Cosmic Microwave Background (CMB) is a background that is homogeneous (up to one part in 100,000) in all directions, and its energy as a function of wavelength fits nicely the description of a black body spectrum. The discovery of the CMB in 1965 provided strong evidence towards the picture of a Universe that was much hotter and denser in the past than it is now; in other words, it was fundamental support for Hot Big Bang Cosmology. Cosmologists realized that over this smooth background the initial fluctuations in the Universe, responsible for the structure that we see today, must have been imprinted.
Fig. 1.2. Galaxies in the local Universe from two of the most influential galaxy surveys, the Center for Astrophysics (CfA, Harvard) Survey and the Sloan Digital Sky Survey (SDSS). The total number of galaxies included in each sample is also indicated. Figure taken from [GJS+05].

COBE discovered these tiny fluctuations, of order 10^-5 of the CMB temperature [SBK+92].
The analysis of these anisotropies has yielded the most solid constraints we have on cosmological parameters. Nevertheless, there is much more information that remains to be extracted from the CMB, especially in its polarization, which could confirm the nature of the initial density fluctuations.
Galaxy surveys reached a peak in performance during the last decade. Some covered wide portions of the sky out to distances of a few billion light years, while others went very deep, revealing the properties of galaxies at very large distances, when the universe had only a tenth of its present age. For width we think especially of the Sloan Digital Sky Survey (SDSS) [YAA+00] and the Two Degree Field (2dF) [FRP+99]; for depth, of the Hubble Space Telescope (HST) [ATS+96].
Fig. 1.3. Image taken with the Hubble Space Telescope. It shows a cluster of galaxies acting as a lens for smaller galaxies in the background, which are the light sources of the arcs seen around the brightest galaxies in the cluster.
From the wide surveys we get an idea of the large-scale structure in the universe up to the scales where the distribution in fact starts to be homogeneous and isotropic (Fig. 1.2); from the deep surveys we have proof that galaxies have undergone strong evolution, merging, changing their morphology and color, and interacting with their environment. We have enough evidence to adopt a hierarchical view, in which interactions between galaxies play an important role in their evolution [KWG93].
All these observations represent a zoo asking to be understood but, first of all, classified. This has been the work of astronomers during the last decades: dividing galaxies by mass, luminosity, and color; watching how they evolve through the history of the universe; observing them from optical to infrared wavelengths, in radio, and now in energetic γ-rays and cosmic rays.
Matter deflects light. In 1919, when the deflection of starlight by the Sun was measured, Einstein's General Theory of Relativity was confirmed. Gravitational lensing, as this effect is called, applied to cosmological scales is today a highly valuable tool to measure the properties of the matter distribution in the Universe [VMR+01].
Conceptually, gravitational lensing is an effect in which a source emits light that is later deflected by matter. Its detection gives information not only about the masses of the intervening "lenses" but also about the geometrical configuration of the source-lens-observer system, provided that we have some information on the light source.
The ability to infer properties of the cosmological matter field (lenses) and the geometry of the universe (distances) has put weak gravitational lensing at the forefront of observational tools able to give us a better understanding of our Universe. At the same time, as observations grow in complexity and precision, and we gain additional information about the general composition and geometry of the Universe, the interpretation of lensing measurements is limited by the information available from the light sources, i.e. the galaxies. Many of the premises that allow the study of gravitational lensing are no longer valid at finer levels of measurement, demanding a finer understanding of galaxy evolution to make further advances [HVB+06].
The Theoretical Eﬀort
The generic picture that emerges from all these sets of observations is that 13 billion years ago the Universe looked very simple: homogeneous and isotropic to one part in 10^5. Today the universe is homogeneous and isotropic on large scales (hundreds of millions of light years), but on smaller scales there is a diversity of complex structures with statistical properties we can measure [Pee93].
The goal is to understand what (and when and how) happened in the meantime. Using the physics known to date, we require gravity to deal with the large-scale processes, and electrodynamics and quantum physics to understand the small-scale processes.
As the early universe was homogeneous and isotropic (just like our present Universe on large scales), our first task is to understand the behavior of such a space. The most immediate theoretical choice we have is the theory of General Relativity, which describes the interaction of a space-time with its matter content. In this theory the basic mathematical ingredients are the metric gαβ (describing space-time) and the energy-momentum tensor Tαβ (describing the matter content).
In the appendix we summarize the principal results of the theoretical description of the large-scale evolution of a homogeneous and isotropic universe, emphasizing as well the different approaches to describing an inhomogeneous one. The main conclusion we can draw is that the analytical approach is only useful to describe the average evolution of the universe. As soon as we intend to describe the growth of a density perturbation under the effect of gravity, we must be satisfied with studying very simple cases of high symmetry, such as the collapse of a spherical over-density.
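For reference, the spherical over-density (top-hat) case is one of the few that can be solved exactly; a standard sketch of that solution, in the usual notation, is:

```latex
% A uniform over-dense sphere of mass M evolves like a closed sub-universe,
% with the parametric (cycloid) solution for its radius r at time t:
r(\theta) = A\,(1-\cos\theta), \qquad
t(\theta) = B\,(\theta-\sin\theta), \qquad
A^{3} = G M B^{2}.
% Turn-around occurs at \theta = \pi and collapse at \theta = 2\pi;
% extrapolating the linearized growth to the collapse time yields the
% critical linear over-density
\delta_{c} \;=\; \frac{3}{20}\,(12\pi)^{2/3} \;\simeq\; 1.686 .
```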
When we come back from the vastness of the cosmological context to galactic scales, to discuss the formation of galactic structure, the outlook is not much more promising. This is equivalent, to a great extent, to a study of star formation, as most of the information we receive from galaxies comes in the form of radiation emitted by stars.
Downgrading our interest to a simpler problem, we can say that the central problem behind the formation of galaxies is to understand how a tenuous gas can collapse and form a star. We can think that at some point a gravitational instability sets in, making a clump of gas start to collapse under its own weight. As it collapses it lowers its gravitational energy and, to keep things in balance, the thermal energy of the gas rises, raising the pressure, which at some point balances the pull of the cloud's own gravity.
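The onset of this instability can be put in numbers with the classical Jeans criterion; the following sketch (with illustrative values for a cold molecular cloud, not taken from the text) estimates the minimum unstable mass:

```python
# Order-of-magnitude Jeans mass of an isothermal gas cloud.
import math

k_B   = 1.380649e-23   # Boltzmann constant, J/K
G     = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
m_H   = 1.6726e-27     # hydrogen mass, kg
M_sun = 1.989e30       # solar mass, kg

def jeans_mass(T, n, mu=2.33):
    """Jeans mass (kg) at temperature T (K) and number density n (m^-3),
    for mean molecular weight mu (2.33 for molecular gas)."""
    rho = n * mu * m_H
    c_s = math.sqrt(k_B * T / (mu * m_H))          # isothermal sound speed
    lam_J = c_s * math.sqrt(math.pi / (G * rho))   # Jeans length
    return (4.0 / 3.0) * math.pi * rho * (lam_J / 2.0) ** 3

# T = 10 K, n = 1e4 cm^-3 = 1e10 m^-3: a few solar masses
print(jeans_mass(10.0, 1e10) / M_sun)
```

Clumps more massive than this cannot be held up by thermal pressure alone and start to collapse, which is where the cooling mechanisms discussed next become essential.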
For the collapse to continue there must be a way to reduce the pressure by draining thermal energy away [RO77, Sil77, Bin77, WR78]. In other words, there should be cooling mechanisms that allow the gas to radiate and lose thermal energy. This can happen through different radiation mechanisms: for example, an excited atom going back to its ground state emits a photon; at higher temperatures one can think of an ionized gas where the accelerated electrons radiate.
Describing the subsequent steps of collapse can be attempted through the study of hydrodynamics. In general one should also include magnetic fields, as they are observed in the interstellar medium. Complex as it is, analytical descriptions of the evolution of the gas under these circumstances are not available. The numerical approach is the only way out.
The Computational Trap
The methods of extragalactic astronomy are closer to archaeology than to anything else. We only have access to traces of past events, and we cannot reproduce our observations experimentally over and over again under different conditions just to test some hypothesis about the formation or evolution of galaxies.
Under these conditions we can only receive help from the third way of exploring nature, established in the scientific and technical community over the last fifty years: numerical simulations. Besides, most of these situations are complex enough to resist analytical treatment, leaving numerical simulations as the only available tool.
In our case we are interested in following a great variety of processes: gravity, to follow the large-scale structure; hydrodynamics, to follow the fate of the gas that is going to collapse and form stars; magnetohydrodynamics (as we observe magnetic fields in the universe); and radiative transfer (as we seek to understand how radiation makes its way from its originating process to us), just to mention the most important.
Of all the fundamental processes we mentioned, only for gravity have we obtained a satisfactory numerical implementation [FWB+99]. Hydrodynamics is under control in some special cases, and magnetohydrodynamics only under very restrictive conditions; all the rest is largely unexplored territory [ICA+06]. But even including only the physics we are confident enough to model together, it is still very challenging, in the usual numerical approach that discretizes space and time (or mass), to solve a relevant set of equations (gravity, hydrodynamics) capturing the physics from cosmological down to galactic scales [HLF+07]. From the computational point of view it involves achieving an effective resolution spanning 5 to 6 orders of magnitude in mass, length, and time, bearing always in mind that a meaningful comparison with the observations must include theoretical predictions for at least hundreds of thousands of galaxies representing our local universe.
This fully numerical approach, even if within reach for simplified cases, is still reserved for the few with enough computational power available. Therefore, in order to advance our comprehension of the process of galaxy formation and evolution, more democratic and flexible tools must be employed, allowing a broader range of people in the community to test efficiently hypotheses about galaxy evolution, hypotheses which might not be properly resolved in more detailed numerical schemes.
One approach that has been developed uses numerical simulations for the formation of dark matter structure (the problem we can model best) and includes star formation and the other hydrodynamical processes through heuristic, analytic prescriptions (to model the physics we know worst). This is known as the semi-analytic approach, and it is the one we have used for our work [KWG93, SP99].
The Semi-Analytic Hope
To reconcile the need for large simulated volumes with a detailed description of the galaxy populations within them, the semi-analytic model (hereafter SAM) approach proposes to describe first the non-linear clustering of dark matter on large scales at coarse resolution, and then to describe the small-scale baryonic physics through analytic prescriptions. The connection between the two scales is provided by the dark matter halo, which is the most basic unit of non-linear dark matter structure.
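Schematically (all prescriptions and efficiencies below are made-up placeholders, not those of any actual SAM), the idea is to take a halo mass from the N-body side and evolve its baryons with simple analytic recipes applied per timestep:

```python
# Minimal sketch of the SAM idea: dark matter halos come from the N-body
# simulation; baryons are evolved with toy analytic prescriptions.
def evolve_halo(halo_mass, n_steps=10, dt=0.5,
                cooling_eff=0.05, sf_eff=0.1, fbaryon=0.17):
    """Toy recipes: hot gas cools onto the halo at a fixed rate, then a
    fixed fraction of the cold gas turns into stars each timestep."""
    hot_gas = fbaryon * halo_mass   # baryon budget set by the halo mass
    cold_gas, stars = 0.0, 0.0
    for _ in range(n_steps):
        cooled = cooling_eff * hot_gas * dt
        hot_gas -= cooled
        cold_gas += cooled
        formed = sf_eff * cold_gas * dt
        cold_gas -= formed
        stars += formed
    return stars

print(evolve_halo(1e12))  # stellar mass, in the same (arbitrary) mass units
```

In a real SAM these recipes are far richer (feedback, mergers, morphology) and are applied along every branch of the halo merger tree, but the division of labor is the same: the N-body side supplies the halos, the analytic side supplies the galaxies.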
These models are the only window we have on the process of galaxy formation in an explicit cosmological context. In order to be fully confident of their predictive power, work on these models has focused on reproducing the observed luminosity functions and galaxy counts over a wide range of wavelengths and redshifts. The dialogue between these two trends (exploration of the model results and construction of new parts of the model) has been very fruitful, and most semi-analytic models have reached a point of internal self-consistency where they are able to reproduce a large set of observations, opening the path for a third element in the discussion: capitalization, where the semi-analytic model opens new directions for the theoretical study of extragalactic astronomy.
In this thesis these three points — exploration / construction / capitalization — were addressed with one of the three or four fully fledged semi-analytic models of galaxy formation existing in the world [GLS+00, HDN+03, BBM+06].
The semi-analytic approach is just another numerical technique, and it must be controlled so as to know which part of the results comes from numerical artifacts and which part comes from the physics we intend to model. Furthermore, a SAM seen as a numerical technique is sui generis: the plethora of possible implementations of the same physical phenomenon, and the infinite ramifications of possible interactions between different parts of the model, turn even the simple intention of defining, for instance, an equivalent of the concept of convergence in usual numerical simulations into a challenge. They also make it almost impossible to compare the numerical performance of two different semi-analytic codes.
During my thesis I have proposed a protocol providing a framework to address all these issues. The core idea is to introduce small perturbations in the scalar parameters controlling the baryonic physics inside the SAM and see how this affects the final results. The overall impression is that for large objects, which form in the most hierarchical way after many mergers, the SAMs show low predictive power for their properties, which is very likely a common feature of any hierarchical aggregation paradigm.
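The perturbation protocol can be sketched as follows; `sam` here is a toy stand-in for the real model (its form and parameter names are invented for illustration), so the numerical value is only illustrative:

```python
# Sketch of the perturbation protocol: nudge one scalar parameter by a
# small relative amount and measure the relative response of each output.
import math

def sam(params):
    # Toy stand-in for the full model: a fictitious stellar mass that
    # scales with an efficiency parameter and the sqrt of a cooling rate.
    return {"stellar_mass": params["efficiency"] * math.sqrt(params["cooling"])}

def sensitivity(params, key, eps=0.01):
    """Logarithmic sensitivity of each output to parameter `key`:
    (relative change of output) / (relative change of parameter)."""
    ref = sam(params)
    perturbed = dict(params, **{key: params[key] * (1.0 + eps)})
    out = sam(perturbed)
    return {k: (out[k] - ref[k]) / (eps * ref[k]) for k in ref}

p = {"efficiency": 0.1, "cooling": 4.0}
print(sensitivity(p, "cooling"))  # ~0.5: the toy mass scales as sqrt(cooling)
```

Applied to a real SAM, the same loop over parameters and outputs quantifies which predictions are robust and which ones amplify small changes in the baryonic recipes.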
Another theme I have been interested in comes from the fact that the backbone of a SAM is widely recognized to be the merger tree, the structure describing the growth history of a dark matter halo. In apparent contradiction to its relevance, the methods to describe and compare merger trees obtained by different approaches are rather poor, as they oversimplify the tree structure for the sake of simple tests. For that reason I have also proposed a way to describe merger trees by their contour process, a simpler one-dimensional structure that keeps all the geometrical information intact. I applied this description to the merger trees of a large numerical simulation and to merger trees derived from Monte Carlo simulations. This allowed me to describe the coarse geometry of the merger trees and to find critical physical scales where their geometric properties change, a fact that can influence the description of the underlying galaxy population.
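The contour process itself is simple to state: walk the tree depth-first and record the depth at every step, going down into each branch and back up. A minimal sketch on a toy tree (the list-of-children representation is just a convenient stand-in for a real halo merger tree):

```python
# Contour process of a rooted tree: each node is a list of its children.
def contour(tree, depth=0):
    """Return the contour process: the depth recorded at each step of a
    depth-first traversal that descends into every child and climbs back."""
    walk = [depth]
    for child in tree:
        walk.extend(contour(child, depth + 1))
        walk.append(depth)  # step back up to the parent after each child
    return walk

# Example: a root with two progenitors, one of which has one progenitor.
tree = [[[]], []]
print(contour(tree))  # [0, 1, 2, 1, 0, 1, 0]
```

The resulting one-dimensional excursion starts and ends at the root and visits a tree of n nodes in 2n-1 steps; because the traversal order is fixed, the tree can be reconstructed exactly from its contour, which is why no geometrical information is lost.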
Table of contents:
1.1 The Observational Input
1.2 The Theoretical Effort
1.3 The Computational Trap
1.4 The Semi-Analytic Hope
2 GALAXY FORMATION
2.1 Dark Matter Structure
2.1.1 Cosmological Scales
2.1.2 Galactic Scales
2.1.3 Halos and Merger Trees
2.1.5 Star Formation
2.3.1 Low redshift
2.3.2 High redshift
2.4 Our model
2.5 Our results
2.5.1 Geometry of Merger Trees
2.5.2 Predictability of SAMs
3 THE INFRARED UNIVERSE
3.1 Physical Processes
3.2 Infrared Observations
3.3 Theoretical Predictions
3.3.1 Phenomenological models
3.3.2 Hierarchical models
3.4 Our results
4 WEAK GRAVITATIONAL LENSING
4.1 Theoretical Bases
4.1.1 One Lens, Point Source
4.1.2 One Lens, Extended Source
4.1.3 Cosmological context
4.2.1 Image from the shear
4.2.2 Shear from the image
4.3 Numerical Simulations
4.3.1 Multiple Plane Theory
4.3.2 Ray tracing algorithm
4.3.3 The galaxies
4.4 Our Results