Column densities as tracers of the underlying density field: the link with observation


Mass functions

A mass function (MF) describes the number density of objects per mass interval. These functions are the cornerstones as well as the starting points of statistical studies of cosmological and astrophysical populations. In his pioneering study, Salpeter (1955) defined the stellar mass function as follows:
where n is the number of stars per unit volume at time t and per mass interval m. The stellar initial mass function (IMF) pertains to the masses of stars at the time of their formation. More details on mass functions can be found in Chapter 5.
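In a standard notation consistent with this definition (introduced here only for illustration, with dN denoting the number of stars contained in a volume dV with masses between m and m + dm), the mass function can be written as
\[
\mathrm{d}N = n(m,t)\,\mathrm{d}m\,\mathrm{d}V .
\]
Salpeter (1955) found this function to be well approximated by a power law, n(m) ∝ m^{-α} with α ≈ 2.35 (equivalently, a slope of 1.35 per logarithmic mass interval), over the range of stellar masses he considered.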
The IMF provides an essential link between the evolution of galaxies and that of stars and determines the chemical, light and baryonic contents of the universe (Chabrier, 2003). It is used in almost all numerical models of star formation as a closure equation for the evolution of sub-grid regions. Two key questions remain: what determines the IMF, and does the IMF remain unchanged as the universe evolves?
Mass functions were first investigated in cosmology (see Monaco 1998; Zentner 2007 for reviews), and the formalism that was developed was adapted to stellar scales by Hennebelle & Chabrier (2008); Hopkins (2012a,b). These models are successful in explaining the shape and the rather universal character of the IMF over a wide range of stellar cluster conditions in Milky-Way-like environments. They are based on an initial lognormal distribution for the statistically homogeneous fluctuating density field, which is appropriate for turbulence in compressible media. Observations have shown, however, that, in star forming clouds, probability density functions (PDFs) deviate from a lognormal form and develop prominent power-law tails at high density values, a feature which has been identified as the signature of gravity.

Stars are formed in high density subregions embedded in larger molecular clouds. Such local density enhancements are attributed to supersonic compressible turbulent motions, which are known to generate disorganized, random-looking and evolving structures. These dense and intricate structures are suitable seeds for gravitation-driven condensation, and modern astrophysicists are striving to understand their role in the star formation process. Thanks to observations from cutting-edge space telescopes, high resolution numerical simulations and theoretical advances, our knowledge of the density fields of molecular clouds and their statistics has greatly improved. On the numerical side, the development of increasingly refined methods has made it possible to account for an increasing number of physical effects and to probe the star formation environment with increasing resolution. From a theoretical point of view, simplified, but elaborate, models have led to a better understanding of the physical processes involved in star formation. More precisely, they have allowed us to distinguish between processes that are essential to the evolution of molecular clouds and those that are comparatively minor contributors.
The most common description of density enhancement in molecular clouds relies on the phenomenology of isothermal, gravitation-less turbulence. In this case, the density field is lognormal and its variance is proportional to M², where M is the Mach number. Observations and numerical simulations have shown, however, that, in star forming clouds, the probability density function (PDF) deviates from a lognormal form and develops prominent power-law tails at high density values, a feature which has been identified as the signature of gravity. This thesis includes a review of the most important attempts at explaining this feature and at introducing it in models of star formation. Theoretical studies have been focussed on the gravitationally unstable parts of a cloud and have been based on analytical models of scale-free gravitational collapse and/or geometrical arguments. They have led to asymptotic values for power-law tail exponents but have fallen short of a complete statistical description of density fluctuations in both gravitationally stable and unstable parts of a cloud. In addition, with the exception of the numerical simulations of Girichidis et al. (2014), these studies treat the PDF as a static property and do not deal with its evolution through time.
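For reference, in the form most commonly quoted in this context (the symbols s and b are introduced here only for illustration), the lognormal PDF of the logarithmic density contrast s = ln(ρ/ρ̄) reads
\[
p(s) = \frac{1}{\sqrt{2\pi \sigma_s^2}}\, \exp\!\left[-\frac{(s-\bar{s})^2}{2\sigma_s^2}\right],
\qquad \bar{s} = -\frac{\sigma_s^2}{2},
\qquad \sigma_s^2 = \ln\!\left(1 + b^2 \mathcal{M}^2\right),
\]
where b is a forcing-dependent parameter of order unity; equivalently, the variance of the linear density contrast ρ/ρ̄ is b²M², which is the proportionality to M² mentioned above.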
Considerable improvement of numerical capabilities has occurred in parallel with theoretical advances on basic physical processes. In studies of turbulence, theoretical work has continued to refine description tools and to guide the numerical exploration of the phenomenon. Thus, in astrophysics, the current trend is to run numerical simulations that tackle various physical effects with varying amplitudes and varying degrees of resolution. Results can rarely account for observations over the whole range of dimensions and physical properties of real astrophysical systems. These simulations are extremely useful exploration tools and provide unprecedented insights into many aspects, but they are costly, time-consuming and have a huge ecological impact. For example, a large proportion of the CO2 emissions of Australian astronomy and of the German Max Planck Institutes is generated by supercomputers, to an extent that depends on the source of their electricity (Stevens, 2020).
In this thesis, we have explored new, and hopefully more elaborate, theoretical frameworks in order to obtain new insights into the complex physics behind star formation and tools to describe it. Two different communities have been at work on highly non-linear and complex dynamical systems. In turbulence, the statistical approach for compressible media can be traced back to the famous initial formulation for incompressible flows (Batchelor, 1953). Chandrasekhar (1951b,a), for example, studied the dynamics of the density field for homogeneous and isotropic turbulence. However, progress on the PDF of density fluctuations in turbulent flows has been slower. A large number of efforts have dealt with incompressible fluids, and it is only recently that a robust theoretical framework has been made available for compressible ones by Pan et al. (2018, 2019a,b). This framework relies on the formalism developed by Pope (1981, 1985) and Pope & Ching (1993), in which the PDF of any quantity is expressed in terms of the conditional expectations of its time derivatives. Pan et al. (2019b) were able to derive a theoretical formulation of the density fluctuation PDF at steady state from first principles, but without gravity. In cosmology, the study of primordial density fluctuations and the subsequent formation of structures due to gravity has led to the development of a vast statistical framework (see e.g. Peebles 1973; Monaco 1998).
It is hoped that this thesis is one step toward a description of the statistics of random (turbulent) fluid motions under the influence of gravity. In one sense, this work connects formalisms that have been developed in the distinct fields of turbulence and cosmology. In another sense, it is aimed at improving our understanding of the role played by turbulent structures in the star formation process.
It is focussed on the statistics of density and velocity in molecular clouds and, more specifically, on how they evolve with time. On the one hand, it leads to new tools for studying and describing the statistics of astrophysical flows, which should be useful to both observers and numerical specialists. On the other hand, it aims to improve our understanding of the evolution of the statistics of velocity and density fields and to develop a new approach to how gravitational collapse gets initiated and proceeds in dense molecular clouds.

This thesis is organized as follows.

In Chapter 1, we present some theoretical tools that are relevant to the present study. We first give a review of probability theory and of the statistical tools that will be used throughout this thesis. We then briefly review what information is obtained from observations and how it is obtained. Finally, we summarize a set of numerical simulations due to Federrath & Klessen (2012, 2013) that will allow us to test our theoretical formulation.
In Chapter 2, we tackle the issue of how to extract statistical properties of the 3D density field from the observed 2D projected density field. We quantify which variables of interest can be derived from observations and how they are related to those of the underlying density field.
In Chapter 3, we study the relevance of a statistical approach to the study of molecular clouds. Indeed, be it for observations or numerical simulations, only one realization (one snapshot) of a cloud is available. For a statistical approach to be meaningful, the cloud must be large enough for statistical information to be reliable. We clarify and define rigorously what is meant by « large enough » using ergodic theory and apply our results to the Polaris and Orion B clouds. Finally we apply the same formalism to other characteristics (observational or numerical) of molecular clouds. Observations cannot unravel the full complexity of the inner structure of these clouds. They do provide robust estimates of bulk characteristics, such as total mass and size, and it is useful to assess how these can be used to obtain accurate estimates of other global physical quantities of relevance to cloud dynamics, such as the total gravitational (binding) energy and the virial parameter.
In Chapter 4, we study the evolution of the statistics of the density field in star forming clouds. More specifically, we study the time evolution of the density PDF, the auto-covariance function (ACF) of the density field and its associated correlation length. We develop in detail an analytical theory for the statistics of density fluctuations in star-forming molecular clouds under the combined effects of supersonic turbulence and self-gravity. The theory relies on general properties of solutions to the coupled Navier-Stokes equations for fluid motions and the Poisson equation for gravity. It extends previous approaches by accounting for gravity and by treating the PDF as a dynamical variable, not a stationary one. We rigorously derive transport equations for the PDF and the ACF in the presence of a magnetic field, determine how they evolve with time and obtain the density threshold above which gravity strongly affects and eventually dominates the flow dynamics. The theoretical results and diagnostics are compared to data on several molecular clouds as well as to the results of numerical simulations.

A first approach to the notion of probabilities

Observations of many phenomena occurring in apparently random fashion show that certain averages tend to a constant value as the number of observations increases. Moreover, these values do not seem to depend on any particular subset of events, provided it is large enough. For example, it is intuitively known that if we throw dice a very large number of times, it is highly likely that we will find about as many 1s as 6s. The purpose of probability theory is to formalize these issues and investigate the properties of averages in relation to probabilities of events. The probability of some event A is a real value P(A), which can be interpreted in the so-called relative-frequency framework as follows: if one performs a large enough number N of experiments and event A occurs N_A times, one expects that P(A) ≃ N_A/N. It is clear, however, that this statement is imprecise, as we do not really know what is meant by « large enough » or « expects », or even what « ≃ » stands for. In an attempt to solve this issue, one may introduce the relative-frequency definition of probability as the limit of the frequency of occurrence as the number of experiments tends to infinity (von Mises, 1957):
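\[
P(A) \;=\; \lim_{N \to \infty} \frac{N_A}{N} .
\]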
However, while this definition seems to specify what is meant by « large enough », it presupposes the existence of a limit and cannot be used in practice to evaluate the accuracy of the prediction made by the ratio N_A/N for a finite number N of experiments. It is worth noting at this point that this lack of precision cannot be totally avoided, but, as will be seen below, it is possible to give working definitions developed logically from clearly defined axioms that can be applied to real problems.

Axiomatic of probability theory and set theory

We present here the formalism introduced by Kolmogorov (von Plato, 2005), which is nowadays regarded as the standard approach to probability theory. For a complete review of this approach, we refer to the excellent book of Papoulis & Pillai (1965) and only mention what is needed for the present work. This approach formalizes both the concept of events as subsets of a (larger) set and the notion of probability as a measure on this set. This measure can intuitively be interpreted as the size of a subset.

Elements of set theory

We will only briefly review the basic notions of set theory here, mainly in order to clarify the terminology of probability theory.
We first start by introducing the so-called universal or possibility set Ω, whose elements are called elementary events. As mentioned above, an event is described as some subset A of Ω, and if in any trial the outcome ω is an element of A, one states that event A occurs. The first two trivial subsets, and hence events, are Ω itself, sometimes called the « sure event » (it occurs at every trial), and the empty set ∅, sometimes called the impossible event. From now on, we shall refer to « events » in order to use the language of probability theory but will keep in mind that the word really stands for subsets.
A natural procedure is to construct from two events A1 and A2 either the event such that at least one of these events occurs or the event such that both occur simultaneously. For example, in a dice experiment, from the events A1 = « the die gives 2 » and A2 = « the die gives 4 », one may wish to consider the event A1+2 = « the die gives 2 or 4 ». This is easily accomplished in set theory with the union ∪ and intersection ∩ operations. Thus, from two events A1 and A2, one considers the union A1 ∪ A2, sometimes noted A1 + A2, as the event such that either A1 or A2 occurs, the two not being necessarily exclusive. Moreover, one also constructs the intersection A1 ∩ A2, sometimes noted A1A2, as the event such that A1 and A2 both occur simultaneously. Two events A1 and A2 are then called mutually exclusive if the occurrence of one prevents the occurrence of the other, which is written as A1 ∩ A2 = ∅. Another natural operation is to introduce the event opposite to A, noted Ā, such that A and Ā are mutually exclusive and their union is the universal set Ω. This operation is the complementation operation in set theory.
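For instance, taking Ω = {1, 2, 3, 4, 5, 6} for the dice experiment above (a small worked example), A1 = {2} and A2 = {4} give A1 ∪ A2 = {2, 4} (« the die gives 2 or 4 »), A1 ∩ A2 = ∅ (the two events are mutually exclusive), and Ā1 = {1, 3, 4, 5, 6} (« the die does not give 2 »).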


Probability space

As discussed above, the set of events A (which is a set of subsets of Ω) we seek to deal with must possess several properties, highlighted below:
• A is not empty (it contains Ω and ∅),
• A is closed under complementation: if A is in A, then so is its complement Ā,
• A is closed under countable unions: if (An)n∈J is a countable collection of elements of A, then so is their union ∪n∈J An,
• A is closed under countable intersections: if (An)n∈J is a countable collection of elements of A, then so is their intersection ∩n∈J An.
We note that item 4 is a consequence of items 2 and 3 by De Morgan’s law, and as such does not need to be included as a defining property of A. All these properties provide A with the structure of a σ-algebra, as it is named in set theory.
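Two simple examples may help fix ideas (they are standard and not specific to this text): for any Ω, the trivial collection {∅, Ω} and the power set of Ω (the set of all its subsets) are both σ-algebras; for the dice experiment, the power set of Ω = {1, …, 6} is the natural choice and contains all 2⁶ = 64 possible events.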
The last item one needs to set up is the probability associated with each event in A. Given that A is a σ-algebra, it possesses the properties required to be endowed with a probability measure P (or law), which is a function defined on A with values in the [0, 1] interval, such that:
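These conditions are Kolmogorov's axioms, which can be written as
\[
P(\Omega) = 1
\qquad \text{and} \qquad
P\Big(\bigcup_{n \in J} A_n\Big) = \sum_{n \in J} P(A_n)
\]
for every countable collection (An)n∈J of pairwise mutually exclusive events of A (non-negativity being already implied by the requirement that P takes values in [0, 1]).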
The notion of random variable allows one to deal with a measure on the probability space through a relation with a measurable space, for example R (or Rⁿ). The interest of this association, in the case where the measurable space is R (or Rⁿ), is that one can assign a number (or multiplet) to the abstract result of an experiment, such as « heads » or « tails » for example, which allows one to treat in a unified way similar experiments using different terminologies. Consider first the case of a scalar random variable. The measurable space (R, B, λ) is usually endowed with Lebesgue’s measure λ, and B is Borel’s σ-algebra, which contains all the intervals of R. A random variable X is then a map X : Ω → R, which attributes a real number X(ω) to each elementary event ω ∈ Ω.
In order to benefit from the properties of the measurable space (R, B, λ), the random variable X should be such that the preimage of every interval Ix = ]−∞, x] is an element of the σ-algebra A: Ax = X⁻¹(Ix) ∈ A, and X is said to be a measurable function. In other words, the events of B are pulled back onto events of A. This can be generalized to Rⁿ in a straightforward manner.
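As a simple illustration (a standard example, with names chosen here for convenience): for a coin toss with Ω = {heads, tails}, the map defined by X(heads) = 1 and X(tails) = 0 is a random variable; measurability is immediate, since the preimage of any interval is one of ∅, {heads}, {tails} or Ω, all of which belong to A when A is the power set of Ω.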

(Cumulative) Distribution function

Definition

The mapping of events in A onto those of B allows one to define a probability measure on R (hence not Lebesgue’s measure), noted PX, defined by PX(B) = P(X⁻¹(B)) for every B in B. The notation PX serves as a reminder that this measure depends on the choice of X; it is called the law of the random variable X.
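From this law, the (cumulative) distribution function is obtained in the usual way (the notation FX is introduced here for convenience):
\[
F_X(x) \;=\; P_X\big(\,]-\infty, x]\,\big) \;=\; P\big(X \leq x\big), \qquad x \in \mathbb{R}.
\]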

Vector random variables

So far, we have studied the case of scalar random variables mapping the possibility set Ω onto R. However, on the one hand, some physical quantities are described by vectors and, on the other hand, one would like to be able to study the properties of a combination of scalar random variables. One therefore needs to generalize the previous results to the case of an n-dimensional vector random variable X = (X1, …, Xn). Such a random variable is a measurable function from (Ω, A, P) to (Rⁿ, Bⁿ, λⁿ), where Bⁿ is Borel’s σ-algebra, which contains every cartesian product of intervals, and λⁿ is the corresponding Lebesgue measure.
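By analogy with the scalar case, the joint distribution function of such a vector (with the notation FX introduced here for convenience) is
\[
F_{\mathbf{X}}(x_1, \ldots, x_n) \;=\; P\big(X_1 \leq x_1, \ldots, X_n \leq x_n\big),
\]
obtained from the preimages of products of intervals ]−∞, x1] × … × ]−∞, xn], which belong to Bⁿ.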

Table of contents:

Introduction
1 The interstellar medium
1.1 A variety of structures in our Galaxy
1.2 Where stars are formed
1.3 Density and chemical composition
1.4 Various phases
1.5 Magnetic field
2 Molecular clouds
2.1 Difficulties with observations
2.2 Transient structures
2.3 Characteristic scales and filamentary substructures
2.4 Complex thermodynamics
2.4.1 Heating
2.4.2 Cooling
2.4.3 Overall effects
2.5 Turbulent motions
2.5.1 Non thermal motions
2.5.2 Reynolds number and turbulence
2.5.3 Compressible, isothermal turbulence
3 Fundamentals of core and star formation
3.1 Newtonian Gravity
3.1.1 Attraction of point masses
3.1.2 Continuous media and Poisson equation
3.1.3 Gravitational potentials and fields of homogeneous distributions
3.2 Two descriptions of fluid motions
3.3 Homologous collapse: free-fall time scale
3.3.1 Lagrangian description and mass conservation
3.3.2 Solving the equation of motion
3.3.3 Dynamical regimes
3.3.4 « Standard » initially motionless free fall: the free fall time.
3.4 Virial equilibrium
3.4.1 Evolution of the moment of inertia
3.4.2 Virial parameter
3.5 The Jeans criterion
3.6 Overview of low mass star formation
4 Mass functions
5 Motivation for this work
Chapter 1 Tools
1 Elements of probability theory
1.1 A first approach to the notion of probabilities
1.2 Axiomatic of probability theory and set theory
1.2.1 Elements of set theory
1.2.2 Probability space
1.2.3 Simple properties
1.2.4 Conditional probability
1.2.5 Random variables
1.3 (Cumulative) Distribution function
1.3.1 Definition
1.3.2 Properties of the distribution function
1.4 Probability Density Function (PDF)
1.4.1 Definition
1.4.2 Properties
1.5 Change of variables
1.6 Vector random variables
1.6.1 Distribution and density functions
1.6.2 Marginal distributions and densities
1.6.3 Independent variables
1.7 Characteristic numbers
1.7.1 Mathematical expectation
1.7.2 Moments
1.7.3 Variance and standard deviation
1.7.4 Uncorrelated variables
1.8 Gaussian random variables
1.9 Conditional statistics
1.9.1 Conditional Distribution and density
1.9.2 Conditional expectations
1.10 Practical application of the theory
1.10.1 Bienayme-Tchebychev’s inequality
1.10.2 Accuracy of estimators and prediction theory
2 Stochastic fields and analytical tools
2.1 One point statistics
2.1.1 One point distributions and densities
2.1.2 Moments
2.1.3 Commutation of the derivatives and the expectation
2.2 N-point statistics
2.2.1 N-point distributions and densities
2.2.2 Moments, correlation and covariance
2.3 Gaussian fields
2.4 Statistically stationary or homogeneous fields
2.4.1 Stationarity
2.4.2 Homogeneity
2.4.3 Isotropy
2.4.4 Structure functions
2.4.5 Properties of the ACF of homogeneous fields
2.4.6 Correlation coefficients
2.4.7 Power spectrum
3 Ergodic theory
3.1 Motivation
3.2 Frequency interpretation and repeated trials
3.3 Ergodic theorems. Autocovariance function. Correlation length
3.3.1 Ergodic theorems and the autocovariance function
3.3.2 Correlation length
3.3.3 Integral scale
3.3.4 Average size of most correlated structures
3.4 Estimates of the autocovariance function and correlation length
3.4.1 Reliability of the estimators of the auto-covariance and the power spectrum
3.4.2 Periodic estimators
3.5 Concluding remark
4 Observations
4.1 Elements of radiative transfer
4.1.1 Specific intensity
4.1.2 Equation for radiative transport
4.1.3 Formal solution
4.1.4 Planck’s law
4.1.5 Various definitions of temperature
4.2 Spectral lines
4.3 Extinction
4.3.1 Magnitude
4.3.2 The quantity extinction
4.4 Molecular lines
4.5 Dust emission
5 Numerical simulations
5.1 Numerical set up of Federrath & Klessen (2012)
5.2 Turbulent forcing
5.3 Handling high density regions: sink particles
5.4 List of models and general evolution of the simulations
5.5 Resolution and limits for resolving the PDFs
5.6 Concluding summary
A Ergodic estimate for a general control volume
Chapter 2 Column densities as tracers of the underlying density field: the link with observation
1 Motivation
2 One and two point statistics.
2.1 Inhomogeneity and anisotropy due to integration along the line-of-sight
2.2 Column-density field in a numerical simulation box
2.2.1 Statistical homogeneity
2.2.2 Auto-covariance function and variance
2.3 Decay length of correlations
3 Column density PDFs as tracers of the underlying density PDFs
3.1 Current status of research
3.1.1 Variance in column density and density PDFs
3.1.2 Exponents in power-law PDFs
3.1.3 Shape of the PDFs
3.2 Aim of this section
3.3 Critical densities for the onset of power-law tails
3.4 Numerical test
3.5 Model with one or two power-law tails
4 Conclusion
Chapter 3 Relevance of a statistical approach
1 Introduction
1.1 Motivation
1.2 Fair-sample hypothesis
1.3 Information available from observations
1.4 Objectives of this chapter
2 Mathematical framework for a statistical approach
2.1 Ergodic theory
2.2 Expected fluctuations in repeated trials
3 Application to astrophysical density fields
3.1 Exact results regarding the properties of the auto-covariance function (ACF) of the density field
3.2 Phenomenology of (compressible) turbulence
3.3 Autocovariance function and correlation length in astrophysical situations
3.3.1 ACF from data
3.3.2 Usual estimates
3.4 Practical assumptions regarding the ACF
3.5 Homogeneity and correlation scales in cosmology
4 Application to observations of the Polaris cloud
4.1 Filtering out large scale gradients
4.2 Estimated ACF and correlation length
4.2.1 Correlation length from the ACF
4.3 Ergodic estimate and real error bars for the observed PDF
4.3.1 Reduced integration effects at high density contrasts
4.3.2 Statistical Errorbars
4.3.3 Gaussian approximation; effective error bars on the observed PDF
5 Applications to the Orion B cloud
6 Consequences for the estimation of the total gravitational energy of a cloud
6.1 Motivation
6.2 Correlation length and gravitational binding energy
6.2.1 Isolated cloud
6.2.2 Simulations in a periodic box
6.3 Virial parameter
7 Conclusion
A Homogeneity scale
B Ergodic estimators of the CMF and PDF
B.1 Cumulative Distribution Function (CMF)
B.2 Probability Density Function (PDF)
B.3 Gaussian process
B.3.1 Integrability of the ACF and short scale analysis
B.3.2 Exponential ACF
B.4 Deterministic function of a Gaussian field
B.4.1 Log-normal fields
C Orion B cloud
C.1 ACF of the square and filament region
C.2 Correlation length from the variance of the column densities
D Computation of the total potential energy on a control volume
Chapter 4 Evolution of the statistics of the density field in star forming clouds
1 Introduction
1.1 Context
1.2 Current status of research
1.2.1 Is gravity the only cause of these observed departures from lognormal-like statistics?
1.2.2 Gravitational infall as a mechanism producing PLTs
1.2.3 Statistical approach of turbulence based on first principles
1.3 Objectives of this chapter
2 Mathematical Framework
2.1 Description of a molecular cloud
2.2 Probability distribution functions (PDF)
2.3 Model for the statistics of a turbulent cloud
2.3.1 Homogeneous clouds
2.3.2 Accepted class of flows
2.4 Transport equations for the density PDF
2.4.1 Transport equations
2.4.2 Velocity divergence – density PDF relationship
2.5 Transport equations for the auto-covariance function of the density field
2.5.1 Transport equation
2.5.2 Correlation length and conserved quantity
3 Implications for astrophysical flows in star forming clouds
3.1 Velocity divergence – density PDF relationship
3.1.1 Physical class of flows
3.1.2 Separation of variables
3.1.3 Summary
3.2 Dynamical effects on the s-PDF shape
3.2.1 Stationary solutions
3.2.2 Density threshold for a gravity induced transitions of regime
3.2.3 Non-magnetized supersonic hydrodynamics
3.2.4 Magnetized supersonic hydrodynamics
3.2.5 Modified Jeans criteria
3.2.6 Overall effects of gravity
3.3 Time evolution of the density PDF in star forming clouds
3.3.1 Transient regime and short time evolution
3.3.2 Asymptotic case: evolution in regions of « free-fall » collapse (fully developed gravity induced dynamics)
3.4 Evolution of the correlation length of the density field in star forming clouds
4 Comparison with numerical simulations
4.1 Numerical set up
4.1.1 Methods
4.1.2 Resolution and limitations on the resolution of the PDFs
4.2 Short time evolution of the transport equations
4.3 Evolution of the s-PDF
4.3.1 Effects of gravity on the low-s-part of the s-PDF
4.3.2 Departure from lognormal statistics and formation of power law tails
4.3.3 Evolution of the -PDF
4.4 Evolution of the correlation length
5 Comparison with Observations
5.1 Clouds with active star formation
5.2 « Young » clouds with little or no sign of star formation activity
5.3 Polaris
5.3.1 Localisation of regions entailing the PLTs
5.3.2 Is gravity the origin of the PLT in Polaris?
5.4 Draco
5.4.1 Other origin for departures from lognormal statistics in Draco
6 Discussion
6.1 The scale-dependent dynamics of molecular clouds and the suggested explanations
6.2 Physical picture: towards a consistent gravito-turbulent paradigm?
6.2.1 Dynamical regimes
6.2.2 Size of the densest correlated structures
6.2.3 The averaged correlated mass and the averaged mass to form bound stellar cores
6.2.4 Concluding remarks
7 Conclusion
Chapter 5 Towards the stellar initial mass function (IMF)
1 Introduction
1.0.1 The observable luminosity function and the two regimes of the MF
1.0.2 Present day MF and the initial mass function (IMF)
1.1 Observational features
1.2 Motivation
2 Cosmological mass functions
2.1 Gaussian and homogeneous primordial fluctuations
2.1.1 Smoothed density field
2.1.2 Delta correlated Fourier modes
2.1.3 Particular choices of window functions
2.2 Press-Schechter formalism and mass functions
2.2.1 Initial volume fraction of region exceeding some density contrast at scale R
2.2.2 Mass attributed to unstable regions and the PS mass function
2.2.3 The cloud-in-cloud problem
2.2.4 Summary of the PS formalism
2.3 The excursion-set formalism and mass functions
2.3.1 Back to the cloud-in-cloud problem
2.3.2 Random walks
2.3.3 Brownian walks
2.3.4 The first crossing distribution
2.3.5 General moving barriers
2.3.6 The golden rule and the mass function
2.4 Summary and caveats
2.4.1 Relevance of the statistical procedures
2.4.2 Gaussian simplifications
2.4.3 Caveats regarding the various steps of the procedure
3 From the cosmological to the stellar mass functions
3.1 Step 1: Density threshold in a turbulent medium
3.2 Step 2 and 3: Statistical counting and the Press-Schechter approach of Hennebelle & Chabrier (2008)
3.2.1 Statistical counting
3.2.2 The cloud-in-cloud problem and the mass assignment procedure
3.2.3 Golden rule
3.2.4 Mass function
3.2.5 The variance at scale R
3.2.6 Dynamical regimes and properties of the CMF
3.2.7 From the CMF to the IMF
3.2.8 Summary of the model
3.3 Steps 2 and 3: Statistical counting and the excursion-set of Hopkins (2012a,b)
3.3.1 The cloud-in-cloud problem
3.3.2 The Hopkins (2012a,b) model and the variance problem
3.3.3 Proper lay out of the Hopkins (2012a,b) assumptions and overlooked issues
3.3.4 Success of the model and summary
3.4 Conclusion regarding these models
3.5 Additional potential flaws of these models
3.5.1 Conversion of the CMF into the IMF
3.5.2 Departures from lognormal statistics
4 Perspectives and suggested explanations
4.1 Corrected barrier
4.2 Meaning of the counting procedure for non Gaussian and non lognormal fields
4.2.1 Deterministic function of a Gaussian field
5 Conclusion
A A Volterra equation of the second kind for the first crossing distribution
A.1 Probability that a trajectory has never crossed the barrier at resolution S
A.1.1 Conditional probability of arriving in [x, x + dx[ knowing that the trajectory started at x0
A.1.2 Probability (x; S) for the sharp k-space filter
A.2 The sharp k-space filter and the Volterra equation for the first crossing distribution
A.3 Key ingredients
Conclusion 
Bibliography
