Extreme-value statistics of fractional Brownian motion 

Statistical physics and disordered systems

This thesis takes place in the broad context of the study of disordered systems from a statistical physics point of view. The general idea of statistical physics is to use a probabilistic description of a system and its interaction with the environment, and to study the properties which emerge at large scales (typically when the number of degrees of freedom becomes very large). The most interesting properties, at least from a theoretical point of view, are the ones that do not depend on the details of the probabilistic description, but only on some global features: symmetries, spatial dimension, decay of correlations, existence of moments of a distribution, etc. A well-known example of such an emerging large-scale property is formalized mathematically by the central limit theorem. It states that the average of a large number of random variables converges (after proper rescaling) to a Gaussian variable, as soon as the variables are independent (or at least weakly correlated) and identically distributed, with a distribution whose second moment exists, i.e. large fluctuations are not too probable.
A physical system is usually defined with the specification of the possible configurations, the configuration space, and the energy of the system in each of these configurations, the energy landscape. When the system has a large number of degrees of freedom (for example if the system is a gas with many particles), it is usually not possible to determine its exact configuration, or to describe its exact dynamics, due to thermal fluctuations.
In such a situation, the statistical physics approach consists in defining macroscopic states of the system as probability distributions over the configuration space, which depend only on a few parameters (e.g. the temperature of the system). This means that when the system is in a given macroscopic state, we do not know its exact configuration (the positions and velocities of all particles in a gas, for example), only the probability for it to be in each configuration. This allows us to make the link between the microscopic description of a system and the phenomenological laws of thermodynamics which govern its macroscopic properties. Ideas of statistical physics have also inspired other fields of science which deal with collective phenomena, chaos, etc.
Standard statistical physics is a very successful theory, but not all systems can be described with a simple energy landscape. A disordered system, in the context of statistical physics, is one where even the energy of each configuration is described by a probabilistic approach. For example, to study the transport of electrons in a metal, one needs to take into account that the energy of a given configuration of the electrons depends on the impurities and the defects of the metal lattice. These are hard to describe exactly, but can be modeled efficiently by a probabilistic approach, stating for example that the impurities are uniformly distributed in the sample. In this case, the density of impurities remains the only parameter of the model, compared to all the spatial positions of the defects needed to define a deterministic energy landscape.
At non-zero temperature, the study of a disordered system involves two levels of randomness: the energy landscape (i.e. the energy level of each configuration) is itself a random object, and for each realisation of this landscape the state of the system is defined by a probability distribution, depending on this energy landscape. The random energy model (REM), developed by Derrida in [1], is a good toy model to understand the ideas of disordered systems.
At low temperatures, the equilibrium thermodynamics of a system is governed by configurations with the lowest energies, and by the energy barriers between them. In disordered systems, due to the randomness of the energy landscape, characterizing these low-energy configurations is difficult. This generally leads to the existence of many metastable states, i.e. configurations with an energy level close to the minimal-energy one (which is the equilibrium state at zero temperature), but far away in configuration space. Finding the statistical properties of the equilibrium state or of the metastable states in a disordered system (which are random objects) is a difficult question, and can be viewed as a special case of extreme-value statistics, which will be defined later in this introduction.
It is important to note that sometimes, even if a deterministic description of a system is possible, it can be more interesting to try to understand it using a probabilistic approach, as the microscopic details of the energy landscape can be irrelevant to the properties studied. A probabilistic approach can lead to exact predictions with simpler derivations. It is also possible that a deterministic solution of a problem, with very strong dependence on initial conditions (or other parameters) has no practical use.
Ideas from statistical physics of disordered systems are now used in a wide range of contexts: combinatorial optimisation, neural networks, error-correcting codes, financial markets, protein-folding problems, and more.

Probability theory and stochastic processes

We now briefly introduce some concepts of probability which are used in this thesis. The main objects of probability theory are random variables. These are the mathematical formalisations of experiments with several possible outcomes, for which it is not possible to predict the result, but only to give probabilities for the possible outcomes. Typical examples are the result of flipping a coin or rolling a die. These are also historical examples, as probability theory started with the study of games of chance and gambling.
If X denotes a random variable, and A a subset of the possible outcomes, we will denote the probability that the event A happens as Prob(X ∈ A), which is a number between 0 and 1.
When X takes its values in R (for example, the temperature of the day with arbitrary precision), the probability of any event depending on X can be expressed in terms of the probability density function of X, which we denote P(x) := ∂_x Prob(X < x). X is said to be a Gaussian variable with mean µ and variance σ² if it has the density
P(x) = (1/(σ√(2π))) exp(−(x − µ)²/(2σ²)) . (1.1)
In this thesis, we denote average quantities with respect to some random variable X using a bracket notation: ⟨f(X)⟩. In the case of a Gaussian variable as defined above, this gives for the first two moments
⟨X⟩ := ∫_{−∞}^{∞} dx x P(x) = µ ,   ⟨(X − µ)²⟩ := ∫_{−∞}^{∞} dx (x − µ)² P(x) = σ² . (1.2)
For more details on the mathematical formalism of probability theory, and its various applica-tions, we refer to the good introduction of W. Feller [2].
In physics, when the random variable is the microscopic configuration q of a thermodynamical system at temperature T, its law (i.e. the probability to be in each configuration, which defines the macroscopic state) is constructed from the Boltzmann weight w_q = e^{−E(q)/T}, where E(q) is the energy of the configuration q (which can itself be random in the case of a disordered system). To obtain a well-defined probability law, we need it to be normalised to one when summing over all possible configurations. This naturally leads to the introduction of the partition function of the system
Z = Σ_q e^{−E(q)/T} , (1.3)
such that the probability to be in configuration q is w_q/Z. While appearing here solely as a normalisation constant, the partition function is a powerful tool to compute various quantities of a thermodynamical system. For example, in this thesis we will be able to extract extreme-value statistics using a formulation of the problem in terms of a partition function.
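To make the role of Z concrete, here is a minimal numerical sketch (added for illustration, not part of the thesis); it assumes a toy random energy landscape with i.i.d. Gaussian energies, in the spirit of the REM of the previous section, and shows how the Boltzmann measure concentrates on the lowest-energy configurations as T decreases.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy random energy landscape: 1024 configurations with i.i.d. Gaussian energies.
energies = rng.normal(size=1024)

def boltzmann_probabilities(E, T):
    """Boltzmann weights w_q = exp(-E_q / T), normalised by the partition function Z (k_B = 1)."""
    w = np.exp(-(E - E.min()) / T)   # shift by E.min() for numerical stability; cancels in w / Z
    return w / w.sum()

for T in (2.0, 0.5, 0.1):
    p = boltzmann_probabilities(energies, T)
    print(f"T = {T}: probability of the lowest-energy configuration = {p.max():.3f}")
```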
A particular class of random variables are stochastic (or random) processes. Formally, a stochastic process is a collection of random variables, indexed by a one-dimensional parameter, discrete or continuous, which we interpret as time. The existence of this parameter allows us to have an object with more structure and to look, for example, at a dynamical situation. The regularity (continuity, differentiability) of a stochastic process is a subtle question, and in this thesis we use derivatives even if their mathematical definition is sometimes unclear.
There are various ways to define a random process: stochastic differential equations, Markov chains, random walks. These can be effective models for out-of-equilibrium situations, either transient or stationary, where a system evolves in contact with a random environment. This allows us to go beyond equilibrium statistical physics and study diffusion or transport phenomena, relaxation to equilibrium, and aging. As in the equilibrium case, the randomness of the microscopic dynamics can have different physical origins, and is usually only an effective description which tries to capture the relevant ingredients of the true microscopic dynamics. For a more detailed discussion of the possibilities and the limitations of stochastic processes to model physical situations, we refer to the book of Van Kampen [3].

Random walks and Brownian motion

The simplest stochastic process one can construct is the symmetric random walk. It is defined in discrete time and represents a particle, or a walker, evolving on the (vertical) real axis. At each time step, the particle either goes up by one unit, or down by the same quantity, with equal probability. Furthermore, the increments are independent (as given by successive coin tosses). The value of the random walk, i.e. the position of the particle at discrete time n, denoted X_n, is constructed from a sequence of independent and identically distributed (i.i.d.) variables δX_1, δX_2, δX_3, … as
X_n = Σ_{k=1}^{n} δX_k , with X_0 = 0 .
For the standard random walk described above, the distribution of the δX_k is defined by Prob(δX_k = 1) = Prob(δX_k = −1) = 1/2. Random walks have a lot of interesting properties, as we will see throughout this thesis. One is the possibility to construct a continuous process from them, defined as the scaling limit
B_t = lim_{δt→0} √(2D δt) X_{⌊t/δt⌋} . (1.5)
We denote ⌊x⌋ the floor of x (the largest integer smaller than x). B_t is the well-known Brownian motion, or Wiener process, which we defined here with an arbitrary diffusion constant D. It was formalized by Einstein in [4] as an effective probabilistic description for the dynamics of a particle suspended in a gas or liquid, as observed by Brown in 1827. Physically, the movement of the particle is due to the large number of collisions with the molecules of its environment, and the diffusion constant D is then related to the molecular, discrete, nature of matter and the Avogadro number.
The existence of the scaling limit (1.5), where time scales with δt while space scales with √δt, allowing us to define a continuous process, is very similar to the central limit theorem mentioned above. The resulting process B_t is Gaussian and does not depend, apart from the diffusion constant D, on the distribution of the δX_k, as long as the second moment ⟨δX_k²⟩ is finite. The covariance of Brownian motion is given by
⟨B_t B_s⟩ = 2D min(t, s) . (1.6)
The limit (1.5) can also be viewed as a physical experiment where the resolution does not allow us to observe the microscopic dynamics (i.e. the discreteness of the evolution). This means that the large-time properties of random walks falling in the universality class of Brownian motion (i.e. whose increments are independent and have a finite second moment) can be studied using Brownian motion as an effective model. We will see in the next sections some of the strong properties of Brownian motion. These are useful both to use Brownian motion as a starting point to understand physical systems, and to construct models which go beyond Brownian motion by relaxing some of the hypotheses used in its construction.
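A minimal simulation sketch (added here for illustration, with an arbitrary choice of D) showing the scaling limit (1.5) in practice: rescaled ±1 random walks reproduce the diffusive variance ⟨B_t²⟩ = 2Dt of Eq. (1.6).

```python
import numpy as np

rng = np.random.default_rng(1)

D = 0.5            # diffusion constant chosen for the illustration
dt = 2e-3          # time step of the underlying walk
n_steps = 500      # total time T = 1
n_paths = 10_000

# Symmetric random walk: i.i.d. increments +1 or -1 with probability 1/2.
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
X = np.cumsum(steps, axis=1)

# Scaling limit of Eq. (1.5): B_t ~ sqrt(2 D dt) X_{t/dt}.
B = np.sqrt(2 * D * dt) * X

for t in (0.5, 1.0):
    n = int(t / dt)
    print(f"t = {t}:  empirical <B_t^2> = {B[:, n - 1].var():.4f},  expected 2 D t = {2 * D * t:.4f}")
```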

Markov processes

One of the interesting properties of Brownian motion, which is natural from its construction as a scaling limit of a random walk, cf. Eq. (1.5), is that it is a process without memory. More precisely, the evolution of the process after time t does not depend on how the process arrived at its current value X_t. This property is called the Markov property, and any process with this property is called a Markov process.
In analogy with quantum mechanics, it is common to define the propagator P(x_2, t_2|x_1, t_1) of a real-valued random process, which is the probability density function for the process X_t to be at position x_2 at time t_2, knowing that the process was at x_1 at time t_1. In the case of a Markov process, this propagator is very important, as it allows us to construct any probability density of the process (and thus any observable of the process). If the process starts from the origin, i.e. X_{t=0} = 0, this gives for t_1 < t_2 < t_3
P(X_{t_1} = x_1, X_{t_2} = x_2, X_{t_3} = x_3) = P(x_3, t_3|x_2, t_2) P(x_2, t_2|x_1, t_1) P(x_1, t_1|0, 0) , (1.7)
and the formula can obviously be generalized to an arbitrary number of points, and to processes with values in a higher-dimensional space, e.g. X_t ∈ R^d. From this, it is clear that the knowledge of the starting point, or of its distribution if it is random, together with the propagator, completely defines a Markov process.
The propagator of a Markov process also has the following property, sometimes referred to as the Chapman-Kolmogorov equation,
P(x_2, t_2|x_1, t_1) = ∫ dx P(x_2, t_2|x, t) P(x, t|x_1, t_1) for any t ∈ (t_1, t_2) . (1.8)
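As a sanity check (added for illustration, using the Brownian propagator as an example), Eq. (1.8) can be verified numerically: the Gaussian propagator of Brownian motion with covariance (1.6) satisfies the Chapman-Kolmogorov equation to numerical precision.

```python
import numpy as np

D = 0.5

def propagator(x2, t2, x1, t1):
    """Brownian propagator: a Gaussian of variance 2 D (t2 - t1) centered on x1, cf. Eq. (1.6)."""
    var = 2.0 * D * (t2 - t1)
    return np.exp(-(x2 - x1) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

# Check Eq. (1.8) for an intermediate time t in (t1, t2) by numerical integration over x.
x1, t1 = 0.0, 0.0
x2, t2 = 1.3, 2.0
t = 0.7

x = np.linspace(-25.0, 25.0, 20001)
lhs = propagator(x2, t2, x1, t1)
rhs = np.trapz(propagator(x2, t2, x, t) * propagator(x, t, x1, t1), x)
print(f"direct propagator: {lhs:.6f}   integrated over the intermediate point: {rhs:.6f}")
```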
On top of these important properties, other tools commonly used in statistical physics are restricted to Markov processes. To cite a few, we have the master equation and the Fokker-Planck equation. The latter writes the evolution of the probability density function for the position of the process as a deterministic partial differential equation. All this makes Markov processes suitable for exact analytic calculations, and explains why they are so commonly used in the physics (and other sciences) literature. The Markov property is also very useful when dealing with numerical simulations of a random process, as it allows one to store only the current value of the process. Monte-Carlo simulations are an important example of a numerical method based on the Markov property. For more details on the properties of Markov processes, as well as many of their applications, we refer to [3] and [5].
While we have introduced the Markov property using Brownian motion as an example, it is important to note that the independence of the increments is not a necessary condition for a process to be Markovian (but it is sufficient). For example, an Ornstein–Uhlenbeck [6] process is a Markov process with correlated increments.
In physical situations, or in other fields where stochastic processes play an important role, one may encounter situations where the dependence on history is essential [7, 8, 9]. This means that constructing a model using a Markov process is not possible. Then, many of the standard methods, both to construct and to study the model, fail, and results are usually derived case by case. In the first part of this thesis we consider an important non-Markovian process: fractional Brownian motion. This is the object of chapters 2 and 4.

Anomalous diffusion and self-similarity

As we have seen in the previous section, cf. 1.2.1, a large class of stochastic processes behave at large times like a standard Brownian motion, and as a consequence the second moment of the position of the particle grows linearly with time: ⟨X_t²⟩ ∼ t. This is the signature of a diffusion process.
This universality is due to the fact that Brownian motion is the only continuous process with stationary, independent and Gaussian increments. In physical situations, these properties of the increments are usually satisfied only on large enough time scales, but this explains why standard diffusion is so commonly seen in nature.
However, there are also interesting situations where experimental data show a non-linear growth of the second moment, ⟨X_t²⟩ ∼ t^α, a phenomenon usually referred to as anomalous diffusion. The case α > 1 is referred to as super-diffusion, while α < 1 corresponds to sub-diffusion. To model such a situation, and obtain an anomalously diffusive process, at least one of the three fundamental hypotheses of Brownian motion has to be removed. This gives three main classes of anomalous diffusion processes:
• heavy tails of the increments (Lévy flights) or heavy tails in the waiting time between increments for continuous-time random walks (CTRW); these processes are non-Gaussian.
• time dependence of the diffusion constant, which means in the discrete setting that the distribution of the increments is time dependent: the process has non-stationary increments.
• long-range correlations between increments: the process is non-Markovian.
These mathematical properties leading to anomalous diffusion can have various physical origins. CTRWs are a good model for diffusion on a disordered substrate, where the particle may be trapped for long times at a specific site. Long-range correlations appear naturally when dealing with spatially extended systems and trying to write an effective dynamics for a single degree of freedom. For an extensive review of anomalous diffusion from a statistical physics point of view, we refer to [14]. Other studies on anomalous diffusion can be found in [15, 16, 17, 18].
Anomalous diffusion can be a consequence of a stronger property (but equivalent in the case of a Gaussian process): self-similarity with index H, an exponent that we will refer to as the Hurst exponent, cf. section 1.3. A stochastic process X_t is called H-self-similar if rescaling time by λ > 0 and space by λ^{−H} leaves the distribution of the process invariant:
λ^{−H} X_{λt} = X_t (in law) . (1.14)
This property is stronger than anomalous diffusion in the sense that the growth of every moment, and not only the second one, is governed by the same exponent H: ⟨X_t^n⟩ ∼ t^{nH}. Brownian motion is self-similar with index H = 1/2. For an introduction to self-similar processes, we refer to [19].
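As a quick check, added here for illustration: since a centered Gaussian process is entirely determined by its covariance, Brownian motion with the covariance (1.6) satisfies (1.14) with H = 1/2,
⟨B_{λt} B_{λs}⟩ = 2D min(λt, λs) = λ ⟨B_t B_s⟩ = ⟨(λ^{1/2} B_t)(λ^{1/2} B_s)⟩ ,
so that λ^{−1/2} B_{λt} has the same law as B_t.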
Self-similarity, also known as scale invariance, is an important notion of statistical physics which also appears in other contexts [20]. Self-similar geometric objects are the well-known fractals [21, 22], and critical phenomena, or phase transitions, usually lead to a divergence of the typical length scale, leaving a large window of scale invariance for the system between its microscopic and macroscopic scales.

Extreme-value statistics and persistence

When dealing with random objects, the first and most natural questions to ask are related to averaged quantities or typical behavior. These questions are obviously an important step in understanding and comparing stochastic models to experiments or data, but there are also situations where the interest lies in extremes or rare events. As we already mentioned in section 1.1, the physics of disordered systems at low temperatures is governed by the states with (close to) minimal energy in the random energy landscape. In other contexts, extreme weather conditions are of great importance in the design of infrastructure such as dams and bridges. More generally, extreme-value questions appear naturally in many optimization problems.
The simplest and first case studied for these extreme-value statistics (EVS) was the distribution of the maximum of N independent and identically distributed (i.i.d.) random variables, which is now well understood in the large-N limit thanks to the classification of the Fisher-Tippett-Gnedenko theorem: depending on the initial distribution of the variables, the rescaled maximum follows either a Weibull, Gumbel or Fréchet distribution. These results are reviewed in [23] within a mathematical approach. For a physical presentation of extreme-value statistics, its links with the statistical physics of disordered systems, and more specifically the replica trick, we refer to [24]. Another physical interpretation of EVS in the context of depinning of a particle in disorder, where the three limiting distributions are relevant, can be found in [25].
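A short numerical illustration of the i.i.d. case (added here, not from the thesis): for exponential variables, the shifted maximum converges to the Gumbel distribution, Prob(max_{k≤N} X_k − ln N < z) → exp(−e^{−z}).

```python
import numpy as np

rng = np.random.default_rng(2)

N = 1000           # number of i.i.d. exponential variables per sample
n_samples = 20_000

# Maximum of N i.i.d. Exp(1) variables, shifted by ln N.
maxima = np.array([rng.exponential(size=N).max() for _ in range(n_samples)]) - np.log(N)

for z in (-1.0, 0.0, 1.0, 2.0):
    empirical = np.mean(maxima < z)
    gumbel = np.exp(-np.exp(-z))
    print(f"z = {z:+.1f}:  empirical {empirical:.3f}   Gumbel limit {gumbel:.3f}")
```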
The case of strongly correlated variables is a natural extension of this problem, as many physically relevant situations present deviations from the i.i.d. case. Many results were derived for random walks and Brownian motion [26], the free energy of a directed polymer on a tree [27], path lengths on trees [28] or hierarchically correlated variables [29]. The distribution of the largest eigenvalue is also a central question in random matrix theory [30, 31]. A very interesting case of a 1D Hamiltonian with disorder is studied in Ref. [32], where the transition between a strongly localised phase and a delocalised one corresponds, in the EVS language, to a "breaking of the Gumbel universality class": in the localised phase, the eigenvalues of the Hamiltonian are independent, while the delocalisation induces level correlations, leading to a new result for the order statistics of correlated variables, obtained explicitly in Ref. [33].
As we will see in more detail in section 1.3.5, Pickands and later Piterbarg were able to derive interesting results for the extreme-value distribution of generic Gaussian processes, leading to the introduction of the universal Pickands constants.
Extreme-value statistics is also closely linked to other interesting observables one can define on a stochastic process. A natural quantity to investigate is the distribution of the time it takes to reach a certain level, known as the hitting time. If for simplicity we choose this level to be 0, the probability that a continuous process X_t has not reached 0 up to time T (i.e. the hitting time is larger than T) is called the survival probability, usually defined with a fixed starting point x > 0:
S(T, x) = Prob(X_t > 0 , ∀t ∈ [0, T] | X_0 = x) . (1.15)
The asymptotic properties of this object have motivated a lot of work. In many cases of interest, the large-T behavior of the survival probability shows an algebraic decay, independent of x, with an exponent θ called the persistence exponent:
S(T, x) ∼ T^{−θ} . (1.16)
This exponent is non-trivial and difficult to compute in many situations, even for simple diffusion with random initial conditions [34]. An extensive review of these questions, in the context of statistical physics, can be found in [35]. For a mathematical approach to some other recent developments, we refer to [36]. All these objects are notably difficult to investigate in the case of non-Markovian processes. Some other studies in this case can be found in Refs. [37, 38, 39, 12].
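A minimal sketch (added for illustration) estimating the persistence exponent of Eq. (1.16) for a random walk started at x > 0, which falls in the Brownian universality class where θ = 1/2: the product S(T, x)·√T is roughly constant at large T.

```python
import numpy as np

rng = np.random.default_rng(3)

n_paths = 100_000
x0 = 1.0
checkpoints = {256, 1024, 4096}

x = np.full(n_paths, x0)
alive = np.ones(n_paths, dtype=bool)            # paths that have stayed strictly positive so far
for n in range(1, max(checkpoints) + 1):
    x += rng.choice([-1.0, 1.0], size=n_paths)
    alive &= (x > 0)
    if n in checkpoints:
        S = alive.mean()
        print(f"T = {n:5d}:  S(T) = {S:.4f},  S(T) * sqrt(T) = {S * np.sqrt(n):.3f}")
```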
To make the link with the previous section, it is useful to note that if X_t is a self-similar process of index H, one can construct a stationary process X̃(s) by defining
X̃(s) := e^{−Hs} X(e^s) . (1.17)
This duality is at the basis of several studies of the persistence exponent, where the algebraic decay of the survival probability of the self-similar process is transformed to an exponential decay in the stationary process. This is used in [40, 41, 34, 42, 43, 44, 45].
For example, if we apply the transformation (1.17) to the standard Brownian motion B_t, we obtain a Gaussian process B̃(s) with correlator
⟨B̃(s_1) B̃(s_2)⟩ = 2D exp(−|s_1 − s_2|/2) , (1.18)
which is nothing but an Ornstein–Uhlenbeck process [6].
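A short derivation of (1.18), added here for completeness, uses the covariance (1.6): with B̃(s) = e^{−s/2} B(e^s),
⟨B̃(s_1) B̃(s_2)⟩ = e^{−(s_1+s_2)/2} ⟨B(e^{s_1}) B(e^{s_2})⟩ = 2D e^{−(s_1+s_2)/2} min(e^{s_1}, e^{s_2}) = 2D e^{−|s_1−s_2|/2} ,
since min(e^{s_1}, e^{s_2}) = e^{min(s_1, s_2)} and min(s_1, s_2) − (s_1+s_2)/2 = −|s_1 − s_2|/2.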
Another natural extension of extreme-value statistics is to look at extreme values defined not relative to a fixed threshold, but relative to the previous extremes of the process. This leads to the statistics of records, with many new interesting questions. The notion of records has gained importance over the last decades, and it is quite common to see sports performances or climate data, to cite a few, analyzed in terms of records. Some recent studies on this topic can be found in [46, 47, 48, 49, 50].

Fractional Brownian motion (fBm)

Figure 1.2: Two realisations of fBm paths for different values of H, generated using the same random numbers for the Fourier modes in the Davis and Harte procedure [51]. Some of the observables studied in this thesis, the maximum value m and the time when it is reached tmax, are represented.

Definition and properties

Fractional Brownian motion is a generalization of standard Brownian motion, introduced previously in section 1.2.1. It shares two of the main properties of standard Brownian motion: it is a Gaussian process X_t, and its increments are stationary, i.e. the distribution of X_t − X_s depends only on the time difference t − s. However, relaxing the condition of independence of the increments (so the process can be non-Markovian) allows the process to be self-similar with arbitrary index H (between 0 and 1), in contrast to the fixed value 1/2 of the standard Brownian case. Following the historical introduction of Mandelbrot and Van Ness in [52], the self-similarity index H of a fBm is called the Hurst exponent.
As a Gaussian process, a fBm X_t is defined by its mean ⟨X_t⟩ = 0 and its covariance function (or 2-point correlation function)
⟨X_t X_s⟩ = |s|^{2H} + |t|^{2H} − |t − s|^{2H} . (1.19)
This constrains the process X_t to start at 0, X_0 = 0, but we can also consider a fBm Y_t starting at a non-zero value y = Y_0, simply defined as Y_t = X_t + y, with X_t as above. As we said, the case H = 1/2 corresponds to standard Brownian motion; there the covariance function (1.19) reduces to ⟨X_t X_s⟩ = 2 min(s, t), and it is the only value of H for which the process is Markovian. The correlations of the increments are given by
⟨∂_t X_t ∂_s X_s⟩ = 2H(2H − 1)|t − s|^{2(H−1)} . (1.20)
For H > 1/2 they are positively correlated, whereas for H < 1/2 they are anti-correlated. As we can see in Fig. 1.2, this makes the paths of a fBm with small H very rough, while for large H the paths are smoother. Mathematically this is formalized via Hölder continuity: the fBm paths X_t are almost surely Hölder-continuous of any order less than H.
The stationarity of the increments can be checked from the covariance (1.19) by computing the second moment
⟨(X_t − X_s)²⟩ = 2|t − s|^{2H} , (1.21)
and as X_t − X_s is a centered Gaussian variable, this proves that its full distribution depends only on |t − s|.
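Explicitly, the intermediate step (added here for clarity) is
⟨(X_t − X_s)²⟩ = ⟨X_t²⟩ + ⟨X_s²⟩ − 2⟨X_t X_s⟩ = 2|t|^{2H} + 2|s|^{2H} − 2(|s|^{2H} + |t|^{2H} − |t − s|^{2H}) = 2|t − s|^{2H} .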
In order for the process X_t to be well-defined, its covariance function (1.19) has to be a continuous and positive-definite function. This constrains the possible values of H, namely 0 < H ≤ 1. To see this, we can look at the covariance matrix of the process taken at times t_1 = 1 and t_2 = 2:
(⟨X_i X_j⟩)_{ij} = ( 2        2^{2H}
                     2^{2H}   2^{2H+1} ) . (1.22)
The diagonal terms are always positive, while the determinant 2^{2H+2} − 2^{4H} is negative if H > 1. This implies that one of its eigenvalues is negative, which is not possible for a Gaussian covariance matrix.
The limiting case H = 1 is a linear process with a single degree of freedom, its Gaussian random slope X_1: X_t = X_1 t. In the other limit, when H → 0, the correlations become logarithmic, and there is no unique definition of a limiting process corresponding to H = 0. Log-correlated Gaussian fields are an active topic of research, both in the mathematics and physics communities. Recent work can be found in Refs. [53, 54].
The study of extreme-value statistics for fractional Brownian processes started with Sinai [55], and led a few years later to the derivation of the persistence exponent (1.16) for the fBm by Molchan [56]: θ = 1 − H, a very non-trivial result cited by Nourdin in [57] as one of the most beautiful results about fBm. In the physics literature, this result was guessed a few years earlier using heuristic arguments by Krug et al. [40, 58]. Note that this topic is not closed, as the error terms, i.e. the bounds on the subleading corrections in the rigorous proofs of Eq. (1.16) for the fBm, seem still far from optimal. Aurzada gave a recent improvement in this direction [59], as well as an extension to moving boundaries in [60]. In chapter 2 of this thesis, various observables related to the extreme-value statistics of fBm are considered, and their distributions are computed with a perturbative approach around standard Brownian motion. This allows us to check some scaling behaviors involving the persistence exponent θ, and to go beyond them.
Finally, we note that, even if it will not be used in this thesis, the link between fBm and stochastic integration is an active topic in the mathematical community. It is for example possible to define fBm as the integral of a deterministic kernel with respect to a standard Brownian motion. Stochastic equations driven by fractional Brownian noise (the formal derivative of a fBm) are also interesting for various reasons, and difficult from a theoretical point of view, due to the lack of the martingale property. All these directions of study are well presented in the book of Nourdin [57].

History and applications

The history of fractional Brownian motion started with a study by Kolmogorov [61], even if the model was not clearly defined there. Then, Hurst showed, while studying the Nile river as well as other hydrological systems, that long-range dependence is a key factor to understand the statistics of certain time series, notably via the introduction of the rescaled-range observable [62]. This led Mandelbrot and Van Ness to name after him the parameter of their model of a self-similar Gaussian process with stationary increments, the fractional Brownian motion, defined in its final form in [52].
Interestingly, several processes commonly used in physics, mathematics, and computer science belong to the fBm class. For example, it was recently proven that the dynamics of a tagged particle in single-file diffusion, cf. [63, 64, 65], has at large times the fBm covariance function (1.19) with Hurst exponent H = 1/4. Experimental realisations of single-file diffusion can be found in [66, 67].
Anomalous diffusion, already discussed in section 1.2.4, is another interesting property of fractional Brownian motion. The fact that the self-similarity index H of a fBm is a free parameter allows us to model situations where the second moment grows non-linearly with time, ⟨X_t²⟩ = 2t^{2H}, both for super-diffusion and sub-diffusion.
Other applications of fBm can be found in: the diffusion of a marked monomer inside a polymer [17, 68, 69, 70], polymer translocation through a pore [17, 71, 72, 73], finance (fractional Black-Scholes, fractional stochastic volatility models, and their limitations) [74, 75, 76], telecommunications and networks [77], granular materials [78], on top of the historical application to hydrology [79, 80].
Let us now discuss how extended Markovian systems lead to non-Markovian dynamics when a single degree of freedom is considered. As a simple example we consider the Edwards-Wilkinson dynamics for an interface (or a polymer) parametrized by a function h of x, which evolves in time according to
∂_t h(x, t) = Δ_x h(x, t) + ξ(x, t) . (1.23)
This is a first-order differential equation with respect to time. The random noise ξ is Gaussian and uncorrelated in the t direction. This implies that the evolution of the whole system is Markovian. If we now choose a specific point x_0 and look at the evolution of h_{x_0}(t) = h(x_0, t), the Markovian nature is lost. This can be derived by first looking at (1.23) in the Fourier domain:
(∂_t + q²) ĥ(q, t) = ξ̂(q, t) , (1.24)
where the noise now has correlations ⟨ξ̂(q, t) ξ̂(q′, t′)⟩ = 2π δ(q + q′) δ(t − t′), assuming that the initial noise ξ(x, t) is uncorrelated in both the x and t directions, and that space is one-dimensional. For each value of q, equation (1.24) reduces to the evolution equation of an Ornstein-Uhlenbeck process [6], which allows us to express the correlations of the solution in Fourier variables, assuming we start with flat initial conditions h(x, t = 0) = 0.

Numerical simulations

The absence of the Markov property makes simulations of fractional Brownian motion less straightforward than for standard Brownian motion. Various methods have been developed depending on the objective: exact or approximate, with a fixed total length or not, etc. For a good presentation of these methods, with many details, we refer to [51] and [81].
In this thesis, we test most of our analytical results on fBm with numerical simulations. The nature of the observables we investigate fixes the length of the paths we need to generate (cf. the three arcsine laws). In this situation, the Davis and Harte algorithm is the best suited, as it is exact (cf. section 2.I) and fast: paths with N discrete points are generated in a time of order N ln N.
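For concreteness, here is a minimal sketch of a Davis-Harte-type (circulant-embedding) generator, following standard presentations of the method [51] rather than the exact implementation of section 2.I; the normalisation matches the covariance (1.19).

```python
import numpy as np

def fbm_davis_harte(n, hurst, rng):
    """Generate one fBm path X_0, ..., X_n on integer times with the covariance (1.19),
    by circulant embedding of the increment covariance (fractional Gaussian noise)."""
    k = np.arange(n + 1)
    # Increment autocovariance from Eq. (1.19): gamma(k) = |k+1|^2H + |k-1|^2H - 2|k|^2H.
    gamma = np.abs(k + 1) ** (2 * hurst) + np.abs(k - 1) ** (2 * hurst) - 2 * np.abs(k) ** (2 * hurst)
    # First row of the circulant embedding, length 2n: gamma(0..n-1), gamma(n), gamma(n-1..1).
    c = np.concatenate([gamma[:-1], gamma[::-1][:-1]])
    lam = np.fft.fft(c).real                      # eigenvalues of the circulant matrix
    lam = np.clip(lam, 0.0, None)                 # clip tiny negative values due to round-off
    M = c.size
    # Complex Gaussian coefficients; the real part of the FFT then has covariance gamma.
    f = np.sqrt(lam / M) * (rng.normal(size=M) + 1j * rng.normal(size=M))
    increments = np.fft.fft(f).real[:n]
    return np.concatenate([[0.0], np.cumsum(increments)])    # X_0 = 0

rng = np.random.default_rng(5)
H = 0.25
paths = np.array([fbm_davis_harte(512, H, rng) for _ in range(2000)])
# Consistency check of the normalisation: <X_t^2> = 2 t^(2H), cf. Eq. (1.19) at t = s.
for t in (64, 256, 512):
    print(f"t = {t:3d}:  <X_t^2> = {paths[:, t].var():7.2f}   expected 2 t^(2H) = {2 * t ** (2 * H):7.2f}")
```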

Fractional Brownian bridges

When studying a random process X_t on a time interval [0, T], quite generally the initial value X_0 is known, and the endpoint X_T is itself a random variable determined by the random process. On the other hand, there are also cases where one knows the endpoint X_T. These processes are referred to as bridges. Fractional Brownian bridges, with the endpoint chosen as X_1 = 0, are presented in Fig. 1.3 for different values of H.
Bridges are useful building blocks in constructing more complicated observables; we will see an application of this idea when looking at the positive time of a process, cf. section 3.2. They are also commonly used in constructing refinements of random walks, e.g. for financial modeling [82]. Finally, they appear as the difference from the asymptotic limit in the construction of the empirical distribution function [83].
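A standard fact about Gaussian processes, recalled here for reference (not specific to this thesis): conditioning a centered Gaussian process with covariance C(t, s) on the endpoint X_T = 0 yields again a centered Gaussian process, the bridge, with covariance
C_br(t, s) = C(t, s) − C(t, T) C(s, T)/C(T, T) .
For Brownian motion, with C(t, s) = 2D min(t, s) and T = 1, this reduces to the familiar Brownian bridge covariance 2D (min(t, s) − ts); for a fBm bridge one inserts the covariance (1.19) instead.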

Table of contents:

1 Introduction 
1.1 Statistical physics and disordered systems
1.2 Probability theory and stochastic processes
1.2.1 Random walks and Brownian motion
1.2.2 Markov processes
1.2.3 The three Levy’s arcsine laws
1.2.4 Anomalous diffusion and self-similarity
1.2.5 Extreme-value statistics and persistence
1.3 Fractional Brownian motion (fBm)
1.3.1 Definition and properties
1.3.2 History and applications
1.3.3 Numerical simulations
1.3.4 Fractional Brownian bridges
1.3.5 Pickands constants
1.4 Elastic interfaces in disordered media
1.4.1 Generical ideas
1.4.2 Some applications and experimental realisations
1.4.3 Functional renormalisation
1.4.4 The mean field model and the Brownian force model
2 Extreme-value statistics of fractional Brownian motion 
2.1 Presentation of the chapter
2.2 Perturbative approach to fBm
2.2.1 Path integral formulation and the action
2.2.2 The order-0 term
2.2.3 The first-order terms
2.2.4 Graphical representation and diagrams
2.3 Analytical Results
2.3.1 Scaling results
2.3.2 The complete result
2.3.3 The third arcsine law
2.3.4 The distribution of the maximum
2.3.5 Survival probability
2.3.6 Joint distribution
2.4 Numerical Results
2.4.1 The third arcsine Law
2.4.2 The distribution of the maximum
2.5 Conclusions
2.A Details on the perturbative expansion
2.B Recall of an old result
2.C Computation of the new correction
2.D Correction to the third arcsine Law
2.E Correction to the maximum-value distribution
2.F Correction to the survival distribution
2.G Special functions and some inverse Laplace transforms
2.H Check of the covariance function
2.I The Davis and Harte algorithm
3 The first and second arcsine laws 
3.1 Presentation of the chapter
3.2 Positive time of a Brownian motion
3.2.1 Positive time of a discrete random walk
3.2.2 Propagators in continuous time
3.3 Time of a fBm remains positive
3.4 Last zero of a fBm
3.4.1 The Brownian case
3.4.2 Scaling and perturbative expansion for the distribution of the last zero
3.4.3 The two non-trivial diagrams at second order
3.A Action at second order
4 Fractional Brownian bridges and positive time 
4.1 Presentation of the chapter
4.2 Preliminaries: Gaussian Bridges
4.3 Time a fBm birdge remains positive
4.3.1 Scale invariance and a useful transformation
4.3.2 FBm Bridge
4.3.3 Numerical results
4.4 Extremum of fBm Bridges
4.4.1 Distribution of the time to reach the maximum
4.4.2 The maximum-value distribution
4.4.3 Optimal path for fBm, and the tail of the maximum distribution
4.4.4 Joint Distribution
4.5 Conclusions
4.A Details on correlation functions for Gaussian bridges
4.B Abel transform
4.C Inverse Laplace transforms, and other useful relations
5 FBm with drift and Pickands constants 
5.1 Presentation of the chapter
5.2 Brownian motion with drift
5.3 Pertubative expansion around Brownian motion
5.3.1 Action with drift
5.3.2 Survival probability and Pickands constants
5.3.3 Maximum-value distribution in the large time limit
5.4 Maximum-value of a fBm Bridge and Pickands constants
5.A Derivation of the action
5.B Details of calculations
5.C Scaling
6 Avalanches in the Brownian force model 
6.1 Presentation of the chapter
6.2 Avalanche observables in the BFM
6.2.1 The Brownian Force Model
6.2.2 Avalanche observables and scaling
6.2.3 Generating functions and instanton equation
6.3 Distribution of avalanche size
6.3.1 Global size
6.3.2 Local size
6.3.3 Joint global and local size
6.3.4 Scaling exponents
6.4 Driving at a point: avalanche sizes
6.4.1 Imposed local force
6.4.2 Imposed displacement at a point
6.5 Distribution of avalanche extension
6.5.1 Scaling arguments for the distribution of extension
6.5.2 Instanton equation for two local sizes
6.5.3 Avalanche extension with a local kick
6.5.4 Avalanche extension with a uniform kick
6.6 Non-stationnary dynamics in the BFM
6.7 Conclusion
6.A Airy functions
6.B General considerations on the instanton equation
6.C Distribution of local size
6.D Joint density
6.E Imposed local displacement
6.F Some elliptic integrals for the distribution of avalanche extension
6.G Joint distribution for extension and total size
6.H Numerics
6.I Weierstrass and Elliptic functions
6.J Non-stationary dynamics
7 General conclusion
Bibliography
