
Stochastic processes and simulation methods

In this Chapter I present analytical and numerical tools required to tackle the challenges of the biological models used in the Chapters that follow.

Stochastic processes in statistical mechanics

In this section I present the tools from statistical mechanics used in the following parts of this work. See [24] for a more detailed presentation.

Markov processes

We define a stochastic process as a random path X(t), a function of time. The variable X itself can be multidimensional or even infinite dimensional. Note that the time t can be either continuous or discrete.
When X can take discrete values, the probability distribution of the process X at time t is written as P (x, t). When the process can take continuous values the probability for X to be in the window dx is p(x, t)dx.
The simplest stochastic process is a process independent in time, where the joint probability distribution factorizes as:
p(x_1, t_1; x_2, t_2; \dots; x_n, t_n) = \prod_{i=1}^{n} p(x_i, t_i). \quad (2.1)
A generalization of independent processes, known as a Markov process, is not uncorrelated with its past; rather, all the memory is captured in the variable itself at all times, so that the conditional probability depends only on the most recent point of the past:
p(x, t \mid x_1, t_1; x_2, t_2; \dots; x_n, t_n) = p(x, t \mid x_n, t_n), \quad (2.2)
where t_1 < t_2 < \dots < t_n < t.
Markov processes are so useful because any joint probability can be written as a product of two-point conditional probabilities and one initial condition:
p(x_1, t_1; x_2, t_2; \dots; x_n, t_n) = p(x_n, t_n \mid x_{n-1}, t_{n-1})\, p(x_{n-1}, t_{n-1} \mid x_{n-2}, t_{n-2}) \cdots p(x_2, t_2 \mid x_1, t_1)\, p(x_1, t_1), \quad (2.3)
where t_1 < t_2 < \dots < t_n. So a Markov process is entirely determined by an initial (or final) fixed-time probability distribution and its two-point conditional probabilities (or transition probabilities).
Markov processes obey the probability conservation equation known as the Chapman-Kolmogorov equation (or Smoluchowski equation):
p(x_1, t_1 \mid x_3, t_3) = \int dx_2\, p(x_1, t_1 \mid x_2, t_2)\, p(x_2, t_2 \mid x_3, t_3), \quad (2.4)
where t1 > t2 > t3.
In a time homogeneous Markov process the laws of the dynamics do not depend on time, and the transition probability only depends on the time difference:
p(x_2, t_2 \mid x_1, t_1) = f(x_1, x_2, t_2 - t_1), \quad (2.5)
where again t2 > t1.
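For a time homogeneous chain with discrete states, Eq. 2.4 reduces to a matrix identity: the integral over the intermediate point becomes a matrix product. A minimal numerical sketch (the 3-state transition matrix is an arbitrary illustration, not taken from the text):

```python
import numpy as np

# Column-stochastic transition matrix: T[i, j] = p(i, t + 1 | j, t).
# The numbers are an arbitrary 3-state illustration.
T = np.array([[0.8, 0.3, 0.1],
              [0.1, 0.5, 0.2],
              [0.1, 0.2, 0.7]])

# Chapman-Kolmogorov (Eq. 2.4), discrete form:
# p(x1, t+2 | x3, t) = sum_x2 p(x1, t+2 | x2, t+1) p(x2, t+1 | x3, t)
two_step = T @ T

# The two-step propagator is still a stochastic matrix:
# each column of two_step sums to 1.
```
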
Markov processes are ubiquitous in statistical mechanics and biophysics as most seemingly non-Markovian systems can be expanded to form Markov processes in higher dimensions [2].

Jump processes and the Master equation

In this subsection, I describe jump processes in continuous time. Jump processes in discrete time are much simpler to define, and their Master equations can be derived from the continuous time case. It so happens that many processes in statistical mechanics and in biology are not continuous, or are most efficiently described by discontinuous processes. More specifically, discontinuous processes fall into the class of jump processes.
A jump process is a random piecewise constant function that “jumps” from state to state. In a discrete state space, the process is defined by the states \{\sigma\}_{\sigma \in S} and the transition rates or jump rates
W_{\sigma' \sigma} = \lim_{\Delta t \to 0} \frac{1}{\Delta t}\, P(\sigma', t + \Delta t \mid \sigma, t), \quad (2.6)
which are assumed to be finite and defined for \sigma \neq \sigma'. It follows from Eq. 2.6 that
\partial_t P(\sigma, t) = \sum_{\sigma' \neq \sigma} \left[ W_{\sigma \sigma'} P(\sigma', t) - W_{\sigma' \sigma} P(\sigma, t) \right]. \quad (2.7)
This equation is called the Master equation and represents a very detailed description of a stochastic process. In the most general case, the jump rates can be functions of time, although the analyses presented in this work are always set in a time homogeneous framework.
In a continuous state space s \in S we define the jump rates from s to s', W(s' \mid s), as density functions:
W(s' \mid s)\, \Delta s' = \lim_{\Delta t \to 0} \frac{1}{\Delta t}\, p(s', t + \Delta t \mid s, t)\, \Delta s', \quad (2.8)
and the Master equation is
\partial_t p(s, t) = \int ds' \left[ W(s \mid s')\, p(s', t) - W(s' \mid s)\, p(s, t) \right]. \quad (2.9)
Note that the time spent in a state before jumping is exponentially distributed, with parameter \sum_{\sigma'} W_{\sigma' \sigma} in the discrete case and \int ds'\, W(s' \mid s) in the continuous case. It is also worth noting that all jump processes are Markov processes.
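The exponential waiting times make continuous-time jump processes straightforward to simulate: draw a dwell time from the total exit rate of the current state, then jump. A sketch for a two-state process (the rates k_on and k_off are illustrative assumptions, not from the text); the fraction of time spent in the “on” state should approach the Master-equation steady state k_on/(k_on + k_off):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state continuous-time jump process with illustrative rates.
k_on, k_off = 2.0, 1.0   # off -> on and on -> off jump rates

def occupation_fraction(t_max):
    """Simulate until t_max and return the fraction of time spent 'on'."""
    t, state, t_on = 0.0, 0, 0.0   # state 0 = off, 1 = on
    while t < t_max:
        rate = k_on if state == 0 else k_off   # total exit rate of the state
        dwell = min(rng.exponential(1.0 / rate), t_max - t)
        if state == 1:
            t_on += dwell
        t, state = t + dwell, 1 - state        # jump to the other state
    return t_on / t_max

# Steady state of the Master equation: P(on) = k_on / (k_on + k_off) = 2/3.
frac = occupation_fraction(50_000.0)
```
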

Transition matrices

In this paragraph we restrict our analysis to discrete state Markov systems known as Markov chains. The definitions and results can be extended to continuous states by replacing transition matrices with kernels.
Let X_n be a Markov chain with discrete time steps n \geq 1. We can encode the jump rates in a matrix T called a transition matrix:
T_{\sigma' \sigma} = W_{\sigma' \sigma} \quad (2.10)
if \sigma \neq \sigma', and
T_{\sigma \sigma} = 1 - \sum_{\sigma' \neq \sigma} W_{\sigma' \sigma}. \quad (2.11)
In its most general form the matrix T is a function of time. T_{\sigma' \sigma} represents the probability to be in state \sigma' at time n + 1 given that the system was in state \sigma at time n. Let P(n) be the vector of probabilities with coordinates P_\sigma(n). Then the Master equation can be rewritten in the simpler form
P(n + 1) = T P(n). \quad (2.12)
If the system is time homogeneous, then T is independent of time, and we can write
P(n + m) = T^m P(n). \quad (2.13)
T is a left stochastic matrix (each column sums to 1). It means that 1 is an eigenvalue. The corresponding eigenvectors (once normalized) are called the stationary distributions of the system. The stationary distribution is unique under certain irreducibility conditions on the Markov chain that are not discussed in this Dissertation.
For a continuous time system we have the equivalent of Eq. 2.12:
\partial_t P(t) = [T - \mathbb{1}]\, P(t). \quad (2.14)
Defining U = T - \mathbb{1} we get the equivalent of Eq. 2.13 for a time homogeneous system:
P(t) = e^{U (t - s)} P(s), \quad (2.15)
where s < t.
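These relations are easy to check numerically: the stationary distribution is the eigenvector of T with eigenvalue 1, and iterating Eq. 2.13 from any initial condition converges to it. A sketch with an arbitrary 3-state left stochastic matrix:

```python
import numpy as np

# Illustrative 3-state left stochastic matrix (columns sum to 1).
T = np.array([[0.90, 0.20, 0.10],
              [0.05, 0.70, 0.30],
              [0.05, 0.10, 0.60]])

# Stationary distribution: eigenvector of T with eigenvalue 1,
# normalized to a probability vector.
vals, vecs = np.linalg.eig(T)
i = np.argmin(np.abs(vals - 1.0))
p_stat = np.real(vecs[:, i])
p_stat /= p_stat.sum()

# Eq. 2.13: iterating the chain from any initial condition converges to p_stat.
p0 = np.array([1.0, 0.0, 0.0])
p_long = np.linalg.matrix_power(T, 200) @ p0
```
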

Langevin equations

In this section, we present the Langevin formalism in the one-dimensional case for simplicity.
See [24] for complete proofs and the multidimensional case.
A Langevin equation is a random process defined by a dynamical equation for the trajectories of the form:
\partial_t x(t) = a(x) + b(x)\, \xi(t), \quad (2.16)
where \xi(t) is a Gaussian white noise with correlation function
\langle \xi(t)\, \xi(s) \rangle = \delta(t - s). \quad (2.17)
There are many definitions of the Gaussian white noise as a limit of time-correlated noise. See [24] for a more complete discussion of these definitions and their implications on the definition of stochastic integrals.
The formalism of Langevin equations is convenient because it is close to the type of equations that physicists are used to writing for deterministic dynamics, with the addition of the random Langevin force. It is often easier to derive Langevin equations from the microscopic description of trajectories in large systems and then extract information about the whole population by moving to the Fokker-Planck framework or manipulating the Langevin equation directly. This is in particular the approach followed in Chapter 4 and Appendix A.
Eq. 2.16 is ambiguous. When building Riemann integrals, the choice of discretization for the sums (of which the integral is the continuous limit) does not matter, as the difference between conventions vanishes for small tilings. In stochastic integrals, however, the Gaussian white noise is \delta-correlated, and so the point where the functions are evaluated when building the integral matters. The convention depends on a parameter traditionally named \alpha, which defines the integral solution of Eq. 2.16:
x(t + \Delta t) - x(t) = a(x_\alpha)\, \Delta t + b(x_\alpha) \int_t^{t + \Delta t} \xi(s)\, ds, \quad (2.18)
where
x_\alpha(t) = (1 - \alpha)\, x(t) + \alpha\, x(t + \Delta t). \quad (2.19)
Choices of α are related to the fine details of the model and will be discussed more thoroughly in section 2.1.8. It is enough to say here that changing α is equivalent to systematically adding an extra term in Eq. 2.16.
Langevin equations are equivalent to the mathematical formalism of stochastic differential equations, where the Gaussian white noise is replaced with the Wiener process. The systematic use of the Itô convention (\alpha = 0) in mathematics is inconvenient to physicists, making Langevin equations a more natural framework.
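As an illustration of the formalism, the Langevin equation ∂t x = −x + ξ(t) (an Ornstein-Uhlenbeck process, with additive noise so that all α conventions coincide) can be integrated with the Euler-Maruyama scheme; its stationary variance is 1/2. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Euler-Maruyama integration of dx/dt = -x + xi(t).
# The noise is additive (b(x) constant), so the alpha convention is irrelevant.
dt, n_steps, n_traj = 0.01, 2000, 5000
x = np.zeros(n_traj)
for _ in range(n_steps):
    # Integral of xi(t) over a step dt: Gaussian with variance dt (Eq. 2.17).
    noise = rng.normal(0.0, np.sqrt(dt), size=n_traj)
    x += -x * dt + noise

# The stationary distribution is Gaussian with mean 0 and variance 1/2.
var = x.var()
```
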

The Fokker-Planck equation: expanding the Master equation

The Fokker-Planck equation is an approximate description of a given Markov process. It can be seen as a limit of Master equations of jump processes where jumps are so numerous and small that the system becomes continuous.
Let X be a jump process on a continuous state space described by Eq. 2.9. We rewrite the jump rate W(s' \mid s) as a function of the initial point and the jump length, W(s, s' - s). We assume that jumps are very small (so W(s, r) is sharply peaked around r = 0). We then Taylor expand W in Eq. 2.9 to get the Kramers-Moyal expansion:
\partial_t p(x, t) = \sum_{n=1}^{+\infty} \frac{(-1)^n}{n!} \left( \frac{\partial}{\partial x} \right)^n \left[ a_n(x)\, p(x, t) \right], \quad (2.20)
where the moments of the jump rates a_n(x) are
a_n(x) = \int dr\, W(x, r)\, r^n = \lim_{\Delta t \to 0} \frac{1}{\Delta t} \int dy\, (y - x)^n\, p(y, t + \Delta t \mid x, t). \quad (2.21)
The second right hand side in Eq. 2.21 shows how the Fokker-Planck equation emerges as a limit of jump processes. In many processes used in statistical mechanics and biophysics (and, in particular, processes deriving from Gaussian white noise or the Wiener process), the moments vanish for n \geq 3 (even when they do not, the Fokker-Planck equation is often an efficient simplifying assumption that still describes the dynamics very well). This is equivalent to saying that the higher moments of the variation of the process within a time step \Delta t are bounded by a term of order \Delta t^2. Under this assumption Eq. 2.20 reduces to the Fokker-Planck equation
\partial_t p(x, t) = -\partial_x \left[ a_1(x)\, p(x, t) \right] + \frac{1}{2}\, \partial_x^2 \left[ a_2(x)\, p(x, t) \right], \quad (2.22)
also known as the Kolmogorov forward equation. Eq. 2.22 is particularly convenient as it reduces a very high dimensional problem (the Master equation) to a low dimensional continuous equation with a very specific form. At equilibrium, the solution to the Fokker-Planck equation is simply given by
p_{\mathrm{eq}}(x) = \frac{K}{a_2(x)} \exp\left( 2 \int_{x_0}^{x} dy\, \frac{a_1(y)}{a_2(y)} \right), \quad (2.23)
where K is determined by normalizing the distribution.
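Eq. 2.23 can be checked against a case with a known answer: for a1(x) = −x and a2(x) = 1 (the Ornstein-Uhlenbeck process) it gives p_eq(x) ∝ exp(−x²), a Gaussian of variance 1/2. A numerical sketch of the formula:

```python
import numpy as np

# Equilibrium solution of the Fokker-Planck equation (Eq. 2.23) for
# a1(x) = -x, a2(x) = 1; the lower bound x0 only changes the constant K,
# which is absorbed by normalization.
x = np.linspace(-5.0, 5.0, 2001)
a1, a2 = -x, np.ones_like(x)

# Cumulative trapezoidal integral of 2 * a1 / a2.
integrand = 2.0 * a1 / a2
cumint = np.concatenate(([0.0],
                         np.cumsum(0.5 * (integrand[1:] + integrand[:-1])
                                   * np.diff(x))))
p_eq = np.exp(cumint) / a2
p_eq /= np.trapz(p_eq, x)          # fix K by normalization

# Should match the Gaussian result: variance 1/2.
variance = np.trapz(x**2 * p_eq, x)
```
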
However, many physical systems and most biological systems are not at equilibrium, and the Fokker-Planck formalism remains very convenient in these situations: it is very easy to add a source term to Eq. 2.22 when the system is kept out of equilibrium by the introduction of new particles.
Indeed, let us assume, for instance, that p(x, t) represents the probability for a population to have x individuals at time t in an environment with insufficient resources (we ignore competition). The drift term a_1(x) represents the decay of the population in these harsh conditions and can be written as -\nu x (where \nu is a death rate), and the diffusion term a_2(x) is the result of birth-death fluctuations in the population and so is proportional to x. We can see that the equilibrium solution of this system is a delta function at 0, because the population is bound to go extinct at long times (which is consistent with Eq. 2.23, as the 1/a_2(x) \propto 1/x prefactor is not integrable around 0). If we now keep the system out of its equilibrium by constantly adding new species at random or deterministic times, with rate s and a distribution of introduction sizes \theta(x), the system can still be described by a Fokker-Planck equation. Eq. 2.22 is then modified by adding a source term:
\partial_t p(x, t) = -\partial_x \left[ a_1(x)\, p(x, t) \right] + \frac{1}{2}\, \partial_x^2 \left[ a_2(x)\, p(x, t) \right] + s\, \theta(x), \quad (2.24)
and \theta can even depend on time. Some details of the source of new species are not included in the equation, such as the arrival time distribution, which could be rigorously accounted for in the Master equation. At the level of a large population such details do not necessarily matter. If \theta is independent of time, there exists a steady state solution to Eq. 2.24. Its properties will be discussed in 2.1.9.
There exist different variations of the Fokker-Planck equation (backward and forward) that can tackle a very wide range of problems (e.g., moments, escape problems, first passage times). I will not discuss these here.

From Langevin to Fokker-Planck

The Langevin and the Fokker-Planck formalisms are equivalent, and it is very useful to have a set of rules to go from one description to the other as most problems are easier to solve (or to define) in one of the two descriptions.
From Eq. 2.16 we can extract the moments of the Kramers-Moyal expansion,
a_n(x) = \lim_{\Delta t \to 0} \frac{\langle \Delta x^n \rangle}{\Delta t}, \quad (2.25)
by Taylor expanding Eq. 2.16 in x and t and computing the average of the powers of \xi. We find that
a_1(x) = a(x) + \alpha\, b(x)\, b'(x), \qquad a_2(x) = b^2(x), \quad (2.26)
where \alpha defines the type of stochastic integral used (Itô or Stratonovitch, for instance). We immediately get the Fokker-Planck equation associated with Eq. 2.16:
\partial_t p(x, t) = -\partial_x \left[ \left( a(x) + \alpha\, b(x)\, b'(x) \right) p(x, t) \right] + \frac{1}{2}\, \partial_x^2 \left[ b^2(x)\, p(x, t) \right]. \quad (2.27)
The role of α in this equation will be discussed in 2.1.8, but we can already see that changing α simply adds an extra drift term: α ≠ 0 means that the noise can influence itself and create a systematic drift.
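The extra drift is easy to exhibit numerically. For pure multiplicative noise, ∂t x = b(x)ξ with b(x) = x, the Itô reading (α = 0) conserves the mean, while α = 1/2 (Stratonovitch) adds the drift α b b′ = x/2, so the mean grows as e^(t/2). A sketch simulating both discretizations of the same equation (step size and trajectory count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Multiplicative noise dx/dt = x * xi(t), discretized two ways.
dt, n_steps, n_traj = 0.01, 100, 50_000
x_ito = np.ones(n_traj)
x_str = np.ones(n_traj)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_traj)
    x_ito += x_ito * dW                        # alpha = 0: no spurious drift
    x_str += 0.5 * x_str * dt + x_str * dW     # alpha = 1/2: drift alpha*b*b'

t = dt * n_steps   # t = 1
# Ito mean stays at 1; the alpha = 1/2 mean grows to exp(t/2).
```
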


Boundary conditions

In many problems of biophysics the range that the stochastic process can reach is not infinite, but bounded. Species populations or numbers of proteins cannot be negative (as particle physicists haven’t yet produced anti-proteins), and many populations are bounded by a carrying capacity defined by available space or homeostatic constraints. In all these cases, it is crucial to determine what happens to the process when it reaches the boundary of the available domain. These “details” can make a lot of seemingly simple problems intractable. For continuous systems, these conditions are easier to implement in the Fokker-Planck formalism. The most common types of boundary conditions include:
• Absorbing boundary conditions: when reaching the wall, the process is stopped and stays at the wall. This is very common in population dynamics, as a species reaching a population size of 0 usually cannot expand back to positive values without exterior help (since Pasteur and the end of spontaneous generation). Formally it is equivalent to imposing that p(x, t) = 0 at the boundary. Absorbing boundary conditions can also be used to compute the statistics of species extinctions: the mass loss in the probability distribution represents the extinction rate as species hit the wall. These problems can be tackled very elegantly with the formalism of path integrals.
• Reflecting boundary conditions: when reaching the wall, the process is sent back into the bulk, usually in the opposite direction with the same speed. This type of boundary condition arises in mechanics when elastic collisions happen at the walls. In population dynamics, carrying capacities can be represented by reflecting boundary conditions, and the population flux through the carrying capacity wall should be assumed to be 0. Technically this means that the flux of probability at the wall is set to 0.
• Periodic boundary conditions: when hitting one wall, the process reenters the domain through another wall. There is little biological motivation for using periodic boundary conditions, as biological systems (or any real system in general) rarely behave that way. This type of condition is useful, though, for representing very large systems in numerical simulations and equation solving, because periodic boundary conditions usually create fewer problems than having the distribution vanish at the edge of a tiled space. Periodic boundary conditions also preserve some translational symmetries.
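The first two conditions can be illustrated with a simple unbiased random walk started near a wall at 0 (the starting point and step count are arbitrary illustrative choices): with absorbing conditions most walkers are eventually stopped at the wall (mass loss, i.e. extinctions), while with reflecting conditions every walker stays in the domain.

```python
import numpy as np

rng = np.random.default_rng(3)

def walk(boundary, x0=3, n_steps=200):
    """Unbiased random walk with a wall at 0, absorbing or reflecting."""
    x = x0
    for step in rng.choice((-1, 1), size=n_steps):
        x += step
        if boundary == 'absorbing' and x == 0:
            return 0            # stopped: probability mass leaves the system
        if boundary == 'reflecting' and x < 0:
            x = 1               # elastic bounce back into the bulk
    return x

n = 2000
absorbed = sum(walk('absorbing') == 0 for _ in range(n)) / n
survivors = [walk('reflecting') for _ in range(n)]
# Most absorbing walkers go extinct; all reflecting walkers stay >= 0.
```
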

The old problem of Itô and Stratonovitch

The introduction and the choice of the discretization parameter α in section 2.1.4 can seem a bit mysterious. We have seen that the choice of α influences the form and the solutions of the Fokker-Planck equation corresponding to the Langevin dynamics, so it is clear that it has an impact on the trajectories and cannot be ignored. The two most common choices for α are Itô (α = 0) and Stratonovitch (α = 1/2) [25]. The chain rule only applies to Langevin equations in the Stratonovitch formalism, while most mathematical work is done with the Itô one. Some people advocate the use of α = 1 as a fully anticipating noise.
The choice of α is part of the model and should be based on a careful analysis of the physical or biological phenomena at hand. Attempts to determine the “right” α date back to 1974 [26], and after more than forty years there is still no absolute answer, although the consensus is that in most physical situations the Stratonovitch rule applies. What emerges is that intrinsic and extrinsic noise, as defined below, should not be treated the same way. I give below a summary of the main points of [25], where a detailed analysis of intrinsic and extrinsic noise is given.
α represents how much the noise anticipates the future (an important point for financial markets in particular): a non-zero value of α means that, at the microscopic level, the noise can act on itself and be self-correlated even at very small time scales. In that sense, the discretization of an extrinsic noise (such as a random external force driving a mechanical system or the introduction of new pathogens into the immune system) should lead to α = 1/2: the noise, being produced outside of the system, does not have to “wait” for the system to evolve, and there is no reason for the effect of the noise on itself not to be centered. The continuous time equation does not need to include the limit of a discrete delay of the effect of the noise on itself.
For intrinsic noise (such as birth-death noise in population dynamics), the effect of the noise on the dynamics should be delayed, as it is produced by the noisy system itself. In this case, the noise should not anticipate the future and the Itô convention should apply.
Thirty years later, after much discussion, a rather similar picture still holds [27], with the important addition that noise in continuous limits of discrete, discontinuous processes should also obey the Itô rule (as the discontinuity and discreteness mean that events are scarce and cannot affect the noise itself without delay).
The effect of these choices will be illustrated and discussed at length in the context of population dynamics in immunology in Chapter 4 and Appendix A.

In and out of equilibrium

Thermodynamics was initially formulated at equilibrium, where all variables are well defined. Since the 1970s, methods to deal with the difficulty of out-of-equilibrium systems have flourished, and we are now armed with a variety of tools to tackle these challenges. Most biological systems are set out of equilibrium, including both the immune system of Chapter 4 and Appendix A and the gene regulatory network of Chapter 7.
A state of equilibrium is reached when the stochastic process satisfies detailed balance. In the language of 2.1.2, at any steady state we have
\partial_t P(\sigma, t) = 0 \;\Rightarrow\; P(\sigma, t) \sum_{\sigma'} W_{\sigma' \sigma} = \sum_{\sigma'} P(\sigma', t)\, W_{\sigma \sigma'}. \quad (2.28)
Beyond this, what detailed balance ensures is that “each elementary process is equilibrated by its reverse process”:
P(\sigma, t)\, W_{\sigma' \sigma} = P(\sigma', t)\, W_{\sigma \sigma'}. \quad (2.29)
At equilibrium, in particular, the transition matrix can be symmetrized and its eigenvalues become real; it is then much easier to directly access the equilibrium distribution. At equilibrium the system is reversible: no entropy is produced and no energy is dissipated. In a system out of equilibrium, the forward and backward paths do not have the same probability to happen (see [28] for details), and this irreversibility is related to entropy production and energy dissipation [29].
Systems in biology are out of equilibrium for essentially three reasons: they are transient and have not yet reached a steady state or equilibrium; there is a source disturbing the system; or the system is at steady state but going through cycles that are not reversible.
In Chapter 4, the immune system is kept out of equilibrium by the constant introduction of new lymphocytes and new antigens, as well as by the decay of antigenic concentration in the body. The bath (here the outside world and the bone marrow or thymus) creates a flux of cells and clones through the system. One of the recent ideas in the field of immunology is to use this population or fitness flux to quantify how far out of equilibrium the system is [30].
In Chapter 7, the activation of the gene state by the Bicoid protein is represented by a Markov chain with different assumptions (or a Gamma model). When the chain has only two states with exponential jumps (Fig 2.1 A), the system is reversible as it can jump from any state to any other state directly, and the steady state solution is at equilibrium. In the Markov models with a higher number of states (Fig 2.1 B), the chain is irreversible (in the specific models of Chapter 7). The cycle structure creates a flux and dissipates energy. The theoretical and biological meaning of fluxes in gene regulatory networks is studied at length in [31].
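The cycle-driven flux can be made concrete with a minimal three-state cycle with biased rates (the numbers are illustrative, not the models of Chapter 7): the steady state of the Master equation is uniform, detailed balance (Eq. 2.29) is violated, and a net probability current circulates around the cycle.

```python
import numpy as np

# Three-state cycle with biased clockwise / counter-clockwise rates.
k_plus, k_minus = 2.0, 0.5
n = 3
W = np.zeros((n, n))            # W[i, j]: jump rate j -> i
for j in range(n):
    W[(j + 1) % n, j] = k_plus
    W[(j - 1) % n, j] = k_minus

# Master equation generator: off-diagonal W, diagonal minus total exit rates.
L = W - np.diag(W.sum(axis=0))

# Steady state: null eigenvector of the generator, normalized.
vals, vecs = np.linalg.eig(L)
p = np.real(vecs[:, np.argmin(np.abs(vals))])
p /= p.sum()

# Net flux across one link: zero iff detailed balance (Eq. 2.29) holds.
J = p[0] * W[1, 0] - p[1] * W[0, 1]   # here (1/3)*(k_plus - k_minus) = 0.5
```
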

About non-Markovian processes

Almost all the processes presented in this work are Markov processes. The tools that have been developed for Markov analysis are so powerful, that in most situations it is worth going through the trouble of finding a Markov description for the system of interest. In some cases where the non-Markovian nature of the process is deep (in a sense that we will try to define later), it can prove impossible to map the problem onto a Markov case.

Table of contents:

1 Introduction 
2 Stochastic processes and simulation methods 
2.1 Stochastic processes in statistical mechanics
2.1.1 Markov processes
2.1.2 Jump processes and the Master equation
2.1.3 Transition matrices
2.1.4 Langevin equations
2.1.5 The Fokker-Planck equation: expanding the Master equation
2.1.6 From Langevin to Fokker-Planck
2.1.7 Boundary conditions
2.1.8 The old problem of Itô and Stratonovitch
2.1.9 In and out of equilibrium
2.1.10 About non-Markovian processes
2.1.11 Correlation, covariance, and autocorrelation functions
2.2 Computational methods
2.2.1 Simulating random population models: Gillespie method and fixed time step methods
2.2.2 Numerical solutions to partial differential equations (PDEs)
2.2.3 Optimization and Maximum Likelihood estimation
2.2.4 The example of least squares
3 Introduction to immunology 
3.1 The immune system
3.1.1 The role of the immune system
3.1.2 Actors of the immune system
3.1.3 B cells and T cells
3.1.4 The adaptive immune system and the immune response
3.1.5 The naive and memory pool
3.1.6 Immune systems across species
3.2 Experimental clone size distributions
3.3 Models of the immune system
3.3.1 Why model the immune system?
3.3.2 A model of antigenic stimuli
3.3.3 Current models of the immune system
4 Fitness shapes clone size distributions of immune repertoires 
4.1 Significance
4.2 Abstract
4.3 Introduction
4.4 Results
4.4.1 Clone dynamics in a fluctuating antigenic landscape
4.4.2 Simplified models and the origin of the power law
4.4.3 A model of fluctuating phenotypic fitness
4.5 Discussion
5 Random networks of immune systems: structure and selection 
5.1 Selection and fitness change with de novo mutations
5.1.1 Introduction
5.1.2 From biology to model
5.1.3 Model of a niche
5.1.4 The dynamics of the winners’ pool
5.1.5 The case of independent niches
5.1.6 Prospects and discussion
5.2 Fine structure of networks and clone size distributions
5.2.1 Modeling competition: antigens and lymphocytes
5.2.2 Dynamics of the system
5.3 Clone size distributions in limits of the niche structure
5.3.1 Degenerated cases: fully specific and nonspecific models
5.3.2 Perturbation, global effects and fitness change
5.3.3 Fokker-Planck equation
5.3.4 Interclonal and intraclonal competition
6 Development in Drosophila embryos 
6.1 Patterning in early embryos
6.2 Development in fly embryos: Bicoid and hunchback
6.3 The Berg and Purcell limit
6.4 Experimental methods
6.4.1 RNA FISH
6.4.2 Live fluorescent Imaging
6.4.3 The importance of the construct
6.4.4 On the dynamics of hunchback activation
6.5 Motivation for autocorrelation method
7 Precision of readout at the hunchback gene 
7.1 Abstract
7.2 Introduction
7.3 Results
7.3.1 Characterizing the time traces
7.3.2 Promoter switching models
7.3.3 Autocorrelation approach
7.3.4 Simulated data
7.3.5 Fly trace data analysis
7.3.6 Accuracy of the transcriptional process
7.4 Discussion
7.5 Materials and Methods
7.5.1 Constructs
7.5.2 Live Imaging
7.5.3 Image analysis
7.5.4 Trace preprocessing
7.5.5 The two state model
7.5.6 The cycle model
7.5.7 The Γ waiting time model
7.5.8 Finite cell cycle length correction to the connected autocorrelation function
7.5.9 Inference
8 Conclusions 
8.1 About models in biophysics
8.2 Future work
A Fitness shapes clone size distributions of immune repertoires: supplementary information
A.1 Simple birth-death process with no fitness fluctuations, and its continuous limit
A.2 Effects of explicit global homeostasis
A.3 Details of noise partition do not influence the clone size distribution function
A.4 Model of temporally correlated clone-specific fitness fluctuations
A.5 The Ornstein Uhlenbeck process and maximum entropy
A.6 Model solution for white-noise clone-specific fitness fluctuations
A.7 Data analysis
A.8 Cell specific simulations
A.9 Model of cell-specific fitness fluctuations, and its limit of no heritability
A.10 Model solutions for cell-specific fitness fluctuations in the limit of no heritability
A.11 Dynamics of naive and memory cells
A.12 Effects of hypermutations
A.13 Time dependent source terms and aging
B Precision of readout at the hunchback gene: supplementary information
B.1 Basic setup and data preprocessing
B.2 The two state model
B.3 Computing out of steady state
B.4 Multiple off states
B.5 Generalized multi step model
B.6 The autocorrelation of a Poisson polymerase firing model
B.7 Numerical simulations
B.8 Correction to the autocorrelation function for finite trace lengths
B.9 Correction to the autocorrelation function from correlations in the variance
B.10 Cross-correlation
B.11 Precision of the translational process

