
## Related and proposed work

The separation of primaries and multiples is a classical issue in seismic exploration. Most published solutions are tailored to specific levels of prior knowledge and depend strongly on the seismic data set at hand. Several common processing routines transform the data into a new domain chosen to minimize the overlap between primaries and multiples. Filters in the transformed domain then attenuate multiples or select the primaries of interest, and the filtered data is finally mapped back to the data domain through an appropriate inverse transform. Standard transforms include homomorphic filtering (Buttkus, 1975), f − k (Wu and Wang, 2011), τ − p (Nuzzo and Quarta, 2004), and parabolic or hyperbolic variants of the Radon transform (Hampson, 1986; Trad et al., 2003; Nowak and Imhof, 2006); further details on these methods are given in (Verschuur, 2006). Among the vast literature, we refer to (Weglein et al., 2011; Ventosa et al., 2012) for recent accounts of broad processing issues, including the shortcomings of standard ℓ2-based methods. The latter are computationally efficient, yet their performance degrades when traditional assumptions fail (primary/multiple decorrelation, weak linearity or stationarity, high noise levels). We focus here on recent sparsity-related approaches to geophysical signal processing. The potentially parsimonious layering of the subsurface (illustrated in Figure 3.1 – p. 34) suggests modeling primary reflection coefficients with generalized Gaussian or Cauchy distributions (Walden, 1985; Walden and Hosken, 1986) with suitable parameters. The sparsity thus induced on seismic data has influenced both deconvolution and multiple subtraction.
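The transform/filter/inverse pipeline described above can be sketched in a few lines of numpy. This is an illustrative sketch only, not any of the cited methods: it applies a hard apparent-velocity mute in the f − k domain, and the threshold `vmin` as well as the hard (non-tapered) mute are assumptions made for brevity.

```python
import numpy as np

def fk_filter(gather, dt, dx, vmin):
    """Attenuate events with apparent velocity below vmin in the f-k domain.

    gather: 2-D array (time samples x traces), dt/dx: sampling steps.
    Pipeline: forward 2-D FFT -> mute in the transformed domain -> inverse FFT.
    """
    nt, nx = gather.shape
    spec = np.fft.fft2(gather)                     # (t, x) -> (f, k)
    f = np.fft.fftfreq(nt, d=dt)[:, None]          # temporal frequencies (Hz)
    k = np.fft.fftfreq(nx, d=dx)[None, :]          # spatial wavenumbers (1/m)
    # Apparent velocity |f/k|; events slower than vmin are rejected (hard mute).
    vel = np.abs(f) / np.maximum(np.abs(k), 1e-12)
    mask = vel >= vmin
    return np.real(np.fft.ifft2(spec * mask))      # back to the data domain
```

In practice the mute region would be tapered to limit ringing, and the same skeleton applies with the Radon or τ − p transforms in place of the 2-D Fourier transform.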
Progressively, the non-Gaussianity of seismic traces has been emphasized, motivating more robust norms (Kaplan and Innanen, 2008; Lu and Liu, 2009; Liu and Dragoset, 2012; Duarte et al., 2012; Takahata et al., 2012; da Costa Filho, 2013) and blind separation of the signal of interest with independent component analysis (ICA). As the true nature of the seismic data distribution is still debated, including its stationarity (Bois and Hémon, 1963), a handful of works have investigated processing in appropriate transformed domains, which may either stationarize the data (Krim and Pesquet, 1995) or strengthen their sparsity. For instance, (Donno, 2011) applies ICA in a dip-separated domain. In (Neelamani et al., 2010), as well as in (Herrmann and Verschuur, 2004) and subsequent works by the same group, a special focus is laid on separation in the curvelet domain.

### Primal-Dual proximal algorithm

Our objective is to provide a numerical solution to Problem (4.5). This amounts to minimizing the function Ψ with respect to y and h, these variables being constrained to belong to the sets D and C, respectively. The constraints are expressed through linear operators, such as a wavelet frame analysis operator F. For this reason, primal-dual algorithms (Briceño Arias and Combettes, 2011; Chambolle and Pock, 2011; Condat, 2013), such as the Monotone+Lipschitz Forward-Backward-Forward (M+LFBF) algorithm (Combettes and Pesquet, 2012), constitute appropriate choices, since they avoid the large-size matrix inversions inherent to other schemes such as those proposed in (Afonso et al., 2011; Pesquet and Pustelnik, 2012). As mentioned in Section 4.3.3.1, Ψ is a quadratic function and its gradient is thus Lipschitzian, which permits it to be handled by the M+LFBF algorithm. In order to deal with the constraints, projections onto the closed convex sets (Cm)1≤m≤3 and D are performed (these projections are described in more detail in the next section).
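The two ingredients just mentioned, a Lipschitz gradient step on a quadratic term and a projection onto a convex set, can be illustrated with a much simpler scheme than the full M+LFBF iteration (which additionally maintains dual variables for the linear operators). The sketch below is a plain projected-gradient loop for a quadratic data-fidelity term under a box constraint; the box bounds are a hypothetical stand-in for one of the sets (Cm), and the step size 1/L comes from the Lipschitz constant of the gradient.

```python
import numpy as np

def project_box(h, lo, hi):
    """Euclidean projection onto the box {h : lo <= h <= hi} (componentwise clip)."""
    return np.clip(h, lo, hi)

def projected_gradient(A, z, lo, hi, n_iter=200):
    """Minimize 0.5 * ||A h - z||^2 subject to lo <= h <= hi.

    The gradient A^T (A h - z) is Lipschitz with constant L = ||A||_2^2,
    so the constant step 1/L guarantees convergence of the iterates.
    """
    L = np.linalg.norm(A, 2) ** 2          # squared spectral norm of A
    h = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ h - z)           # gradient (forward) step ...
        h = project_box(h - grad / L, lo, hi)  # ... followed by the projection
    return h
```

The projections onto D and the sets (Cm) used by the actual algorithm are detailed in the next section; only the clip-style box projection above is assumed here for illustration.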

#### Qualitative results on synthetic data

We choose here two filter families with lengths P0 = 10 and P1 = 14. The two filters evolve in a complementary manner over time, emulating a bi-modal mixture of multiples at two different depths. They are combined with the two templates into a multiple signal, depicted in the fifth row from the top of Figure 4.3 – p. 73. Data and templates were designed to mimic the time and frequency content of seismic signals. This figure also displays the other signals of interest, known and unknown, when the noise standard deviation is σ = 0.08. We aim at recovering weak primary signals, potentially hidden under both multiple and random perturbations, from the observed signal z in the last row. We focus on the rightmost part of the plots, between indices 350 and 700. The primary events we are interested in are located in Figure 4.3 – p. 73 (first signal on top) between indices 400–500 and 540–600, respectively; they are mixed with multiples and random noise. A close-up on indices 350–700 is provided in Figure 4.5 – p. 75. The first primary event of interest (400–500) is mainly affected by the random noise component; being relatively isolated, it serves as a witness for the quality of the signal/random noise separation. The second one is disturbed by both noise and a multiple signal higher in amplitude than the primary event itself. Consequently, its recovery severely tests the efficiency of the proposed algorithm.
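A synthetic trace of this kind can be generated along the following lines. This is a simplified sketch, not the exact data of Figure 4.3: the event positions, template spikes, and random filter tapers are illustrative assumptions, and each template is convolved with a single stationary filter, whereas in the experiment the filters vary in time.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 700
sigma = 0.08                                  # noise standard deviation used in the text

# Hypothetical primary events, placed inside the 400-500 and 540-600 windows.
primary = np.zeros(n)
primary[430], primary[560] = 1.0, 0.5

# Two known multiple templates (spike positions chosen for illustration).
templates = [np.zeros(n), np.zeros(n)]
templates[0][460] = 1.2
templates[1][580] = 0.9

# Unknown filters of lengths P0 = 10 and P1 = 14 (random, Hann-tapered).
filters = [rng.standard_normal(10) * np.hanning(10),
           rng.standard_normal(14) * np.hanning(14)]

# Observed trace z = primaries + filtered templates + Gaussian noise.
multiple = sum(np.convolve(t, f, mode="same") for t, f in zip(templates, filters))
z = primary + multiple + sigma * rng.standard_normal(n)
```

With such a trace, the second primary is indeed overlapped by a filtered template, reproducing the harder of the two recovery scenarios described above.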

**Table of contents :**

Abstract

Résumé

Résumé long

Remerciements

**1 Introduction**

1.1 Seismic data and primaries

1.2 Seismic multiples

1.3 Seismic deconvolution

**2 Background**

2.1 Inverse problems

2.1.1 What are inverse problems?

2.1.2 Variational approach

2.2 Sparse representation of signals and images

2.2.1 Introducing sparsity

2.2.2 Evaluations of the sparsity

2.3 Algorithms

2.3.1 Line search method

2.3.2 Majorize-Minimize method (MM)

2.3.3 Parallel-Proximal method

2.4 Conclusion

**3 Multiple removal with a single template**

3.1 Introduction

3.2 Related and proposed work

3.3 Model description

3.4 Proposed variational approach

3.5 Filter estimation

3.5.1 Algorithm

3.5.2 Results

3.6 Filter estimation adding constraints

3.6.1 Algorithm

3.6.2 Results

3.7 Conclusion

**4 Multiple removal with several templates and noise**

4.1 Introduction

4.2 Model description

4.3 Proposed variational approach

4.3.1 Bayesian framework

4.3.2 Problem formulation

4.3.3 Considered data fidelity term and constraints

4.3.3.1 Data fidelity term

4.3.3.2 A priori information on adaptive filter

4.3.3.3 A priori information on primary signal

4.4 Primal-Dual proximal algorithm

4.4.1 Gradient and projection computation

4.4.2 M+LFBF algorithm

4.5 Results

4.5.1 Evaluation methodology

4.5.2 Qualitative results on synthetic data

4.5.3 Quantitative results on synthetic data

4.5.4 Comparative evaluation: synthetic data

4.5.5 Comparative evaluation: real data

4.6 Conclusion

**5 Multiple removal in 2D**

5.1 Introduction

5.2 Model description

5.3 Proposed approach

5.3.1 A priori information on primary signal

5.3.2 A priori information on adaptive filter

5.4 Proximal algorithm

5.5 Results

5.5.1 Comparative evaluation: synthetic data

5.5.2 Comparative evaluation: real data

5.6 Conclusion

**6 Sparse Blind Deconvolution with Smoothed ℓ1/ℓ2 Regularization**

6.1 Introduction

6.2 Sparsity measure ℓ1/ℓ2 and its surrogates

6.2.1 Motivation for the sparsity measure ℓ1/ℓ2

6.2.2 Properties of the function ℓ1/ℓ2 and its surrogates

6.3 Optimization model

6.3.1 Optimization tools

6.3.2 Proposed variational approach

6.4 Proposed alternating optimization method

6.4.1 Smoothed ℓ1/ℓ2 (SOOT) algorithm

6.4.2 Construction of quadratic majorants

6.5 Evaluation on seismic signal deconvolution

6.5.1 Problem statement

6.5.2 Numerical results

6.6 Evaluation in blind image deconvolution

6.7 Conclusion

**7 Contributions and perspectives**

7.1 Contributions

7.2 Perspectives

**A Auxiliary proofs**

A.1 Proof of Proposition ??

A.2 Proof of Proposition ??

A.3 Proof of Proposition ??

List of Figures

List of Tables

**Bibliography**