Overview of accelerated seismic applications

Seismic acquisition

Seismic acquisition is the act of gathering data in the field and making sure that it is of sufficient quality (this requires pre-processing such as noise attenuation and filtering). In seismic acquisition, an elastic or acoustic wavefield is emitted by a seismic source at a certain location at the surface. The reflected wavefield is measured by receivers located along lines (2D seismics) or on a grid (3D seismics). We refer to this process as a shot experiment. After each shot, the source is moved to another location and the measurement is repeated. Figure 2.3 distinguishes between land seismic acquisition (onshore) and marine seismic acquisition (offshore). In land surveys, the seismic source can be a vibroseis truck or dynamite, and the receivers, called geophones, are planted in the ground along the survey lines. In marine surveys, the source is often an air gun, and the receivers, called hydrophones, are towed by vessels in long cables known as streamers.
In order to collect data, many strategic choices have to be made. They are related to the physics and the location of the survey area, to the geometry of the acquisition, and to the accuracy of the targeted geophysical properties.

¹ This categorization (into acquisition, processing and interpretation stages) is becoming more and more obsolete, as technologies that repeatedly iterate through those three stages are emerging [179].

(a) Onshore seismic acquisition. (b) Offshore seismic acquisition.

FIGURE 2.3: Seismic acquisition steps on land (a) and at sea (b): 1) the seismic source emits controlled energy; 2) the seismic energy is transmitted through and reflected from the subsurface layers; 3) the reflected energy is captured by receivers placed on the surface; 4) the acquisition systems record the data and pre-process it. From Sercel [193].
FIGURE 2.4: Seismic acquisition geometries: from left to right, Narrow Azimuth Towed Streamers, Multi-Azimuth, Wide Azimuth Towed Streamers. From PGS [174].
These choices are often driven by economic considerations, since the cost of a survey may vary from $18,000 to $45,000 per square mile [21]. For example, specific acquisition parameters such as the energy source effort and the receiver station intervals, together with the data recording (or listening) time, have to be carefully defined. In addition, in the early days 2-D seismic reflection (see figure 2.5a) was the only tool for exploration because it was cost-effective. Today, conventional 2-D seismic is only able to identify large structural traps, while 3-D seismic (see figure 2.5b)² is able to pinpoint complex formations. Therefore, 3-D reflection has entirely replaced 2-D seismology in the O&G industry, albeit at a higher cost. Furthermore, the acquisition geometry determines the azimuth coverage range and the consistency level of the illumination of reservoirs. Figure 2.4 represents schematic diagrams of the common acquisition geometries used in the O&G industry. The reader can find more detailed descriptions of the most common seismic acquisition geometries in [115]. Further, one can learn about cutting-edge seismic survey technologies, such as coil shooting, in [101].

² This is only a simplified illustration of the 3-D seismic reflection. In the industry, more than one seismic source is required to conduct a 3-D survey.
The basic principle of seismic reflection is explained in figure 2.5. We differentiate between 2-D seismic acquisition and 3-D seismic acquisition, but the principle remains the same in the two cases. We activate a source (S) to send artificially generated seismic waves into the subsurface. The waves get reflected off layer boundaries (called reflectors in the seismic literature). The arrival times and amplitudes of the reflected waves are then recorded at the surface by the receivers (R0..n).
The size and scale of seismic surveys have increased alongside the significant concurrent increase in compute power over the last years. The collected data, i.e. seismic traces (see figure 2.6), is often enormous and was stored, in the past, on tapes that were very hard to process by computers. Each seismic trace corresponds to the seismic signal detected by one receiver throughout time. A wide variety of seismic data formats were proposed to digitize the seismic data and standardize its manipulation; the most famous ones in the industry are SEGY [191], SEP [200], SU [66] and RSF [148], to name a few. The choices of seismic survey parameters, such as the shot position (the position of the seismic source), the shot interval (the distance between two successive seismic perturbations), the receiver interval (the distance that separates two successive receivers situated in the same streamer) and the shooting frequency (the frequency of activating the seismic source), are of prime importance as they have an immediate impact on the generated seismic traces, which are used in the following stages of the seismic exploration cycle.
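As an illustration, the sketch below reads traces and geometry headers from a SEG-Y file using the open-source segyio library; the file name is hypothetical, and any SEG-Y reader would serve equally well.

```python
import segyio

# Open a (hypothetical) SEG-Y file; ignore_geometry lets us read the
# traces sequentially without assuming a regular 3-D grid layout.
with segyio.open("survey.sgy", "r", ignore_geometry=True) as f:
    print(f"number of traces:  {f.tracecount}")
    print(f"samples per trace: {len(f.samples)}")

    # Each trace is the signal recorded by one receiver through time.
    first_trace = f.trace[0]  # a NumPy array of amplitudes

    # Trace headers carry the acquisition geometry (e.g. shot and
    # receiver coordinates), on which later processing stages rely.
    header = f.header[0]
    print(header[segyio.TraceField.SourceX],
          header[segyio.TraceField.GroupX])
```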
FIGURE 2.5: Seismic surveys in 2-D (a) and in 3-D (b). The seismic source is the red sign. Receivers are the orange triangles. Dotted black lines are basic representations of the subsurface reflectors. Green lines represent the covered area. Dashed gray lines illustrate the wave energy paths. The blue lines (in b) are called streamers.

Seismic processing

In the seismic processing stage, we want to manipulate the gathered data, after acquisition, such that we generate an accurate image of the subsurface. A long road separates the raw data from the final structural pictures. Processing consists of the application of a chain of computer treatments to the acquired data, guided by the hand of processing geophysicists. There is neither a standard classification nor a fixed order for these operations, because they depend on the nature of the collected data on the one hand, and because processing is a subjective manipulation on the other hand. Throughout this section, we try to describe the most relevant processing routines and leave the reader the opportunity to dive into the geophysics literature [59, 127, 143, 181] in order to learn more about seismic processing.

FIGURE 2.6: Seismic traces: the wave amplitude, as well as the wave travel time, as a function of the "offset" (in meters) throughout time (in seconds), as measured by a given receiver. The offset is the distance between each receiver and the seismic source. Source: Drijkoningen, TU Delft [78].
To begin with, the reflected seismic response can be a mixture of the seismic source pulse, the effect of the Earth upon that pulse, and background noise, all convolved together. The data is usually cleaned of the spurious signals that may have accumulated during seismic surveys. For instance, the seismic source may introduce signals into the Earth to which the underlying structures remain unresponsive, because the subsurface response does not depend on them. Those signals have to be removed. This is considered pre-processing or data conditioning, and usually includes signal processing techniques such as signal deconvolution and anti-aliasing filtering. Figure 2.7a shows an example of a seismic trace after applying signal deconvolution.
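The following toy sketch illustrates the convolutional model behind deconvolution: a trace is modeled as a sparse reflectivity series convolved with a source wavelet, and deconvolution removes the source signature. This is a noise-free illustration with made-up reflectivity and wavelet values; real data calls for regularized (e.g. Wiener) deconvolution.

```python
import numpy as np
from scipy.signal import convolve, deconvolve

# Toy model of a trace: a sparse reflectivity series convolved with a
# short source wavelet (all values here are made up for illustration).
reflectivity = np.zeros(200)
reflectivity[[40, 90, 150]] = [1.0, -0.6, 0.4]
wavelet = np.array([1.0, -0.8, 0.3])
trace = convolve(reflectivity, wavelet)

# Deconvolution removes the source signature and recovers the
# reflectivity exactly in this noise-free setting.
recovered, remainder = deconvolve(trace, wavelet)
print(np.allclose(recovered, reflectivity))  # True
```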
Besides, seismic traces are usually sorted, and those that share the same geometry properties are stacked, i.e. the signals are summed, to attenuate the background noise and thus increase the signal-to-noise ratio. The more seismic traces we can stack together into one seismic trace, the clearer the seismic image. Stacking can be done by putting together traces from the same reflecting point (Common Reflection Point (CRP) stacking, or CRP gather), from the same shot position (Common Shot Gather (CSG)), from the same midpoint (Common Midpoint (CMP) stacking) or from the same depth point (Common Depthpoint (CDP) stacking)³, etc. [59]. Figure 2.7b emphasizes the noise attenuation after the CRP stacking of six seismic traces, as the sketch below also illustrates numerically.

³ In the case where the reflectors are horizontal, CDP stacking is equivalent to CMP stacking.
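A minimal numerical sketch of the stacking principle, with made-up signal and noise values: six noisy copies of the same trace are averaged, and the signal-to-noise ratio improves by roughly the square root of the number of traces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gather: 6 traces sharing the same reflection point,
# each = identical signal + independent noise (nt time samples).
nt, n_traces = 1000, 6
t = np.linspace(0.0, 1.0, nt)
signal = np.sin(2.0 * np.pi * 25.0 * t) * np.exp(-4.0 * t)  # toy wavelet
gather = signal + rng.normal(0.0, 0.5, size=(n_traces, nt))

# Stacking: average the traces; the coherent signal adds up while the
# random noise partially cancels, so SNR improves by ~sqrt(n_traces).
stacked = gather.mean(axis=0)

def snr_db(trace):
    noise = trace - signal
    return 10.0 * np.log10(np.sum(signal**2) / np.sum(noise**2))

print(f"single trace SNR: {snr_db(gather[0]):5.2f} dB")
print(f"stacked SNR:      {snr_db(stacked):5.2f} dB")
```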
Furthermore, before arriving at the receivers, the seismic energy may be reflected a number of times: this is known as the multiple reflections phenomenon (see figure 2.8), as opposed to primary reflections. For example, during offshore surveys, the energy bouncing back and forth within the water produces false reflections and obscures the real data. Multiple attenuation is needed to remove the multiples embedded in the data without interfering with primary events. This is referred to as demultiple in the literature, and many advanced numerical algorithms have been proposed to do so, such as Surface-Related Multiple Elimination (SRME) [144]. Note that some research, such as Guitton [99], focuses on imaging the multiples and integrating them into the primary reflections rather than removing them. Another seismic processing step is seismic trace interpolation. This manipulation is used to enhance the energy and highlight the areas close to the subsurface reflectors. Any missing seismic trace is filled in by signal interpolation.
At this point, the data is ready for more advanced processing operations such as seismic imaging or inversions [44]. The main goal of seismic imaging is to transform the pre-processed seismic traces into the most accurate possible graphical representation of the Earth's subsurface geologic structure. A key point in imaging is that the amplitude of the reflected wave is proportional to the amplitude of the incident wave. The proportionality coefficient, i.e. the ratio of reflected to incident amplitude, is called the reflection coefficient. Imaging has the objective of computing this reflection coefficient; hence, the final image is a representation of the reflection coefficient at each point of the subsurface. This can be performed by means of seismic migration.
Migration uses the two-way travel time, amongst other attributes provided by the seismic traces, to place (or migrate) the dipping temporal events in their true subsurface spatial locations. Processing these reflections produces a synthetic image of the subsurface geologic structure. We show in figure 2.9 an example of a seismic processing chain. The traces in 2.9a are subject to a water bottom multiple reflection (arrowed). In 2.9b, it is removed by demultiple, and the image shows the result of suppressing the water bottom multiple. The seismic traces are then enhanced by interpolation in 2.9c. Finally, the image in 2.9d most closely resembles the true subsurface geology.
Advanced migration techniques, such as Pre-Stack Depth Migration (PSDM), can significantly improve seismic imaging, especially in areas of complex geology. Finally, we recall that seismic processing operations are numerous and require advanced mathematical algorithms. These are often applied to 3D seismic data, which requires enormous computing resources, not to mention the massive volumes of data involved.

Seismic interpretation

The final stage of the seismic exploration cycle is seismic interpretation. The purpose of interpretation is to analyse the processed seismic images and integrate other geoscientific information in order to assess where O&G reservoirs may have accumulated and to learn about their characterization. Interpreters, or interpretation geophysicists, are involved at this stage to analyse the seismic data. Relevant information consists of structures and features which can be related to geological phenomena such as faults, fractures, anticlines, etc. This can deliver valuable insights about the nature of the rocks, about when they were formed and about their environment.
Computer algorithms are used to help interpret seismic data. For instance, numerical algorithms are used to calculate seismic attributes such as amplitude, phase and frequency from the migrated seismic image. In practice, the seismic attributes (especially the amplitude) are related to the subsurface reflectivity, which in turn provides information about the rock and the formation pressure. Other seismic attributes are used in interpretation, namely coherence, dip and azimuth, and the gradient correlation cube. For instance, coherence is an attribute that measures the continuity between seismic traces in a specified window, applied on a seismic section; a toy illustration is sketched below. Figure 2.10 shows a composite of a section of a 3D seismic cube and a section of the corresponding coherence cube. For other examples of attribute calculations used in the interpretation stage, we refer the reader to Abdelkhalek [22].
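A minimal sketch of the idea behind a coherence attribute, assuming a simple zero-lag normalized cross-correlation between two neighbouring traces in sliding windows; production coherence attributes are typically semblance-based and operate on many traces at once.

```python
import numpy as np

def trace_coherence(a, b, win=64):
    """Zero-lag normalized cross-correlation of two neighbouring traces,
    computed in non-overlapping windows of `win` samples. Values near 1
    indicate continuous reflectors; drops hint at faults or fractures."""
    n = min(len(a), len(b))
    coh = []
    for start in range(0, n - win + 1, win):
        wa, wb = a[start:start + win], b[start:start + win]
        denom = np.sqrt(np.dot(wa, wa) * np.dot(wb, wb)) + 1e-12
        coh.append(float(np.dot(wa, wb) / denom))
    return np.array(coh)

# Toy usage: two identical traces except for a shifted (faulted) segment.
rng = np.random.default_rng(1)
t1 = rng.normal(size=512)
t2 = t1.copy()
t2[256:320] = np.roll(t1[256:320], 8)  # simulate a local discontinuity
print(trace_coherence(t1, t2))         # dips in the window covering 256:320
```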

Seismic migrations and Reverse Time Migration (RTM)

In section 2.1 we mentioned that seismic migration is classified as a final processing step used to generate structural pictures of the subsurface. It is in fact the most important routine of the whole processing flow. In this section, we give a short overview of seismic migrations in general. We particularly focus on the Reverse Time Migration (RTM), describing the components of its workflow along with its advantages compared with conventional migration techniques.

Description and overview of migration methods

The purpose of migration is to reconstruct the reflectivity distribution of the buried structures of the Earth from the seismic data recorded at the surface. To do so, reflection events (especially non-zero-offset reflections) are collapsed and moved, i.e. migrated, to their proper spatial locations. Schuster [188] explains how seismic traces are migrated, and enumerates the challenges that might be related to migration, such as diffraction, out-of-plane reflections and conflicting dips.
Migration relies upon pre-processed input data (seismic traces) and an accurate velocity model. Synthetic velocity models have been proposed by the O&G community (see figure 2.11) in order to validate migration algorithms and demonstrate their potential for imaging complex structures. However, in the case of real data, the velocity model of the subsurface is unknown. As a matter of fact, migration relies on various velocity estimation procedures, e.g. iterative prestack migration [42], to aid in imaging. In other words, migration is also a velocity analysis tool. Conceptually, migrations can be categorized with respect to different parameters. From a dimensionality perspective, migration is either 2D or 3D. 3D migration requires data to be acquired in 3D and presents rich azimuth content.
From a data-stacking standpoint, migration can be prestack or poststack. In poststack migration, the seismic traces are stacked in bins, each of which is reduced to only one seismic trace. This is much less expensive to process but is also less accurate. In prestack migration, traces are not stacked and every single trace is processed, which requires a huge computational effort.
Furthermore, we can categorize migrations according to whether or not they support lateral velocity variations. Time migration is insensitive to lateral variations of the velocities and is more appropriate for constant or depth-dependent velocities. On the contrary, depth migration can handle strong variations of the velocities and is thus more appropriate for complex geological structures.
Mathematically, we can split migrations into two categories. The first comprises ray-based migrations, such as Kirchhoff migration and beam migration [105]. The second comprises wavefield-extrapolation migrations, such as one-way migration and two-way migration [159].
Historically, migration was achieved by graphical methods in the 1960s [194]. This was followed by diffraction summations. In the 1970s, several important developments took place. Based on the pioneering work of Jon Claerbout [59, 60], migration methods based on wave theory were developed. Claerbout derived migration as a finite-difference solution of an approximate wave equation. Kirchhoff wave-equation migration (Schneider [186], Gardner [89]) and frequency-wavenumber migrations (Gazdag [90] and Stolt [202]) appeared shortly thereafter. Those were initially time migration methods, but due to the pressing need for more accuracy they were turned into depth migrations. In the early 1980s, Baysal et al. [37], along with Whitmore [221] and McMechan [151], proposed the Reverse Time Migration, based on the exact wave equation. The last twenty years have seen extensions of these methods to three dimensions and to prestack migration, and enhancements of their efficiency and accuracy. For further reading about migrations, we refer to [42, 94].

Reverse Time Migration

RTM is a two-way wave-equation based prestack or poststack depth migration. RTM is becoming more and more important as a seismic imaging tool in the O&G industry. If the velocity model is complex or subject to strong velocity gradients, such complexities will produce turning (or diving) rays and multiples when conventional migration techniques are used (detailed in [188]). RTM addresses these issues by directly using the two-way wave equation without any approximations or assumptions. The workflow of the RTM technique is depicted in the flowchart 2.12. Note that we do not mention in the figure that RTM also needs a velocity model as an input, and that this workflow is repeated for each shot experiment. First, the source wavefield, i.e. the wavefield whose origin is the seismic source, is propagated forward in time (we refer to this stage as forward modeling or seismic modeling). Then, the receiver wavefield, i.e. the wavefield that is incident from the receivers, is propagated backward in time (this phase is called backward modeling or retro-propagation). Finally, the imaging condition is applied with respect to Claerbout's [58] imaging principle: "a reflector exists where the source and the receiver wavefields coincide in time and space".
As a matter of fact, the source wavefield and the receiver wavefield are cross-correlated throughout time. We denote by I(x, y, z) the reflectivity coefficient of the subsurface, i.e. the resulting seismic image, at the coordinate (x, y, z). The source wavefield is represented by a function S(x, y, z, t) from (ℝ³, ℕ) to ℝ, and the receiver wavefield by a similar function R(x, y, z, t), each at the coordinate (x, y, z) and at time t. We can express the RTM imaging condition as

\[
I(x, y, z) = \sum_{t} S(x, y, z, t)\, R(x, y, z, t). \tag{2.1}
\]

However, in some cases, especially for large impedance contrasts and complex geological structures, the source and receiver wavefields cannot be separated efficiently. In these cases, the cross-correlation described in equation (2.1) leads to low-frequency artefacts and illumination effects [123].

In order to eliminate the illumination effects, the image is often divided, after cross-correlation, by the source illumination (see equation (2.2)), or by the receiver illumination (see equation (2.3)), or, even better, by a combination of both the source illumination and the receiver illumination (see equation (2.4)):

\[
I(x, y, z) = \frac{\sum_{t} S(x, y, z, t)\, R(x, y, z, t)}{\sum_{t} S^{2}(x, y, z, t)}, \tag{2.2}
\]

\[
I(x, y, z) = \frac{\sum_{t} S(x, y, z, t)\, R(x, y, z, t)}{\sum_{t} R^{2}(x, y, z, t)}, \tag{2.3}
\]

\[
I(x, y, z) = \frac{\sum_{t} S(x, y, z, t)\, R(x, y, z, t)}{\sum_{t} S^{2}(x, y, z, t)\, \sum_{t} R^{2}(x, y, z, t)}. \tag{2.4}
\]

This calculation corresponds to the imaging condition of the RTM algorithm.
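As a concrete sketch, the snippet below applies the cross-correlation imaging condition of equation (2.1) with the source-illumination normalization of equation (2.2) to two wavefields assumed to be already computed and stored as NumPy arrays (a simplification: in practice the receiver wavefield is reconstructed backward in time, shot by shot).

```python
import numpy as np

def imaging_condition(src, rcv, eps=1e-12):
    """Cross-correlation imaging condition with source-illumination
    normalization. `src` and `rcv` hold the source and receiver
    wavefields sampled at the same time steps, shape (nt, nz, nx)."""
    image = np.sum(src * rcv, axis=0)   # eq. (2.1): sum over t of S * R
    illum = np.sum(src * src, axis=0)   # source illumination: sum of S^2
    return image / (illum + eps)        # eq. (2.2)

# Toy usage with random stand-ins for the two wavefields.
nt, nz, nx = 100, 64, 64
rng = np.random.default_rng(0)
S = rng.normal(size=(nt, nz, nx))
R = rng.normal(size=(nt, nz, nx))
print(imaging_condition(S, R).shape)  # (64, 64)
```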
(a) Snapshot at t = 1.20 s of the source wavefield (left), the receiver wavefield (middle) and the image progression (right). No reflector is imaged yet.
(b) Snapshot at t = 0.75 s of the source wavefield (left), the receiver wavefield (middle) and the image progression (right). The bottom reflector is almost fully imaged.
(c) Snapshot at t = 0.30 s of the source wavefield (left), the receiver wavefield (middle) and the image progression (right). All reflectors (the bottom and the shallow) are fully imaged.
FIGURE 2.13: A Reverse Time Migration example: the source and receiver wavefields are correlated, at three successive time steps, in order to image two reflectors. Source: Biondi [41].

Numerical methods for the wave propagation phenomena

Most differential equations are much too complicated to be solved analytically; thus, the development of accurate numerical approximation schemes is essential to understand the behavior of their solutions. The wave equation, being a Partial Differential Equation (PDE), is no exception. This section presents an overview of the state-of-the-art numerical methods used for seismic modeling and seismic imaging. Given that RTM is based on the wave equation, we present the general equations that govern the propagation of waves in elastic and acoustic media. These methods have been widely studied for seismic imaging and one can find more details in Virieux et al. [215] and in Carcione et al. [50].

The wave equation

Seismic waves and propagation media

Before introducing the theory that governs the wave propagation phenomenon, we briefly recall the types of seismic waves and the nature of propagation media. Wave propagation is called elastic when the traversed medium can change in shape as a result of a deforming force; otherwise, the propagation is acoustic. If the medium has constant density, we call it homogeneous; heterogeneous if it has not. Besides, we call a medium isotropic if it has the same physical characteristics independently of direction. Otherwise, the medium is called anisotropic.
Seismic waves are either body waves, that is, they travel through the interior of the Earth, or surface waves if they travel along the Earth's surface. We distinguish two types of body waves: compressional waves, also referred to as Primary (P) waves⁴, and shear waves, also called Secondary (S) waves. Figure 2.14 illustrates the propagation directions of P and S waves for small elemental volumes (particles). P waves propagate in parallel with the particle motion, whereas S waves propagate perpendicularly to the particle motion.

⁴ Note that there are other types of waves, i.e. Love waves and Rayleigh waves, which are surface waves that we deliberately ignore here.

The elastic wave equation

The general wave equation is established using Newton's second law of motion and Hooke's law, under some constraints: the medium is elastic, isotropic and subject to infinitesimal displacements in order to satisfy the elasticity condition. For the sake of simplicity, the motion of the wave is initially presumed to be one-dimensional; the wave equation will later be extended to the three-dimensional case. We denote the particle displacement η, the dimension of the wave motion Z, and the particle position is given by the z coordinate. Newton's law (2.5), for small elemental volumes, states that the acceleration (γ) of a particle, when multiplied by its mass (m), is equal to the sum of the forces applied to it (f):

\[
m\,\gamma = f. \tag{2.5}
\]
Considering that the pressure (p) is the force on an object spread over a surface area, and given that the particles are infinitesimal (we consider the unit surface), the net force applied to a particle reduces to the pressure difference across it.
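For the reader's orientation, a standard sketch of the one-dimensional derivation is reproduced below, using the definitions above; the stress σ and elastic modulus E are the usual notations of Hooke's law and stand in for the thesis's own symbols, and the full derivation (leading to the elastic equations (2.9) and (2.10)) is not reproduced in this extract.

\[
\rho\,\frac{\partial^{2} \eta}{\partial t^{2}} = \frac{\partial \sigma}{\partial z},
\qquad
\sigma = E\,\frac{\partial \eta}{\partial z}
\quad\Longrightarrow\quad
\frac{\partial^{2} \eta}{\partial t^{2}}
  = \frac{E}{\rho}\,\frac{\partial^{2} \eta}{\partial z^{2}},
\qquad
c = \sqrt{E/\rho}.
\]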

The acoustic wave equation

The acoustic approximation states that shear effects in the data are negligible and that the dominant wave type is the compressional wave. Thus, the shear modulus µ(x, y, z) is null, and equations (2.9) and (2.10) simplify to the second-order acoustic wave equation:

\[
\frac{\partial^{2} u}{\partial t^{2}}(x, y, z, t) = c^{2}(x, y, z)\, \Delta u(x, y, z, t) + s(x, y, z, t),
\]

where u is the pressure wavefield, c the propagation velocity of the medium and s the source term.
This is the acoustic wave equation that we set out to solve numerically in the rest of this section. It is also the equation used to simulate wave propagation in seismic modeling and in seismic imaging, i.e. in the Reverse Time Migration.

Numerical methods for wave propagation

Numerically, the solutions to the wave equation can be approximated using a wide variety of numerical methods. Depending on the targeted accuracy and on the available computational resources, one can consider a spectral formulation, a strong formulation or a weak formulation. One can also adopt a time-domain approach or a frequency-domain approach. The spectral formulation produces efficient results for simple geological structures, whereas the strong formulation via finite-difference methods can give a good compromise between the quality of the images and the computational costs. On the other hand, the weak formulation via finite elements, e.g. continuous or discontinuous Galerkin methods, is more suitable for areas with complex subsurfaces. For a thorough overview of the most common numerical methods used to solve the wave equation, we recommend the two readings [50] and [215]. In this section, we briefly introduce the methods that we find most relevant to the acoustic wave equation solver.
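To make the strong (finite-difference) formulation concrete, here is a minimal time-domain sketch for the 2-D constant-density acoustic wave equation, with made-up grid and velocity values; it uses second-order central differences in time and space, far simpler than the high-order stencils used in production solvers.

```python
import numpy as np

def step(u_prev, u_curr, c, dt, dx):
    """One explicit time step of u_tt = c^2 (u_xx + u_zz), discretized
    with 2nd-order central differences. np.roll gives periodic edges;
    real solvers use absorbing boundary conditions instead."""
    lap = (np.roll(u_curr, 1, axis=0) + np.roll(u_curr, -1, axis=0) +
           np.roll(u_curr, 1, axis=1) + np.roll(u_curr, -1, axis=1) -
           4.0 * u_curr) / dx**2
    return 2.0 * u_curr - u_prev + (c * dt) ** 2 * lap

n, dx = 200, 10.0                    # grid size and spacing (m)
c = np.full((n, n), 1500.0)          # homogeneous velocity model (m/s)
dt = 0.4 * dx / c.max()              # respects the CFL stability limit
u_prev = np.zeros((n, n))
u_curr = np.zeros((n, n))
u_curr[n // 2, n // 2] = 1.0         # impulsive point source

for _ in range(300):                 # propagate for 300 time steps
    u_prev, u_curr = u_curr, step(u_prev, u_curr, c, dt, dx)
```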

Table of contents:

I State of the art 
2 Geophysics and seismic applications
2.1 Introduction to seismic exploration
2.1.1 Seismic acquisition
2.1.2 Seismic processing
2.1.3 Seismic interpretation
2.2 Seismic migrations and Reverse Time Migration (RTM)
2.2.1 Description and overview of migration methods
2.2.2 Reverse Time Migration
2.3 Numerical methods for the wave propagation phenomena
2.3.1 The wave equation
2.3.1.1 Seismic waves and propagation media
2.3.1.2 The elastic wave equation
2.3.1.3 The acoustic wave equation
2.3.2 Numerical methods for wave propagation
2.3.2.1 Integral methods
2.3.2.2 Asymptotic methods
2.3.2.3 Direct methods
2.3.2.3.1 Pseudo-Spectral Methods
2.3.2.3.2 Finite Difference Methods
2.3.2.3.3 Finite Element Methods
2.3.3 Application to the acoustic wave equation
2.3.3.1 Numerical approximation
2.3.3.2 Stability analysis and CFL
2.3.3.3 Boundary conditions
3 High performance computing
3.1 Overview of HPC hardware architectures
3.1.1 Central Processing Unit: more and more cores
3.1.2 Hardware accelerators: the other chips for computing
3.1.3 Towards the fusion of CPUs and accelerators: the emergence of the Accelerated Processing Unit
3.2 Programming models in HPC
3.2.1 Dedicated programming languages for HPC
3.2.1.1 Overview
3.2.1.2 The OpenCL programming model
3.2.2 Directive-based compilers and language extensions
3.3 Power consumption in HPC and the power wall
4 Overview of accelerated seismic applications
4.1 Stencil computations
4.2 Reverse time migration
4.2.1 Evolution of RTM algorithms
4.2.2 Wave-field reconstruction methods
4.2.2.1 Re-computation of the forward wavefield
4.2.2.2 Storing all the forward wavefield
4.2.2.3 Selective wavefield storage (linear checkpointing)
4.2.2.4 Checkpointing
4.2.2.5 Boundaries storage
4.2.2.6 Random boundary condition
4.2.3 RTM on multi-cores and hardware accelerators
4.2.3.1 RTM on multi-core CPUs
4.2.3.2 RTM on GPUs
4.2.3.3 RTM on other accelerators
4.3 Close to seismics workflows
5 Thesis position and contributions 
5.1 Position of the study
5.2 Contributions
5.3 Hardware and seismic material configurations
5.3.1 The hardware configuration
5.3.2 The numerical configurations of the seismic materials
5.3.2.1 The seismic source
5.3.2.2 The velocity model and the compute grids
6 Evaluation of the Accelerated Processing Unit (APU)
6.1 Data placement strategies
6.2 Applicative benchmarks
6.2.1 Matrix multiplication
6.2.1.1 Implementation details
6.2.1.2 Devices performance
6.2.1.3 Impact of data placement strategies on performance
6.2.1.4 Performance comparison
6.2.2 Finite difference stencil
6.2.2.1 Implementation details
6.2.2.2 Devices performance
6.2.2.3 Impact of data placement strategies on performance
6.2.2.4 Performance comparison
6.3 Power consumption aware benchmarks
6.3.1 Power measurement tutorial
6.3.1.1 Metrics for power efficiency
6.3.1.2 Proposed methodology
6.3.1.3 Hardware configuration
6.3.1.4 Choice of applications and benchmarks
6.3.2 Power efficiency of the applicative benchmarks
6.4 Hybrid utilization of the APU: finite difference stencil as an example
6.4.1 Hybrid strategy for the APU
6.4.2 Deployment on CPU or on integrated GPU
6.4.3 Hybrid deployment
6.5 Directive based programming on the APU: finite difference stencil as an example
6.5.1 OpenACC implementation details
6.5.2 OpenACC performance numbers and comparison with OpenCL
7 Seismic applications on one compute node
7.1 Seismic modeling
7.1.1 Description of the algorithm
7.1.2 Accelerating the seismic modeling using OpenCL
7.1.3 Performance and power efficiency
7.1.4 OpenACC evaluation and comparison with OpenCL
7.2 Seismic migration
7.2.1 Description of the algorithm
7.2.2 Accelerating the seismic migration using OpenCL
7.2.3 Performance and power efficiency
7.2.4 OpenACC evaluation and comparison with OpenCL
7.3 Conclusion
8 Large scale seismic applications on CPU/APU/GPU clusters 
8.1 Large scale considerations
8.1.1 Domain decomposition
8.1.2 Boundary conditions
8.2 Seismic modeling
8.2.1 Deployment on CPU clusters: performance issues and proposed solutions
8.2.1.1 Implementation details
8.2.1.2 Communications and related issues
8.2.1.3 Load balancing
8.2.1.4 Communication-computation overlap
8.2.1.4.1 Problems of non-blocking MPI communications
8.2.1.4.2 Proposed solutions
8.2.1.4.3 Performance results
8.2.2 Deployment on hardware accelerators
8.2.2.1 Implementation details
8.2.2.2 Performance results
8.2.2.2.1 Strong scaling tests
8.2.2.2.2 Weak scaling tests
8.3 Seismic migration
8.3.1 Deployment on CPU clusters
8.3.1.1 Implementation details
8.3.1.2 Performance results
8.3.1.2.1 Strong scaling tests
8.3.1.2.2 Weak scaling tests
8.3.2 Deployment on hardware accelerators
8.3.2.1 Implementation details
8.3.2.2 Performance results
8.3.2.2.1 Strong scaling tests
8.3.2.2.2 Weak scaling tests
8.3.3 Performance comparison
8.3.3.1 Comparison based on measured results
8.3.3.2 Comparison based on performance projection
8.4 Conclusion
9 Conclusions and perspectives 
9.1 Conclusions
9.2 Perspectives
