Several technologies have been developed to suit the different situations in which geothermal energy is accessible. Among them are the Hot Dry Rock (HDR) and the Enhanced (or Engineered) Geothermal Systems (EGS) methods. Whereas the HDR technique uses hydraulic fracturing to create fractures in the sub-surface and thus form a good network for fluid propagation, the EGS takes advantage of the fractures already existing in the sub-surface. The choice between HDR and EGS is commonly based on the permeability of the initial sub-surface. The exploitation principle is shown in Figure 1-2. Cold water is injected in a first well – also called “injector well” – down to a system of fractures in the sub-surface. While propagating in the fracture network, the fluid is heated by contact with the surrounding hot rocks until it reaches a second well (the production well). Rocks and fractures together form the so-called “geothermal reservoir”. The hot fluid is pumped up the production well and is then moved through the heat exchanger, where its heat is extracted. An improvement of the connectivity between the fractures in the reservoir and/or between the wells and the reservoir is often necessary, which explains the term “enhanced” in the name EGS. Chemical, thermal and/or hydraulic stimulations are the common means of achieving this improvement. If the stimulation is successful, the deep reservoir can be exploited and the heat of the hot natural brine extracted in a power plant at the surface. The natural brine is then reinjected into the reservoir at a lower temperature [ES Geothermie].
The first HDR system was developed in the 1970s at Fenton Hill (USA). The success of the experiment led other countries all over the world to follow. To give a few examples, an EGS was created in the Hunter Valley (Australia), while in Europe geothermal energy production was developed at Rosemanowes (Cornwall, UK), Bad Urach (Germany), Le Mayet de Montagne (France) and Fjällbacka (Sweden). Still in Europe, many geothermal sites were installed in the Upper Rhine Graben (URG), a geological region extending from northern Switzerland to Frankfurt (Germany) and briefly described in chapter 3. The first geothermal site in the URG was developed at Soultz-sous-Forêts (France) and serves as a pilot site for this region. The measurements and experiments performed there enabled the development of other geothermal fields in Basel and Riehen (Switzerland), in Landau, Insheim and Bruchsal (Germany), and more recently in Rittershoffen, a village located near Soultz-sous-Forêts.
Because the propagation of water modifies the stress field in the sub-surface, seismicity may be induced during the stimulation of, or circulation tests in, the fracture networks of an EGS. The term “induced seismicity” commonly designates the whole set of earthquakes due to these operations and, more generally, to human activity. This induced seismicity is used for many different purposes. For example, the geometry and orientation of faults and fractures in reservoirs have been obtained from the locations and focal mechanisms of induced earthquakes [Sausse et al., 2010; Frietsch et al., 2015]. Seismic risk zones may be defined by an accurate study of earthquake locations and magnitudes, among other parameters [Bromley and Majer, 2012; Gaucher, 2016; Gaucher et al., 2015; Giardini, 2009]. To interpret the results in a reliable manner, it is necessary to quantify the errors associated with these seismic parameters (earthquake location, focal mechanism, …).
Two notions are grouped under the term “error”. The uncertainty can be defined as the range of acceptable values for a given result. For example, if we ask several persons to measure the dimensions of an object, they may provide different results; the uncertainty may then be defined by the range of the values obtained. In many domains, uncertainties are provided together with the result. The inaccuracy, on the contrary, is a bias: it defines how far the obtained result is from reality and cannot be determined without a priori knowledge. In our previous example, the inaccuracy is the difference between the real size of the object and the size measured with a badly calibrated ruler. An inaccurate but precise measurement will lead to a misinterpretation of the result, whereas an accurate but uncertain measurement may only lead to a loose interpretation. According to these definitions, it is far better to obtain accurate but uncertain results than inaccurate but certain ones. The term “data processing” covers all the ways of transforming the raw dataset into the parameters we are looking for.
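The distinction can be illustrated numerically. The sketch below (with hypothetical numbers) simulates repeated measurements made with a badly calibrated ruler and separates the spread of the results (uncertainty) from their systematic offset (inaccuracy), which is only computable because the true value is known:

```python
import numpy as np

rng = np.random.default_rng(0)

true_length = 10.0   # real size of the object (hypothetical units)
bias = 0.3           # systematic error of the badly calibrated ruler
noise_std = 0.1      # spread between individual measurements

# Several persons measure the same object with the same biased ruler.
measurements = true_length + bias + rng.normal(0.0, noise_std, size=100)

uncertainty = measurements.std()                 # spread of the results
inaccuracy = measurements.mean() - true_length   # bias, needs the a-priori truth

print(f"uncertainty ~ {uncertainty:.2f}, inaccuracy ~ {inaccuracy:.2f}")
```

Without the line computing `inaccuracy`, which uses `true_length`, only the spread would be accessible: this is why the bias is usually left out when the sub-surface (the “truth”) is unknown.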
Proposed methodology to get earthquake location error
As explained before, errors can be divided into two main kinds: uncertainty and inaccuracy. In the case of earthquake hypocentres, uncertainties are quite often discussed, whatever the scale: for tele-seismic events as well as for local events. On the other hand, location biases (inaccuracies) were often not taken into account in previous studies, usually because they cannot be worked out without a-priori knowledge of the sub-surface. Such a-priori knowledge allows the determination of the ray paths, so that the waveforms recorded at the different sensors, and the travel-times needed to reach them from the hypocentre, become computable. On this basis, a set of reference event locations can be created.
Here we developed a methodology to obtain both the location uncertainty and the location inaccuracy of earthquakes occurring at a local scale, such as in the geothermal reservoirs of our examples. The method can be divided into three main steps: a synthetic modelling step, a location step and a comparison step. A schematic representation of the methodology is shown in Figure 2-2.
In the synthetic modelling step, a seismic cloud is first created. The earthquake locations are defined either in a coordinate system relative to a chosen point or in absolute coordinates, depending on the study. The positions of the seismic sensors are defined in the same coordinate system; this synthetic network is created from the locations of real sensors. Then the body-wave (P- and S-wave) travel-times between each earthquake forming the synthetic cloud and each station of the seismic network are computed. These calculations are performed using the eikonal solver developed by Podvin and Lecomte. This method combines Huygens’ principle with a finite-difference approximation in order to compute seismic-wave travel-times in any kind of velocity model (3D as well as 1D). The hypotheses made in this step are assumed to be representative of reality; hence, the obtained travel-times are used as the observed ones in the next steps.
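As a minimal illustration of this step — assuming, for simplicity, a homogeneous velocity model with straight rays instead of the Podvin and Lecomte eikonal solver, and purely hypothetical coordinates — the synthetic travel-time computation can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical homogeneous P and S velocities (km/s).
vp, vs = 5.8, 3.35

# Synthetic seismic cloud: 50 events scattered around a point at ~4.5 km depth.
events = np.array([2.0, 2.0, 4.5]) + rng.normal(0.0, 0.3, size=(50, 3))

# Synthetic network: surface station coordinates (x, y, z) in km.
stations = np.array([[0.0, 0.0, 0.0],
                     [4.0, 0.5, 0.0],
                     [1.0, 4.0, 0.0],
                     [3.5, 3.5, 0.0]])

# Straight-ray travel-times (s): one row per event, one column per station.
dist = np.linalg.norm(events[:, None, :] - stations[None, :, :], axis=2)
tp, ts = dist / vp, dist / vs

print(tp.shape, float(tp.min()))
```

In the actual methodology, the eikonal solver replaces the straight-ray formula so that 1D or 3D velocity structures can be handled; the output plays the same role of “observed” travel-times for the next steps.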
In the second step, also called the “location step”, some modifications are applied to the hypotheses used in the prior synthetic modelling step. In this way, we mimic the common biases and uncertainties introduced when processing seismic inversions for earthquake location. For example, picking uncertainties are added to the travel-times. The choice of the velocity model used during this inversion process to locate the earthquakes depends on the knowledge of the sub-surface: using a 1D velocity model instead of a 3D model is common practice when the sub-surface is poorly known, or to simplify the inversion process. Using the NonLinLoc code [Lomax, 2011] with a grid-search method, the earthquake locations are worked out together with their probability density function (PDF). This PDF provides the uncertainty of the location associated with a given confidence level [Lomax et al., 2000], here 68.3%. Assuming that the PDF is Gaussian and hence represented by an ellipsoid, the uncertainty is defined as the half-length of the ellipsoid’s longest axis.
Finally, in the last step, called the comparison step, the locations output by the inversion step are compared to the locations input to the synthetic modelling step. The inaccuracy is defined as the discrepancy between the two locations. Global inaccuracy, as well as its vertical and horizontal components, can be worked out with this methodology.
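The three steps can be sketched together in a toy example. The sketch below assumes straight rays, a known origin time and a homogeneous P velocity (all values hypothetical, and a plain grid search rather than the NonLinLoc PDF approach); the model bias is mimicked by locating with a slightly wrong velocity, and the inaccuracy is the distance between the true and recovered hypocentres:

```python
import numpy as np

rng = np.random.default_rng(2)

stations = np.array([[0.0, 0.0, 0.0], [4.0, 0.5, 0.0],
                     [1.0, 4.0, 0.0], [3.5, 3.5, 0.0]])   # km, hypothetical
true_hypo = np.array([2.0, 2.2, 4.4])
vp_true, vp_model = 5.8, 6.0    # the wrong model mimics a velocity bias

def travel_times(hypo, vp):
    return np.linalg.norm(stations - hypo, axis=1) / vp

# Step 1 (synthetic modelling): "observed" picks = true times + 10 ms noise.
t_obs = travel_times(true_hypo, vp_true) + rng.normal(0.0, 0.01, size=4)

# Step 2 (location): grid search with the biased velocity model.
g = np.arange(0.0, 5.0, 0.05)
gz = np.arange(3.0, 6.0, 0.05)
X, Y, Z = np.meshgrid(g, g, gz, indexing="ij")
cand = np.stack([X.ravel(), Y.ravel(), Z.ravel()], axis=1)
tt = np.linalg.norm(cand[:, None, :] - stations[None, :, :], axis=2) / vp_model
rms = np.sqrt(np.mean((t_obs - tt) ** 2, axis=1))
best = cand[np.argmin(rms)]

# Step 3 (comparison): the inaccuracy is the true-to-located distance.
inaccuracy = np.linalg.norm(best - true_hypo)
print(best, f"inaccuracy = {inaccuracy:.2f} km")
```

Even with small picking noise, the biased velocity model produces a systematic location error, which is precisely the kind of inaccuracy the methodology is designed to quantify.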
Determination of earthquake focal mechanism
The expression “focal mechanism” designates a geometrical or mathematical representation of the way the energy of an earthquake is released and distributed at its hypocentre (the point where the earthquake occurs, its focus) [Brumbaugh, 1979]. The rupture one can observe on a fault or a fracture is also called the “fault mechanism” in several references. Since it refers to a plane, it can be defined by two angles; a third angle is added to define the movement on the plane.
The strike (or azimuth) is the angle between the North and the horizontal line formed by the intersection of the fault plane with the horizontal plane through the hypocentre. It is counted clockwise and therefore takes values in the range 0–360°. The dip angle characterizes the slope of the fault plane with respect to the surface of the Earth; hence a fault plane with a 90° dip is vertical. The third and last angle defining a focal mechanism is the rake, the slip direction of the hanging wall relative to the foot wall on the fault plane. It is defined from the horizontal line of the fault plane oriented in the direction of the azimuth. The strike and dip angles defining a rupture on a fault are also used to describe a fault plane (Figure 2-3).
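Under the widely used Aki and Richards convention (x = North, y = East, z = Down), the three angles map onto a double-couple moment tensor. A possible sketch, with a classic check case:

```python
import numpy as np

def moment_tensor(strike, dip, rake, m0=1.0):
    """Double-couple moment tensor (x=North, y=East, z=Down) from the
    strike, dip and rake angles (degrees), Aki & Richards convention."""
    phi, delta, lam = np.radians([strike, dip, rake])
    sd, cd = np.sin(delta), np.cos(delta)
    s2d, c2d = np.sin(2 * delta), np.cos(2 * delta)
    sl, cl = np.sin(lam), np.cos(lam)
    sp, cp = np.sin(phi), np.cos(phi)
    s2p, c2p = np.sin(2 * phi), np.cos(2 * phi)
    mxx = -m0 * (sd * cl * s2p + s2d * sl * sp**2)
    mxy = m0 * (sd * cl * c2p + 0.5 * s2d * sl * s2p)
    mxz = -m0 * (cd * cl * cp + c2d * sl * sp)
    myy = m0 * (sd * cl * s2p - s2d * sl * cp**2)
    myz = -m0 * (cd * cl * sp - c2d * sl * cp)
    mzz = m0 * s2d * sl
    # The tensor is symmetric and, for a double couple, trace-free.
    return np.array([[mxx, mxy, mxz], [mxy, myy, myz], [mxz, myz, mzz]])

# A vertical (dip = 90°) pure strike-slip fault striking North:
m = moment_tensor(strike=0.0, dip=90.0, rake=0.0)
print(m)   # only the Mxy = Myx components are non-zero
```

The zero trace reflects the absence of volume change for a pure double couple, which holds for any combination of the three angles.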
Determination of focal mechanism
Determining the focal mechanism of a seismic event also amounts to solving an inverse problem. The observations in d are commonly the wave polarities and/or the wave amplitudes of the signal. The searched parameters in m can be, for example, the strike, dip and rake angles or the components of the moment tensor. To better understand the different methods, one should recall that the ground motion due to a P-wave is parallel to the P-wave propagation. On the contrary, S-waves induce a ground motion perpendicular to their propagation, either horizontal (SH waves) or contained in the vertical plane of propagation (SV waves). Hence, stations located on the nodal or the auxiliary plane of the focal mechanism record no P-wave and, for the same source-to-station distance, their SV/P and SH/P amplitude ratios are higher than at other stations. Therefore, by combining amplitude ratios from several sensors, and provided the seismic coverage is good, one can determine the focal mechanism of a seismic source. Several papers also show the possibility of determining the focal mechanism with few sensors [Godano et al., 2009a], but only by improving the basic technique.
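This far-field behaviour can be checked numerically: for a unit ray direction γ and moment tensor M, the P amplitude is proportional to γᵀMγ and the S amplitude to the part of Mγ orthogonal to γ. The sketch below uses, as a hypothetical example, the moment tensor of a vertical strike-slip fault striking North:

```python
import numpy as np

# Double-couple moment tensor of a vertical strike-slip fault striking
# North, in (x=North, y=East, z=Down) coordinates -- hypothetical example.
m = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

def radiation(gamma):
    """Relative far-field P and S amplitudes along the ray direction gamma:
    P ~ gamma.M.gamma, S ~ norm of M.gamma minus its component along gamma."""
    gamma = np.asarray(gamma) / np.linalg.norm(gamma)
    p = gamma @ m @ gamma
    s = np.linalg.norm(m @ gamma - p * gamma)
    return p, s

p_nodal, s_nodal = radiation([1.0, 0.0, 0.0])   # ray lying in a nodal plane
p_max, s_max = radiation([1.0, 1.0, 0.0])        # ray at 45 deg from strike
print(p_nodal, s_nodal, p_max, s_max)
```

A ray in the nodal plane carries no P amplitude but a strong S amplitude (infinite S/P ratio), whereas a ray at 45° from the strike carries the maximum P amplitude and no S: this contrast is exactly what amplitude-ratio methods exploit.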
Amplitude and amplitude ratio methods
To complement the polarity method, new ones were developed. One of the most common relies on the amplitudes of the direct P- and S-waves and/or their ratios [Rau et al., 1996; Hardebeck and Shearer, 2003; Snoke, 2003; Godano et al., 2009b]. If the choice is made to use only the direct P- and S-wave amplitudes, these absolute amplitude values can be combined with their polarities (the sign of the amplitude), which adds a piece of information. Nonetheless, the method presents a serious drawback, since wave amplitudes are particularly affected by a mis-modelling of the Earth structure as well as by site effects or the instrumental response [Julian and Foulger, 1996]. To decrease the impact of these effects, one can use the amplitude ratios SV/P, SH/P and SV/SH, but then the information provided by the polarity of the amplitude is lost, and the polarity of the P-wave is still required for the computation.
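The rationale for using ratios can be made explicit with a toy computation: if site effects and the instrumental response act as multiplicative factors common to all phases recorded at a station (an assumption made here for illustration; in reality the distortion can differ between P and S), they cancel in the ratios:

```python
import numpy as np

# Hypothetical radiated amplitudes at one station (arbitrary units).
p_rad, sv_rad, sh_rad = 0.4, 1.1, 0.9

# Multiplicative distortions assumed common to all phases at this station:
site_effect, instrument_gain = 2.5, 1.3e3

observed = np.array([p_rad, sv_rad, sh_rad]) * site_effect * instrument_gain

# Absolute amplitudes are strongly distorted, but the ratios are preserved:
print(observed[1] / observed[0], sv_rad / p_rad)   # identical SV/P ratios
```

This is why the ratios are more robust to site and instrument effects than the absolute amplitudes, at the cost of losing the sign information.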
Other methods, combinations
Due to the lack of data for small-magnitude events, and to the errors associated with the corresponding earthquake locations, methods other than the simple use of P-wave polarities were developed. A Bayesian approach was implemented to determine focal mechanisms in volcanic areas [Natale, 1994]. In that method, the probability density function (PDF) is represented graphically over the whole space where the source parameters are defined, following the method of Zollo and Bernard. To determine the PDF, P-wave polarities, S-wave polarizations or S/P amplitude ratios of the direct waves can be used. Another Bayesian approach determines the focal mechanism with the help of the Matrix Fisher distribution [Walsh et al., 2009]. As written by the authors, it was initially developed for P-wave polarities but can be extended to the other kinds of data needed to retrieve focal mechanisms. Several combinations of the different techniques cited above were also applied many times [Zollo and Bernard, 1991]. The most common combination uses both P-wave polarities and S and P amplitudes, or S/P amplitude ratios [Rabinowitz and Hofstetter, 1992; Hardebeck and Shearer, 2003].
Table of contents:
1.2 Aim of the Ph. D.
1.3 Plan of the manuscript
2.1 Inverse Problem
2.2 Earthquake location
2.3 Determination of earthquake focal mechanism
2.4 Seismic waveforms modelling
3 Geological settings: the Upper Rhine Graben (URG)
3.1 Brief history of the URG
3.2 Temperature anomalies in the URG
3.3 Geothermal sites in the URG
4 The combination of surface and down-hole seismic networks for earthquake location at the Soultz-sous-Forêts geothermal site (France)
4.3 The Soultz-sous-Forêts geothermal site
4.4 Results and discussion
4.5 Conclusion and outlooks
5 Modelling earthquake location errors at a reservoir scale: a case study in the Upper Rhine Graben
5.3 Case study: Rittershoffen geothermal field
5.4 Results and discussions
5.5 Conclusions and outlook
6 Modelling focal mechanism errors of seismicity induced at Rittershoffen geothermal field (Alsace, France)
6.3 The Rittershoffen geothermal site
6.4 Results and Discussion
7 Discussion, conclusion
7.1 Gaussian distribution of the hypocentre PDF
7.2 Impact of a calibration shot on earthquake location
7.3 Impact of the seismic coverage on focal mechanisms
7.4 Impact of the use of amplitude ratios S/P on focal mechanisms
7.5 Main discussion and conclusion
10 Modelling seismic event location errors at the reservoir scale: application to the geothermal site of Soultz-sous-Forêts (Alsace, France)
10.3 The Soultz-sous-Forêts geothermal site
10.4 Results and discussion
10.5 Conclusion and outlooks