Geotechnical investigations of hydraulic conductivity and soil stratigraphy


Hydraulic conductivity

Hydraulic conductivity is defined as a measure of the ease with which water flows through a porous medium (Jones & Mulholland, 2000). A variety of physical properties affect the hydraulic conductivity of a soil or rock (Todd & Mays, 2005); porosity, particle size and particle shape are among the most important.
In the saturated zone, the hydraulic conductivity depends on the material properties and is therefore constant if the material is homogeneous (Todd & Mays, 2005). In unsaturated flow, however, the hydraulic conductivity is a function of the negative pressure head and the volumetric water content, and the volumetric water content is itself a function of the negative pressure head. The general shape of a hydraulic conductivity function is shown in Figure 2.
As Figure 2 shows, the hydraulic conductivity decreases rapidly with increasing negative pore-water pressure (Todd & Mays, 2005). The curve is steeper for coarser material and flatter for finer material such as clay soils. The flow of water through unsaturated soil is therefore much more complex than flow through saturated soil.
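To make the shape of such a conductivity function concrete, the sketch below evaluates Gardner's exponential model, K(ψ) = K_sat·exp(−αψ), one common closed form for an unsaturated conductivity curve. This is not the model used in the sources above, and the K_sat and α values are illustrative only: a large α gives the steep decline typical of coarse material, a small α the flat decline typical of clay.

```python
import math

def gardner_conductivity(psi, k_sat, alpha):
    """Unsaturated hydraulic conductivity after Gardner's exponential
    model: K = k_sat * exp(-alpha * psi), where psi is the negative
    pressure head (matric suction, m) and alpha (1/m) controls how
    quickly K drops as the soil desaturates."""
    return k_sat * math.exp(-alpha * psi)

# Illustrative parameters: sand-like (high k_sat, large alpha) versus
# clay-like (low k_sat, small alpha) material.
for psi in (0.0, 0.5, 1.0, 2.0):
    k_sand = gardner_conductivity(psi, k_sat=1e-4, alpha=5.0)
    k_clay = gardner_conductivity(psi, k_sat=1e-9, alpha=0.5)
    print(f"psi={psi:4.1f} m  K_sand={k_sand:.2e}  K_clay={k_clay:.2e} m/s")
```

At zero suction both materials are at their saturated conductivity; as suction grows, the sand curve drops several orders of magnitude faster than the clay curve.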

Properties of soil and bedrock

Soil is commonly classified by its composition according to grain size, see Table 1 (Todd & Mays, 2005). The composition has a major impact on the geotechnical and geohydrological properties of the soil (Svenska Geotekniska Föreningen, 2016).
A rough classification into cohesive and non-cohesive soil is commonly used in geotechnology, as it is based on the strength of the soil (Svenska Geotekniska Föreningen, 2016). Non-cohesive soils are those whose shear strength depends on friction between the soil particles, while cohesive soils are those whose shear strength also depends on cohesion.
Non-cohesive soils consist of granular material, including the sand and gravel fractions in Table 1 (Svenska Geotekniska Föreningen, 2016). The hydraulic conductivity of these soils is relatively high, generally above 1e-05 m/s. Saturated water content for non-cohesive soils is typically between 28% and 43%, most commonly around 30% to 35% (Todd & Mays, 2005). In cohesive soils, clay is the major soil fraction (Svenska Geotekniska Föreningen, 2016). Their hydraulic conductivity is relatively low, generally around 1e-09 m/s. Saturated water content for cohesive soils is typically around 40% but can reach up to 50% (Todd & Mays, 2005).
Hydraulic conductivity for unfractured metamorphic and igneous rock is extremely variable and can range between 1e-06 and 1e-11 m/s (Todd & Mays, 2005). Saturated water content for these rocks is limited to the fracture systems of the rock and is generally around 0% to 5% (Freeze & Cherry, 1979).

Geotechnical investigations of hydraulic conductivity and soil stratigraphy

In order to determine the hydraulic conductivity of a soil in the field, a slug test is a commonly used method (Svenska Geotekniska Föreningen, 2013). The test involves changing the groundwater level and then studying its recovery.
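As an illustration of how slug-test recovery data can be turned into a conductivity estimate, the sketch below implements the Hvorslev formula, one widely used evaluation method. The source does not state which evaluation method is used, so the choice of formula is an assumption here, and the well geometry in the example is invented.

```python
import math

def hvorslev_k(r_casing, r_screen, screen_length, t37):
    """Hvorslev slug-test estimate of hydraulic conductivity (m/s):
        K = r_c**2 * ln(L_e / R) / (2 * L_e * T0)
    r_casing: radius of the well casing, r_c (m)
    r_screen: radius of the well screen, R (m)
    screen_length: length of the well screen, L_e (m); the formula
        assumes L_e / R > 8
    t37: basic time lag T0 (s), the time for the head displacement
        to recover to 37 % of its initial value."""
    return (r_casing ** 2 * math.log(screen_length / r_screen)) / (
        2.0 * screen_length * t37)
```

For example, with a 5 cm casing and screen radius, a 1 m screen, and a basic time lag of 100 s read off the recovery curve, the estimate is on the order of 1e-05 m/s, i.e. a non-cohesive soil in the classification above.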
Investigation of soil stratigraphy can be performed with a method called probing (Svenska Geotekniska Föreningen, 2013). Probing is a group of geotechnical investigation methods in which a steel cone is driven vertically into the ground while the effort of the process is recorded. There are many different types of probing methods, of which weight sounding and soil-rock sounding are two commonly used ones. Weight sounding is more sensitive than other methods and is often used in loose to medium-firm soil types to determine the soil stratigraphy. In soil-rock sounding, the cone is usually driven at least 3 m into the rock in order to determine where the bedrock starts. The type of probing termination is recorded according to the standard for all methods, see Table 2 (Svenska Geotekniska Föreningen and Byggnadsgeologiska Sällskapet, 2001).

Finite element method

Many physical processes in engineering are modelled by differential equations that are too complex to solve with classical mathematical methods (Ottosen & Pettersson, 1992). Solving these differential equations can be addressed by a numerical method called the finite element method (FEM). In FEM, general differential equations are solved approximately by dividing the investigated region into smaller parts, called finite elements, and then carrying out approximations over each element. When a solution is obtained for all the elements, they can be assembled and an approximate solution for the entire body can be found, see Figure 3 (Ottosen & Pettersson, 1992).
In two-dimensional groundwater modelling, the partial differential equation to be solved is equation 2.1-2 (GEOSLOPE International Ltd, 2015). This differential equation can be approximated by the finite element equation below.

[K]{H} = {Q} (2.3.1-1)

[K] is a matrix that includes the material properties and area of the element, {H} is a vector of total head and {Q} is a vector of nodal flow (GEOSLOPE International Ltd, 2015). The matrix [K], as well as either of the vectors {H} or {Q}, must be specified in the model, while the other vector is computed.
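A minimal sketch of how such a finite element system can be assembled and solved, here for a hypothetical 1-D chain of elements with prescribed heads at both ends. The conductance values and the Gauss-Seidel solver are illustrative choices, not the internals of any particular groundwater modelling package.

```python
def assemble_k(conductances):
    """Assemble the global [K] matrix for a 1-D chain of elements.
    Each element of conductance c_e = k_e * A / L_e contributes the
    element matrix [[c, -c], [-c, c]] to the global matrix."""
    n = len(conductances) + 1
    K = [[0.0] * n for _ in range(n)]
    for e, c in enumerate(conductances):
        K[e][e] += c
        K[e][e + 1] -= c
        K[e + 1][e] -= c
        K[e + 1][e + 1] += c
    return K

def solve_heads(K, fixed, sweeps=500):
    """Solve [K]{H} = {Q} when some heads are prescribed and the nodal
    flow at every free node is zero, using Gauss-Seidel sweeps (enough
    for this small, diagonally dominant system)."""
    n = len(K)
    H = [fixed.get(i, 0.0) for i in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            if i not in fixed:
                s = sum(K[i][j] * H[j] for j in range(n) if j != i)
                H[i] = -s / K[i][i]
    return H

# Three identical elements between prescribed heads of 10 m and 4 m:
K = assemble_k([2.0, 2.0, 2.0])
H = solve_heads(K, fixed={0: 10.0, 3: 4.0})
# The nodal flow at a fixed-head node is then recovered as {Q} = [K]{H}:
Q0 = sum(K[0][j] * H[j] for j in range(4))
```

This mirrors the statement in the text: the heads at the two end nodes are specified, so the model computes the remaining heads (which fall linearly between the boundaries) and the nodal flows at the fixed-head nodes.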

Spatial data interpolation

Due to the labor-intensive nature of sampling and limited financial resources, data in geological investigations is collected at a limited number of sample points, which are then used to represent the whole study area (Kitanidis, 1997). Interpolation means using the known values at the sample points to statistically estimate values at the unknown points. Thus, through interpolation it is possible to estimate a continuous surface of values in the study area from an original discrete data set of measurements, see Figure 4. This approach can be applied to many different types of data and is commonly used on geographical and geological data (GIS Resources, 2014a). Methods for spatial interpolation are built upon Tobler's first law of geography, which states that everything is related to everything else, but near things are more related than distant things (GIS Resources, 2014b). This principle is also called spatial correlation. In a hydrogeological example, spatial correlation means that the hydraulic conductivity at an unknown point is more likely to resemble the hydraulic conductivity at a nearby sample point than at a sample point further away (Kitanidis, 1997).
Weighted average algorithms, built upon the principle of spatial correlation, are the foundation of most interpolation methods (Kitanidis, 1997). The linear estimator, see equation 2.4-1, is a general formula for a weighted average algorithm.

ẑ0 = Σi λi z(xi) (2.4-1)

ẑ0 is the estimated value at the unknown point, z(xi) is the measured value at the i-th location and λi is the weight for z(xi) (Kitanidis, 1997). The weight λi represents the probability that the value at the unknown point is equal to the value of the i-th measurement. The principle of spatial correlation can be represented by assigning higher weights to measurements z(xi) closer to the unknown point.
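The linear estimator of equation 2.4-1 can be sketched directly; the values and weights below are invented for illustration, with the nearby sample given the larger weight.

```python
def linear_estimate(values, weights):
    """Weighted-average (linear) estimator: z_hat = sum(lambda_i * z_i).
    For an unbiased estimator the weights must sum to one."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * z for w, z in zip(weights, values))

# Two measurements; the first is closer to the unknown point, so it
# receives the larger weight (spatial correlation):
z_hat = linear_estimate([10.0, 20.0], [0.75, 0.25])
```

Every interpolation method discussed below fits this template; they differ only in how the weights λi are computed.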
Uncertainties in spatial interpolation of subsurface geology are illustrated in Figure 5. The three lines show three different suggestions for interpolating the elevation of the geological layers. Without further knowledge, in a case like this it is impossible to know which interpolation best represents reality (Janakiraman, Masao, & Noboru, 2000).


Kriging is one of the most commonly used interpolation methods in the fields of geology and hydrology (Li, Wan, & Shang, 2020). It is generally considered one of the more flexible and accurate interpolation methods (Golden Software LLC, 2019a) and is considered to produce well-fitting surfaces for a variety of data sets (Kitanidis, 1997).
Kriging is an exact interpolator, which means that if a data point is located exactly at a grid node, the interpolated value equals the known value of the data point (Kitanidis, 1997). The weights in equation 2.4-1 are determined according to minimum variance and unbiasedness, where unbiasedness means that the sum of the weights must equal one. Kriging accounts for the fact that data points located close to each other carry similar information by giving less weight to clustered data points. The flexibility of Kriging stems from the fact that, before performing the interpolation, it also accounts for the spatial relationships between the measured data points through variogram modelling. A variogram fits a function describing the spatial dependence of the variable to the plotted dependence of the measured data (Kitanidis, 1997).
One of the disadvantages of Kriging is that it can be slower for large data sets than other interpolation methods (Golden Software LLC, 2019a).
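A compact sketch of ordinary Kriging in one dimension, assuming an exponential variogram with illustrative sill and range parameters. The weights are found by solving a small linear system in which the unbiasedness constraint (weights summing to one) is enforced through a Lagrange multiplier; the example also demonstrates that Kriging is exact at sampled locations.

```python
import math

def exp_variogram(h, sill=1.0, rng=10.0):
    """Exponential variogram model: gamma(h) = sill * (1 - exp(-h/rng)).
    The sill and range values here are illustrative, not fitted."""
    return sill * (1.0 - math.exp(-h / rng))

def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting; fine for a small
    Kriging system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ordinary_kriging(xs, zs, x0):
    """Estimate the value at x0 from 1-D samples (xs, zs). The weights
    minimise the estimation variance subject to summing to one; the
    constraint enters through the extra row and column that hold the
    Lagrange multiplier."""
    n = len(xs)
    A = [[exp_variogram(abs(xs[i] - xs[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [exp_variogram(abs(x - x0)) for x in xs] + [1.0]
    weights = gauss_solve(A, b)[:n]   # drop the Lagrange multiplier
    return sum(w * z for w, z in zip(weights, zs))
```

Because Kriging is exact, estimating at a sampled location (e.g. x0 = 5 with samples at 0, 5 and 10) simply returns that sample's value.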

Radial basis function

Radial basis function is a type of artificial neural network, a system intended to imitate the human brain in order to find implicit patterns within a data set (Rusu & Rusu, 2006). It is generally constructed on a three-layer principle: an input layer, an output layer, and a hidden layer that processes the input into something usable for the output layer. Artificial neural networks are useful for recognizing patterns that are too complex for a human brain to recognize. Like Kriging, Radial basis function is a flexible method that can interpolate most types of data sets. It is an exact interpolator, and its precision is similar to Kriging but is considered slightly better for small data sets (Rusu & Rusu, 2006).
There are various types of Radial basis functions, which use different mathematical functions to define the optimal set of weights for the data points when interpolating a grid node (Golden Software LLC, n.d.). Thus, the different functions generate slightly different interpolated surfaces. Among the radial basis functions, the Multiquadric function, which is the default in Surfer, is generally considered to generate well-fitting surfaces for many data sets.
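The multiquadric basis can be illustrated with a deliberately tiny case: fitting an interpolant through just two 1-D samples, so the weight system can be solved in closed form with Cramer's rule. The shape parameter is an assumed value for illustration; Surfer chooses its own default.

```python
import math

def multiquadric(r, shape=1.0):
    """Multiquadric basis function: phi(r) = sqrt(r**2 + shape**2)."""
    return math.sqrt(r * r + shape * shape)

def rbf_fit_two(x1, z1, x2, z2, shape=1.0):
    """Fit a multiquadric RBF interpolant through two 1-D samples by
    solving the 2x2 weight system phi * w = z with Cramer's rule.
    Returns the interpolant as a function of x."""
    a = multiquadric(0.0, shape)            # phi at zero distance
    b = multiquadric(abs(x1 - x2), shape)   # phi between the samples
    det = a * a - b * b
    w1 = (z1 * a - z2 * b) / det
    w2 = (z2 * a - z1 * b) / det
    return lambda x: (w1 * multiquadric(abs(x - x1), shape)
                      + w2 * multiquadric(abs(x - x2), shape))
```

Evaluating the fitted function at either sample location returns that sample's value exactly, matching the "exact interpolator" property stated above; a different basis function in place of `multiquadric` would give a slightly different surface between the samples.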


Inverse distance weighting

Inverse distance weighting is one of the faster and more mathematically simple interpolation methods (Babak & Deutsch, 2009). The weights in the linear estimator are unbiased and calculated by an equation in which the weights decrease with distance raised to a power (Babak & Deutsch, 2009). The value of the exponent is manually selected by the user and is commonly set to 2, which is the default in Surfer (Golden Software LLC, 2019a). As the exponent increases, the weight given to measurements further away decreases.
Inverse distance weighting is an exact interpolation method (Golden Software LLC, 2019a). It is sensitive to outliers and is prone to creating "bull's eyes" around existing data points (Babak & Deutsch, 2009). Thus, inverse distance weighting is more suitable for interpolating regularly spaced data than clustered data (Li & Heap, 2014).
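A minimal sketch of inverse distance weighting for scattered 2-D samples; the sample coordinates and values are invented. With the common exponent of 2, nearby samples dominate the estimate, which is what produces the "bull's eye" tendency around data points.

```python
import math

def idw(points, x0, y0, power=2):
    """Inverse-distance-weighted estimate at (x0, y0) from a list of
    (x, y, z) samples. Weights are 1 / d**power, so they decrease as
    distance is raised to the chosen exponent. A sample lying exactly
    on the target point is returned directly (exact interpolation)."""
    num = den = 0.0
    for x, y, z in points:
        d = math.hypot(x - x0, y - y0)
        if d == 0.0:
            return z
        w = d ** -power
        num += w * z
        den += w
    return num / den
```

For a point midway between two samples the weights are equal and the estimate is their plain average; raising `power` pulls the estimate toward the nearest sample instead.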

Minimum curvature

The minimum curvature interpolation method attempts to generate the smoothest possible surface while aiming to pass through the measured data points (Golden Software LLC, 2019a). Minimum curvature is not an exact interpolator, and the surface does not have to pass exactly through the data points. The interpolation is an iterative process in which equations are applied over the grid until the successive changes fall below a limit value or the maximum number of iterations is reached. Minimum curvature works well for retaining small-scale features of the data (Li, Wan, & Shang, 2020).
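The iterate-until-converged idea can be sketched with a simplified smoother. Note that true minimum curvature minimises a biharmonic (fourth-order) functional, whereas this sketch relaxes each free node toward the mean of its neighbours (a Laplace smoother), so it only illustrates the stopping logic the text describes, not Surfer's actual equations.

```python
def relax_surface(grid, fixed, tol=1e-6, max_iter=5000):
    """Iteratively smooth a 2-D grid (list of lists): each free
    interior node is replaced by the mean of its four neighbours until
    the largest change in a sweep falls below `tol` or `max_iter`
    sweeps have run -- the two stopping criteria described in the
    text. `fixed` is a set of (row, col) nodes pinned to data values.
    Simplified sketch: a Laplace, not biharmonic, smoother."""
    rows, cols = len(grid), len(grid[0])
    for _ in range(max_iter):
        biggest = 0.0
        for i in range(1, rows - 1):
            for j in range(1, cols - 1):
                if (i, j) in fixed:
                    continue
                new = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                              + grid[i][j - 1] + grid[i][j + 1])
                biggest = max(biggest, abs(new - grid[i][j]))
                grid[i][j] = new
        if biggest < tol:
            break
    return grid
```

Pinning some nodes in `fixed` plays the role of the measured data points that the surface aims to honour; all other nodes are free to relax toward the smoothest configuration.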

Evaluating the accuracy of interpolation methods

The following steps are suggested by Golden Software (2019b) for evaluating which interpolation methods are suitable for a data set.
1. Visual evaluation: Plot the interpolated data from all twelve interpolation methods into contour maps. The best representations of the original data are plots showing smooth contours and otherwise realistic shapes in relation to the interpolated physical property. The choice of interpolation method is narrowed down by eliminating the interpolation methods that can visually be assumed not to be good fits to the data.
2. Residuals: When it is not possible to visually determine which interpolation method suits the data best, grid residuals can be calculated to evaluate which interpolation is closest to the initial data set. The sum of the absolute value of the residuals for each interpolation can then be used to evaluate the remaining interpolation methods among each other.
The residual is the deviation between the observed data and the predicted value (Golden Software LLC, 2019b). The interpolation method with the lowest sum of residuals can be assumed to be the best fit to the initial data set. An expression for the sum of the absolute values of the residuals is shown in equation 2.4.5-1 (Kitanidis, 1997).

R = Σi |z(xi) − ẑi| (2.4.5-1)
z(xi) is the value of the i-th measurement and ẑi is the estimated value at the same point (Kitanidis, 1997). In research, interpolation methods are commonly evaluated according to how accurately the modelled data set fits the measured data (Li & Heap, 2014). However, no relevant research was found in the literature study of this thesis work regarding what impact differences among interpolated surfaces have on the results of the groundwater models they are imported into.
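Equation 2.4.5-1 and the ranking step of the evaluation procedure can be sketched as follows; the method names and values are invented for illustration.

```python
def sum_abs_residuals(observed, predicted):
    """Sum of absolute residuals, R = sum(|z(x_i) - z_hat_i|), between
    the measured values and the values an interpolated grid predicts
    at the same locations."""
    return sum(abs(z - z_hat) for z, z_hat in zip(observed, predicted))

def rank_methods(observed, predictions_by_method):
    """Order candidate interpolation methods from best to worst fit,
    i.e. by ascending residual sum against the initial data set."""
    return sorted(predictions_by_method,
                  key=lambda m: sum_abs_residuals(observed,
                                                  predictions_by_method[m]))

# Hypothetical example: two methods predicting at three sample points.
observed = [1.0, 2.0, 3.0]
predictions = {"kriging": [1.1, 2.0, 2.9], "idw": [1.5, 1.5, 3.5]}
best_first = rank_methods(observed, predictions)
```

The method listed first has the lowest residual sum and, by the criterion above, is assumed to be the best fit to the initial data.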

Effect of sample size on data interpolations

In a review of interpolation methods used in environmental science, Li and Heap (2014) state that most interpolation methods produce similar results when the sample size is high. However, when the sample size is low, the choice of interpolation method has a significant impact on the resulting interpolated surfaces. The accuracy of interpolation methods generally increases with sample size. The effects of sample size vary between interpolation methods, as some require more data than others to reach a satisfactory accuracy. Li and Heap (2014) furthermore suggest that there is a threshold beyond which an increased sample size does not significantly increase the accuracy of the interpolation.
There is a variety of rules of thumb regarding sample size when interpolating data (Li & Heap, 2014). Webster and Oliver (2006) state that at least 100 data points are needed to produce a stable variogram, which is used in, for example, Kriging interpolation, and that 50 data points is an absolute minimum. However, sample sizes as low as 28 data points have also been proposed to produce accurate interpolations (Li & Heap, 2014). Further, in a study of methods for interpolating soil texture, it was concluded that a minimum of 80 data points was needed for interpolation of spatial data in a study area of 11 352 km2 (Li, Wan, & Shang, 2020). In practice, however, sample sizes are often significantly lower than stated above due to limited resources (Li, Wan, & Shang, 2020). Especially in smaller geohydrological projects it is not uncommon to have only around three or four data points to work with (Lewis, personal communication, April 2020).
Apart from sample size, the spacing of samples affects the accuracy of the resulting interpolation (Li & Heap, 2014). Samples should be spaced with consideration of the scales of variation of the variable in the investigated area, as too widely spaced samples are generally found to significantly reduce the information in the interpolated maps (Li & Heap, 2014).

Recent developments of interpolation methods

During the last few decades, several interpolation methods have been developed in the field of machine learning (Li & Heap, 2014). Although these methods are rarely used, Li and Heap (2014) suggest that they may become more widely used in the future due to their high prediction accuracy. In research by Janakiraman et al. (2000), an interpolation method called a fuzzy-MLP neural network was developed and studied. It was evaluated in a geological context and judged to be more accurate than the other interpolation methods it was compared with. Compared to the classical interpolation methods, it has the advantage of being based on geological principles and of being able to provide an indication of the reliability of the interpolation.
Although these recently developed interpolation methods are not studied further in this thesis work, it is useful to keep in mind that there are alternatives to the more classical interpolation methods.

Table of contents :

1.1 Background
1.2 Aim
2.1 Groundwater flow
2.1.1 Hydraulic conductivity
2.2 Properties of soil and bedrock
2.2.1 Geotechnical investigations of hydraulic conductivity and soil stratigraphy
2.3 Groundwater modelling
2.3.1 Finite element method
2.4 Spatial data interpolation
2.4.1 Kriging
2.4.2 Radial basis function
2.4.3 Inverse distance weighting
2.4.4 Minimum curvature
2.4.5 Evaluating the accuracy of interpolation methods
2.4.6 Effect of sample size on data interpolations
2.4.7 Recent developments of interpolation methods
3.1 Case study description
3.2 Overview of the approach
3.3 Primary processing of data
3.3.1 Generating data sets with reduced sample size
3.4 Data interpolation
3.5 Profile creation
3.6 Model simulation
3.6.1 Geometry
3.6.2 Material properties
3.6.3 Mesh settings
3.6.4 Calibration of boundary conditions
3.6.5 Interpretation of model results
4.1 Generated data sets
4.2 Interpolated contour maps
4.2.1 Altering interpolation method
4.2.2 Altering sample size
4.3 Modelled profiles
4.3.1 Altering interpolation method
4.3.2 Altering sample size
4.4 Calibrated total heads
4.4.1 Altering interpolation method
4.4.2 Altering sample size
4.5 Modelled groundwater flow rates
4.5.1 Altering interpolation method
4.5.2 Altering sample size
5.1 Altering interpolation method
5.2 Altering sample size
5.3 Suggestions for management
5.4 Limitations and uncertainties
5.5 Further work recommendation

