Literature Review

Overview

This chapter reviews previous Atlantic Ocean Channel studies used to determine shoaling rates. It then discusses hydrographic survey data, including USACE collection methodology and accuracy standards. Also discussed are the spatial interpolation methods used in this research and the USACE methods for producing sediment volume estimates.

Previous Atlantic Ocean Channel Studies

The Atlantic Ocean Channel was formed in 1985 as part of the Norfolk Harbor and Channels Deepening project. It serves as the main channel leading all major shipping traffic into the Chesapeake Bay. Ships exit the Atlantic Ocean Channel and then proceed to the Thimble Shoal Channel en route to Norfolk, Virginia, or to the Cape Henry Channel en route to Baltimore, Maryland, and other destinations to the north.
A deepening study was conducted in 1985, when the Atlantic Ocean Channel was being formed (Berger, Heltzel, Athow, Richards, & Trawle, 1985). USACE scientists needed to determine the rate at which the channel would shoal or deepen at its planned location. The report used analytical and empirical formulas to predict shoaling rates caused by tide- and wave-induced sediment movement, and it also included analyses of older bathymetric surveys.
The study concluded that the Atlantic Ocean Channel would shoal at a rate of 200,000 cubic yards annually and that tidal and wave effects produced only minimal shoaling. The strongest conclusion, drawn from the comparison of bathymetric surveys, was that long-term scour was present and that the channel was relatively stable (Berger, Heltzel, Athow, Richards, & Trawle, 1986).

Hydrographic Survey Techniques

Guidelines and specifications for creating bathymetric surveys are listed in a USACE document, “Engineering and Design: Hydrographic Surveying.” This document contains a full description of survey collection methodology, survey planning, sediment volume estimation, accuracy standards, and design engineering concepts. Last published in 2002, it was created to ensure consistency across all survey projects maintained by the USACE.
Hydrographic surveying is performed to determine the underwater characteristics of a project site; in this application, the underwater topography of a navigation channel is needed. Surveys are critical to the planning and design of a channel and especially critical to payment for sediment removal (USACE, 2002). Survey types include payment surveys, plans and specifications surveys, before- and after-dredging surveys, project condition surveys, river stabilization surveys, underwater obstruction surveys, coastal engineering surveys, and wetland surveys. Each survey type serves a specific purpose in the planning and implementation of sediment removal, shoreline stabilization, or sediment calculations, and price estimates for awarded contracts are based on surveys.
A hydrographic survey is essentially a topographic map of an underwater surface (USACE, 2002). The USACE collects data with the same methodology as most construction and survey companies: surveys are locations recorded in three-dimensional space as X-Y-Z points. Cross-sections of navigation channels are taken in the same manner as for highway construction, as parallel transects that run perpendicular to the channel or highway direction (USACE, 2002). The locations of survey points are recorded using a differential GPS aboard a survey vessel, and all survey data are stored electronically in a digital terrain-mapping format.
The main characteristic that separates hydrographic surveys from land surveys is the datum. Land surveys can readily reference coordinates to a datum because the land surface is essentially static. Water levels, in contrast, rise and fall with the tides, so soundings must be corrected to a constant survey plane, which in hydrographic mapping applications is the water’s surface. To achieve a water surface datum, the survey plane must be referenced to a known ground point. Surveys are therefore corrected with tidal factors, produced by the National Ocean Service, and adjusted to report depths relative to mean lower low water (USACE, 2000). Error and uncertainty are key components in fully addressing a survey’s accuracy. Error-causing components include the measurement method (acoustic or manual), sea state, water temperature, salinity, transducer beam width, bottom irregularity, and heave, pitch, and roll motions (USACE, 2002).
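As a simple illustration, the tidal reduction described above amounts to subtracting the observed water level above the datum from each raw sounding. The sketch below is hypothetical and not drawn from the USACE manual; the function name and values are illustrative.

def reduce_to_mllw(raw_depth_ft: float, tide_above_mllw_ft: float) -> float:
    """Return the charted depth referenced to mean lower low water (MLLW).

    raw_depth_ft       -- depth measured below the water surface
    tide_above_mllw_ft -- observed water level above the MLLW datum at the
                          time of the sounding (from a National Ocean
                          Service tide gauge)
    """
    # The water surface sits tide_above_mllw_ft above the datum, so the
    # depth relative to MLLW is shallower than the raw sounding by that
    # amount.
    return raw_depth_ft - tide_above_mllw_ft

# A 52.0 ft sounding taken when the tide is 3.2 ft above MLLW charts as 48.8 ft.
print(reduce_to_mllw(52.0, 3.2))
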
Vertical and horizontal accuracy must both be addressed to properly record a depth at a known point. Overall accuracy depends on physical conditions along with the systematic and random errors associated with hydrographic surveying; because of this, comparing survey points at the same location from one year to the next is subject to inaccuracy and criticism (USACE, 2002). Error is measured as a three-dimensional ellipsoid, shown below in Figure 2.3-1. Most of the error in hydrographic surveys lies in the X-Y plane, because depth-recording equipment is more accurate than the differential GPS receivers used for positioning. The total allowable error for the singlebeam and multibeam surveys used in this research is ±0.9 to ±2.0 feet, the standard tolerance for coastal and offshore channels with a tide range of 4 to 8 feet (USACE, 2002).

Interpolation

Overview

Section 2.4 discusses all of the spatial interpolation methods used in this research. Spatial interpolation is used to estimate the value of a phenomenon over a continuous space; in this application, it converts point depth recordings into continuous surfaces, which are later used to determine sediment volumes. The first form of spatial interpolation was contour mapping, a two-dimensional representation of a three-dimensional surface (Collins, 1995; Lam, 1983). Spatial interpolators differ in methodology, in perspective (assuming local or global influence), and in their deterministic or stochastic nature. Deterministic interpolators provide error measurements produced by testing known point values through cross-validation and validation, while stochastic interpolators produce error statistics on the estimated values themselves, forecasting the error expected from the interpolation process (Collins, 1995). The performance of each interpolator depends on the arrangement, density, and variability of the point data. According to Collins (1995), the choice of spatial interpolator depends on the following:

  • The planimetric (x, y) and the topographic (z) accuracy of the data;
  • The spatial arrangement and density of the data;
  • Prior knowledge of the surface to be interpolated;
  • Accuracy requirements of the surface being interpolated; and
  • Computation and time limitations to calculate each surface.

If data measurement error is not addressed, or if the data are insufficiently sampled, differences between interpolation techniques cannot be justified. Data density and observational error play the largest roles in spatial interpolation performance (Collins, 1995). All of the equations listed in this chapter are from Using ArcGIS Geostatistical Analyst (Johnston, Ver Hoef, Krivoruchko, & Lucas, 2001); Geostatistical Analyst is the ArcGIS 8.2 extension used in this research to perform all interpolation testing.
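The cross-validation used to compare interpolators can be illustrated with a generic leave-one-out loop. This sketch is not the Geostatistical Analyst implementation; the interpolate argument is a placeholder for any of the methods discussed in this chapter.

import numpy as np

def loo_rmse(points, values, interpolate):
    """Leave-one-out cross-validation RMSE for any point interpolator.

    points      -- (n, 2) array of x, y survey locations
    values      -- (n,) array of depths at those locations
    interpolate -- callable(train_pts, train_vals, query_pts) returning
                   predicted values at query_pts
    """
    n = len(points)
    errors = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i                        # hold out point i
        pred = interpolate(points[mask], values[mask], points[i:i + 1])
        errors[i] = pred[0] - values[i]                 # prediction error
    return float(np.sqrt(np.mean(errors ** 2)))

Any interpolator exposing this signature, including the IDW sketch in the next subsection, can be compared on equal terms with this loop.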

Inverse Distance Weighting

Inverse distance weighting (IDW) is a deterministic interpolation method in which values at unsampled locations are calculated from known points using a weighting function within a search neighborhood. Points closer to the prediction location have more influence on the predicted value than points farther away (Johnston, Ver Hoef, Krivoruchko, & Lucas, 2001). IDW is one of the simpler interpolation techniques in that it does not require prior modeling of spatial structure, as kriging does (Tomczak, 1998). The IDW prediction and its weighting function are given below.
\hat{Z}(s_0) = \sum_{i=1}^{N} \lambda_i Z(s_i), \qquad \lambda_i = \frac{d_{i0}^{-p}}{\sum_{j=1}^{N} d_{j0}^{-p}}, \qquad \sum_{i=1}^{N} \lambda_i = 1

Here \hat{Z}(s_0) is the predicted value at location s_0, Z(s_i) are the values measured at locations s_i, N is the number of measured points in the search neighborhood, and d_{i0} is the distance between the prediction location s_0 and each measured location s_i. As the distance increases, the weight decreases by the factor d_{i0}^{-p}.
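A direct NumPy translation of these two equations might look like the sketch below. Evaluating against all known points rather than a bounded search neighborhood, and using a fixed power p, are simplifications relative to the Geostatistical Analyst implementation.

import numpy as np

def idw_predict(known_pts, known_vals, query_pts, p=2.0):
    """Inverse distance weighted prediction at each query location.

    known_pts  -- (n, 2) array of measured locations s_i
    known_vals -- (n,) array of measured values Z(s_i)
    query_pts  -- (m, 2) array of prediction locations s_0
    p          -- power parameter; larger p concentrates weight on the
                  nearest points
    """
    preds = np.empty(len(query_pts))
    for k, s0 in enumerate(query_pts):
        d = np.linalg.norm(known_pts - s0, axis=1)     # distances d_i0
        if np.any(d == 0.0):
            preds[k] = known_vals[d == 0.0][0]         # exact interpolator
            continue
        w = d ** -p                                    # unnormalized weights
        preds[k] = np.dot(w, known_vals) / w.sum()     # the lambda_i sum to 1
    return preds
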
The power parameter used in this thesis is set automatically: using a sample of neighborhood search points, the weights are adjusted to find the value of p that produces the lowest root-mean-square prediction error for the sample.
IDW is an exact interpolator that tends to produce surfaces with a bull’s-eye appearance: because the known observations closest to the prediction point receive the most influence, circular rings form around data points (Tomczak, 1998; Collins, 1995). This visual effect is most obvious when interpolating sparse datasets over a large spatial extent.

Splines

Splines, also known as radial basis functions (RBFs), are deterministic interpolators that attempt to fit a surface through each observation in a dataset while minimizing the total curvature of the surface (Johnston, Ver Hoef, Krivoruchko, & Lucas, 2001; Cressie, 1993; Davis, 1986; Collins, 1995). Splines are well suited to calculating surfaces from large sets of data points on a gently sloping surface such as elevation or, in this application, sea floor depths. They are inappropriate for datasets that are prone to error or that contain large or rapid changes in surface values. Splines were popular in the 1970s and 1980s because they were computationally efficient (Collins, 1995).
Spline functions force the surface to pass through each point but do not provide estimates of error as most kriging methods do (Cressie, 1993). Splines are also very effective for producing surfaces from regularly spaced data. Unlike IDW, splines can estimate surface values above the maximum and below the minimum measured values, because they follow a curvature produced by the values of the observations. IDW cannot predict values beyond the measured range, since the influence of each observation decreases with distance from the known point, creating the bull’s-eye effect. Splines calculate surfaces by producing weighted averages of neighboring locations while passing through the known locations (Johnston, Ver Hoef, Krivoruchko, & Lucas, 2001).
Five different spline functions (Completely Regularized Spline, Spline with Tension, Multiquadric Spline, Inverse Multiquadric Spline, and Thin Plate Spline) were used in this thesis. Though little difference exists among the spline equations, each was tested because of the spline family’s documented success with large, uniform datasets like the hydrographic surveys used in this research. A smoothing parameter determines the smoothness of the calculated surface; with the exception of the Inverse Multiquadric Spline, the higher the parameter, the smoother the surface. The smoothing parameter σ is found by minimizing the root-mean-square prediction error using cross-validation (Johnston, Ver Hoef, Krivoruchko, & Lucas, 2001).
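For illustration, three of these five variants have rough counterparts among the kernels of SciPy’s RBFInterpolator; the Completely Regularized Spline and Spline with Tension are ArcGIS-specific and have no direct SciPy kernel. The synthetic data and epsilon value below are illustrative, not drawn from this research.

import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
known_pts = rng.uniform(0.0, 1000.0, size=(200, 2))    # synthetic x, y locations
known_vals = 50.0 + 0.005 * known_pts[:, 0]            # gently sloping "sea floor"
grid = np.mgrid[0:1000:50j, 0:1000:50j].reshape(2, -1).T

# epsilon is the shape parameter required by the (inverse) multiquadric
# kernels; smoothing=0.0 forces the surface through every observation.
for kernel in ("thin_plate_spline", "multiquadric", "inverse_multiquadric"):
    extra = {} if kernel == "thin_plate_spline" else {"epsilon": 100.0}
    rbf = RBFInterpolator(known_pts, known_vals, kernel=kernel,
                          smoothing=0.0, **extra)
    surface = rbf(grid)                                # predicted depths on the grid
    print(kernel, surface.min(), surface.max())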

Kriging

Kriging is synonymous with “optimal prediction” in that it attempts to make inferences about unobserved values (Cressie, 1993). Kriging is a stochastic technique that uses a linear combination of weights at known points to estimate the value at an unknown point (Collins, 1995). These weights are derived from a semivariogram, a measure of the spatial correlation between pairs of points; greater weight is given to points with similar directional influence and distance. The semivariogram describes the level of spatial autocorrelation, that is, the dependence between sample values, which decreases as the distance between observations increases (Lam, 1983).
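A sketch of the empirical semivariogram computation that underlies these weights is shown below; the binning scheme is illustrative rather than the Geostatistical Analyst implementation.

import numpy as np

def empirical_semivariogram(points, values, bin_width, n_bins):
    """Empirical semivariogram from scattered observations.

    For each lag bin, gamma(h) = (1 / (2 |N(h)|)) * sum of (z_i - z_j)^2
    over the pairs (i, j) whose separation falls in the bin around h.
    """
    i, j = np.triu_indices(len(points), k=1)            # all unique pairs
    d = np.linalg.norm(points[i] - points[j], axis=1)   # pair separations
    sq = (values[i] - values[j]) ** 2                   # squared differences
    lags, gammas = [], []
    for b in range(n_bins):
        in_bin = (d >= b * bin_width) & (d < (b + 1) * bin_width)
        if in_bin.any():
            lags.append((b + 0.5) * bin_width)          # bin midpoint
            gammas.append(sq[in_bin].mean() / 2.0)      # semivariance
    return np.array(lags), np.array(gammas)
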
Kriging is named for D. G. Krige, a South African mining engineer. Krige used empirical weighting to estimate the ore content of mined rock by comparison with known values sampled during earlier mining exploration (Cressie, 1993; Collins, 1995; Lam, 1983). Later, G. Matheron, a French mathematician, formalized Krige’s methodology into a technique he dubbed “kriging,” the theory of the behavior of spatially distributed variables (Collins, 1995).
