Statistical Pattern Recognition for Structural Health Monitoring

Statistical pattern recognition is a topic of research in engineering, statistics, computer science, and even the social sciences. It covers the design of systems that recognize patterns in data. In the area of structural health monitoring (SHM), this process typically involves recognizing patterns that differentiate data collected from a structure in its undamaged and damaged states. Statistical pattern recognition appeals to researchers in SHM because it can quantify and automate the decision-making process in the presence of uncertainty. This ability leads naturally to the design of integrated hardware and software systems that can continuously monitor a structure's health. There are four basic steps to the SHM statistical pattern recognition paradigm: operational evaluation, data cleansing and normalization, feature extraction, and feature analysis (Farrar, 2000).

Operational evaluation is the process of defining the damage that is to be detected and providing justification, typically either economic or safety related, for performing the SHM. The operational evaluation portion of the statistical pattern recognition paradigm is not covered in this chapter because it depends on the sensing hardware and the structure tested; it is briefly covered in the hardware and experimental chapters.

Data cleansing, normalization, compression, and fusion can all be part of the last three steps in this process. During data collection, a sensor array measures the system response, such as an acceleration time history, at points on the structure. These data are then cleansed (e.g., filtered) and normalized (e.g., by subtracting the mean and dividing by the standard deviation) before features are extracted from the data. Data cleansing and normalization are performed in an effort to separate operational and environmental variability from system response changes caused by damage.
For example, a building's air handling unit may produce unwanted acoustic signals at a known frequency that can be detected by the SHM sensing system and mistaken for the onset of damage. Data cleansing filters are then applied to remove these acoustic signals in an effort to reduce false indications of damage. Next, a feature that is sensitive to the onset of damage in the system is extracted from the data. If, for example, the structure typically behaves in a linear fashion, then a feature that indicates the transition to nonlinear behavior may be extracted and used as an indication of damage.

Feature analysis is performed to identify when changes in the measured response are statistically significant and indicate damage, as opposed to changing operational and environmental conditions. Almost all of the statistical methods employed for SHM provide an indication of damage when the damage-sensitive feature exceeds a statistically determined threshold. To establish these statistical thresholds, the process must be trained on baseline data collected from the structure in an undamaged state. This chapter describes the various data cleansing, data normalization, feature extraction, and statistical discrimination techniques that are included in the software developed as part of this study.
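The threshold-based decision step described above can be sketched in a few lines. The following is a minimal illustration, not the software developed in this study: the baseline feature values are simulated rather than extracted from measured data, and the simple mean-plus-three-standard-deviations limit stands in for the control charts and extreme value methods treated later in the chapter. The names `baseline_features` and `indicates_damage` are introduced here purely for illustration.

```python
import random
import statistics

random.seed(0)

# Simulated training data: a damage-sensitive feature (e.g. a residual
# variance) computed on many windows of response data collected from
# the structure in its undamaged state.
baseline_features = [random.gauss(1.0, 0.1) for _ in range(200)]

# Statistically determined threshold from the baseline (training) data.
# Mean + 3 standard deviations is a simple, common choice; the chapter
# later develops more principled limits (control charts, extreme value
# statistics).
mu = statistics.mean(baseline_features)
sigma = statistics.stdev(baseline_features)
threshold = mu + 3.0 * sigma

def indicates_damage(feature_value: float) -> bool:
    """Flag damage when the extracted feature exceeds the trained threshold."""
    return feature_value > threshold

print(indicates_damage(mu))              # typical baseline value -> False
print(indicates_damage(mu + 5 * sigma))  # large excursion -> True
```

Note that the false-alarm rate of such a scheme is set entirely by the chosen threshold and the baseline distribution, which is why the training data must represent the full range of undamaged operational and environmental conditions.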

Data cleansing and normalization

Known effects that alter the data must be removed to allow accurate modeling of the underlying system response. These known effects can be environmental or operational, such as thermal changes throughout the day or a DC offset in the measuring equipment. Such patterns are not of interest in the SHM problem because they are not related to damage in the structure. Often one or more data cleansing and normalization techniques are applied to the data.

Data collected from natural environments can often display exponential growth, seasonal drift, or cyclic patterns. In some cases only certain frequency ranges may be of interest, or the data may need to be differenced to remove polynomial trends. These techniques all fall under data cleansing, the process of eliminating unwanted components from the data. Possible influences that would require data cleansing include temperature drift, known inputs from machinery, or other environmental influences.

Data that display a shifting of the mean from zero, such as a DC offset, or a scalar change in amplitude from one data set to the next can be normalized. Data normalization is the process of scaling the data to facilitate a more direct comparison of different data sets. By subtracting the mean of the entire data set and dividing by the standard deviation, all of the data sets are re-scaled to zero mean and common amplitude. Data may also display a logarithmic increase in amplitude that can be removed by a log transformation. More complicated changes in the data may require more sophisticated normalization techniques such as neural networks (Sohn, 2003).

The following example demonstrates the data cleansing and normalization process. The original data are numerically generated from a Gaussian white noise process (Figure 1). The data shown in Figure 2 are the original data with a simulated environmental trend, a simple linear trend, added.
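A numerical sketch of this cleansing-and-normalization sequence is given below. It mirrors the example in the text under stated assumptions: the "environmental effect" is a linear trend added to Gaussian white noise, the cleansing step removes that trend by a least-squares fit (differencing would also remove a polynomial trend), and the normalization step is the mean-subtraction and standard-deviation scaling described above. The variable names and the trend slope are illustrative, not taken from the study's data.

```python
import random
import statistics

random.seed(1)
n = 500

# Original data: numerically generated Gaussian white noise (cf. Figure 1).
noise = [random.gauss(0.0, 1.0) for _ in range(n)]

# Simulated environmental effect: a simple linear trend added to the
# noise (cf. Figure 2). The slope 0.01 is an arbitrary illustration.
trended = [x + 0.01 * i for i, x in enumerate(noise)]

# Data cleansing: fit the linear trend by least squares and subtract it.
t_mean = (n - 1) / 2.0
y_mean = statistics.mean(trended)
slope = (sum((i - t_mean) * (y - y_mean) for i, y in enumerate(trended))
         / sum((i - t_mean) ** 2 for i in range(n)))
detrended = [y - (y_mean + slope * (i - t_mean))
             for i, y in enumerate(trended)]

# Data normalization: subtract the mean and divide by the standard
# deviation, re-scaling the record to zero mean and unit amplitude.
mu = statistics.mean(detrended)
sigma = statistics.stdev(detrended)
normalized = [(y - mu) / sigma for y in detrended]
```

After these two steps, records collected under different offsets or gain settings can be compared directly, since each has been reduced to zero mean and unit standard deviation.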


1 Introduction
1.1 Motivation
1.2 A Review of Selected Literature
1.2.1 Wired transmission
1.2.2 Wireless transmission
1.3 Scope and overview
1.4 Contributions
2 Statistical Pattern Recognition for Structural Health Monitoring
2.1 Introduction
2.2 Data cleansing and normalization
2.3 Feature extraction 
2.3.1 Time series models
2.3.2 Automation of model order selection
2.3.3 Damage sensitive feature
2.4 Statistical modeling for feature discrimination
2.4.1 Extreme value statistics
      Methodology
      Numeric example
      Lognormal distribution
2.4.2 Control charts
2.4.3 Sequential hypothesis tests
2.4.4 Sequential probability ratio test
2.4.5 Application to extreme value distributions
      Numerical Examples
      Lognormal parent distribution
2.5 Summary 
2.6 Contributions
3 Client Side Software Environment 
3.1 Introduction
3.2 Development of GLASS Technology
3.2.1 Developing with an object oriented approach
3.2.2 Graphically prototyping algorithms
3.3 Summary 
3.4 Contributions
4 Node Software Integrated with Sensing and Processing Hardware for Inline Monitoring
4.1 Introduction
4.2 Hardware
4.2.1 Single board computer
4.2.2 Sensing board
4.2.3 Transmission board
4.3 Software
4.3.1 Embedding overview
4.3.2 GLASS node software
4.3.3 Client integration
4.3.4 Communication and interaction
4.3.5 Execution of a process
4.3.6 Hardware integration
4.3.7 Communication of results
4.4 Summary 
4.5 Contributions
5 Experimental Application 
5.1 Introduction
5.2 Experimental setup
5.2.1 Test structure
5.3 Benchmarking
5.4 Structural health analysis
Unclassified: LA-UR-04-5697
5.4.1 Operational evaluation
5.4.2 Data acquisition
5.4.3 Training feature extraction
5.4.4 Training statistical modeling
5.4.5 Testing process
5.5 System Performance
5.6 Summary 
5.7 Contributions
6 Summary
6.1 Contributions