Convergence of Information Theory, Communications Theory, Estimation Theory and Control Theory


Network-aware Control

By network-aware control we mean that the control system takes measures to optimize the use of the shared communication network over which control information is exchanged among sensors, controllers and actuators, a network subject to constraints such as delay, jitter, packet loss and out-of-sequence packets. The idea is to modify the control algorithms to cope with these communication imperfections, e.g., control with time-stamped sensor data. The communication imperfections in a single control loop include delay and jitter, bandwidth limitations, data loss and bit errors, and outages and disconnections. To realize a network-aware controller, the conventional controller must be modified to cope with communication imperfections in the following ways:
1. Control under varying network delay
2. Control under data loss
3. Control under bandwidth limitation
4. Control under topology constraints
A network-aware control architecture, which is essentially an architecture for control over communication networks, needs to be designed and analyzed; it will look like Fig. (3.2). Here we have to estimate the network state in the presence of network delay, data-loss probability and limited bandwidth.
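The modifications listed above can be sketched in simulation. A minimal sketch, assuming a hypothetical scalar plant, a proportional gain, and Bernoulli packet loss (all names and numbers are illustrative choices, not from the thesis): the controller runs a model-based prediction between packets and resets it whenever a sensor measurement arrives, so packet loss degrades, but does not destroy, closed-loop performance.

```python
import random

def simulate(p_loss, steps=500, seed=1):
    """Average quadratic state cost of a lossy networked control loop."""
    rng = random.Random(seed)
    a, b, k = 1.1, 1.0, 0.6        # unstable plant x+ = a x + b u, proportional gain
    x, x_hat = 1.0, 1.0            # true state and controller-side estimate
    cost = 0.0
    for _ in range(steps):
        if rng.random() > p_loss:  # sensor packet arrives: refresh the estimate
            x_hat = x
        u = -k * x_hat             # control computed from the best estimate
        w = rng.gauss(0.0, 0.1)    # process noise
        x = a * x + b * u + w
        x_hat = a * x_hat + b * u  # model-based prediction between packets
        cost += x * x
    return cost / steps

print(simulate(0.0))   # lossless network
print(simulate(0.3))   # 30% packet loss: estimate drifts, cost rises
```

Because the same seed drives both runs, the comparison isolates the effect of the loss probability alone.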

Quantized Control Systems

The literal definition of quantization is the division of a quantity into a discrete number of small parts, often assumed to be integral multiples of a common quantity. The oldest example of quantization is rounding off: any real number x can be rounded off to the nearest integer q(x), say, with a resulting quantization error e = q(x) − x, so that q(x) = x + e. More generally, we can define a quantizer as consisting of a set of intervals or cells S = {S_i ; i ∈ I}, where the index set I is ordinarily a collection of consecutive integers beginning with 0 or 1, together with a set of reproduction values or points or levels C = {y_i ; i ∈ I}, so that the overall quantizer is defined by q(x) = y_i for x ∈ S_i, which can be expressed concisely [Gray and Neuhoff, 1998] as q(x) = Σ_i y_i 1_{S_i}(x),
where the indicator function 1_S(x) is 1 if x ∈ S and zero otherwise. Because of the quantization performed in the communication channel of a networked control system, quantization errors arise which degrade control performance. A quantized measurement q(x) of a real number x can be treated as a partial observation of x, or as an entity containing a limited quantity of information about x [Delchamps, 1990]; this explanation, however, is more qualitative than quantitative. Due to the finite bandwidth we have to restrict the number of bits assigned to the quantization, which has a direct bearing on the quantization accuracy. Information-theoretic source coding reduces the number of bits needed to represent x, based on its entropy: entropy puts a lower limit on the number of bits required so that the original value of x can be reconstructed within a specified tolerable error probability or bit-error rate, called the fidelity.

The quality of a quantizer can be measured by the goodness (fidelity) of the resulting reproduction in comparison with the original. One way of accomplishing this is to define a distortion measure d(x, x̂) that quantifies the cost (performance cost) or distortion resulting from reproducing x as x̂, and to consider the average distortion as a measure of the quality of the system, smaller average distortion meaning higher quality. The most common distortion measure is the squared error d(x, x̂) = ‖x − x̂‖². In practice the average will be a sample average when the quantizer is applied to a sequence of real data, but the theory views the data as sharing a common probability density function (pdf) f(x) corresponding to a generic random variable X, and the average distortion D(q) [Gray and Neuhoff, 1998] becomes an expectation:

D(q) = E[d(X, q(X))] = ∫ d(x, q(x)) f(x) dx.
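The cell/level definition above can be checked numerically. A minimal sketch, assuming a uniform quantizer with midpoint reproduction levels and a standard Gaussian source (both illustrative choices): the Monte-Carlo estimate of the average squared-error distortion tracks the classical high-resolution value step²/12.

```python
import math
import random

def q(x, step):
    """Quantizer q(x) = y_i for x in S_i = [i*step, (i+1)*step), with y_i at the cell midpoint."""
    i = math.floor(x / step)
    return (i + 0.5) * step

def avg_distortion(step, n=100_000, seed=0):
    """Monte-Carlo estimate of D(q) = E[(X - q(X))^2] for X ~ N(0, 1)."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return sum((x - q(x, step)) ** 2 for x in xs) / n

for step in (0.5, 0.25, 0.125):
    print(step, avg_distortion(step))   # distortion shrinks roughly as step^2 / 12
```

Halving the cell width quarters the distortion, which is the quantitative face of the bandwidth/accuracy trade-off described above.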

Information and Entropy

Ideally, we would like the message to have low entropy, by having prior information, so that there is less information for the decoder to extract from the signal. The intuition is that low entropy implies better predictability, and better predictability means that our prior knowledge is quite strong. However, to get the message across intact, we would like the messenger to have high energy so that the signal-to-noise ratio is favourable (high mutual information means informative likelihoods). The intuition in the case of signaling is that we want to reduce the effect of the noise, which we do by having a large mutual information between the input and output of the channel. Unfortunately, when we restrict ourselves to affine controllers for this problem, these two objectives are in direct opposition: an affine controller implies a Gaussian state, and for a Gaussian random variable, high energy implies high entropy and low entropy implies low energy.
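The tension can be made concrete with the Gaussian closed forms (the variances and noise power below are illustrative choices): for X ~ N(0, P) the differential entropy is h(X) = ½·ln(2πeP), and over an additive Gaussian noise channel the mutual information is I(X;Y) = ½·ln(1 + P/N), so both grow with the signal power P, i.e., the two objectives pull on the same knob in opposite directions.

```python
import math

def gaussian_entropy(var):
    """Differential entropy h(X) = 0.5*ln(2*pi*e*var) for X ~ N(0, var)."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

def awgn_mutual_info(p_signal, p_noise=1.0):
    """I(X;Y) = 0.5*ln(1 + P/N) for the Gaussian channel Y = X + noise."""
    return 0.5 * math.log(1 + p_signal / p_noise)

for p in (0.1, 1.0, 10.0):
    print(p, gaussian_entropy(p), awgn_mutual_info(p))
# both columns increase with the signal power p: raising mutual information
# (good for signaling) necessarily raises entropy (bad for predictability)
```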
There is one important difference between the continuous and discrete entropies. In the discrete case, the entropy measures the randomness of the random variable in an absolute way. In the continuous case, the measurement is relative to the coordinate system: if we change coordinates, the entropy will in general change. Since the mutual information is the difference of two entropies, it is coordinate-independent, the common terms cancelling under subtraction, whereas the entropy is coordinate-dependent.
In spite of this dependence on the coordinate system, the entropy concept is as important in the continuous case as in the discrete case. This is because the derived concepts of information rate and communication channel capacity depend on the difference of two entropies, and this difference does not depend on the coordinate frame, each of the two terms being changed by the same amount. Hence we would like to use mutual information, which is related to entropy, in defining and correlating some important control parameters of the system.
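This invariance can be verified numerically with Gaussian closed forms (the unit signal variance, noise variance and scale factor below are illustrative choices): rescaling the coordinate x → a·x shifts the differential entropy by ln|a|, while the mutual information of the channel Y = X + N, with both variables rescaled, is unchanged because the SNR P/N is unchanged.

```python
import math

def h_gauss(var):
    """Differential entropy of N(0, var)."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

def mi_awgn(sig_var, noise_var):
    """Mutual information 0.5*ln(1 + P/N) of the Gaussian channel."""
    return 0.5 * math.log(1 + sig_var / noise_var)

a = 3.0  # coordinate scaling x -> a*x multiplies every variance by a**2
print(h_gauss(a**2 * 1.0) - h_gauss(1.0))                   # = ln(a): entropy shifts
print(mi_awgn(1.0, 0.5), mi_awgn(a**2 * 1.0, a**2 * 0.5))   # equal: MI is invariant
```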


Table of contents :

1 Introduction 
1.1 Literature Overview
1.2 Overview of Thesis
1.3 Outline and Publications
1.3.1 Outline
1.3.2 Publications
2 Basic Concepts Used 
2.1 Introduction
2.2 Information Theory
2.3 Estimation Theory
2.4 Control Theory
2.5 Conclusion
3 Control under Communication Constraints 
3.1 Introduction
3.2 Control over Networks
3.2.1 Control-aware Network
3.2.2 Network-aware Control
3.3 Quantized Control Systems
3.4 Conclusion
4 Information-theoretic Analysis of Quantized Control 
4.1 Introduction
4.2 Information and Entropy
4.2.1 Relation between Differential Entropy and Discrete Entropy
4.2.2 Entropy and Estimation Error Relation
4.3 On the Value of Information
4.4 Extracting State Information from Quantized Output
4.5 Information-Theoretic Rate Requirement under Asymptotic Quantization
4.5.1 Justification for the Equation (4.3) of Mutual Information
4.6 Certainty Equivalence Principle
4.7 Quantization and Mean Square Error
4.8 Conclusion
5 Convergence of Information Theory, Communications Theory, Estimation Theory and Control Theory 
5.1 Introduction
5.2 Convergence of Information, Communication, Estimation and Control
5.3 Achievable Rate of Information Transfer and Bode’s Integral Formula
5.4 Relation among Entropy, Unstable Poles and Bode-Integral
5.4.1 Relation between Entropy and Bode-Integral
5.4.2 Relation between Bode-Integral and Unstable Poles
5.5 Entropy-Theoretic Control
5.6 Mutual Information and Optimal Estimation Error
5.7 Conclusion
6 Information-Theoretic View of Control 
6.1 Introduction
6.2 Problem Formulation
6.3 Preliminaries
6.3.1 Mutual Information
6.3.2 Shannon Entropy
6.3.3 Bode Integral
6.4 Information Induced by Controllability Gramian
6.5 Conclusion
7 Relation of FIM and Controllability Gramian 
7.1 Introduction
7.2 Problem Formulation
7.3 Preliminaries
7.3.1 Fisher Information Matrix
7.3.2 Fisher Information as Metric for Accuracy of a Distribution
7.4 Information Induced by FIM and Controllability Gramian
7.5 Conclusion
8 Degree of Reachability / Observability as a Metric for Optimal Integrated Control and Scheduling of Networked Control Systems 
8.1 Introduction
8.2 Problem Formulation
8.2.1 State Representation of Resource-Constrained Systems
8.2.2 Lifting of Periodic Systems
8.2.3 Reachability / Observability of NCS
8.3 Optimal Off-Line Scheduling of Control / Sensor Signals
8.3.1 An Example
8.3.2 Results Obtained
8.4 Optimal On-Line Control and Scheduling of Control Signals
8.5 Conclusions and Perspectives
9 Conclusion 
9.1 Summary
9.2 Future work
Appendices
A Analysis of First-Order Information 
B Receding Horizon Optimal Control with Constraints 
B.1 Introduction
B.2 The Receding Horizon Control Principle
B.3 Stability of Receding Horizon Optimal Control
B.4 Conditions for Stability
B.5 Terminal Conditions for Stability
B.6 Conclusion
Bibliography
