Extensions of the two-wheeled robot problem


Controller Design

This work focuses on a two-wheeled robot subject to limits on system resources, in particular limited traction with the ground surface. Other limits, such as motor constraints, also apply. The controller objectives are, first, to prevent the robot from toppling and, second, to track a velocity reference. The aim of this research is to expand the two-wheeled robot's range of operation to a greater variety of terrain, specifically ground surfaces with limited traction. Assuming the robot does not topple, a controller's performance is judged by how well it tracks the reference velocity, and the effect of traction limits on tracking performance should be minimized wherever possible.
There is generally a trade-off between design complexity, performance and implementation complexity, giving rise to different design choices. For example, to meet the first objective when traction is limited, a linear controller with low gain can be used, or the reference velocity can be limited. However, such a simple solution affects reference tracking performance. The two main types of controllers described here are the LQR and MPC controllers.
Past research has used a range of controllers, either as a proof of concept on a two-wheeled robot or to demonstrate particular advantages in performance, design process or stability. Among non-linear controllers, the sliding mode controller is the most common, used for its robustness to uncertainties, while a back-stepping controller is often used to prove Lyapunov stability of the non-linear system. Linear feedback controllers are used for their simplicity, although they differ in their design process. However, the controllers used by other researchers do not account for constraints in the system. The presence of constraints affects the stability of nearby states: while a region of stability can be found around the origin, the reference velocity is a source of excitation with the potential to destabilize the robot. Although the controller gain can be decreased to reduce excitation from the reference, this degrades performance. A reference governor can be used to prevent excessive excitation. A sophisticated reference governor might exploit a region of Lyapunov stability, found by analysing the control system, to determine how quickly the reference velocity can change.
Instead of combining several components, an MPC controller is proposed as an integrated approach which explicitly accounts for system constraints. As a comparison, the LQR controller was chosen as a baseline, since it is simple but still an optimal controller that can operate with the same cost function as the MPC controller. A reference governor is used with LQR to prevent excessive excitation in low-traction situations. The stable region of state-space may be underestimated by an analytical proof of Lyapunov stability, and a theoretically stable region can in any case be compromised by model uncertainties and sensor noise. As an alternative to an analytically proven region of stability, a simple set of rules, tuned by trial and error, was used for the reference governor. The purpose of the LQR controller is to provide a simple baseline against which to compare the performance of the MPC controller.

Reference-tracking Linear Quadratic Regulator

For the experiments on low-traction surfaces, the rate at which the position component of the reference vector is integrated was limited, preventing excessive overshoot due to integral wind-up (described in Chapter 6).
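As an illustration only, the following Python sketch shows one simple way such a rate limit could be realized; the function, its parameters and the max_lead bound are hypothetical and are not taken from the thesis.

```python
import numpy as np

def update_position_reference(p_ref, v_ref, p_measured, dt, max_lead=0.5):
    """Integrate the velocity reference into a position reference, but
    prevent the reference from running too far ahead of the measured
    position (a simple anti-wind-up limit on the integration).

    max_lead is a hypothetical tuning value, not a figure from the thesis.
    """
    p_ref = p_ref + v_ref * dt
    # Clamp how far the integrated reference may lead (or lag) the robot.
    return float(np.clip(p_ref, p_measured - max_lead, p_measured + max_lead))
```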
The LQR controller finds the state-feedback gain matrix that minimizes a quadratic cost function for a linear system. Since the linearized model is not exact, the result is only locally optimal around equilibrium. The process of designing an LQR controller is simple: linearize the system and solve the algebraic Riccati equation.
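As an illustration of that process, the sketch below solves the discrete algebraic Riccati equation with SciPy for a placeholder linearized model; the matrices A, B, Q and R shown are illustrative values, not the thesis's actual model or weights.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Placeholder discrete-time linearized model x[k+1] = A x[k] + B u[k]
# (in practice A and B come from linearizing the robot model about equilibrium).
A = np.array([[1.0, 0.01],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.01]])

# Quadratic cost weights: Q penalizes state error, R penalizes input effort.
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

# Solve the discrete algebraic Riccati equation for the cost-to-go matrix P.
P = solve_discrete_are(A, B, Q, R)

# Optimal state-feedback gain, applied as u[k] = -K x[k].
K = np.linalg.inv(R + B.T @ P @ B) @ (B.T @ P @ A)
print("LQR gain K =", K)
```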
A discrete-time LQR controller was designed to match the model predictive controller (MPC). By choosing the same cost function, both controllers have the same performance around equilibrium, so the effect of the MPC's handling of constraints is isolated for comparison purposes.
Given the linearized system, the LQR controller also requires a cost function. The most important ratio is the weighting of state error relative to input effort, which determines the overall gain of the controller. Normally, a higher gain is desirable because the system tracks faster. However, there is a practical limit, which depends on the sensor dynamics and the overall lag in the system: as gain increases, high-frequency noise is amplified until the system becomes unstable. A suitable set of gains was found by trial and error, taking the highest gain that does not produce high-frequency vibrations.

The reference governor is placed between the raw reference velocity and the LQR controller, as shown in Figure 4.2. Reference signals that could cause the LQR controller to violate system limits are subject to saturation, by limiting the maximum velocity reference and the maximum slew rate (a reference acceleration limit). More complex schemes with better performance may exist, but they would not be trivial and would require a deeper understanding of the underlying system dynamics. Although the implementation is simple, choosing the ideal limits is not, and depends on the controller gains chosen.
Due to motor voltage limits, the absolute maximum robot speed is 1.7 m/s. However, when travelling at this speed, the robot would be unable to decelerate. The maximum speed of the MPC controller, which is implicit in its constraints, is 1.5 m/s, leaving a margin of controllability that allows deceleration. Through trial and error in simulation, a maximum reference velocity of 1.2 m/s was chosen for the reference governor. The additional margin gives the LQR controller extra room to decelerate, because it is not explicitly aware of the robot's maximum speed. MPC can tolerate a higher speed limit because it is aware that it must decelerate more gradually when close to the maximum speed.
The reference acceleration limit is chosen depending on the friction coefficient of the ground surface. The ideal maximum for a surface with a given friction coefficient may be found by trial and error. In this work, results were obtained on each surface for a range of reference acceleration limits, to find those for which the robot does not topple.
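A minimal sketch of such a reference governor is shown below, assuming the 1.2 m/s velocity limit from the text and a placeholder acceleration limit; in the thesis the acceleration limit is tuned per surface by trial and error, so the value here is purely illustrative.

```python
import numpy as np

V_MAX = 1.2   # maximum reference velocity in m/s (from the text)
A_MAX = 2.0   # reference acceleration limit in m/s^2 -- placeholder value;
              # in practice this is tuned for each ground surface.

def govern_reference(v_ref_raw, v_ref_prev, dt):
    """Simple reference governor: clamp the raw velocity reference to
    V_MAX and limit its slew rate to A_MAX before passing it to LQR."""
    # Saturate the requested velocity.
    v_ref = np.clip(v_ref_raw, -V_MAX, V_MAX)
    # Limit the rate of change (reference acceleration).
    max_step = A_MAX * dt
    v_ref = np.clip(v_ref, v_ref_prev - max_step, v_ref_prev + max_step)
    return float(v_ref)
```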

Theory of Model Predictive Control

Model predictive controllers (MPC) are optimal, normally discrete-time, controllers that predict how a proposed control sequence will affect the system using an internal model of the plant. Like a linear quadratic regulator (LQR), an MPC controller minimizes a cost function, but it does so over a prediction of future states. This gives MPC some advantages over LQR. By predicting future states, it is possible to guarantee stability [120] with a suitable terminal constraint. An MPC controller also has the inherent ability to explicitly accommodate constraints, which are present in any real system.
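For reference, a generic finite-horizon formulation of this kind is sketched below; the notation (prediction horizon N, terminal weight P, constraint sets X and U) is standard textbook notation assumed here rather than taken from the thesis.

```latex
\min_{u_0,\dots,u_{N-1}} \;
  \sum_{k=0}^{N-1} \left( x_k^\top Q\, x_k + u_k^\top R\, u_k \right)
  + x_N^\top P\, x_N
\qquad \text{s.t.} \qquad
x_{k+1} = A x_k + B u_k, \quad
x_k \in \mathcal{X}, \quad
u_k \in \mathcal{U}, \quad
x_0 = x(t).
```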
In this work, the focus is on linear constrained MPC. Non-linear MPC controllers not only increase computation time but also add significantly greater design complexity.


Advantages and disadvantages

A significant aspect of this work is the use of MPC to handle the constraints that apply to the two-wheeled robot, particularly in a low-traction environment. While ad-hoc methods such as reference governors can be applied to other controllers to avoid constraint violation, including violation of traction constraints, MPC accounts for these constraints directly within its formulation. Compared to linear controllers, MPC has greater implementation complexity and computation time, which leads to a longer sample time. In spite of these disadvantages, its advantages are significant enough to make it a common controller in the process industry. In contrast, PID and other linear controllers are by far the most common in robotics because of their simplicity, which makes them accessible to people without formal training.
MPC provides a formal framework in which to specify the internal model of the system along with constraints, on both inputs and states. Whereas ad-hoc methods such as the reference governor may require an understanding of the underlying system to develop a suitable input filter that avoids constraint violation, the MPC framework provides a standard, direct method to formulate constraints, leading to two specific advantages. Firstly, because the MPC cost function and constraints are specified independently, they can be designed as separate concerns, and the controller can be updated with minimal changes. In contrast, the stability imparted by a reference governor depends on the cost function of the LQR controller it wraps. The separation is such that a locally unstable cost function can still yield a BIBO-stable MPC controller. Secondly, constraints are expressed directly on the quantities that are actually limited. A reference governor, by contrast, constrains the reference itself, which may unnecessarily restrict several resources or state-space dimensions even when only one resource is subject to system limits. More intelligent rules can be designed for the reference governor, but this requires analysis and understanding of the system.
The flexibility and independence of the elements of the MPC controller also enable automated, real-time changes to constraint limits and the cost function as necessary, for example as the traction limit changes. Achieving similar transitions with a reference governor may require a re-design for each constraint, creating more interaction between the various constraints and complicating any update to the design of the controller.
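As a sketch of what such an online update could look like, the snippet below exposes a traction-dependent bound as a cvxpy Parameter, so that only the parameter value, not the problem structure, changes at run time; the variable names, the placeholder cost and the mu*g bound are illustrative assumptions, not the thesis's formulation.

```python
import cvxpy as cp

# Hypothetical traction-dependent bound on wheel acceleration, exposed as a
# Parameter so it can be updated online without rebuilding the problem.
a_max = cp.Parameter(nonneg=True)

u = cp.Variable(10)                   # predicted input sequence (placeholder)
cost = cp.sum_squares(u - 1.0)        # placeholder cost
constraints = [cp.abs(u) <= a_max]    # traction-dependent constraint
problem = cp.Problem(cp.Minimize(cost), constraints)

# When the estimated friction changes, only the parameter value changes.
for mu in [0.8, 0.3]:                 # placeholder friction estimates
    a_max.value = 9.81 * mu           # e.g. a limit proportional to mu * g
    problem.solve()
```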
Finally, in some cases there may be knowledge of future reference inputs or disturbances. This information can be incorporated over the control horizon of the MPC controller to improve the performance of the system. Other actuators mounted on the two-wheeled robot, which are controlled separately, are one possible source of known disturbances.
Although MPC usually leads to a longer sampling time, which reduces its ability to control high frequency dynamics, those dynamics may be managed with a stabilizing inner loop such as the pre-damping applied here (described in §2.3).

Linear constrained Model Predictive Control

Linear constrained MPC controllers, which have a linear internal model and linear constraints, can be formulated as quadratic programs (QPs), for which solvers are readily available. A simple MPC formulation does not account for disturbances, but for a real system, stability may depend on robustness to disturbances or uncertainties. A number of approaches to robust MPC exist in the literature, including min-max MPC, tube MPC and constraint tightening [121-123]. These provide a theoretical guarantee of stability for a bounded disturbance.
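A minimal cvxpy sketch of a nominal (non-robust) linear constrained MPC of this form is given below; cvxpy passes the resulting QP to a standard solver. The model matrices, weights, horizon and input limit are placeholders rather than the values used in this thesis.

```python
import cvxpy as cp
import numpy as np

# Placeholder discrete-time model and weights (not the thesis's actual values).
A = np.array([[1.0, 0.01], [0.0, 1.0]])
B = np.array([[0.0], [0.01]])
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])
N = 20            # prediction horizon (placeholder)
u_max = 1.0       # input limit, e.g. scaled motor voltage (placeholder)

n, m = A.shape[0], B.shape[1]
x = cp.Variable((n, N + 1))           # predicted state trajectory
u = cp.Variable((m, N))               # predicted input sequence
x0 = cp.Parameter(n)                  # measured state, updated each step

cost = 0
constraints = [x[:, 0] == x0]
for k in range(N):
    cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
    constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                    cp.abs(u[:, k]) <= u_max]

problem = cp.Problem(cp.Minimize(cost), constraints)

def mpc_step(x_measured):
    """Solve the QP for the current state and apply the first input only."""
    x0.value = x_measured
    problem.solve()
    return u[:, 0].value
```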

Constraint tightening

Robustness has been identified as a key concern when both uncertainty and constraints are present [122]. Persistent disturbances can cause constraints to be violated in nominal MPC controllers: a nominal MPC may generate a plan which touches the edge of the feasible region, where a small perturbation is enough to push the system outside it. Constraint tightening involves artificially tightening the constraints into the future, reserving a margin of control authority for later use. If there is a bounded disturbance over one sample period, part of the reserved margin can be used to regulate the disturbance to zero within the control horizon. As shown in Figure 4.3, the constraint is monotonically tightened at each predicted sample time.
At each time-step, a fixed portion of the margin is released, which is always sufficient to control the disturbance. This guarantees continuity: a feasible solution in one step ensures that a feasible solution also exists in the next time-step. It is only necessary to ensure that sufficient margin is released to handle the expected disturbance, and this margin can be pre-computed. Finally, with feasibility ensured, finding a robustly control-invariant terminal constraint set can ensure stability of the system. A robustly control-invariant set is a region of state-space in which it is possible to remain indefinitely without violating any constraints, subject to a bounded disturbance.
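The sketch below illustrates the idea with an input bound tightened monotonically across the prediction horizon; the margin value is a placeholder, whereas in practice it would be pre-computed from the disturbance bound as described above.

```python
import numpy as np

def tightened_bounds(u_max, N, margin_per_step):
    """Return per-step input bounds for an N-step horizon, tightened
    monotonically into the future (constraint tightening).

    margin_per_step stands in for the pre-computed margin needed to
    reject the worst-case one-step disturbance."""
    return np.array([u_max - k * margin_per_step for k in range(N)])

# Example: nominal limit 1.0, horizon 10, reserve 0.05 per predicted step.
bounds = tightened_bounds(1.0, 10, 0.05)
# bounds = [1.0, 0.95, 0.90, ..., 0.55]; later prediction steps keep more
# authority in reserve, which is released as the horizon recedes.
```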
Compared to other methods, constraint tightening imposes minimal additional online processing, with no extra constraints or decision variables; this is its main advantage. Min-max MPC, by comparison, is very conservative in open loop, although closed-loop prediction can be used to mitigate this [124]. MPC with constraint tightening allows performance and stability concerns to be designed separately, then integrated into a single robust controller.

Table of Contents
Abstract 
Acknowledgements 
Table of Contents 
List of Figures 
List of Tables 
Nomenclature
Glossary 
1 Introduction
1.1 Two-wheeled robots
1.2 Extensions of the two-wheeled robot problem
1.3 Control of two-wheeled robots
1.4 Traction control
1.5 Research objectives and contributions
1.6 Outline
2 Modelling of the Two-Wheeled Robot 
2.1 Standard two-wheeled robot models
2.2 Model for 2D motion without yaw
2.3 Model discretization
2.4 Drive-wheel motors
3 Friction and Traction Modelling and Control 
3.1 Models of friction
3.2 Characterisation of friction for the two-wheeled robot
3.3 Slip-limiting control
4 Controller Design 
4.1 Reference-tracking Linear Quadratic Regulator
4.2 LQR reference governor
4.3 Theory of Model Predictive Control
4.4 Design of the Model Predictive Controller
5 Results with Motor Constraints 
5.1 Controllers
5.2 Experimental methodology
5.3 Results
5.4 Discussion
6 Results with Traction Constraints
6.1 Simulation with different friction models
6.2 Tests of stability and performance
6.3 Dynamically changing ground traction
7 Conclusion
7.1 Reflection on Objectives
7.2 Reflection on Contributions
7.3 Summary
7.4 Recommendations on Future Work
References
Traction Control of Two-Wheeled Robots with Model Predictive Control
