Purpose and Aim
The aim of this project is to create a hybrid indoor localisation system for AGVs (automated guided vehicles), based on triangulation with IR beacons fused with measurements from an inertial measurement unit (IMU). The two sensor systems cooperate: the IR sensors provide absolute positioning at a low update rate, while the IMU estimates the movement at a high update rate. As a result, a high-probability estimate of the target's position can be made.
The system will be evaluated from the perspectives of cost, positional accuracy, range, efficiency and scalability. In the end, the evaluation will be used to determine whether or not the system is applicable for use in larger-scale indoor environments.
The evaluation is based on a number of tests referred to as system tests.
From the results of the system tests, the following questions should be answered:
1. Is the system reliable and can it operate for longer periods without faults?
2. Is the system prone to noise?
3. Is the positioning within desired margin of 3 cm?
4. Is the update frequency rapid enough for highly mobile targets?
5. What is the range of the IR transmitter and receiver?
6. How is a Kalman Filter implemented, and can it be used to accurately estimate position using sensor data from the two localisation systems?
An indoor positioning system is a system that continuously determines the position of an object in real-time. Indoor environments are complex in the sense that there are multiple objects and obstacles that reflect signals, which leads to multi-path and delay issues. Furthermore, indoor positioning suffers from interference from surrounding systems, such as wireless and mobile devices, which can cause signal instability and signal-strength fluctuations.
Contrarily, there are some aspects that aid indoor positioning in comparison to outdoor positioning. For example, there are rarely any humidity or temperature fluctuations, and vehicles confined to a restricted area usually move at a slower pace. Moreover, the layout and infrastructure are usually predetermined or known, which can be utilized.
Throughout the years, several methods have been presented to solve various indoor localisation problems, using dead reckoning, Wi-Fi signal strength triangulation, Bluetooth beacons, camera systems, ultrasonic sensors and more sophisticated systems such as light detection and ranging (LIDAR). Each technique varies tremendously in terms of cost, accuracy, precision, technology, robustness, security and scalability. Table 1.1 shows a comparison between different indoor localisation technologies:
The number of units in this prototype system will be limited to a maximum of two IR-scanners and receivers. This serves to reduce cost while still ensuring that there are enough units for a proof of concept. The testing area will be a maximum of 8 x 8 meters.
The thesis is divided into seven chapters, starting with the current chapter ”Introduction”, which explains the reason behind prototyping this system. The second and third chapters, ”System Attributes and Concepts” and ”System Implementation”, explain the theoretical foundation on which the project relies and how it has been implemented in the system. The fourth chapter, ”System Tests and Result”, presents the tests and results used to evaluate if the system is applicable for implementation in larger indoor environments, such as warehouses. Chapter five, ”Discussion”, presents all test results and an evaluation of them. Lastly, everything is summarized and concluded in the final chapter ”Conclusion”.
System Attributes and Concepts
As previously mentioned, the nature of the application environment has a major impact in determining the most suitable indoor localisation solution. To create a cheap yet scalable and highly precise solution for large indoor environments, a hybrid approach can be used. Utilizing a beacon sensor network and merging it with an inertial sensor system has two primary advantages. The beacon system increases scalability in the sense that an unlimited number of users can utilize the beacons. Merging the two sets of sensor readings can account for uncertainties and yield a high-probability estimate of a target's position. In this chapter, each sensor system will be described in terms of its attributes and working principles. Furthermore, the issues of both systems will be addressed, followed by an introduction to a Bayesian filter and how it can be used to estimate a target's position.
The IR-positioning system is based on triangulation using IR-laser beacons. A single beacon will be denoted as an IR-Scanner. Similar to a system introduced by S. Hernándes, J.M. Torres, C.A. Morakes and L. Acosta, IR beacons scan the whole area. In their system, the beacons were mounted onto the wall at the same horizontal level as the receivers. Each receiver was made of an array of photo cells. As the laser from the beacon is swept in a single horizontal plane across the receivers, the position is obtained by measuring the time it takes for the laser to cross each photo cell.
While the system described above was limited to sweeping the laser in a single plane, this proposed system will utilize lasers that distribute the beam into lines in both the vertical and horizontal planes. These sheets of light are swept angularly and vertically in discrete rotational steps, see the red lines in Fig 2.1a and 2.1b. For each step, the angle, beacon ID and sweep direction are transmitted with the lasers. The horizontal sweep enables the receiver, in this case an AGV, to presume that it is positioned along the line projected onto the surface by the laser, see the blue line in Fig 2.1a. From the angular sweep, the AGV can presume it is positioned along another line, see the blue line in Fig 2.1b. As the AGV receives information about the angles of the two different sweeps, the intersection of the two lines pinpoints the exact position, see Fig 2.1c.
To calculate the position of the intersection, a number of trigonometric calculations are required. First, the position will be calculated relative to the transmitter position. Thereafter, the result can be converted to fit a coordinate system, such as an XY-plane on the floor.
From the two lines projected by the lasers and the height of the IR scanner, a few triangles can be extracted, see Fig 2.2. Triangles A and B are enough to obtain the position.
Figure 2.3: Triangles and variables used in trigonometrical calculations
(2.1.1a-c) are used to calculate the position of the AGV.
xt = Z ∗ tan(θ) (2.1.1a)
h = √(Z² + xt²) (2.1.1b)
yt = h ∗ tan(φ) (2.1.1c)
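Equations (2.1.1a-c) can be sketched in a few lines of Python. This is only an illustration; the function and argument names are hypothetical and not part of the system's implementation, and Z is assumed to be the mounting height of the scanner.

```python
import math

def ir_position(z: float, theta: float, phi: float) -> tuple:
    """Position of the AGV relative to the IR scanner, per (2.1.1a-c).

    z     -- height of the IR scanner above the floor (m), assumed
    theta -- horizontal sweep angle (rad)
    phi   -- angular (vertical) sweep angle (rad)
    """
    x_t = z * math.tan(theta)   # (2.1.1a)
    h = math.hypot(z, x_t)      # (2.1.1b): h = sqrt(z^2 + x_t^2)
    y_t = h * math.tan(phi)     # (2.1.1c)
    return x_t, y_t
```

`math.hypot` computes the hypotenuse directly and avoids overflow for large intermediate squares.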
To convert the coordinates relative to the IR scanner to coordinates relative to the indoor environment, the angle and position of the transmitter must be known. Figure 2.4 displays a system in 2D with a located AGV at (xt, yt) relative to the scanner. The scanner sits at an angle α to the x-axis.
Figure 2.4: Distances and angles when calculating the position relative the system
Equations (2.1.2a-d) present the final coordinates of the AGV relative to the whole system. For x1, x2, y1 and y2 see Fig 2.4.
x = x1 − x2 = xt ∗ cos(α) − yt ∗ cos(90° + α) (2.1.2a)
y = y1 − y2 = xt ∗ sin(α) − yt ∗ sin(90° + α) (2.1.2b)
Finally, the position is obtained by adding the position of the IR scanner, (IRx, IRy).
px = IRx + x (2.1.2c)
py = IRy + y (2.1.2d)
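The coordinate conversion in (2.1.2a-d) can be sketched as follows, implementing the equations exactly as written. The function name and argument names are hypothetical, and α is assumed to be given in radians.

```python
import math

def to_global(x_t: float, y_t: float, alpha: float,
              ir_x: float, ir_y: float) -> tuple:
    """Convert scanner-relative coordinates (x_t, y_t) to system
    coordinates, per (2.1.2a-d). alpha is the scanner's angle to the
    x-axis; (ir_x, ir_y) is the scanner's position."""
    # (2.1.2a-b): rotate the scanner-relative coordinates
    x = x_t * math.cos(alpha) - y_t * math.cos(math.pi / 2 + alpha)
    y = x_t * math.sin(alpha) - y_t * math.sin(math.pi / 2 + alpha)
    # (2.1.2c-d): translate by the scanner's position
    return ir_x + x, ir_y + y
```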
If a three-dimensional position is desired, two beacons have to be used.
The IMU positioning system estimates the receiver's position by dead reckoning, using data from an IMU. In this case, the IMU consists of an accelerometer, a gyroscope and a magnetometer. One way to estimate the displacement from acceleration is by using the Euler method. From the accelerometer data, a, the velocity of the AGV can be calculated, see (2.2.1). Thereafter, the displacement from the AGV's starting position, (p_{x,i}, p_{y,i}), can be calculated from the velocity, see (2.2.2). As the IMU works with samples and not a continuous signal, the time Δt is the time since the last sample.
v_n = Σ_{i=1..n} Δv_i = Σ_{i=1..n} a_i Δt_i = a_n Δt_n + Σ_{i=1..n−1} a_i Δt_i = a_n Δt_n + v_{n−1} (2.2.1)

s_n = Σ_{i=1..n} Δs_i = Σ_{i=1..n} v_i Δt_i = v_n Δt_n + Σ_{i=1..n−1} v_i Δt_i = v_n Δt_n + s_{n−1} (2.2.2)
The nth sample is the latest (current) sample and the ith sample is any sample between the first and the last. Previous samples are denoted n − 1, n − 2 and so on. Δv is the difference in velocity between two samples and Δs is the difference in displacement. Combining (2.2.1) and (2.2.2), the total displacement can be calculated using the current acceleration data.
s_n = v_n Δt_n + s_{n−1} = (a_n Δt_n + v_{n−1}) Δt_n + s_{n−1} = s_{n−1} + v_{n−1} Δt_n + a_n Δt_n² (2.2.3)
The matrix form of the displacement and velocity equations is shown in (2.2.4), and the position of the AGV according to the IMU follows (2.2.5):

[s_n, v_n]ᵀ = [[1, Δt, Δt²], [0, 1, Δt]] ∗ [s_{n−1}, v_{n−1}, a]ᵀ (2.2.4)

[p_x, p_y]ᵀ = [p_{x,i} + s_x, p_{y,i} + s_y]ᵀ = [p_{x,n−1} + Δs_x, p_{y,n−1} + Δs_y]ᵀ (2.2.5)
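One Euler dead-reckoning step per (2.2.1)-(2.2.3) can be sketched as a short Python function. The names are hypothetical; each axis (x, y) would be handled by an independent call.

```python
def imu_step(s_prev: float, v_prev: float, a: float, dt: float) -> tuple:
    """One dead-reckoning step along a single axis.

    s_prev, v_prev -- displacement and velocity at the previous sample
    a              -- accelerometer reading for the current sample
    dt             -- time since the previous sample
    """
    s = s_prev + v_prev * dt + a * dt * dt  # (2.2.3)
    v = v_prev + a * dt                     # (2.2.1)
    return s, v
```

Iterating this function over a stream of accelerometer samples accumulates the total displacement, which is then added to the initial position as in (2.2.5).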
Fusion of Data
Neither the IR-based nor the IMU-based positioning system has all the attributes required for an optimal solution to the indoor localisation problem at hand. One issue with the IMU is that only displacement can be calculated, not absolute position. To calculate a position, an initial position is required. In this proposed system, it is provided by the IR system. The current position is then estimated according to (2.3.1), where the displacement is calculated with (2.2.3). P_{x/y,i} is the initial position, P_{x/y,n} is the current position, P_{x/y,(n−1)} is the previous position, s is the total displacement and Δs is the displacement since the last measurement. As the displacement is time dependent, see (2.2.4), the initial position is at time 0 s and the current time is the sum of n samples, where each time sample is Δt_i.
P = [P_{x,n}, P_{y,n}]ᵀ = [P_{x,i} + s_x, P_{y,i} + s_y]ᵀ = [P_{x,(n−1)} + Δs_x, P_{y,(n−1)} + Δs_y]ᵀ (2.3.1)
However, even with a given initial position, new issues arise. If the IMU had no imperfections, the accelerometer alone could be used to accurately calculate the AGV's location. Nonetheless, since the acceleration data has to be integrated twice to calculate distance, any error present in the measurements will accumulate and grow rapidly over time (quadratically in time for a constant bias). To compensate for this error, the IR-Scanners can be used to repeatedly reset the accumulated error by providing an accurate and precise absolute position.
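The growth of this accumulated error can be illustrated by double-integrating a constant accelerometer bias with the Euler scheme from (2.2.1)-(2.2.3). The bias and sample-time values below are hypothetical and chosen only for illustration.

```python
def drift_from_bias(bias: float, dt: float, steps: int) -> float:
    """Displacement error from double-integrating a constant
    accelerometer bias (m/s^2) over `steps` samples of length dt."""
    s, v = 0.0, 0.0
    for _ in range(steps):
        v += bias * dt   # bias leaks into the velocity estimate
        s += v * dt      # and from there into the displacement
    return s
```

With a bias of 0.01 m/s² and dt = 0.01 s, the error after 10 s is roughly 0.5 m, and doubling the elapsed time roughly quadruples it, matching the expected bias·t²/2 behaviour.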
On the other hand, the IR-Scanner has a set of imperfections of its own. Although it can provide a highly accurate position, it always requires line of sight to the observer. Furthermore, the time required to take discrete steps and transmit the associated angle results in a rather slow update rate. If the AGV moves at a high velocity, or if an object blocks the line of sight, the displacement in between readings could be large and untracked. Another issue arises when the distance between the IR-Scanner and the AGV becomes large. Because the IR-Scanner transmits the angle in discrete steps, Δϕ, there is a dead zone between each step. At long range, this dead zone corresponds to an increasing delta distance, Δd, between steps, see Figure 2.5. To account for these various issues, an IMU is an ideal supplement.
The most common technique to fuse a set of different sensor readings is by applying a Bayesian filter. Section 2.3.1 gives an introduction to one of the most common and simplest of these filter types, the Kalman Filter.
Proposed by R.E. Kalman in 1960, the Kalman Filter is an optimal linear state estimation method that takes a series of data observed over time, including noise and inaccuracies, and produces a high-probability estimate of the state. The Kalman Filter quickly became a favoured solution for optimization because of its fast calculations, efficiency and real-time applicability. Most commonly, it is found in systems that are continuously changing, such as navigation systems or systems that track maneuvering or orbiting objects. Furthermore, it is an excellent tool in the field of dynamic positioning, where sensor data fusion is required for increased accuracy.
In general, Kalman Filters are divided into three main types: the Kalman Filter (KF), the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF). The latter two are used in nonlinear systems by incorporating linearisation techniques into the standard KF solution. Since the system in this project can be modelled as a linear system, only the standard KF will be introduced.
The filter works on a two-phase principle: predict and update. Fig 2.6 shows a block diagram of the working principle. State refers to the state of the tracked object; this could be one or several physical quantities, such as velocity, position or temperature.
The purpose of each stage in Fig 2.6 is as follows:
1. Provide the system with prior knowledge of the system's state. The state could be temperature, position, velocity or any other physical quantity.
2. Input information from the first sensor that can be used in the physical model, e.g. throttle input or accelerometer data.
3. Using a physical model, predict what the result should be in accordance with the provided information.
The following equation represents the standard state prediction equation of the Kalman Filter predict phase:

x̂_k = A_k x̂_{k−1} + B_k u_k + w_k
A_k is the state transition matrix, x̂_{k−1} is the previous state of the tracked object, B_k is the control matrix, u_k is the control vector and w_k is the system noise vector.
4. Fetch sensor measurements from a second sensor that should be merged with the previous prediction.
5. Compare the predicted values with the sensor measurement and update the estimation in accordance with which is most probable.
The following equation represents the standard state update equation of the Kalman Filter update phase:

x̂′_k = x̂_k + K(z_k − H_k x̂_k)
K is the Kalman gain, which weighs the prediction against the measurement, and z_k is the second sensor input. H_k is the observation matrix, which is used to convert the matrix or vector form of the measurements to fit the model and state vector.
6. Retrieve the updated estimated state of the tracked object.
7. Set the current state as the previous state for the next iteration, x̂_{k−1} ← x̂′_k.
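The predict and update phases above can be sketched for a one-dimensional state. This is a deliberately simplified scalar version (A = 1, H = 1, B = Δt, no process noise term in the state itself; all names are hypothetical), not the system's actual implementation:

```python
def kf_predict(x: float, p: float, u: float, dt: float, q: float) -> tuple:
    """Predict phase (step 3): propagate state x and variance p using
    the control input u; q is the process noise variance."""
    x_pred = x + u * dt   # simplified x_k = A x_{k-1} + B u_k
    p_pred = p + q        # uncertainty grows with every prediction
    return x_pred, p_pred

def kf_update(x_pred: float, p_pred: float, z: float, r: float) -> tuple:
    """Update phase (step 5): blend the prediction with measurement z;
    r is the measurement noise variance."""
    k = p_pred / (p_pred + r)        # Kalman gain weighs the two sources
    x = x_pred + k * (z - x_pred)    # simplified x'_k = x_k + K(z_k - H x_k)
    p = (1 - k) * p_pred             # uncertainty shrinks after the update
    return x, p
```

Note how a small r (a trusted measurement) drives the gain toward 1 and pulls the estimate toward z, while a large r leaves the prediction mostly unchanged.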
A preferable way to further understand the working principle of a KF is by using a simple example.
Linear Kalman Filter Example – Prediction Phase
Assume that this project has a miniature robot with a state x_k containing position and velocity, see (2.3.2). As previously mentioned, this vector could include any physical vector quantity.

x_k = [p(t), v(t)]ᵀ (2.3.2)
Moreover, let's assume that the robot is equipped with a GPS or other similar positional tracker. These types of sensors are usually accurate to about 1 meter and have a slow update rate. In many systems, this sensor alone would be insufficient to accurately and reliably provide the robot with its position and avoid incidents.
Besides the information given by the sensor, some information about the robot's movement might be known. Commands to the drivers and motors generate movement in a specific direction, which can be used to predict where the robot will be located in the next instance. However, the robot's course can deviate due to external forces and interference. The wheels could slip and the movement could be slowed by inclines or wind. In other words, there is no guarantee that each wheel turn will precisely represent how far and in which direction the robot has travelled. Both the sensor readings and the predictions based on physical properties have inaccuracies. By implementing a KF, all the information at hand can be used together to more accurately estimate the most likely outcome.
The KF will assume that all the variables in the vector (2.3.2), in this case the robot's position and velocity, are Gaussian distributed. Hence, each variable has a mean value μ, which is the center of the random distribution. Furthermore, each variable has a variance σ², which describes the uncertainty. This is illustrated in Fig 2.7. Note that position and velocity are uncorrelated.
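For Gaussian variables like these, fusing two independent estimates of the same quantity amounts to multiplying the two distributions, which is the essence of the KF update. A sketch (hypothetical names, scalar case only):

```python
def fuse_gaussians(mu1: float, var1: float,
                   mu2: float, var2: float) -> tuple:
    """Fuse two Gaussian estimates (mu, sigma^2) of the same quantity.
    The result is the product distribution: the fused mean lies between
    the inputs, weighted by their certainties, and the fused variance
    is always smaller than either input variance."""
    k = var1 / (var1 + var2)        # weight: trust the less uncertain one
    mu = mu1 + k * (mu2 - mu1)
    var = (1 - k) * var1
    return mu, var
```

For two equally uncertain estimates the fused mean is simply their average, and the variance halves; a more certain estimate pulls the mean toward itself.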