The electronics in the service of navigation and safety


AR Display

In our application context, we want to let the user look at the real world and display information without masking it. Virtual Reality devices are therefore not suitable; we focus on see-through devices only. Three kinds of AR displays exist. The first one is a head-mounted display with optical see-through: it is usually an AMOLED or special LCD screen combined with an optical device that projects virtual objects into the user's field of view. The second one is a head-mounted display with video see-through, which displays the frames captured by a camera and overlays virtual objects on them. The last one is a handheld display with a flat-panel LCD or AMOLED screen that, like the previous one, displays frames from a camera (it can be a smartphone, a tablet and so on). A handheld display combining a flat-panel screen and a camera is not comfortable because a parallax error appears when the camera is mounted away from the true eye location.
We want our display to meet the following constraints:
The user has to see the real world.
The OST display has to be in the user's field of view.
The device must have enough luminosity for outdoor applications.
We want the largest field of view available.
We want the highest display resolution available.
One of the major problems of AR displays is their weak luminosity. A majority of devices cannot be used in our outdoor and marine context because their luminosity is not sufficient. Another drawback is the resolution of the screen: a high resolution is needed to see many details on a small screen placed just in front of the eyes, and a resolution under VGA is not enough for small objects (far buoys) or texts (buoy names). Moreover, the human field of view is about 180 degrees, whereas displays offer a field of view of about 30 or 40 degrees for the best monocular systems.
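As a rough order-of-magnitude check of these resolution and field-of-view figures, the sketch below estimates how many pixels a distant buoy occupies on a VGA-wide display spread over a 30-degree field of view. The buoy width, distance and display values are illustrative assumptions, not measured data.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;

    const double h_pixels   = 640.0;    // assumed VGA horizontal resolution
    const double fov_deg    = 30.0;     // typical monocular OST field of view
    const double px_per_deg = h_pixels / fov_deg;            // about 21 px/deg

    const double buoy_width_m = 3.0;    // assumed buoy width
    const double distance_m   = 1852.0; // one nautical mile
    const double angle_deg = std::atan2(buoy_width_m, distance_m) * 180.0 / pi;

    // A 3 m buoy at 1 NM subtends roughly 0.09 deg, i.e. about 2 pixels:
    // far too few to carry a readable symbol or a name label.
    std::printf("%.1f px/deg, buoy spans %.2f deg = %.1f px\n",
                px_per_deg, angle_deg, angle_deg * px_per_deg);
    return 0;
}
```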
Binocular systems would be perfect to enlarge the usable eld of view but we observed that current solutions are not mature and robust enough to be used in real conditions.
So, few manufacturers propose real OST devices meeting more or less the constraints listed before: Laster Technologies [Technologies, 2013], Optinvent [Optinvent, 2014], Lumus [LUMUS, 2011] and Vuzix [VUZIX, 2011]. A representation of the devices or prototypes is visible in Fig. 3.2d, 3.2a, 3.2c, 3.2b. At the beginning of the thesis, HMD systems were usually based on a wired or wireless connection with smartphones or laptops that compute and provide the data to be displayed. Table 3.1 gives the main specifications of the glasses or prototypes available at the end of 2011. There were no devices with all the embedded chips (CPU, GPU, IMU, GPS) necessary to run an AR application; only the Vuzix Star 1200 had MEMS, and the Laster Technologies device an embedded GPS. The other main drawback of these solutions in our context is the lack of luminosity: the brightness was clearly under 3000 candela/m2, which is currently the average value. Most of the glasses were prototypes, which is why they were very expensive; prices were about a few thousand euros, against a few hundred euros today. This evolution confirms our initial intuition.

Head tracking

In our outdoor context, efficient marker-based solutions [Herout et al., 2012] obviously cannot be used. Relevant markerless solutions based on image processing [Comport et al., 2006, Karlekar et al., 2010] could be proposed instead; the solution described in [Yang and Cheng, 2012] is for instance promising on mobile platforms. However, it is not applicable in our outdoor conditions, and other issues such as the large distance of targets and boat motions disqualify this approach. For outdoor positioning and orientation, some solutions based on GPS data are proposed on Android and Apple devices, but they rely on handheld devices and are not accurate enough to place an object at the right location. A solution based on a differential GPS was proposed in [Feiner et al., 1997], with an inclinometer and a magnetometer, to display text information on campus buildings. The authors noticed that the inaccuracy was acceptable with regard to the application requirements, but they also reported visibility issues in sunny conditions and difficulties with the inertial sensors. Sensors and the related signal processing have been improved since then, but visibility and integration still need to be solved.
As stated in the previous chapter, a possible solution must be based on an embedded system able to track head movements with MicroElectroMechanical Systems (MEMS) technology, which provides integrated and low-cost Inertial Measurement Units (IMU), combined or not with a GPS. MEMS can track up to 9 degrees of freedom (9DOF) from the three axes of the accelerometer (gravity and linear acceleration), the magnetometer (Earth's magnetic field) and the gyroscope (angular velocities). This kind of chip is currently available for robotics and smartphone/tablet applications.
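As an illustration of how the three sensor triads can be fused into a head orientation, the sketch below shows a basic complementary filter: the gyroscope is integrated for short-term accuracy, while the accelerometer and the magnetometer correct the long-term drift. This is a minimal sketch under simplifying assumptions (calibrated sensors, tilt-compensated magnetometer), not the filter actually used in the prototype.

```cpp
#include <cmath>

struct Attitude { double roll, pitch, yaw; };   // radians

// One fusion step of a complementary filter for a 9DOF MEMS IMU.
// gx,gy,gz: gyroscope rates (rad/s); ax,ay,az: accelerometer (m/s^2);
// mx,my: magnetometer components already projected in the horizontal plane.
Attitude fuse(const Attitude& prev,
              double gx, double gy, double gz,
              double ax, double ay, double az,
              double mx, double my,
              double dt, double alpha = 0.98)
{
    // 1. Integrate the gyroscope: low noise at short term, but it drifts.
    Attitude gyro{ prev.roll  + gx * dt,
                   prev.pitch + gy * dt,
                   prev.yaw   + gz * dt };

    // 2. Absolute references: gravity gives roll/pitch, the magnetic field gives yaw.
    double accRoll  = std::atan2(ay, az);
    double accPitch = std::atan2(-ax, std::sqrt(ay * ay + az * az));
    double magYaw   = std::atan2(-my, mx);

    // 3. Blend: trust the gyroscope at high frequency, the references at low frequency.
    return { alpha * gyro.roll  + (1.0 - alpha) * accRoll,
             alpha * gyro.pitch + (1.0 - alpha) * accPitch,
             alpha * gyro.yaw   + (1.0 - alpha) * magYaw };
}

int main() {
    Attitude a{0, 0, 0};
    // Example step at 30 Hz with stationary sensors pointing north.
    a = fuse(a, 0, 0, 0, 0, 0, 9.81, 1, 0, 1.0 / 30.0);
    return 0;
}
```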
Our constraints are listed as follows:
9 Degrees Of Freedom (9DOF), i.e. 3 axes for each component (accelerometer, magnetometer and gyroscope).
The smallest footprint.
An interface that is easy to connect to the main processor, typically an i2c bus (see the sketch after this list).
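To make the last constraint concrete, the sketch below reads raw accelerometer samples from an IMU over the standard Linux i2c-dev interface. The bus name, the slave address (0x68) and the register offset follow common MPU-9150-class parts and are assumptions here; the actual values depend on the MEMS chip chosen for the prototype.

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
#include <cstdint>
#include <cstdio>

int main() {
    int fd = open("/dev/i2c-1", O_RDWR);             // i2c bus exposed by the host SoC
    if (fd < 0 || ioctl(fd, I2C_SLAVE, 0x68) < 0) {  // select the IMU slave (assumed address)
        std::perror("i2c");
        return 1;
    }
    uint8_t reg = 0x3B;                              // assumed first accelerometer register
    uint8_t buf[6];
    if (write(fd, &reg, 1) == 1 && read(fd, buf, 6) == 6) {
        // Each axis is a signed 16-bit big-endian raw value.
        int16_t ax = static_cast<int16_t>((buf[0] << 8) | buf[1]);
        int16_t ay = static_cast<int16_t>((buf[2] << 8) | buf[3]);
        int16_t az = static_cast<int16_t>((buf[4] << 8) | buf[5]);
        std::printf("raw accel: %d %d %d\n", ax, ay, az);
    }
    close(fd);
    return 0;
}
```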


Computing architecture

The hardware architecture has to be embedded with the AR display for mobility reasons, so power consumption and miniaturization are of the utmost importance. The embedded system must support the following functions: head tracking, object orientation and drawing, and data acquisition over a wireless network for sea-mark data, the boat GPS, AIS data, and current and wind Gribs; finally, the video has to be sent to the AR display.
Our constraints are listed as follows (a minimal skeleton tying them together is sketched after the list):
Compute the pose estimation from the IMU through the i2c bus.
Acquire data from a wireless connection (Bluetooth and/or WiFi).
Load Electronic Navigational Charts and Grib data from memory.
Compute 2D and 3D object positions and orientations.
Send frames to a display.
Compute application functions such as collision risks and so on.
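The skeleton below ties these constraints together in a single loop. All types and functions in it are stubs standing in for real drivers (IMU over i2c, wireless link, chart storage, display output); it only illustrates the assumed task structure, not an actual API of the prototype.

```cpp
#include <vector>
#include <cstdio>

struct Pose    { double roll = 0, pitch = 0, yaw = 0; };
struct NavData { double lat = 0, lon = 0; };                  // boat GPS, AIS targets, Gribs...
struct SeaMark { double lat = 0, lon = 0; };

// Stubs for the functions listed above; a real system would back them
// with an i2c IMU driver, a Bluetooth/WiFi stack, ENC storage and OpenGL.
Pose    readImuPose()             { return {}; }
NavData pollWireless()            { return {}; }
std::vector<SeaMark> loadCharts() { return { {47.655, -3.000} }; }
void    renderAndSendFrame(const Pose&, const NavData&, const std::vector<SeaMark>&)
{
    std::puts("frame sent to the AR display");
}

int main() {
    auto marks = loadCharts();                  // ENC and Grib data loaded once from memory
    for (int frame = 0; frame < 3; ++frame) {   // bounded here; a real loop runs continuously
        Pose head   = readImuPose();            // head tracking from the IMU
        NavData nav = pollWireless();           // boat GPS, AIS and weather data
        renderAndSendFrame(head, nav, marks);   // 2D/3D placement, collision risks, display
    }
    return 0;
}
```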
Head tracking based on inertial sensors is a complicated task, but considering the sensor acquisition rate (e.g. 30 Hz), it can be performed [Diguet et al., 2013] with 32-bit low-power processors (e.g. ARM Cortex or Intel Atom) available on mobile SoCs. The main computing tasks are the graphic ones when complex objects are considered; the availability of a GPU combined with the OpenGL framework simplifies the programming of this part of the application. It is also important to observe that the use of a see-through device offers interesting opportunities to reduce the computing complexity: first, ergonomics means few and simple objects, and secondly there is no background to render. The computation requirements are therefore far from the complexity of video games, which is an interesting opportunity to save power. Some solutions have been proposed to simplify the programming of AR applications, for instance for Android OS [Kurze and Roselius, 2010], but beyond programming productivity, power consumption is not really addressed as it should be. Based on the previous observations, some dedicated architectures have been proposed [J-Ph. Diguet and Morgére, 2015] as an aggressive solution to save power and area, but they require the costly design of ASICs or FPGA-based embedded systems and can barely take benefit of existing OSes and programming environments. Mobile platforms such as tablets are the best trade-off between productivity and power efficiency, considering the active developer community and the growing set of available APIs. Indeed, these devices provide access to sensors such as inertial, light or pressure sensors. They implement SoCs based on GPGPU (general-purpose processing on graphics processing units) architectures, which combine multiprocessor architectures and graphic accelerators (e.g. two Cortex-A9 cores and a PowerVR GPU in the TI OMAP4). These devices can also communicate over wireless links, store/load data and display 2D and 3D objects. But they usually embed other components such as image and video processors, audio chips and so on, which are not necessary in our case.
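To illustrate why the per-object workload stays light, the sketch below places one georeferenced sea mark on the see-through display: it computes the bearing from the boat to the buoy with a local flat-earth approximation, subtracts the head orientation, and maps the angular offset onto the screen. The positions, field of view and resolution used here are illustrative assumptions.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const double pi  = 3.14159265358979323846;
    const double deg = pi / 180.0;

    // Boat and buoy positions (latitude, longitude in degrees) -- example values.
    double boatLat = 47.650, boatLon = -3.010;
    double buoyLat = 47.655, buoyLon = -3.000;

    // Local flat-earth approximation, valid over a few nautical miles.
    double dNorth  = (buoyLat - boatLat) * 111320.0;                        // metres
    double dEast   = (buoyLon - boatLon) * 111320.0 * std::cos(boatLat * deg);
    double bearing = std::atan2(dEast, dNorth) / deg;                       // degrees from north

    // Head orientation from the IMU (yaw = compass heading of the gaze).
    double headYaw = 50.0, headPitch = 0.0;                                 // degrees

    // Angular offset of the buoy relative to the line of sight.
    double dAz = bearing - headYaw;
    double dEl = 0.0 - headPitch;            // sea marks sit on the horizon here

    // Map onto an assumed 30 x 17 degree FOV, 800 x 480 px display.
    const double fovH = 30.0, fovV = 17.0, resH = 800, resV = 480;
    double x = (dAz / fovH + 0.5) * resH;
    double y = (0.5 - dEl / fovV) * resV;

    if (x >= 0 && x < resH && y >= 0 && y < resV)
        std::printf("buoy drawn at pixel (%.0f, %.0f), bearing %.1f deg\n", x, y, bearing);
    else
        std::printf("buoy outside the field of view (bearing %.1f deg)\n", bearing);
    return 0;
}
```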

Table of contents:

1 Mots clefs
2 Résumé
3 Key words
4 Abstract
1 Main introduction 
1.1 Marine navigation
1.2 The electronics in the service of navigation and safety
1.3 Observation
1.4 Issues & Contributions
1.5 Adopted plan
2 Application Design 
2.1 Introduction
2.2 State of the art
2.3 From user to application
2.4 Needs analysis
2.5 Conclusion
3 Prototype 
3.1 Introduction
3.2 State of the Art
3.3 Hardware prototype
3.4 Software architecture
3.5 Implementation on main stream AR system
3.6 Conclusion
4 A 3D chart generator 
4.1 Introduction
4.2 State of the art
4.3 Data Filter
4.4 Chart generator
4.5 Conclusion
5 General conclusion 
5.1 General context of the thesis
5.2 Summary of Contributions
5.3 Perspectives
Bibliography
