Asynchronous Event Based 2D Microsphere Tracking 

Reliable Haptic Optical Tweezers

Optical tweezers are now entering a new era of micro/nanomanipulation in the biological and medical fields. The Gaussian beam profile of optical tweezers generates gradient forces that are strong enough to trap a microsphere. The sphere then serves as a « handle » to manipulate other samples, e.g. for DNA stretching. Recall that in Section 1.7.3.1 we showed that the conventional force estimation routine consists in applying a simple spring model to the optically trapped sphere. The position of the particle with respect to the laser center thus becomes valuable information for in situ force measurement.
Equipped with this type of force sensing technique, optical tweezers become an even more powerful tool for micromanipulation. The trapped microsphere can now serve as a force « probe » to examine the physical characteristics of a target object, such as the stiffness of cells, thus enabling a broad variety of potential applications, especially in biology and medical science. In this thesis, « probe » always refers to the microsphere trapped by optical tweezers. Fig.2.1 shows a concept application of surface probing. Forces are felt by the user as he/she steers the probe to skim over the surface structure of the target object. The force direction felt by the user is perpendicular to the tangent plane of the target surface, allowing very intuitive interaction with the shape and the stiffness of the explored micro-object. As can be seen, the key to enabling this type of application lies in the precise position detection of the trapped probe.
The position detection faces two major challenges. Firstly, force feedback through a haptic interface requires a high sampling frequency of 1 kHz to achieve a realistic tactile sensation; a lower sampling frequency artificially increases damping in the system, degrading the transparency of the haptic feedback. Secondly, complex bio-medical manipulation environments further demand robust measurement methods. Immunity to environmental noise is an important factor in order for the interactions to be reliable regardless of external perturbations (obstacles, defocusing, impurities, shadows…). Therefore, high speed yet robust position detection of microparticles is a fundamental issue in optical tweezers based micromanipulation and is the major contribution of this chapter.
In this chapter, we concentrate on resolving the issue of high speed tracking of microspheres. Several experiments, such as Brownian motion detection, are conducted to validate the algorithm and in the meantime demonstrate the tracking capability. Finally, force feedback is applied to the optical tweezers. The force is temporarily limited to two-dimensional space, since an existing 2D experimental setup is reused to quickly show the feasibility and the potential of the employed method. A more powerful and complete optical tweezers system will be the subject of the next chapter.

State-of-the-art of High Speed Microparticle Tracking

The state-of-the-art is introduced from two aspects: different tracking devices are compared first, followed by the tracking algorithms.
In optical tweezers sensing techniques, few vision sensors allow high speed position detection. Quadrant photodiodes (QPDs) achieve more than 1 MHz detection bandwidth (Fig.2.2.1(a)). Initially, a calibration step is required to center the bead on the photodiode plane so that the output currents generated by the four quadrants cancel each other. During measurement, a slight movement of the bead within the range of the QPD changes the current in each quadrant. The difference in current between the left and right halves gives the position of the bead on the x axis, while the difference between the upper and lower halves gives the position on the y axis, see Fig.2.2.1(b). This method thus requires the scene to be highly clean: any asymmetry during the measurement process, e.g. obstacles or impurities, will influence the output currents and is therefore strictly not tolerated. Furthermore, QPDs limit the working distance and require strict sensor alignment.
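For reference, denoting the four quadrant currents $I_1, \ldots, I_4$ (numbered counterclockwise from the top-right quadrant), the position signals are commonly formed as difference signals normalized by the total intensity; this particular normalization is a standard convention, not taken from the source:

$$x \propto \frac{(I_1 + I_4) - (I_2 + I_3)}{I_1 + I_2 + I_3 + I_4}, \qquad y \propto \frac{(I_1 + I_2) - (I_3 + I_4)}{I_1 + I_2 + I_3 + I_4}$$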
Multiple object tracking with complex robust processing is only possible with video cameras. Microparticle position measurement with video cameras is now gradually replacing QPDs in the most recent publications. However, modern high speed cameras suffer from the same constraints in microparticle tracking as in the general tracking tasks explained in detail in Chapter 1. At the state-of-the-art of microparticle tracking techniques, ultra high speed cameras remain an offline solution due to the large amount of data stored in local memory during acquisition [S.Keen 2007]. Rapid CMOS cameras have been developed that allow a reduction of the region of interest; they can transmit images in real time, but online processing of the entire frame remains time consuming [Gibson 2008]. Dedicated image processing hardware (e.g. FPGAs) [Saunter 2005], « smart » CMOS cameras with on-chip processing [Constandinou 2006] [Fish 2007] or GPU accelerated technology [Lansdorp 2013] are used, but these incur extra expense, lengthen development cycles and require specific hardware knowledge. Current particle tracking software is mainly based on centroid computation.
Table 2.1 gives a summary of the prices and experimental performances of the vision systems described above for microparticle tracking; it also shows the clear advantage of using asynchronous event based silicon retinas.

Candidate Algorithms

To find circles in images from a video camera means to extract the parameter pair $(x_c, y_c)^T$ or triplet $(x_c, y_c, R)$ describing the center and the radius of the circles. Typical ways of searching for circle parameters are studied in this section: Centroid, Histogram, Template Correlation and Hough Transform.
The centroid method is the simplest and has already been explained in Chapter 1, Equation 1.1. In histogram methods, one calculates the sum of the pixels' gray levels along the y axis for each x location, and likewise along the x axis for each y location; the circle center then appears as the peak location in the histograms of both axes. The idea of cross-correlation is to use a template image to match a part of the image being processed. The maximum of the correlation indicates the location of interest, where the template and the image in question are most similar. In [Gosse 2002], to measure the bead displacement since the last frame, a small image centered on the previous particle position is used; maximizing a cross-correlation function gives the bead position, which allows sub-pixel resolution.
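As a concrete illustration of the two simplest estimators, the following minimal C++ sketch computes both the intensity-weighted centroid and the projection-histogram peaks on a row-major grayscale image; the function names and image layout are illustrative assumptions, not code from the thesis.

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Intensity-weighted center of mass of a grayscale image
// (in the spirit of Chapter 1, Equation 1.1).
std::pair<double, double> centroid(const std::vector<uint8_t>& img,
                                   int width, int height) {
    double sum = 0.0, sx = 0.0, sy = 0.0;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            double g = img[y * width + x];   // gray level acts as a weight
            sum += g; sx += g * x; sy += g * y;
        }
    return {sx / sum, sy / sum};
}

// Histogram method: project gray levels onto each axis and take the peaks.
std::pair<int, int> histogramPeak(const std::vector<uint8_t>& img,
                                  int width, int height) {
    std::vector<double> colSum(width, 0.0), rowSum(height, 0.0);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            colSum[x] += img[y * width + x]; // projection onto the x axis
            rowSum[y] += img[y * width + x]; // projection onto the y axis
        }
    int xc = static_cast<int>(std::max_element(colSum.begin(), colSum.end()) - colSum.begin());
    int yc = static_cast<int>(std::max_element(rowSum.begin(), rowSum.end()) - rowSum.begin());
    return {xc, yc};                         // peak locations approximate the center
}
```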
In real-world scenes, the target circles frequently overlap or collide with other objects. To tackle such noise and outliers, the Hough transform is a well-validated and the most widely used circle detection algorithm in image processing [Hough 1959].
The conventional Hough transform consists of two main steps. For easier illustration of the concept, let us suppose that the radius is known to the user and only the circle center $(x_c, y_c)^T$ is sought. The input to the algorithm should be edge images. The Hough transform maps the unknown parameters of the original image to a local maximum in another space. To illustrate, each point on the edge of a circle votes for an entire circle in an accumulation space, also called the Hough space. Each full circle in Hough space thus corresponds to one valid pixel in the edge image, see Fig.2.3(a). Once the accumulator space is constructed, the second step consists in finding peaks by looking for local maxima. Fig.2.3(b) shows the Hough space generated from synthetic image data. The strong peaks correspond to the circle centers and are isolated by setting a threshold. The thresholding results in small blobs of pixels in the peak areas; the x and y values of the pixel locations are then averaged to approximate the circle centers with sub-pixel resolution.
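A minimal C++ sketch of this voting step for a known radius R is given below; the angular sampling and all names are illustrative assumptions, not the thesis's implementation.

```cpp
#include <cmath>
#include <vector>

// One edge pixel (xe, ye) votes a full circle of radius R into the
// accumulator: every cell on that circle is a candidate center.
void houghVoteCircle(std::vector<int>& acc, int width, int height,
                     int xe, int ye, int R) {
    const int steps = 64;                       // angular sampling of the vote
    const double PI = 3.14159265358979323846;
    for (int k = 0; k < steps; ++k) {
        double a = 2.0 * PI * k / steps;
        int i = static_cast<int>(std::lround(xe + R * std::cos(a)));
        int j = static_cast<int>(std::lround(ye + R * std::sin(a)));
        if (i >= 0 && i < width && j >= 0 && j < height)
            ++acc[j * width + i];               // candidate center (i, j)
    }
}
// After all edge pixels have voted, circle centers appear as local maxima of
// acc; thresholding and averaging each peak blob gives sub-pixel centers.
```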
If the circle radius is unknown, a supplementary parameter must be introduced. The Hough space can theoretically be extended to 3D. In practice, however, this direct extension is too time consuming and is avoided by splitting the procedure into two successive lower dimensional steps. [Ioannou 1999] presented a method that first applies a 2D Hough transform for center localization and then a 1D radius histogramming for radius validation. This chapter focuses only on two dimensional circle tracking because, in this first step, the force feedback is constrained to the xy plane.
Centroid and Histogram are fast but very sensitive to noise and outliers. Fig.2.4 compares the speed (in number of calculation steps for processing a 128 × 128 gray level image with a circle radius of 25 pixels) and the robustness of the above-mentioned algorithms. The first row shows the test images superposed with the detection results. The Centroid and Histogram methods are two orders of magnitude faster than Correlation or Hough, but they are neither able to track multiple adjacent objects (Fig.2.4, column two) nor suited to contaminated scenes containing obstacles (Fig.2.4, column three). On the contrary, the other two methods handle these difficulties at a significant cost in processing speed. In circle detection, Hough is usually more robust than correlation because only circular forms are taken into account and tracked. As a result, we choose the Hough transform as the starting point for microsphere position detection.

Microsphere Tracking Using DVS

The traditional circle detection process is computationally expensive since it involves first an edge detection procedure and then the Hough transform. We propose a novel event based Continuous Hough transform that uses the DVS's events as input to track microspheres, a tool widely used in optical tweezers systems. The DVS's sparse input opens a different point of view for algorithm design and enables a high speed yet robust tracking method.
Two groups of experiments are conducted in order to validate the algorithm, demonstrate the tracking capability and analyse the performance: (1) tracking multiple microspheres simultaneously transferred by a microstage; (2) detecting the Brownian motion of a microsphere as a live demonstration of highly dynamic motion detection in the microworld. The Brownian motion can eventually be fed back to the user or be further processed to calibrate the stiffness of the optical tweezers.
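For reference, one standard route to such a stiffness calibration (given here as background; the thesis's own procedure may differ) is the equipartition theorem, which relates the trap stiffness $\kappa_x$ to the variance of the tracked position fluctuations:

$$\frac{1}{2}\,\kappa_x \langle x^2 \rangle = \frac{1}{2}\,k_B T \quad \Longrightarrow \quad \kappa_x = \frac{k_B T}{\langle x^2 \rangle}$$

where $k_B$ is the Boltzmann constant and $T$ the absolute temperature.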

Event Based Continuous Hough Transform

The sparse data produced by the DVS and their temporal asynchrony allow the dedicated algorithm to run in real time at extremely high speed. The mathematical derivation of the Continuous Hough Transform is as follows.
Let us start from the case where the radius R is fixed or known to the user, so only the center coordinates $(x_c, y_c)^T$ must be determined. Recall that $S_t$ represents the set of events active at time $t$, stored in a First In First Out (FIFO) data structure. The event-based Hough transform using asynchronous events can then be defined as:

$$H : S_{t_k} \rightarrow A_k = H(S_{t_k}) \tag{2.1}$$
where $A_k$ is, in the case of a two-parameter search, an integer matrix satisfying:

$$A_k(i, j) = \begin{cases} 1 & \text{if } (x - i)^2 + (y - j)^2 = R^2 \text{ for some } (x, y) \in S_{t_k}, \quad \forall i, j \in [0, 128] \\ 0 & \text{otherwise} \end{cases} \tag{2.2}$$
The Hough space voted by all the incoming events in the FIFO is expressed as $H(S_{t_{n-l+1}, t_{n+1}})$, in which $t_{n+1}$ is the current time and $l$ is the FIFO size. The general algorithm is given by the following recursive formula, which carries the main advantage of the Continuous Hough over conventional ones:

$$H(S_{t_{n-l+1}, t_{n+1}}) = H(S_{t_{n-l}, t_n}) + H(S_{t_{n+1}}) - H(S_{t_{n-l}}) \tag{2.3}$$

with the convention that $H(S_{t_{n-l}, t_n})$ is the sum of the votes in Hough space by the events generated during $(t_{n-l}, t_n)$. The location update by the event based Hough is thus continuous: when the newest event comes into effect, the oldest one in the FIFO is dropped correspondingly.
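The following C++ sketch shows one possible implementation of the update in equation (2.3) under stated assumptions: the Event structure, the accumulator layout and the circle-voting routine are illustrative choices, not the thesis's actual code.

```cpp
#include <cmath>
#include <cstddef>
#include <deque>
#include <vector>

struct Event { int x, y; };  // pixel address of a DVS event (polarity unused)

class ContinuousHough {
public:
    ContinuousHough(int w, int h, int R, std::size_t fifoSize)
        : width(w), height(h), radius(R), capacity(fifoSize), acc(w * h, 0) {}

    // Equation (2.3): the new event votes in, the oldest event un-votes.
    void push(const Event& e) {
        vote(e, +1);                       // + H(S_{t_{n+1}})
        fifo.push_back(e);
        if (fifo.size() > capacity) {
            vote(fifo.front(), -1);        // - H(S_{t_{n-l}})
            fifo.pop_front();
        }
    }

private:
    // Add `sign` to every accumulator cell on the circle of radius R around e.
    void vote(const Event& e, int sign) {
        const int steps = 64;
        const double PI = 3.14159265358979323846;
        for (int k = 0; k < steps; ++k) {
            double a = 2.0 * PI * k / steps;
            int i = static_cast<int>(std::lround(e.x + radius * std::cos(a)));
            int j = static_cast<int>(std::lround(e.y + radius * std::sin(a)));
            if (i >= 0 && i < width && j >= 0 && j < height)
                acc[j * width + i] += sign;
        }
    }

    int width, height, radius;
    std::size_t capacity;
    std::vector<int> acc;      // Hough accumulator, updated incrementally
    std::deque<Event> fifo;    // sliding window S_{t_{n-l+1}, t_{n+1}}
};
```

Because each event only adds and removes one circle of votes, the cost per event is constant, which is what makes the per-event update rates reported later in this chapter attainable.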
Afterwards, the circle center $(x_c, y_c)^T$ can be detected simply by selecting the local maximum in Hough space. In applications where the object movement is small, such as Brownian motion detection, this pixel-level resolution is generally too low for a precise analysis. In order to achieve sub-pixel resolution, a center-of-mass algorithm (or Centroid) is computed within the neighbourhood of the peak in Hough space instead of choosing only the local maximum.
More practical issues must be considered in real applications. A region of interest (ROI) can be set for each particle if the scene contains multiple circles. Moreover, to dynamically track all the moving particles, their appearance or disappearance should also be detected, by setting validation thresholds and performing boundary checking during each iteration. The event-based Continuous Hough circle transform is summarized in Algorithm 1.
Algorithm 1 Event-based Hough Circle Tracking with fixed radius
1: Initialize the parameters
2: while a new event e(p, t_{n+1}) arrives do
3:   Apply equation (2.3)
4:   if (x, y) ∈ one of the ROIs then
5:     Update the center of the particle
6:   else
7:     Detect a new local maximum close to (x, y) for a possible new particle
8:   end if
9:   Centroid in Hough space for sub-pixel resolution
10:  Threshold validation and boundary checking
11: end while
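As a sketch of the centroid step in Algorithm 1, the center-of-mass refinement around a Hough-space peak could look as follows in C++; the window half-width and all names are assumptions for illustration.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Center-of-mass refinement around the Hough-space peak (px, py):
// vote counts act as weights, yielding a sub-pixel circle center.
// Assumes the window around the peak contains at least one vote.
std::pair<double, double> refinePeak(const std::vector<int>& acc,
                                     int width, int height,
                                     int px, int py, int halfWin = 2) {
    double sum = 0.0, sx = 0.0, sy = 0.0;
    for (int j = std::max(0, py - halfWin); j <= std::min(height - 1, py + halfWin); ++j)
        for (int i = std::max(0, px - halfWin); i <= std::min(width - 1, px + halfWin); ++i) {
            double v = acc[j * width + i];
            sum += v; sx += v * i; sy += v * j;
        }
    return {sx / sum, sy / sum};
}
```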

Multiple Microsphere Tracking

In this section, preliminary experiments detecting multiple microspheres displaced by a piezoelectric microstage are conducted to validate the tracking algorithm. Using the DVS's input, the asynchronous Continuous Hough Transform achieves an information refresh rate of up to 30 kHz.

Setup

The experimental setup is shown in Fig.2.5. The light beam is projected from above and the microscope objective (×20, NA 0.25 or ×100, NA 0.8, Olympus) is placed upside down to observe samples in the Petri dish. The Petri dish is placed inside a chamber equipped with a thermocouple-based temperature controller. The platform is a controllable micro-stage driven by a DC-Servo Motor Controller (C-843, Physik Instrumente). The light beam is split into two rays and redirected separately to the DVS and to a classic CMOS camera (Basler). The two cameras are aligned to produce the same view of the microscope. The CMOS camera provides visual feedback to the user for system control. Polystyrene microspheres (Polysciences) of sizes 6µm, 10µm and 20µm are placed in pure water for tracking.

Experiments

Experiments are separated into two parts. The first part demonstrates the capability of simultaneously tracking multiple microspheres with the Continuous Hough, while the second part analyses the real-time processing speed.
In the first part of the experiment, thousands of microspheres with a constant diameter of 10µm are tracked under ×20 magnification. The field of view contains on average 50 microspheres moving simultaneously. The DVS sensitivity is adjusted manually to our experimental conditions for better detection.
In Fig.2.6(a), a screen view shows a frame obtained by summing events over a time window. The FIFO is set to a maximal size of 2000 events. As shown in Fig.2.6(a), the algorithm is capable of detecting several dozen moving microspheres simultaneously. It is important to point out that only the activities of the pixels are taken into account; the polarity therefore has no impact on the computational process. The ground truth envelope, corresponding to the number of microspheres manually counted every 5 seconds (Fig.2.7), shows that the method does not suffer any loss. A frame-based Hough using accumulation maps of events from the silicon retina has been implemented to provide a comparison.
Figure 2.6: (a) An example of multiple microsphere (10µm) detection. Black dots are the events accumulated in the FIFO; the red circles are the detected microspheres. (b) An image of one microsphere taken by a conventional camera.
The sudden drop in the circle count for the frame-based Hough is due to the irregular motions of the microspheres: when the micro-stage stops or accelerates, it causes a sudden burst of noise in the images, which the frame-based Hough technique cannot handle without a robust tracking filter. The event-based Continuous Hough overcomes this problem thanks to its « update-from-last-position » mechanism, which intrinsically acts as a tracking filter.
Further experiments examine the time consumption of the algorithm. The program, implemented in C++ under Windows, runs in real time. The microspheres used had a known radius of 6µm and the magnification was set to ×100. The results of the real-time experiments are shown in Fig.2.8, where three microspheres are moving in the water current. The position refresh rate is shown in Fig.2.9: the average time for processing an event is around 32.5µs, i.e. a frequency of about 30 kHz. If the events are distributed over three circle edges, the position of each microsphere can then be computed at a mean frequency close to 10 kHz.
It is important to emphasize that no specific hardware was used apart from the DVS. The experiment relied on an off-the-shelf 2.9 GHz dual-core desktop PC for the whole system, including the microstage control, with a total CPU load of little more than 50% and a memory consumption of about 4 MB. In terms of rapidity, memory consumption and hardware price, event-based vision outperforms frame-based vision by far. It is also important to point out that the event-based method used here is a much more complex algorithm than the classical center-of-mass method. Compared to the pure tracking of events reported in [Delbruck 2007], the computational load of the Hough method is about five times higher, as it aims to detect specific shapes, whereas in Delbruck et al. the algorithm tracks blobs of events regardless of shape. Introducing a constraint on the tracked shapes ensures that, in the case of several moving objects in collision, only spheres will be located. This brings the robustness to the measurements that is usually needed in real-world micromanipulation, notably in the application of teleoperated optical tweezers.

