Event-Based Segment Detection Algorithm


Event-Based Line Detection

Let us consider a stream of events generated by a neuromorphic camera. Let us remind the reader that an event is defined as a quadruplet $e_k = [\mathbf{u}_k^T, t_k, p_k]^T$, which represents a change of luminance (increase, $p_k = 1$, or decrease, $p_k = -1$) occurring at time $t_k$ at the position $\mathbf{u}_k = [x_k, y_k]^T$ on the focal plane. Here, we look for the set of lines that best fits the stream of recent past events, in the sense of minimizing the sum of the squared distances from the events to the lines.
As the first events arrive, we initialize some line models passing through them. Then, as more events are detected, we either assign them to a pre-existing model or generate a new one. This process results in the generation of a certain number of line models, many of which will not be reliable as they are estimated from few events. These line models are initially marked as inactive, meaning that we keep updating the models but the number of events supporting them has not yet reached a sufficiently high value.
A line model switches from inactive to active if, within a short period of time, a sufficient number of events has contributed to the update of the model. We impose two conditions for an event to be assigned to the update of a line model:
1. The distance between the event and the line has to be smaller than a threshold.
2. The visual flow of the event [32] must be perpendicular to the line.
The first condition is trivial, since the event must be spatio-temporally related to the line. The second condition is also a logical one, because the visual flow is the event-based equivalent of the optical flow estimated from frames, i.e. the pattern of apparent motion of objects in a visual scene, caused by their motion relative to the camera (the specific naming emphasizes its event-based nature: the visual flow is not estimated from the standard luminance-consistency criterion). Thus, the visual flow estimation is subject to the same aperture problem explained in [87]: when the flow is computed locally, only the component normal to the local contours of the object can be obtained. Consequently, the direction of the flow encodes the orientation of rectilinear edges, i.e. lines on the focal plane. The proposed event-based line detection technique then fits the line parameters according to these two constraints upon arrival of each event.
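To make the two tests concrete, here is a minimal sketch of the assignment check for an incoming event, assuming hypothetical thresholds `dist_thresh` (in pixels) and `angle_thresh_deg` (in degrees); neither the names nor the values are taken from the original work:

```python
import numpy as np

def point_line_distance(x, y, rho, theta):
    """Perpendicular distance from (x, y) to the line x*cos(theta) + y*sin(theta) - rho = 0."""
    return abs(x * np.cos(theta) + y * np.sin(theta) - rho)

def can_assign(event_xy, flow, rho, theta, dist_thresh=3.0, angle_thresh_deg=20.0):
    """Check the two assignment conditions for an incoming event.

    event_xy   : (x, y) position of the event on the focal plane
    flow       : (fx, fy) visual-flow vector estimated at the event
    rho, theta : current parameters of the candidate line model
    Thresholds are illustrative values, not those of the original work.
    """
    x, y = event_xy
    # Condition 1: the event must lie close enough to the line.
    if point_line_distance(x, y, rho, theta) > dist_thresh:
        return False
    # Condition 2: the visual flow must be (approximately) perpendicular to the
    # line, i.e. aligned with the line normal n = (cos(theta), sin(theta)).
    n = np.array([np.cos(theta), np.sin(theta)])
    f = np.asarray(flow, dtype=float)
    f_norm = np.linalg.norm(f)
    if f_norm == 0.0:
        return False
    cos_angle = abs(f @ n) / f_norm  # |cos| of the angle between flow and normal
    return cos_angle > np.cos(np.radians(angle_thresh_deg))
```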

Event-Based Visual Flow

The first step to track lines from events is to compute the visual flow of the incoming events. To that end, we apply the technique described in [32], where the visual flow is obtained from the normal to a 3D plane locally approximating the spatio-temporal surface described by the incoming events. In the standard plane parametrization $Ax + By + t + C = 0$, the normal to a plane defined by data $[x, y, t]^T$ is directly related to the parameters $A$ and $B$. To estimate these parameters we apply least-squares fitting. This is done on the centered data to avoid ill-conditioned systems [88]. The plane equation is then equivalent to
$$\tilde{t} = A\tilde{x} + B\tilde{y}, \qquad (3.1)$$
where the tilde indicates that the average value has been subtracted from each variable. Here, we consider the $m$ most recent events in a given spatial neighborhood of the current event $e_k$ (as in [81]), and we denote by $I_k$ the set of indices identifying these events (i.e., if $i \in I_k$ then $e_i$ is one of the $m$ most recent events in a spatial neighborhood of $e_k$). Stacking equation (3.1) for all events in $I_k$ yields an overdetermined linear system in $A$ and $B$, which we solve in the least-squares sense.
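As an illustration of this step, the following sketch fits the plane parameters $A$ and $B$ of Eq. (3.1) by least squares on centered data; the function name and the NumPy formulation are ours, and the visual flow is then recovered from $(A, B)$ as described in [32]:

```python
import numpy as np

def fit_local_plane(events_xyt):
    """Least-squares fit of t~ = A*x~ + B*y~ (Eq. 3.1) to the m most recent
    events in a spatial neighborhood of the current event.

    events_xyt : (m, 3) array whose rows are [x_i, y_i, t_i]
    Returns the plane parameters (A, B).
    """
    data = np.asarray(events_xyt, dtype=float)
    centered = data - data.mean(axis=0)   # centering avoids ill-conditioned systems
    xy = centered[:, :2]                  # [x~, y~]
    t = centered[:, 2]                    # t~
    (A, B), *_ = np.linalg.lstsq(xy, t, rcond=None)
    return A, B
```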

Event-Based Least-Squares Line Fitting

With the flow computed, we now describe the event-based algorithm for least-squares line fitting. This method is an iterative, event-based adaptation of the classical least-squares fitting of lines with perpendicular offsets [89].
Lines in 2D space are defined by two parameters. We choose here the $\rho$–$\theta$ parametrization [90], which avoids the infinite or near-infinite slope that one can encounter with the slope-intercept parametrization [91]. Line models are identified by their index $i$ and denoted $L^{(i)}\big(\rho_k^{(i)}, \theta_k^{(i)}\big)$, where $\rho_k^{(i)}$ is the distance from the line to the origin and $\theta_k^{(i)}$ the angle between the normal $\mathbf{n}_k^{(i)}$ to the line and the horizontal (see Fig. 3.1). Following the standard notation in this document, the subindex $k$ relates the line to time, i.e. $\theta_k^{(i)}$ is the angle of line $L^{(i)}$ at time $t_k$, etc. The equation of the line is then: $x\cos(\theta_k^{(i)}) + y\sin(\theta_k^{(i)}) - \rho_k^{(i)} = 0$.
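For reference, a minimal batch sketch of the classical perpendicular-offset fit in the $\rho$–$\theta$ parametrization (the method the thesis adapts into an iterative, event-based update) could look as follows; the names are illustrative:

```python
import numpy as np

def fit_line_rho_theta(points):
    """Batch least-squares line fit with perpendicular offsets, returning the
    (rho, theta) parameters of x*cos(theta) + y*sin(theta) - rho = 0.

    points : (n, 2) array of (x, y) event positions supporting the line
    """
    pts = np.asarray(points, dtype=float)
    mean = pts.mean(axis=0)
    cov = np.cov((pts - mean).T)
    # Eigenvector of the smallest eigenvalue = line normal (direction of least variance).
    _, eigvec = np.linalg.eigh(cov)           # eigenvalues in ascending order
    normal = eigvec[:, 0]
    theta = np.arctan2(normal[1], normal[0])  # angle of the normal w.r.t. the horizontal
    rho = float(mean @ normal)                # signed distance of the line to the origin
    if rho < 0.0:                             # keep rho >= 0 by flipping the normal
        rho, theta = -rho, theta + np.pi
    return rho, theta
```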

Event-Based Segment Detection

In this section we extend the line detection algorithm to segment detection. Segments are detected from a line by localizing its discontinuities, which are the segments' endpoints. Hence, we add to the line detection algorithm a procedure that outputs the positions of these endpoints at the same time the line is detected and tracked. In our implementation, the algorithm produces a pair of endpoint events for each segment. These events signal the position in space and time of the endpoints. As such, the output of this segment detector is a stream of endpoint events that can be used as features for matching and learning tasks.
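As an illustration of what such a stream carries, a hypothetical container for a single endpoint event might look like this (the field names are ours, not those of the original implementation):

```python
from dataclasses import dataclass

@dataclass
class EndpointEvent:
    """One endpoint event emitted by the segment detector: the position of a
    segment endpoint, the time of its (re)estimation, the supporting line
    model, and which of the two endpoints it is."""
    x: float
    y: float
    t: float
    line_id: int
    which_end: int  # 0 or 1
```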


Generation of Endpoint Events

Pixels of a line are labeled as active if their activity is greater than a predefined threshold $P_{\mathrm{up}}$ (we choose the symbol $P_{\mathrm{up}}$ to distinguish it from the activity threshold $A_{\mathrm{up}}$ for the lines). The search for neighboring active pixels is then performed on the $x$ or $y$ projection, depending on the orientation of the line (see Fig. 3.2). Roughly speaking, if $|\theta_k^{(i)}| > \pi/4$ (equivalently, $\cos(\theta_k^{(i)}) < \cos(\pi/4)$), we can say that the line is closer to being horizontal than vertical: in that case, each pixel of the line corresponds to a unique $x$ coordinate, and we consequently perform our search on $X_k^{(i)}$. Otherwise we employ $Y_k^{(i)}$.
When an oriented event $o_k$ is assigned to a line $L^{(i)}$ we look for active pixels in its neighborhood, starting at the position of the event. Thus, if $|\theta_k^{(i)}| > \pi/4$ we look for recent pixels in $X_k^{(i)}$ starting at $x_k$; otherwise we perform our search in $Y_k^{(i)}$ starting at $y_k$. The endpoints are then given by the furthest active pixel in each direction, as shown in Fig. 3.2, where we impose a minimum length of $l/2$ on the segments. Two endpoint events are finally generated, giving the endpoint positions in space and time after a smoothing process using a blob tracker similar to the one introduced in [67].
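A simplified sketch of this endpoint search is given below; it assumes the active pixels of the line form a contiguous run in the chosen projection, and the names and the 1D activity array are illustrative rather than the exact data structures of the thesis:

```python
import numpy as np

def find_endpoints(activity, start, p_up, min_len):
    """Search a 1D projection of pixel activities (X_k or Y_k, depending on the
    line orientation) for the furthest active pixels on each side of `start`.

    activity : 1D array of activities indexed by the projected coordinate
    start    : projected coordinate (x_k or y_k) of the assigned event
    p_up     : activity threshold above which a pixel is considered active
    min_len  : minimum admissible segment length (l/2 in the text)
    Returns (lo, hi) or None if the segment is too short.
    """
    lo = hi = start
    # Walk towards decreasing coordinates while pixels remain active.
    while lo - 1 >= 0 and activity[lo - 1] > p_up:
        lo -= 1
    # Walk towards increasing coordinates while pixels remain active.
    while hi + 1 < len(activity) and activity[hi + 1] > p_up:
        hi += 1
    return (lo, hi) if hi - lo >= min_len else None
```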

3D Matching

Once the matching edge has been determined, we look for the point on the edge that has generated the event. The high temporal resolution of the sensor allows us to take this point as the closest to the line of sight of the incoming event. Performing this matching between an incoming event on the focal plane and a physical point on the object allows us to overcome issues that appear when the computation is performed directly in the focal plane: the perspective projection onto the focal plane preserves neither distances nor angles, i.e. the closest point on the edge in the focal plane is not necessarily the closest 3D point of the object. The camera calibration parameters allow us to map each event at pixel $\mathbf{u}_k$ to a line of sight passing through the camera's center. The 3D matching problem is then equivalent to a search for the smallest distance between any two points lying on the object's edge and on the line of sight.
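As a geometric illustration of this search, the following sketch returns the point on a 3D edge segment that is closest to the line of sight of an event; restricting the solution to the segment and all parameter names are our assumptions:

```python
import numpy as np

def closest_point_on_edge_to_ray(p0, p1, ray_dir, cam_center=np.zeros(3)):
    """Point on the 3D edge segment [p0, p1] closest to the line of sight
    through the camera center with direction `ray_dir`."""
    p0 = np.asarray(p0, dtype=float)
    p1 = np.asarray(p1, dtype=float)
    d1 = p1 - p0                            # edge direction
    d2 = np.asarray(ray_dir, dtype=float)   # line-of-sight direction
    r = p0 - np.asarray(cam_center, dtype=float)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ r, d2 @ r
    denom = a * c - b * b
    if abs(denom) < 1e-12:                  # edge nearly parallel to the line of sight
        s = 0.0
    else:
        s = (b * e - c * d) / denom         # parameter of the closest point along the edge
    s = float(np.clip(s, 0.0, 1.0))         # stay on the segment
    return p0 + s * d1
```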

Table of contents:

1 Introduction 
2 Asynchronous Part-based Shape Tracking 
2.1 Introduction
2.2 Stream of Visual Events
2.3 Part-Based Shape Tracking
2.3.1 Gaussian Blob Trackers
2.3.2 Spring-Like Connections
2.3.3 Using the energy as a matching criterion
2.3.4 Remarks
2.3.5 Global algorithm
2.4 Results
2.4.1 Tracking a planar grid
2.4.2 Face tracking
2.4.3 Computational Time
2.5 Discussion
3 Event-Based Line and Segment Detection 
3.1 Introduction
3.2 Event-Based Line Detection
3.2.1 Event-Based Visual Flow
3.2.2 Event-Based Least-Squares Line Fitting
3.2.3 Optimization strategy
3.2.4 Event-Based Line Detection Algorithm
3.3 Event-Based Segment Detection
3.3.1 Activity of each pixel
3.3.2 Generation of Endpoint Events
3.3.3 Event-Based Segment Detection Algorithm
3.4 Results
3.4.1 Controlled scene
3.4.2 Urban scenes
3.4.3 Computational time
3.5 Discussion
4 Event-Based 3D Pose Estimation 
4.1 Introduction
4.2 Event-based 3D pose estimation
4.2.1 Problem formulation
4.2.2 Rotation formalisms
4.2.3 2D edge selection
4.2.4 3D matching
4.2.5 Rigid motion estimation
4.2.6 Global algorithm
4.3 Experiments
4.3.1 Icosahedron
4.3.2 House
4.3.3 2D matching using Gabor events
4.3.4 Fast spinning object
4.3.5 Degraded temporal resolution
4.3.6 Computational time
4.4 Discussion
5 An Event-Based Solution to the Perspective-n-Point Problem 
5.1 Introduction
5.2 Event-based solution to the PnP problem
5.2.1 Problem Description
5.2.2 Rotation formalisms
5.2.3 Object-space collinearity error
5.2.4 Optimal Translation
5.2.5 Rotation
5.2.6 Global algorithm
5.3 Results
5.3.1 Synthetic scene
5.3.2 Real recordings
5.3.3 Computational time
5.4 Discussion
6 Conclusion and Future Perspectives 
6.1 Conclusion
6.2 Future Perspectives
Appendices
A Event-based Weighted Mean 
A.1 Standard Weighting Strategy
B Solution to the damped harmonic oscillator equation 
C Mathematical development yielding the optimal ,  
D Optimization strategy for the computation of the line parameters 
E Iterative expression for the auxiliary parameters 
F Disambiguation of cos(θ), sin(θ)
G Solutions to the system of equations of the 3D matching step 
G.1 Singular case
G.2 General case
H Mathematical development yielding the optimal translation 
