
## Mesh Histograms of Oriented Gradients

Mesh Histograms of Oriented Gradients, or MeshHOG, by Zaharescu et al. [143] relies on the normal of the point P to construct a local reference frame (LRF). A scalar function f : ℝ³ → ℝ is also defined on the mesh. The descriptor consists of a histogram computed from ∇f, the gradient of f, over the neighbourhood {Q_i}_i of P. For each point Q_i, ∇f(Q_i) is projected onto the three orthonormal planes of the LRF. Each plane is divided into 4 polar regions, and 8 subregions are generated according to the orientation of ∇f(P) for each of them. Finally, the three histograms, one per plane, are concatenated to get the MeshHOG descriptor. The algorithm is effective for rigid and non-rigid deformations; however, it relies on the mesh structure.
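The projection-and-binning step can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, the omission of the 4-region polar binning, and the fixed bin counts are simplifying assumptions; only the general layout (one orientation histogram per LRF plane, concatenated) follows the description above.

```python
import numpy as np

def meshhog_sketch(lrf, grads, n_orient=8):
    """Toy sketch of the MeshHOG histogram layout (hypothetical code).

    lrf   : (3, 3) array whose rows are the orthonormal axes of the
            local reference frame at point P (assumed given).
    grads : (N, 3) array, gradient of the scalar function f at each
            neighbour Q_i of P.
    """
    histograms = []
    # The three orthonormal planes of the LRF are spanned by the
    # axis pairs (0,1), (1,2) and (0,2).
    for a, b in [(0, 1), (1, 2), (0, 2)]:
        # Project each gradient onto the plane.
        u = grads @ lrf[a]
        v = grads @ lrf[b]
        # Orientation of the projected gradient within that plane.
        theta = np.arctan2(v, u) % (2 * np.pi)
        # One orientation histogram per plane (the 4-polar-region
        # spatial binning of the real descriptor is omitted here).
        hist, _ = np.histogram(theta, bins=n_orient, range=(0.0, 2 * np.pi))
        histograms.append(hist)
    # Concatenate the per-plane histograms into the descriptor.
    return np.concatenate(histograms)
```

With the identity LRF and 8 orientation bins, the sketch returns a 24-dimensional vector (3 planes × 8 bins).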

### Fast Point Feature Histograms

Fast Point Feature Histograms (FPFH), proposed by Rusu et al. [105], is a descriptor relying on pairwise relationships between P and its local neighbourhood {Q_i}_i.

For each Q_i, the vector r_i = PQ_i is computed, then measurements are calculated between n, the normal at P, and r_i. These measurements, based on the angles between the two vectors and their distance, are accumulated into a histogram: the FPFH descriptor. FPFH is an enhanced version of PFH [104], motivated by the poor computational efficiency of the latter. In PFH, the number of bins in the histogram is higher and more measurements are computed. However, the main alteration is in the way the pairs of points are generated. Indeed, in PFH the pairs of points on which the measures are computed are not only between P and Q_i but between all pairs in the neighbourhood, i.e. Q_i and Q_j (with Q_i possibly being P). The number of pairs being significantly smaller in FPFH, the processing time is also smaller, even if the algorithm remains slow.
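The angular measurements for one point pair can be sketched as follows. This follows the commonly cited Darboux-frame formulation of the PFH/FPFH pair features; the exact conventions (axis ordering, normalisation) are assumptions on my part rather than a transcription of the thesis's code.

```python
import numpy as np

def pair_features(p, n_p, q, n_q):
    """Angular pair features in the style of PFH/FPFH (sketch).

    p, q : 3D points; n_p, n_q : their unit normals.
    Returns (alpha, phi, theta, d): three angles built from a
    Darboux frame anchored at p, plus the point-to-point distance.
    """
    d_vec = q - p
    d = np.linalg.norm(d_vec)
    u = n_p                          # frame axis 1: source normal
    v = np.cross(d_vec / d, u)       # frame axis 2: orthogonal to u and the pair direction
    v /= np.linalg.norm(v)
    w = np.cross(u, v)               # frame axis 3: completes the frame
    alpha = np.dot(v, n_q)           # how n_q tilts out of the (u, w) plane
    phi = np.dot(u, d_vec / d)       # angle between n_p and the pair direction
    theta = np.arctan2(np.dot(w, n_q), np.dot(u, n_q))  # in-plane rotation of n_q
    return alpha, phi, theta, d
```

Accumulating (alpha, phi, theta) over all pairs into a binned histogram yields the descriptor; FPFH restricts which pairs are formed, as explained above.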

#### Curvature of 2D plane curves

In the Euclidean 2D space, the curvature measures how much a curve deviates from a straight line. It evaluates the variation of the direction of the tangent to the curve: the bigger the variation, the higher the curvature. Put simply, the curvature gives an idea of how much a steering wheel must be turned to take a turn: gently if the curvature is small, sharply if it is large. Formally, if γ is a curve that is twice continuously differentiable, i.e. γ′ is continuous and differentiable with γ″ continuous, then γ can be locally approximated by a parametric pair of functions (cf. Deschamps et al. [32]): ∀t ∈ ℝ, γ(t) = (x(t), y(t)).
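For such a parametrization, the signed curvature has the standard closed form κ(t) = (x′y″ − y′x″) / (x′² + y′²)^{3/2}. A minimal numerical check on a circle of radius r, whose curvature is constant and equal to 1/r:

```python
import numpy as np

def curvature(x1, y1, x2, y2):
    """Signed curvature of a plane curve gamma(t) = (x(t), y(t)),
    given its first (x1, y1) and second (x2, y2) derivatives at t."""
    return (x1 * y2 - y1 * x2) / (x1**2 + y1**2) ** 1.5

# Circle of radius r: gamma(t) = (r cos t, r sin t).
r, t = 2.0, 0.7
x1, y1 = -r * np.sin(t), r * np.cos(t)     # first derivatives
x2, y2 = -r * np.cos(t), -r * np.sin(t)    # second derivatives
k = curvature(x1, y1, x2, y2)              # constant curvature 1/r
```

The numerator reduces to r² and the denominator to r³, so k = 1/r regardless of t, matching the intuition that a small circle (sharp turn) has high curvature.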

**Multi-Scale Principal Curvature Point Cloud Descriptor**

The premise is the same: use curvature histograms to characterize objects. However, we now incorporate information from several scales.

1. As previously, we compute the curvature, but we do not pick only one value for the scale λ: we compute the curvature histogram for several values. In our case λ ∈ {2, 4, …, 10}%, so we get λ = 2%, λ = 4%, etc.

2. Once more, we concatenate the histograms: the descriptor is defined as the concatenation of all the histograms, sorted by increasing value of λ_i. The use of a multi-scale approach provides several advantages. First of all, we avoid having to select a single value for λ: among the computed values, some inevitably fit the object. The lowest scales give information on very local curvature, such as rough spots or noise, while the greater scales inform about the global flatness of the shape. With the multi-scale approach, the curvature of the shape can be seen at different scales, each providing information over a different range.
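The two steps above can be sketched as follows. Everything here is illustrative rather than the thesis's implementation: the scale set, the bin count, the normalisation, and the shortcut of histogramming a precomputed curvature field over neighbourhoods of the centroid (instead of re-estimating curvature per point at each scale) are all assumptions made for brevity.

```python
import numpy as np

def multiscale_descriptor(points, curv, scales=(0.02, 0.04, 0.10), bins=8):
    """Hypothetical multi-scale curvature-histogram descriptor:
    one histogram per scale lambda, concatenated by increasing lambda.

    points : (N, 3) point cloud
    curv   : (N,) per-point curvature values in [0, 1]
    scales : lambda values, as fractions of the bounding-box diagonal
    """
    diag = np.linalg.norm(points.max(0) - points.min(0))  # bounding-box diagonal
    centre = points.mean(0)
    hists = []
    for lam in sorted(scales):                 # increasing lambda, as in step 2
        radius = lam * diag
        # Points within the lambda-neighbourhood (brute force for clarity).
        mask = np.linalg.norm(points - centre, axis=1) <= radius
        h, _ = np.histogram(curv[mask], bins=bins, range=(0.0, 1.0))
        hists.append(h / max(h.sum(), 1))      # normalise each scale's histogram
    return np.concatenate(hists)               # length = bins * len(scales)
```

Small λ neighbourhoods capture local roughness; large λ neighbourhoods reflect the overall flatness, so the concatenated vector mixes both ranges of information.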

**Global and Local Point Cloud Descriptor**

The second approach aims to preserve the best of SPC and to balance its weaknesses with the help of local descriptor algorithms, hence the name: Global and Local Point Cloud descriptor. Preliminary tests show that SPC with larger scales (λ ≥ 8%) performs very well under resampling and noise but poorly under occlusion. To adopt a global/local approach, we need to find a method which reflects the local properties; in particular, it must provide excellent results in the case of occlusion. With λ = 10%, for instance, we study the global properties of the object, its overall shape and flatness. As a result, the recognition rate is low in the presence of occlusions, which can instead be handled using local properties. Finally, a scheme to combine the two must be chosen: which one should be selected in case of divergence between the local and the global features.
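One possible combination scheme, purely as a sketch of the design question raised above: trust the global descriptor by default and fall back on the local one when occlusion is suspected. The rule, the threshold, and the `occlusion_ratio` input are hypothetical; the thesis may resolve the divergence differently.

```python
def combine(global_scores, local_scores, occlusion_ratio, thresh=0.3):
    """Hypothetical fusion rule for a global/local descriptor pair.

    global_scores, local_scores : dicts mapping candidate object id
    to a matching score; occlusion_ratio : estimated fraction of the
    surface that is occluded.
    """
    # Global features handle noise/resampling well; local features
    # handle occlusion well, so switch on the estimated occlusion.
    scores = global_scores if occlusion_ratio <= thresh else local_scores
    return max(scores, key=scores.get)
```

A softer alternative would weight the two score sets by the occlusion estimate instead of switching hard between them.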

**Robustness to Common Recording Errors**

This dataset aims to test the robustness of the algorithm with respect to basic geometric transformations and data corruption. It is made up of standard 3D model datasets: the Stanford 3D Scanning Repository [125, 28, 67], the A. Mian dataset [86] and the McGill 3D Shape Benchmark [114]. They can be seen in Figure 3.4 and represent animal toys, sculptures, decorative objects, etc. The meshes were removed, and the models centred and normalized. The result is a high-resolution dataset: the objects hold an average of 150,000 points with no perceptible noise. There is, however, a significant gap between the smallest object, made of only 13,000 points (hand), and the biggest, holding more than 230,000 (angel).

**Table of contents :**

Acknowledgments

Abstract

Résumé

Contents

List of Figures

List of Tables

**Chapter 1 Introduction **

1.1 Motivation

1.1.1 3D Data and Application

1.1.2 3D Data Acquisition

1.1.3 Distinctiveness of 3D data

1.1.4 The Point Cloud

1.2 Research Goal

1.3 Thesis Contribution

1.4 Thesis Outline

**Part I Instance Retrieval of 3D Point Cloud **

**Chapter 2 3D Descriptor and Geometrical Curvature **

2.1 3D Descriptors

2.1.1 Spin Image

2.1.2 Mesh Histograms of Oriented Gradients

2.1.3 Fast Point Feature Histograms

2.1.4 Signature of Histogram of Orientation

2.2 Geometrical Curvature

2.2.1 Curvature of 2D plane curves

2.2.2 Curvature of 3D space curves

2.2.3 Curvature of curves on surfaces

**Chapter 3 Curvature Based Descriptor **

3.1 Curvature Based Descriptor

3.1.1 Principal Curvature Point Cloud Descriptor

3.1.2 Multi-Scale Principal Curvature Point Cloud Descriptor

3.1.3 Global and Local Point Cloud Descriptor

3.2 Robustness to Common Recording Errors

3.2.1 Dataset description

3.2.2 Results

3.3 Validation with Deformable Objects

3.3.1 Deformable Object

3.3.2 Results

**Part II Classification of 3D Point Cloud **

**Chapter 4 Deep Learning with 3D Point Cloud **

4.1 Supervised Machine Learning

4.1.1 Definition

4.1.2 Loss Function

4.1.3 Overfitting

4.1.4 Classification Algorithms

4.2 Support Vector Machine Algorithm

4.2.1 Linear SVM

4.2.2 Non Linear SVM

4.2.3 Soft Margin

4.2.4 Multiclass SVM

4.3 Multi-Layer Perceptron

4.3.1 The Perceptron

4.3.2 Multi-Layer Perceptron

4.4 3D Models Datasets

4.4.1 KITTI Dataset

4.4.2 ShapeNet Dataset

4.4.3 PartNet Dataset

4.4.4 ModelNet Dataset

4.5 3D Classification Algorithms

4.5.1 Deep Learning on Descriptor

4.5.2 Deep Learning on 3D Data Projection

4.5.3 Deep Learning on RGB-D Data

4.5.4 Deep Learning on Volumetric Data

4.5.5 Deep Learning on Multi-View Data

4.5.6 Deep Learning on Point Cloud

**Chapter 5 Classification of 3D Point Cloud **

5.1 Classification with 3D Descriptor

5.1.1 First Approach: SVM with MPC2 Descriptor

5.1.2 CurvNet

5.1.3 Experimental Results

5.1.4 Discussion

5.2 Robustness of Deep Learning Algorithm to Noise

5.2.1 Algorithm and implementation

5.2.2 Experimental Results

5.2.3 Discussion

5.3 Multi-scale PointNet

5.3.1 Algorithm

5.3.2 Implementation

**Chapter 6 Conclusion and Perspectives **

List of Publications

**Bibliography **