Gaussian random projections for general membership problems 


Lower bounds for projected dimension

It is interesting to ask whether we can obtain a lower value of the dimension k by using a smarter construction of the map T (for instance, a nonlinear one). It turns out that the answer is negative, i.e. the value k = O(ε^{-2} log n) is almost optimal. Indeed, Noga Alon shows in [5] that there exists a set of n points for which the dimension k has to be at least k = Ω(log n / (ε^2 log(1/ε))) in order to preserve the distances between all pairs of points. Moreover, when the mapping T is required to be linear, Larsen and Nelson [16] prove a matching lower bound k = Ω(ε^{-2} log n), so the value k = O(ε^{-2} log n) is actually the best possible. Therefore, the linearity requirement of T in Definition 2.2.1 is quite natural.
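To give a concrete feel for the order of magnitude that k = O(ε^{-2} log n) implies, the short computation below evaluates one commonly quoted sufficient bound with explicit constants (a Dasgupta-Gupta-style bound; the constants are standard in the literature and are not taken from this thesis). Note that the target dimension depends only on n and ε, not on the original dimension.

    import math

    def jl_dimension(n, eps):
        # One standard sufficient bound for the projected dimension:
        # k >= 4 ln(n) / (eps^2/2 - eps^3/3).  The constants are illustrative.
        return math.ceil(4 * math.log(n) / (eps**2 / 2 - eps**3 / 3))

    for n in (10**4, 10**6):
        for eps in (0.1, 0.2):
            print(f"n = {n:>8}, eps = {eps}: k >= {jl_dimension(n, eps)}")

For example, a million points can be embedded with 10% distortion into roughly twelve thousand dimensions, regardless of whether the points originally live in dimension 10^5 or 10^9.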

Constructions of random projections

In this section, we introduce several popular constructions of random projections. We first consider sub-Gaussian random matrices, the simplest case, which contains many of the well-known constructions mentioned in the previous section. Then we move to fast constructions in Subsection 2.3.2. For simplicity, we will discard the scaling factor √(n/k) in the random matrix T. This does not affect the main ideas we present but makes them clearer and more concise.

Sub-gaussian random projections

Perhaps the simplest construction of a random projection is a matrix with i.i.d. entries drawn from a certain "good distribution". Examples of such good distributions are the normal distribution N(0, 1) (used by Indyk and Motwani [13]), the Rademacher distribution on {−1, +1} (used by Achlioptas [1]), the uniform distribution U(−a, a), and so on. In [18], Matoušek shows that all these distributions are special cases of a general class of so-called sub-Gaussian distributions, and proves that a random projection can still be obtained by sampling from this general class.
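As a sketch of how such constructions behave in practice, the snippet below builds a Gaussian and a Rademacher projection matrix (with the 1/√k scaling written out explicitly, since scaling factors are dropped in the text above) and measures the worst pairwise distance distortion on random data. The dimensions, sample sizes and use of NumPy are illustrative choices, not taken from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k, num_points = 1000, 200, 50          # original dim, projected dim, sample size
    X = rng.standard_normal((num_points, n))  # points stored as rows

    # Gaussian construction (Indyk-Motwani): i.i.d. N(0,1) entries, scaled by 1/sqrt(k)
    T_gauss = rng.standard_normal((k, n)) / np.sqrt(k)

    # Rademacher construction (Achlioptas): i.i.d. +/-1 entries, same scaling
    T_rad = rng.choice([-1.0, 1.0], size=(k, n)) / np.sqrt(k)

    def max_distortion(T, X):
        # largest relative error | ||Tx - Ty|| / ||x - y|| - 1 | over all pairs
        Y = X @ T.T                            # project all points at once
        worst = 0.0
        for i in range(len(X)):
            for j in range(i + 1, len(X)):
                ratio = np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
                worst = max(worst, abs(ratio - 1.0))
        return worst

    print("Gaussian:  ", max_distortion(T_gauss, X))
    print("Rademacher:", max_distortion(T_rad, X))

Both constructions give comparable distortion, which is exactly the point of Matoušek's sub-Gaussian generalization: the tail behaviour of the entries, not their particular distribution, drives the Johnson-Lindenstrauss guarantee.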

Fast Johnson-Lindenstrauss transforms

In many applications, the bottleneck in applying random projection techniques is the cost of matrix-vector multiplications. Indeed, the complexity of multiplying a k × n matrix T with a vector is of order O(kn). Even with Achlioptas' sparse construction, the computation time is only decreased by a factor of 3. Therefore, it is important to construct the random matrix T in such a way that the product Tx can be computed as fast as possible. It is natural to expect that the sparser the matrix T, the faster the product Tx becomes. However, due to the uncertainty principle in analysis, if the vector x is itself sparse, then its image under a sparse matrix T can be largely distorted. Therefore, a random projection that satisfies the Johnson-Lindenstrauss lemma cannot be too sparse. One of the ingenious ideas for constructing fast random projectors is due to Ailon and Chazelle [3], who propose the so-called Fast Johnson-Lindenstrauss Transform (FJLT). The idea is to precondition a (possibly sparse) vector with an orthogonal matrix in order to enlarge its support. After this preconditioning step, we obtain a "smooth" vector, which can then be projected by a sparse random projector. More precisely, an FJLT is constructed as a product of three real-valued matrices T = PHD, where D is a random diagonal matrix of independent ±1 signs, H is the normalized Walsh-Hadamard matrix, and P is a sparse random matrix whose nonzero entries are Gaussian.
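A minimal sketch of the PHD construction is given below. The sparsity parameter q and the variance of the nonzero entries of P follow the standard Ailon-Chazelle description rather than anything quoted in this excerpt, the original dimension is assumed to be a power of two so that the Walsh-Hadamard matrix exists, and the dense matrix product shown here is only for illustration (a real implementation applies H via a fast O(n log n) transform and never forms it explicitly).

    import numpy as np
    from scipy.linalg import hadamard

    rng = np.random.default_rng(0)
    n, k = 1024, 64                      # n must be a power of two for hadamard()

    # D: random diagonal sign matrix (together with H, this is the preconditioning step)
    D = np.diag(rng.choice([-1.0, 1.0], size=n))

    # H: normalized Walsh-Hadamard matrix, an orthogonal matrix that "smooths" sparse vectors
    H = hadamard(n) / np.sqrt(n)

    # P: sparse k x n matrix; each entry is nonzero with probability q, and nonzero
    # entries are drawn from N(0, 1/q).  This particular choice of q is illustrative.
    q = min(1.0, np.log(n) ** 2 / n)
    mask = rng.random((k, n)) < q
    P = np.where(mask, rng.normal(0.0, 1.0 / np.sqrt(q), size=(k, n)), 0.0)

    T = P @ H @ D                        # the FJLT, applied to a vector x as T @ x

    # Even a maximally sparse input keeps its norm after projection (up to the
    # sqrt(k) factor, since the 1/sqrt(k)-type scaling is dropped as in the text).
    x = np.zeros(n); x[3] = 1.0
    print(np.linalg.norm(T @ x) / np.sqrt(k), np.linalg.norm(x))

Without the preconditioning HD, the sparse matrix P alone would distort such a one-coordinate vector badly; after smoothing, its mass is spread over all n coordinates and the sparse projection behaves like a dense one.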


Table of contents:

Acknowledgement
1 Introduction 
1.1 Random projection versus Principal Component Analysis
1.2 Structure of the thesis
1.3 Preliminaries on Probability Theory
2 Random projections and Johnson-Lindenstrauss lemma 
2.1 Johnson-Lindenstrauss lemma
2.2 Definition of random projections
2.2.1 Normalized random projections
2.2.2 Preservation of scalar products
2.2.3 Lower bounds for projected dimension
2.3 Constructions of random projections
2.3.1 Sub-gaussian random projections
2.3.2 Fast Johnson-Lindenstrauss transforms
3 Random projections for linear and integer programming 
3.1 Restricted Linear Membership problems
3.2 Projections of separating hyperplanes
3.3 Projection of minimum distance
3.4 Certificates for projected problems
3.5 Preserving Optimality in LP
3.5.1 Transforming the cone membership problems
3.5.2 The main approximate theorem
3.6 Computational results
4 Random projections for convex optimization with linear constraints 
4.1 Introduction
4.1.1 Our contributions
4.2 Random -subgaussian sketches
4.3 Random orthonormal systems
4.4 Sketch-and-project method
5 Gaussian random projections for general membership problems 
5.1 Introduction
5.1.1 Our contributions
5.2 Finite and countable sets
5.3 Sets with low doubling dimensions
6 Random projections for trust-region subproblems 
6.1 Derivative-free optimization and trust-region methods
6.2 Random projections for linear and quadratic models
6.2.1 Approximation results
6.2.2 Trust-region subproblems with linear models
6.2.3 Trust-region subproblems with quadratic models
7 Concluding remarks 
7.1 Summary
7.2 Further research
References
