

Phase Retrieval for Imaging Problems

Chapter abstract: We study convex relaxation algorithms for phase retrieval on imaging problems. We show that exploiting structural assumptions on the signal and the observations, such as sparsity, smoothness or positivity, can significantly speed up convergence and improve recovery performance. We detail numerical results in molecular imaging experiments simulated using data from the Protein Data Bank (PDB).
The material of this part is based on the following publication:
F. Fogel, I. Waldspurger, A. d’Aspremont, Phase retrieval for imaging problems. To appear in Mathematical Programming Computation.

Introduction

Phase retrieval seeks to reconstruct a complex signal, given a number of observations on the magnitude of linear measurements, i.e., solve
\[
\begin{array}{ll}
\text{find} & x \\
\text{such that} & |Ax| = b
\end{array}
\tag{2.1}
\]
in the variable $x \in \mathbb{C}^p$, where $A \in \mathbb{C}^{n \times p}$ and $b \in \mathbb{R}^n$. This problem has direct applications in X-ray and crystallography imaging, diffraction imaging, Fourier optics or microscopy for example, in problems where physical limitations mean detectors usually capture the intensity of observations but cannot recover their phase. In what follows, we will focus on problems arising in diffraction imaging, where $A$ is usually a Fourier transform, often composed with one or multiple masks. The Fourier structure, through the FFT, considerably speeds up basic linear operations, which allows us to solve large-scale convex relaxations on realistically large imaging problems. We will also observe that in many of the imaging problems we consider, the Fourier transform is very sparse, with known support (we lose the phase but observe the magnitude of Fourier coefficients), which allows us to considerably reduce the size of our convex phase retrieval relaxations.
Because the phase constraint $|Ax| = b$ is non-convex, the phase recovery problem (2.1) is non-convex. Several greedy algorithms have been developed (see Gerchberg and Saxton, 1972; Fienup, 1982; Griffin and Lim, 1984; Bauschke et al., 2002, among others), which alternate projections on the range of $A$ and on the non-convex set of vectors $y$ such that $|y| = |Ax|$. While empirical performance is often good, these algorithms can stall in local minima. A convex relaxation was introduced in (Chai et al., 2011) and (Candes et al., 2015a) (who call it PhaseLift) by observing that $|Ax|^2$ is a linear function of $X = xx^*$, which is a rank-one Hermitian matrix, using the classical lifting argument for non-convex quadratic programs developed in (Shor, 1987; Lovász and Schrijver, 1991). The recovery of $x$ is thus expressed as a rank minimization problem over positive semidefinite Hermitian matrices $X$ satisfying some linear conditions, i.e., a matrix completion problem. This last problem has received a significant amount of attention because of its link to compressed sensing and the NETFLIX collaborative filtering problem. This minimum rank matrix completion problem is approximated by a semidefinite program which has been shown to recover $x$ for several (random) classes of observation operators $A$ (Candes et al., 2013, 2014, 2015a).
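For completeness, here is the lifting step in one line (a standard argument, sketched here rather than quoted from the references): writing $a_i^*$ for the $i$-th row of $A$,
\[
|a_i^* x|^2 = (a_i^* x)(a_i^* x)^* = a_i^* (x x^*) a_i = \mathrm{Tr}(a_i a_i^* X), \qquad X = x x^*,
\]
so the constraints $|Ax|^2 = b^2$ are linear in $X$, and (2.1) amounts to finding a rank-one Hermitian matrix $X \succeq 0$ satisfying them; dropping (or penalizing) the rank constraint yields the semidefinite relaxation.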
On the algorithmic side, (Waldspurger et al., 2015) showed that the phase retrieval problem (2.1) can be reformulated in terms of a single phase variable, which can be read as an extension of the MAXCUT combinatorial graph partitioning problem over the unit complex torus, allowing fast algorithms designed for solving semidefinite relaxations of MAXCUT to be applied to the phase retrieval problem.
On the experimental side, phase recovery is a classical problem in Fourier optics for example (Goodman, 2008), where a diffraction medium takes the place of a lens. This has direct applications in X-ray and crystallography imaging, diffraction imaging or microscopy (Harrison, 1993; Bunk et al., 2007; Johnson et al., 2008; Miao et al., 2008; Dierolf et al., 2010).
Here, we implement and study several efficient convex relaxation algorithms for phase retrieval on imaging problem instances where $A$ is based on a Fourier operator. We show in particular how structural assumptions on the signal and the observations (e.g., sparsity, smoothness, positivity, known support, oversampling, etc.) can be exploited to both speed up convergence and improve recovery performance. While no experimental data is available from diffraction imaging problems with multiple randomly coded illuminations, we simulate numerical experiments of this type using molecular density information from the Protein Data Bank (Berman et al., 2002). Our results show in particular that the convex relaxation is stable and that in some settings, as few as two random illuminations suffice to reconstruct the image.
This chapter is organized as follows. Section 2.2 briefly recalls the structure of some key algorithms used in phase retrieval. Section 2.3 describes applications to imaging problems and how structural assumptions can significantly reduce the cost of solving large-scale instances and improve recovery performance. Section 2.4 details some numerical experiments, while Section 2.6 describes the interface to the numerical library developed for these problems.

Notations

We write $\mathbb{S}_p$ (resp. $\mathbb{H}_p$) the cone of symmetric (resp. Hermitian) matrices of dimension $p$; $\mathbb{S}_p^+$ (resp. $\mathbb{H}_p^+$) denotes the set of positive symmetric (resp. Hermitian) matrices. We write $\|\cdot\|_p$ the Schatten $p$-norm of a matrix, that is, the $p$-norm of the vector of its eigenvalues (in particular, $\|\cdot\|_\infty$ is the spectral norm). We write $A^\dagger$ the (Moore-Penrose) pseudoinverse of a matrix $A$, and $A \circ B$ the Hadamard (or componentwise) product of the matrices $A$ and $B$. For $x \in \mathbb{R}^p$, we write $\mathrm{diag}(x)$ the matrix with diagonal $x$; when $X \in \mathbb{H}_p$ however, $\mathrm{diag}(X)$ is the vector containing the diagonal elements of $X$. For $X \in \mathbb{H}_p$, $X^*$ is the Hermitian transpose of $X$, that is $X^* = (\bar{X})^T$. $|\cdot|$ refers to the complex modulus; when $x$ is a vector in $\mathbb{C}^p$, $|x|$ is the vector with coefficients $(|x_1|, \ldots, |x_p|)$. Finally, we write $b^2$ the vector with components $b_i^2$, $i = 1, \ldots, n$.


Algorithms

In this section, we briefly recall several basic algorithmic approaches to solve the phase retrieval problem (2.1). Early methods were all based on extensions of an alternating projection algorithm. However, recent results showed that phase retrieval could be interpreted as a matrix completion problem similar to the NETFLIX problem, a formulation which yields both efficient convex relaxations and recovery guarantees.

Greedy algorithms

The phase retrieval problem (2.1) can be rewritten as
\[
\begin{array}{ll}
\text{minimize} & \|Ax - y\|_2^2 \\
\text{subject to} & |y| = b,
\end{array}
\tag{2.2}
\]
where we now optimize over both the phased observations $y \in \mathbb{C}^n$ and the signal $x \in \mathbb{C}^p$. Several greedy algorithms attempt to solve this problem using variants of alternating projections, one iteration minimizing the quadratic error (the objective of (2.2)), the next normalizing the moduli (to satisfy the constraint). We detail some of the most classical examples in the paragraphs that follow.
The Gerchberg-Saxton algorithm in (Gerchberg and Saxton, 1972), for instance, seeks to reconstruct $y = Ax$ and alternates between orthogonal projections on the range of $A$ and normalization of the magnitudes $|y|$ to match the observations $b$; a minimal sketch is given below. The cost per iteration of this method is minimal, but convergence (when it happens) is often slow.
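The following Python sketch illustrates the alternating scheme just described, for a generic matrix $A$ (the function name and the pseudoinverse-based projection are illustrative choices, not the implementation used in this chapter):

```python
import numpy as np

def gerchberg_saxton(A, b, n_iter=500, seed=0):
    """Alternate between the magnitude constraint |y| = b and
    projection onto the range of A (via a least-squares solve)."""
    rng = np.random.default_rng(seed)
    n, p = A.shape
    A_pinv = np.linalg.pinv(A)
    # start from the measured magnitudes with a random phase
    y = b * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n))
    for _ in range(n_iter):
        x = A_pinv @ y                            # least-squares signal estimate
        z = A @ x                                 # project y onto the range of A
        y = b * z / np.maximum(np.abs(z), 1e-12)  # re-impose |y| = b, keep the phase of z
    return x  # recovered up to a global phase
```

In practice, the dense pseudoinverse would be replaced by FFT-based operators, as discussed in the imaging section below.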
A classical “input-output” variant, detailed here as algorithm Fienup, introduced in (Fienup, 1982), adds an extra penalization step which usually speeds up convergence and improves recovery performance when additional information is available on the support of the signal. Oversampling the Fourier transform forming $A$ in imaging problems usually helps performance as well. Of course, in all these cases, convergence to a global optimum cannot be guaranteed, but empirical recovery performance is often quite good.

PhaseCut: semidefinite relaxation in phase

As in (Waldspurger et al., 2015), we can rewrite the phase reconstruction problem (2.1) in terms of a phase variable $u$ (such that $|u| = \mathbf{1}$) instead of the signal $x$. In the noiseless case, we then write the constraint $|Ax| = b$ as $Ax = \mathrm{diag}(b)u$, where $u \in \mathbb{C}^n$ is a phase vector satisfying $|u_i| = 1$ for $i = 1, \ldots, n$, so problem (2.1) becomes
\[
\begin{array}{ll}
\text{minimize} & \|Ax - \mathrm{diag}(b)u\|_2^2 \\
\text{subject to} & |u_i| = 1, \quad i = 1, \ldots, n,
\end{array}
\tag{2.3}
\]
where we optimize over both the phase $u \in \mathbb{C}^n$ and the signal $x \in \mathbb{C}^p$. While the objective of this last problem is jointly convex in $(x, u)$, the phase constraint $|u_i| = 1$ is not.
Now, given the phase, signal reconstruction is a simple least squares problem, i.e., given $u$ we obtain $x$ as
\[
x = A^\dagger \mathrm{diag}(b) u, \tag{2.4}
\]
where $A^\dagger$ is the pseudoinverse of $A$. Replacing $x$ by this value in problem (2.3), and using the fact that $\mathbf{I} - AA^\dagger$ is an orthogonal projector (hence Hermitian and idempotent), the phase recovery problem becomes
\[
\begin{array}{ll}
\text{minimize} & u^* M u \\
\text{subject to} & |u_i| = 1, \quad i = 1, \ldots, n,
\end{array}
\tag{2.5}
\]
in the variable $u \in \mathbb{C}^n$, where the Hermitian matrix $M = \mathrm{diag}(b)(\mathbf{I} - AA^\dagger)\,\mathrm{diag}(b)$ is positive semidefinite. This problem is non-convex in the phase variable $u$. Waldspurger et al. (2015) detail the greedy algorithm Greedy to locally optimize (2.5) in the phase variable.
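A local optimization of (2.5) can be sketched as a simple coordinate-wise update: fixing all coordinates but $u_i$, the objective depends on $u_i$ only through $2\,\mathrm{Re}(\bar{u}_i c_i)$ with $c_i = \sum_{j \neq i} M_{ij} u_j$, which is minimized over $|u_i| = 1$ by $u_i = -c_i/|c_i|$. The Python sketch below follows this update; it illustrates the idea rather than reproducing the exact algorithm Greedy of the reference.

```python
import numpy as np

def greedy_phase(M, u0, n_sweeps=100):
    """Coordinate-wise local minimization of u* M u over |u_i| = 1."""
    u = u0.astype(complex).copy()
    n = u.shape[0]
    for _ in range(n_sweeps):
        for i in range(n):
            c = M[i, :] @ u - M[i, i] * u[i]   # c_i = sum_{j != i} M_ij u_j
            if abs(c) > 1e-12:
                u[i] = -c / abs(c)             # unit-modulus minimizer of 2*Re(conj(u_i)*c_i)
    return u

# Once a phase vector u is found, the signal follows from (2.4):
# x = np.linalg.pinv(A) @ (b * u)
```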
A convex relaxation to (2.5) was also derived in (Waldspurger et al., 2015), using the classical lifting argument for non-convex quadratic programs developed in (Shor, 1987; Lovász and Schrijver, 1991). This relaxation is written
\[
\begin{array}{ll}
\text{minimize} & \mathrm{Tr}(UM) \\
\text{subject to} & \mathrm{diag}(U) = \mathbf{1}, \; U \succeq 0,
\end{array}
\tag{PhaseCut}
\]
which is a semidefinite program (SDP) in the matrix $U \in \mathbb{H}_n$. This problem has a structure similar to the classical MAXCUT relaxation, and instances of reasonable size can be solved using specialized implementations of interior point methods designed for that problem (Helmberg et al., 1996). Larger instances are solved in (Waldspurger et al., 2015) using the block-coordinate descent algorithm BlockPhaseCut.
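For small instances, the PhaseCut SDP can be solved directly with a generic modeling tool; the sketch below uses cvxpy with an SDP-capable solver such as SCS (an assumption about tooling, not the specialized solvers mentioned above) and rounds the solution to a unit-modulus phase vector via its leading eigenvector. At realistic imaging sizes, BlockPhaseCut is the appropriate choice.

```python
import numpy as np
import cvxpy as cp

def phasecut_small(M):
    """Solve the PhaseCut SDP for a (small) Hermitian PSD matrix M,
    then round to a feasible phase vector u with |u_i| = 1."""
    n = M.shape[0]
    U = cp.Variable((n, n), hermitian=True)
    constraints = [cp.diag(U) == 1, U >> 0]
    cp.Problem(cp.Minimize(cp.real(cp.trace(U @ M))), constraints).solve()
    # rank-one rounding: take the leading eigenvector of the SDP solution
    w, V = np.linalg.eigh(U.value)
    v = V[:, -1]
    return v / np.maximum(np.abs(v), 1e-12)
```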
Ultimately, algorithmic choices heavily depend on problem structure, and these will be discussed in detail in the section that follows. In particular, we will study how to exploit structural information on the signal (nonnegativity, sparse 2D Fast Fourier Transform, etc.) to solve realistically large instances formed in diffraction imaging applications.

Imaging problems

In the imaging problems we study here, various illuminations of a single object are performed through randomly coded masks, hence the matrix $A$ is usually formed using a combination of random masks and Fourier transforms, and we have significant structural information on both the signal we seek to reconstruct (regularity, etc.) and the observations (power law decay in the frequency domain, etc.). Many of these additional structural hints can be used to speed up numerical operations, accelerate convergence and improve phase retrieval performance. The paragraphs that follow explore these points in more detail.

Fourier operators

In practical applications, because of the structure of the linear operator $A$, we may often reduce numerical complexity, using the Fourier structure of $A$ to speed up the single matrix-vector product in the algorithm BlockPhaseCut. We detail the case where $A$ corresponds to a Fourier transform combined with $k$ random masks, writing $I_1, \ldots, I_k \in \mathbb{C}^p$ for the illumination masks.
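As an illustration, a matrix-free implementation of $A$ and its adjoint might look as follows. This is a one-dimensional sketch of the masked-Fourier model just described (the chapter's actual operators involve 2D FFTs and oversampling, so the names and details here are illustrative assumptions):

```python
import numpy as np

def apply_A(x, masks):
    """A x: stack the FFTs of the signal multiplied by each mask I_1, ..., I_k."""
    return np.concatenate([np.fft.fft(m * x) for m in masks])

def apply_A_star(y, masks):
    """Adjoint of apply_A, using ifft (which includes a 1/p factor, hence the *p)."""
    p = masks[0].shape[0]
    blocks = y.reshape(len(masks), p)
    return sum(np.conj(m) * np.fft.ifft(z) * p for m, z in zip(masks, blocks))

# The products A x and A* y then cost O(k p log p) via the FFT instead of
# O(k p^2) with an explicit matrix, which is what makes BlockPhaseCut-type
# iterations practical at imaging sizes.
```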

Table of contents:

1 Introduction 
1.1 Contributions
1.2 Notations
1.3 Phase retrieval, seriation and ranking problems
1.4 Challenges
1.5 Connection between phase retrieval, seriation and ranking
1.6 Standard convex and spectral relaxation techniques
2 Phase Retrieval for Imaging Problems 
2.1 Introduction
2.2 Algorithms
2.3 Imaging problems
2.4 Numerical Experiments
2.5 Discussion
2.6 User guide
3 Convex Relaxations for Permutation Problems 
3.1 Introduction
3.2 Seriation, 2-SUM & consecutive ones
3.3 Convex relaxations
3.4 Algorithms
3.5 Applications & numerical experiments
3.6 Discussion
3.7 Appendix
4 Spectral Ranking Using Seriation 
4.1 Introduction
4.2 Seriation, Similarities & Ranking
4.3 Spectral Algorithms
4.4 Exact Recovery with Corrupted and Missing Comparisons
4.5 Spectral Perturbation Analysis
4.6 Numerical Experiments
4.7 Discussion
4.8 Appendix
5 Conclusion 

