
### Abstract

Iterative projection algorithms are successfully being used as a substitute for lenses to recombine, numerically rather than optically, light scattered by illuminated objects. Images obtained computationally allow aberration-free, diffraction-limited imaging and the possibility of using radiation for which no lenses exist. The challenge of this imaging technique is transferred from the lenses to the algorithms. We evaluate these new computational "instruments" developed for the phase-retrieval problem and discuss acceleration strategies.

This work was performed under the auspices of the U.S. Department of Energy by the Lawrence Livermore National Laboratory under Contract No. W-7405-ENG-48 and the Director, Office of Energy Research. This work was partially funded by the National Science Foundation through the Center for Biophotonics. The Center for Biophotonics, a National Science Foundation Science and Technology Center, is managed by the University of California, Davis, under Cooperative Agreement No. PHY0120999. The author acknowledges useful discussions with H. N. Chapman, M. R. Howells, J. C. H. Spence, and D. R. Luke.

I. INTRODUCTION

II. THE PHASE PROBLEM

III. SETS, PROJECTORS AND METRICS

IV. ITERATIVE PROJECTION ALGORITHMS

A. Positivity

V. STEEPEST DESCENT, CONJUGATE GRADIENT, AND MIN-MAX ALGORITHMS

A. Feedback and the saddle-point problem

VI. CONCLUSIONS

### Key Topics

- X-ray diffraction
- Solvents
- X-ray optics
- Eigenvalues
- Fourier transforms

## Figures

Examples of sets and projectors. (a) Support: the axes represent the values of three pixels of an image known to be zero outside the support. The vertical axis represents a pixel outside the support, while the horizontal plane represents pixels inside it. The projection onto this set is performed simply by setting to zero all the pixels outside the support. (b) Modulus: a pixel (in Fourier space) with a given complex value is projected onto the closest point on the circle defined by the measured modulus. If there is some uncertainty in the value of the radius, the circle becomes a band. The circle is a nonconvex set, since a linear combination of two points on the set does not, in general, lie on the set. Also represented in the figure is the projection onto the real axis (reality projection).
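The two projections described in panels (a) and (b) are straightforward to state numerically. A minimal NumPy sketch, assuming the measured Fourier modulus is given as an array; the function names `project_support` and `project_modulus` are illustrative, not the paper's notation:

```python
import numpy as np

def project_support(rho, support):
    """Support projection: set to zero every pixel outside the support mask."""
    return rho * support

def project_modulus(rho, modulus):
    """Modulus projection: keep the phase of the Fourier transform and
    replace its magnitude with the measured modulus, i.e., move each
    Fourier pixel to the nearest point on its circle of allowed values."""
    F = np.fft.fft2(rho)
    return np.fft.ifft2(modulus * np.exp(1j * np.angle(F)))
```

The support projection is linear (and its set convex); the modulus projection acts pixel by pixel in Fourier space on a nonconvex set, which is the source of the local minima discussed below.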


The reflector applies the same step as the projector twice: R = 2P − I.
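Given any projector, the reflector is one line; a sketch (the generic helper name `reflect` is an assumption for illustration):

```python
import numpy as np

def reflect(project, rho):
    """Reflector R = 2P - I: move to the projection and then the same
    distance beyond it."""
    return 2 * project(rho) - rho
```

For a linear projector such as the support projection, reflecting twice returns the starting point, since R² = 4P² − 4P + I = I when P is linear and idempotent.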


Geometric representation of various algorithms using a simplified version of the constraints: two lines intersecting. (a) Error-reduction algorithm: we start from a point on the modulus constraint by assigning a random phase to the diffraction pattern. The projection onto the modulus constraint finds the point on the set nearest to the current one. The arrows indicate the gradients of the error metric. (b) The speed of convergence is increased by replacing the projector onto the support with the reflector. The algorithm jumps between the modulus constraint (solid diagonal line) and its mirror image with respect to the support constraint (dotted line). (c) Hybrid input–output; see text [Eq. (19)]. The space perpendicular to the support set is represented by the vertical dotted line. (d) Difference map; see text [Eq. (22)].
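The ER and HIO updates sketched in panels (a) and (c) can be written compactly. A minimal NumPy sketch assuming a precomputed modulus array and a binary support mask; the function names and the feedback parameter default `beta=0.9` are illustrative choices, not prescribed by the source:

```python
import numpy as np

def _project_modulus(rho, modulus):
    # Fourier-magnitude projection: keep phases, impose the measured modulus.
    F = np.fft.fft2(rho)
    return np.fft.ifft2(modulus * np.exp(1j * np.angle(F)))

def error_reduction(rho, support, modulus, n_iter=100):
    # ER: alternate the modulus and support projections.
    for _ in range(n_iter):
        rho = _project_modulus(rho, modulus) * support
    return rho

def hio(rho, support, modulus, beta=0.9, n_iter=100):
    # HIO: keep the modulus-projected image inside the support;
    # outside, apply a negative feedback step of strength beta.
    for _ in range(n_iter):
        pm = _project_modulus(rho, modulus)
        rho = np.where(support > 0, pm, rho - beta * pm)
    return rho
```

ER never increases the distance to the two sets, which is why it can stagnate at local minima, while the feedback term in HIO lets the iterate leave them, as illustrated in the figures that follow.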


The basic features of the iterative projection algorithms can be understood from this simple model of two lines intersecting (a). The aim is to find the intersection. The ER and solvent-flipping (SF) algorithms converge in a gradient-type fashion (the distance to the two sets never increases), the solvent-flipping method being slightly faster when the angle between the two lines is small. HIO and its variants move along a spiral path. The Lagrangian is represented in grayscale, and the descent–ascent directions are indicated by arrows. When the two lines do not intersect (b), HIO and its variants keep moving in the direction of the gap between the two lines, away from the local minimum. ER, SF, and RAAR converge at (or close to) the local minimum.


The horizontal line represents a support constraint, while the two circles represent a nonconvex constraint such as the modulus constraint. The gradient-type algorithms (ER and SF) converge to the local minimum, while HIO and its variants follow the descent–ascent direction indicated by the arrows.


Positivity constraint: the support constraint is represented by a horizontal line originating from 0. A barrier due to the positivity constraint changes the behavior of the algorithms, which no longer follow the descent–ascent direction. HIO bounces on the axis, while the other algorithms are smoother.


A simple 2D phase-retrieval problem: only two variables (pixel values) are unknown. The solution, the global minimum, is the top minimum in the figures. The colormap and contour lines represent the error metric, and the descent direction is indicated by the arrows. The error-reduction algorithm (a) proceeds toward the local minimum without optimizing the step length and stagnates at the local minima. The steepest-descent method (b) moves toward the local minimum along a zigzag trajectory, while the conjugate-gradient method reaches the solution faster (c). The HIO method generally converges to the global minimum; however, some rare starting points lead to a local minimum (d). The saddle-point optimization with optimized step length [Eq. (37)] stagnates in the same local minimum as HIO (e). The conjugate-gradient version avoids stagnation (f). The saddle-point optimization using a two-dimensional search for the saddle point reaches the global minimum from a larger range of starting points than HIO (g). The conjugate-gradient version (h), (i) reaches the solution faster if the conjugate directions are obtained independently (i) rather than from their sum.
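The steepest-descent behavior of panel (b) can be sketched by using the fact that the gradient of the squared distance to the modulus set is proportional to the residual between the iterate and its modulus projection. A hedged sketch with a simple backtracking line search; the function names and the line-search choice are illustrative, not the paper's exact scheme:

```python
import numpy as np

def modulus_error(rho, modulus):
    # Squared distance of |F(rho)| from the measured modulus.
    return np.linalg.norm(np.abs(np.fft.fft2(rho)) - modulus) ** 2

def steepest_descent(rho, support, modulus, n_iter=50):
    for _ in range(n_iter):
        # Modulus projection P_M(rho); the descent direction is the
        # negative gradient, rho - P_M(rho), masked by the support.
        F = np.fft.fft2(rho)
        pm = np.fft.ifft2(modulus * np.exp(1j * np.angle(F)))
        grad = (rho - pm) * support
        # Backtracking line search: halve the step until the error drops.
        t, e0 = 1.0, modulus_error(rho, modulus)
        while modulus_error(rho - t * grad, modulus) > e0 and t > 1e-6:
            t *= 0.5
        rho = rho - t * grad
    return rho
```

Without the line search this reduces to an ER-like step; optimizing the step length along the gradient is what produces the zigzag trajectory of panel (b), which conjugate-gradient directions smooth out.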


Test figure used for benchmarking. The object of elements is surrounded by empty space. The whole image has elements.


Percentage of successful reconstructions, over many tests starting from random phases, as a function of the number of iterations. The support is the only constraint: positivity and reality are not enforced, and the support is loose (larger than the object by one additional row and column).


## Tables

Summary of various algorithms.


Benchmark of various algorithms.

