<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="research-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Photonics</journal-id>
<journal-title>Frontiers in Photonics</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Photonics</abbrev-journal-title>
<issn pub-type="epub">2673-6853</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">865666</article-id>
<article-id pub-id-type="doi">10.3389/fphot.2022.865666</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Photonics</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>A Multi-Wavelength Phase Retrieval With Multi-Strategy for Lensfree On-Chip Holography</article-title>
<alt-title alt-title-type="left-running-head">Wang et al.</alt-title>
<alt-title alt-title-type="right-running-head">Multi-Wavelength Phase Retrieval With Multi-Strategy</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name>
<surname>Wang</surname>
<given-names>Qinhua</given-names>
</name>
<uri xlink:href="https://loop.frontiersin.org/people/1644664/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Ma</surname>
<given-names>Jianshe</given-names>
</name>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Su</surname>
<given-names>Ping</given-names>
</name>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1285557/overview"/>
</contrib>
</contrib-group>
<aff>
<institution>Tsinghua Shenzhen International Graduate School</institution>, <institution>Tsinghua University</institution>, <addr-line>Shenzhen</addr-line>, <country>China</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1355689/overview">Jun Wang</ext-link>, Sichuan University, China</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/680937/overview">Chao Zuo</ext-link>, Nanjing University of Science and Technology, China</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1035752/overview">Jianglei Di</ext-link>, Guangdong University of Technology, China</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Ping Su, <email>su.ping@sz.tsinghua.edu.cn</email>
</corresp>
<fn fn-type="other">
<p>This article was submitted to Optical Information Processing and Holography, a section of the journal Frontiers in Photonics</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>28</day>
<month>03</month>
<year>2022</year>
</pub-date>
<pub-date pub-type="collection">
<year>2022</year>
</pub-date>
<volume>3</volume>
<elocation-id>865666</elocation-id>
<history>
<date date-type="received">
<day>30</day>
<month>01</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>21</day>
<month>02</month>
<year>2022</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2022 Wang, Ma and Su.</copyright-statement>
<copyright-year>2022</copyright-year>
<copyright-holder>Wang, Ma and Su</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<abstract>
<p>Owing to its convenient operation and robust imaging, multi-wavelength phase retrieval has been widely applied in the lensfree on-chip digital holographic microscope (LFOCDHM). Nevertheless, the insufficient diffraction variation and small number of measurements in the LFOCDHM make it difficult for multi-wavelength phase retrieval to eliminate the twin image. We propose a multi-wavelength phase retrieval method for the LFOCDHM based on an energy constraint, a global update strategy, and vector extrapolation acceleration. Simulations and experiments on the LFOCDHM show that the proposed method achieves efficient twin-image elimination and robust reconstruction with only three illumination wavelengths while maintaining fast convergence. Moreover, the proposed method is simple and non-parametric. We believe it can provide a promising solution for the LFOCDHM.</p>
</abstract>
<kwd-group>
<kwd>phase retrieval</kwd>
<kwd>computational imaging</kwd>
<kwd>lensfree on-chip holography</kwd>
<kwd>image reconstruction</kwd>
<kwd>multi-wavelength</kwd>
</kwd-group>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>The lensfree on-chip digital holographic microscope (LFOCDHM) is an emerging computational imaging technology. Owing to its large field of view, high resolution, cost-effectiveness, portability, and ease of operation (<xref ref-type="bibr" rid="B15">Ozcan and McLeod 2016</xref>; <xref ref-type="bibr" rid="B20">Zhang et al., 2020</xref>), the LFOCDHM plays an important role in analysis applications requiring large statistics, such as the Papanicolaou smear test for cervical cancer (<xref ref-type="bibr" rid="B8">Greenbaum and Ozcan 2012</xref>; <xref ref-type="bibr" rid="B7">Greenbaum et al., 2014</xref>) and blood smear inspection for malaria diagnosis (<xref ref-type="bibr" rid="B3">Bishara et al., 2011</xref>). The LFOCDHM is essentially in-line holography, so a twin image appears in the direct reconstruction result and degrades the imaging quality. The iterative phase retrieval (IPR) method provides a feasible and simple computational way to eliminate the twin image (<xref ref-type="bibr" rid="B6">Gerchberg 1972</xref>; <xref ref-type="bibr" rid="B4">Fienup 1982</xref>; <xref ref-type="bibr" rid="B11">Latychevskaia 2019</xref>). To avoid the need for prior knowledge of the sample, various multi-intensity IPRs have been proposed, including lateral translation of the image sensor (<xref ref-type="bibr" rid="B18">Wu et al., 2016</xref>) and axial scanning of the sample-to-sensor distance (<xref ref-type="bibr" rid="B19">Zhang et al., 2017</xref>).</p>
<p>In contrast to the IPR algorithms described above, wavelength scanning (<xref ref-type="bibr" rid="B1">Bao et al., 2012</xref>; <xref ref-type="bibr" rid="B14">Noom et al., 2014</xref>; <xref ref-type="bibr" rid="B16">Sanz et al., 2015</xref>; <xref ref-type="bibr" rid="B22">Zuo et al., 2015</xref>) is common practice on the LFOCDHM: it avoids complex and time-consuming mechanical operations and steadily eliminates the twin image through multi-wavelength phase retrieval (MWPR). However, because of the close sample-to-sensor distance and the small number of measurements required to ensure imaging resolution and convenient operation, the LFOCDHM cannot produce sufficient diffraction diversity. This limitation causes a series of problems for MWPR, such as low convergence accuracy, susceptibility to noise interference, and slow convergence, making it difficult to eliminate the twin image. Therefore, an MWPR with effective elimination, stable reconstruction, and fast convergence is essential for the LFOCDHM.</p>
<p>The performance of MWPR can be improved in three main ways. First, when appropriate constraints are imposed on the object plane, efficient elimination can be achieved with a small number of wavelengths. For example, the weighted feedback constraint (<xref ref-type="bibr" rid="B9">Cheng et al., 2018</xref>) and the energy constraint (<xref ref-type="bibr" rid="B10">Latychevskaia and Fink, 2007</xref>; <xref ref-type="bibr" rid="B12">Li et al., 2016</xref>) achieved efficient elimination without prior knowledge of the sample. Second, an appropriate update strategy improves the stability and robustness against noise. For example, the adaptive step-size strategy (<xref ref-type="bibr" rid="B21">Zuo et al., 2016</xref>; <xref ref-type="bibr" rid="B17">Wu et al., 2021</xref>) and the global update strategy (<xref ref-type="bibr" rid="B13">Liu et al., 2015</xref>; <xref ref-type="bibr" rid="B5">Gao and Cao, 2021</xref>) reinforced the true signal while reducing the influence of noise without prior knowledge of the noise statistics. Last but not least, acceleration techniques for iterative image restoration algorithms preserve convergence accuracy while significantly improving convergence speed. For instance, the vector extrapolation acceleration method (<xref ref-type="bibr" rid="B2">Biggs and Mark 1997</xref>) achieved favorable convergence performance without complex parameter calibration. Nevertheless, no existing work has addressed all three aspects simultaneously, so the potential of MWPR has not been fully explored. A new MWPR that enables simple, clear, stable, and fast imaging on the LFOCDHM is therefore desirable.</p>
<p>In this work, we propose an MWPR based on an energy constraint, a global update strategy, and vector extrapolation acceleration, named MWPREGV, for fast and robust elimination of the twin image on the LFOCDHM. MWPREGV achieves more efficient elimination and more stable reconstruction through the energy constraint and the global update strategy, while the vector extrapolation method improves the convergence speed. At the same time, MWPREGV is simple and non-parametric. Numerical simulations and experiments verify that the LFOCDHM with MWPREGV provides simple, clear, stable, and fast imaging under three-wavelength illumination.</p>
</sec>
<sec id="s2">
<title>2 Principles</title>
<sec id="s2-1">
<title>2.1 Multi-Wavelength Lensfree On-Chip Digital Holographic Microscope</title>
<p>The LFOCDHM is a cost-effective and compact system without any lenses or other optical elements between the object and the sensor plane; the sample is placed as close as possible to the imaging sensor to obtain a wide-field hologram. The multi-wavelength LFOCDHM illuminates the sample with light sources of different wavelengths to record different holograms, as shown in <xref ref-type="fig" rid="F1">Figure 1</xref>.</p>
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>Schematic diagram of multi-wavelength LFOCDHM.</p>
</caption>
<graphic xlink:href="fphot-03-865666-g001.tif"/>
</fig>
<p>The hologram recorded by LFOCDHM under the assumption of a thin sample is given by<disp-formula id="e1">
<mml:math id="m1">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>H</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>&#x7c;</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mo>&#xa0;</mml:mo>
<mml:mi>n</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1,2</mml:mn>
<mml:mo>,</mml:mo>
<mml:mo>&#x2026;</mml:mo>
<mml:mo>,</mml:mo>
<mml:mi>N</mml:mi>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(1)</label>
</disp-formula>where <inline-formula id="inf1">
<mml:math id="m2">
<mml:mi>N</mml:mi>
</mml:math>
</inline-formula> denotes the number of measurements, <inline-formula id="inf2">
<mml:math id="m3">
<mml:mrow>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> denotes the intensity image recorded by the sensor, <inline-formula id="inf3">
<mml:math id="m4">
<mml:mrow>
<mml:msub>
<mml:mi>H</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> denotes the diffraction propagation operator, <inline-formula id="inf4">
<mml:math id="m5">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3b5;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> denotes the noise in the imaging, and <inline-formula id="inf5">
<mml:math id="m6">
<mml:mrow>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> denotes the complex amplitude distribution function of the sample under <inline-formula id="inf6">
<mml:math id="m7">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> illumination, which is given by<disp-formula id="e2">
<mml:math id="m8">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>A</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:msup>
<mml:mtext>e</mml:mtext>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(2)</label>
</disp-formula>where <inline-formula id="inf7">
<mml:math id="m9">
<mml:mrow>
<mml:msub>
<mml:mi>A</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> denotes the amplitude and <inline-formula id="inf8">
<mml:math id="m10">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> represents the phase. The absorption of the sample is assumed to be independent of the illumination wavelength, which is reasonable for most weakly scattering samples. Under illumination at different wavelengths, the amplitude is the same and the phase scales inversely with the wavelength, which can be expressed as<disp-formula id="e3">
<mml:math id="m11">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>A</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>A</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mo>&#x22ef;</mml:mo>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>A</mml:mi>
<mml:mi>N</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(3)</label>
</disp-formula>
<disp-formula id="e4">
<mml:math id="m12">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:msub>
<mml:mi>&#x3c6;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(4)</label>
</disp-formula>
</p>
<p>This assumption implies that <inline-formula id="inf9">
<mml:math id="m13">
<mml:mrow>
<mml:mo>&#xa0;</mml:mo>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf10">
<mml:math id="m14">
<mml:mrow>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf11">
<mml:math id="m15">
<mml:mo>&#x22ef;</mml:mo>
</mml:math>
</inline-formula>, <inline-formula id="inf12">
<mml:math id="m16">
<mml:mrow>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mi>N</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> can be uniformly represented by <inline-formula id="inf13">
<mml:math id="m17">
<mml:mi>u</mml:mi>
</mml:math>
</inline-formula>. We take <inline-formula id="inf14">
<mml:math id="m18">
<mml:mrow>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> as <inline-formula id="inf15">
<mml:math id="m19">
<mml:mi>u</mml:mi>
</mml:math>
</inline-formula>, then <inline-formula id="inf16">
<mml:math id="m20">
<mml:mrow>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> can be expressed as:<disp-formula id="e5">
<mml:math id="m21">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>u</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>A</mml:mi>
<mml:msup>
<mml:mtext>e</mml:mtext>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>&#x3c6;</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(5)</label>
</disp-formula>
<disp-formula id="e6">
<mml:math id="m22">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>A</mml:mi>
<mml:msup>
<mml:mtext>e</mml:mtext>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mi>&#x3c6;</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(6)</label>
</disp-formula>
</p>
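As an illustration, the forward model of Eqs 1–6 can be sketched in Python. This is a minimal, noise-free simulation under our own naming; the angular spectrum propagator and all parameter names are assumptions for illustration, not part of the paper's implementation:

```python
import numpy as np

def angular_spectrum(field, wavelength, distance, pixel_size):
    """Propagate a complex field by `distance` with the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    # Clip negative arguments to zero so evanescent components are suppressed.
    arg = np.maximum(0.0, 1.0 / wavelength**2 - FX**2 - FY**2)
    H = np.exp(1j * 2 * np.pi * distance * np.sqrt(arg))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def simulate_holograms(amplitude, phase, wavelengths, distance, pixel_size):
    """Record one in-line hologram per wavelength (Eqs 1-6, noise-free)."""
    holograms = []
    for lam in wavelengths:
        # Shared amplitude (Eq 3); phase scales inversely with wavelength (Eq 4),
        # with wavelengths[0] taken as the reference lambda_1 (Eqs 5-6).
        u_n = amplitude * np.exp(1j * wavelengths[0] / lam * phase)
        holograms.append(np.abs(angular_spectrum(u_n, lam, distance, pixel_size))**2)
    return holograms
```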
<p>Due to the inherent ill-posedness of the phase retrieval problem, finding the desired <inline-formula id="inf17">
<mml:math id="m23">
<mml:mi>u</mml:mi>
</mml:math>
</inline-formula> is an effective way to eliminate the twin image. To avoid tedious parameter calibration, we choose the alternating projection method as the iterative model for MWPR, which finds the desired <inline-formula id="inf18">
<mml:math id="m24">
<mml:mi>u</mml:mi>
</mml:math>
</inline-formula> by iterating back and forth between the object and image planes and adding intensity constraints on the image plane. Compared to the reconstruction from a single-intensity image <inline-formula id="inf19">
<mml:math id="m25">
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, which seeks the desired <inline-formula id="inf20">
<mml:math id="m26">
<mml:mi>u</mml:mi>
</mml:math>
</inline-formula> in a single known constraint set <inline-formula id="inf21">
<mml:math id="m27">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> shown in <xref ref-type="fig" rid="F2">Figure 2A</xref>, the reconstruction from several different intensity images <inline-formula id="inf22">
<mml:math id="m28">
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x3e;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> more easily finds the desired <inline-formula id="inf23">
<mml:math id="m29">
<mml:mi>u</mml:mi>
</mml:math>
</inline-formula> in the intersection of some known constraint sets <inline-formula id="inf24">
<mml:math id="m30">
<mml:mrow>
<mml:mo>&#xa0;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf25">
<mml:math id="m31">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf26">
<mml:math id="m32">
<mml:mo>&#x22ef;</mml:mo>
</mml:math>
</inline-formula>, <inline-formula id="inf27">
<mml:math id="m33">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mi>N</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> shown in <xref ref-type="fig" rid="F2">Figure 2B</xref>, which is formulated as a feasibility problem:<disp-formula id="e7">
<mml:math id="m34">
<mml:mrow>
<mml:mi mathvariant="normal">F</mml:mi>
<mml:mi mathvariant="normal">i</mml:mi>
<mml:mi mathvariant="normal">n</mml:mi>
<mml:mi mathvariant="normal">d</mml:mi>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mi>u</mml:mi>
<mml:mo>&#x2208;</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mo>&#x2229;</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>&#x2229;</mml:mo>
<mml:mo>&#x22c5;</mml:mo>
<mml:mo>&#x22c5;</mml:mo>
<mml:mo>&#x22c5;</mml:mo>
<mml:mo>&#x2229;</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mi>N</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
<label>(7)</label>
</disp-formula>
</p>
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>Two-dimensional illustration of the IPR reconstruction from <bold>(A)</bold> a single-intensity image (<inline-formula id="inf28">
<mml:math id="m35">
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>) and <bold>(B)</bold> some different intensity images <inline-formula id="inf29">
<mml:math id="m36">
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>N</mml:mi>
<mml:mo>&#x3e;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
</caption>
<graphic xlink:href="fphot-03-865666-g002.tif"/>
</fig>
<p>The size of the intersection set is related to the diffraction diversity determined by the difference and number of holograms. When there is sufficient diffraction diversity, the size of the intersection will be small and the MWPR will find the desired <inline-formula id="inf30">
<mml:math id="m37">
<mml:mi>u</mml:mi>
</mml:math>
</inline-formula> quickly and accurately. However, the small sample-to-sensor distance and the small number of measurements make it difficult to produce enough holograms with significant differences, resulting in poor elimination on the LFOCDHM. Therefore, additional constraints must be imposed on the object plane to realize clear imaging.</p>
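The feasibility formulation of Eq 7 can be made concrete with a plain alternating-projection loop. This is a minimal sketch under our own naming; `propagate` stands for any numerical diffraction propagator (e.g. the angular spectrum method) and is not the paper's exact routine:

```python
import numpy as np

def mwpr_alternating_projection(holograms, wavelengths, distance, pixel_size,
                                propagate, n_iter=50):
    """Cycle through the wavelength constraint sets C_1 ... C_N (Eq 7),
    replacing the modulus on the sensor plane at each step."""
    u = np.ones_like(holograms[0], dtype=complex)  # flat initial guess for u
    for _ in range(n_iter):
        for I_n, lam in zip(holograms, wavelengths):
            # Map the reference-wavelength estimate to wavelength lam (Eq 6).
            u_n = np.abs(u) * np.exp(1j * wavelengths[0] / lam * np.angle(u))
            field = propagate(u_n, lam, distance, pixel_size)
            # Projection onto C_n: keep the phase, enforce the measured modulus.
            field = np.sqrt(I_n) * np.exp(1j * np.angle(field))
            u_n = propagate(field, lam, -distance, pixel_size)
            # Map back to the reference wavelength.
            u = np.abs(u_n) * np.exp(1j * lam / wavelengths[0] * np.angle(u_n))
    return u
```

Each inner step projects the current estimate onto one constraint set; cycling through all wavelengths drives the iterate toward the intersection illustrated in Figure 2B.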
</sec>
<sec id="s2-2">
<title>2.2 Energy Constraint</title>
<p>The intensity of the hologram is mainly influenced by the intensity of the reference light. Dividing the measured hologram <inline-formula id="inf31">
<mml:math id="m38">
<mml:mrow>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> by the background image <inline-formula id="inf32">
<mml:math id="m39">
<mml:mrow>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mi>b</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>k</mml:mi>
<mml:mi>g</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> measured after removing the sample results in the normalized hologram <inline-formula id="inf33">
<mml:math id="m40">
<mml:mrow>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, which compensates for the non-uniform illumination of the light source:<disp-formula id="e8">
<mml:math id="m41">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>m</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>e</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>/</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mrow>
<mml:mi>b</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>c</mml:mi>
<mml:mi>k</mml:mi>
<mml:mi>g</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>d</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(8)</label>
</disp-formula>
</p>
<p>The basic physical notion of energy conservation requires that absorption does not increase the amplitude after a scattering process. Since the normalized hologram eliminates the effect of the reference light intensity, which is equivalent to irradiating the sample with a unit-amplitude wave, the amplitude <inline-formula id="inf34">
<mml:math id="m42">
<mml:mi>A</mml:mi>
</mml:math>
</inline-formula> on the object plane should not exceed 1. Therefore, during the iterative process, the amplitude in regions where it exceeds 1 is treated as interference from the twin image and is replaced by 1 while the phase is retained; all other regions are left unchanged. This process can be expressed as:<disp-formula id="e9">
<mml:math id="m43">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mo>&#x2032;</mml:mo>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>,</mml:mo>
<mml:mo>&#xa0;</mml:mo>
<mml:mo>&#xa0;</mml:mo>
<mml:mi>A</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3c;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msup>
<mml:mi>e</mml:mi>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mi>&#x3c6;</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>,</mml:mo>
<mml:mo>&#xa0;</mml:mo>
<mml:mo>&#xa0;</mml:mo>
<mml:mi>A</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2265;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(9)</label>
</disp-formula>where <inline-formula id="inf35">
<mml:math id="m44">
<mml:mrow>
<mml:mi>u</mml:mi>
<mml:mo>&#x2032;</mml:mo>
</mml:mrow>
</mml:math>
</inline-formula> denotes the updated complex amplitude distribution function and <inline-formula id="inf36">
<mml:math id="m45">
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> is the Cartesian coordinate system on the object plane.</p>
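Eqs 8 and 9 translate directly into a few lines of Python (a sketch; the function names are our own):

```python
import numpy as np

def normalize_hologram(I_measure, I_background):
    """Eq 8: divide out the background hologram to remove the effect of
    non-uniform reference-light intensity."""
    return I_measure / I_background

def energy_constraint(u):
    """Eq 9: where the object-plane amplitude exceeds 1, treat the excess as
    twin-image interference: reset the amplitude to 1, keep the phase."""
    A = np.abs(u)
    return np.where(A < 1, u, np.exp(1j * np.angle(u)))
```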
</sec>
<sec id="s2-3">
<title>2.3 Global Update Strategy</title>
<p>
<xref ref-type="fig" rid="F3">Figure 3</xref> shows the computational flowchart of two common update strategies for MWPR in the <inline-formula id="inf37">
<mml:math id="m46">
<mml:mrow>
<mml:msup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mtext>th</mml:mtext>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> iteration, where <inline-formula id="inf38">
<mml:math id="m47">
<mml:mrow>
<mml:msup>
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf39">
<mml:math id="m48">
<mml:mrow>
<mml:msubsup>
<mml:mi>u</mml:mi>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mtext>&#x2009;</mml:mtext>
<mml:msubsup>
<mml:mi>u</mml:mi>
<mml:mn>2</mml:mn>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x22ef;</mml:mo>
<mml:msubsup>
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mi>N</mml:mi>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> denotes the updated complex amplitude distribution, <inline-formula id="inf40">
<mml:math id="m49">
<mml:mrow>
<mml:msup>
<mml:mi>A</mml:mi>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> <inline-formula id="inf41">
<mml:math id="m50">
<mml:mrow>
<mml:msubsup>
<mml:mi>A</mml:mi>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mtext>&#x2009;</mml:mtext>
<mml:msubsup>
<mml:mi>A</mml:mi>
<mml:mn>2</mml:mn>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x22ef;</mml:mo>
<mml:msubsup>
<mml:mi>A</mml:mi>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> denotes the updated amplitude distribution, and <inline-formula id="inf42">
<mml:math id="m51">
<mml:mrow>
<mml:msup>
<mml:mi>&#x3c6;</mml:mi>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> <inline-formula id="inf43">
<mml:math id="m52">
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c6;</mml:mi>
<mml:mn>1</mml:mn>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mtext>&#x2009;</mml:mtext>
<mml:msubsup>
<mml:mi>&#x3c6;</mml:mi>
<mml:mn>2</mml:mn>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:msubsup>
<mml:mo>&#x22ef;</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c6;</mml:mi>
<mml:mi>N</mml:mi>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> denotes the updated phase distribution. The incremental update strategy shown in <xref ref-type="fig" rid="F3">Figure 3A</xref> updates the <inline-formula id="inf44">
<mml:math id="m53">
<mml:mrow>
<mml:msubsup>
<mml:mi>u</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>k</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> at the <inline-formula id="inf45">
<mml:math id="m54">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> wavelength based on the <inline-formula id="inf46">
<mml:math id="m55">
<mml:mrow>
<mml:msubsup>
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> that has updated at the <inline-formula id="inf47">
<mml:math id="m56">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mrow>
<mml:mi>n</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> wavelength. The global update strategy shown in <xref ref-type="fig" rid="F3">Figure 3B</xref> updates the <inline-formula id="inf48">
<mml:math id="m57">
<mml:mrow>
<mml:msubsup>
<mml:mi>u</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>k</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> at the <inline-formula id="inf49">
<mml:math id="m58">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mi>n</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> wavelength based on the <inline-formula id="inf50">
<mml:math id="m59">
<mml:mrow>
<mml:msup>
<mml:mi>u</mml:mi>
<mml:mi>k</mml:mi>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> and takes the average of the updated <inline-formula id="inf51">
<mml:math id="m60">
<mml:mrow>
<mml:msubsup>
<mml:mi>u</mml:mi>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> as a result of the <inline-formula id="inf52">
<mml:math id="m61">
<mml:mrow>
<mml:msup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mtext>th</mml:mtext>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> iteration.</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>Flowchart of the <bold>(A)</bold> incremental update strategy and <bold>(B)</bold> global update strategy.</p>
</caption>
<graphic xlink:href="fphot-03-865666-g003.tif"/>
</fig>
<p>The incremental update strategy converges quickly. However, it accumulates errors, making the iterative computation sensitive to noise when only a small number of measurements is available. As a result, the incremental update strategy leads to unstable convergence and imaging artifacts under noise interference. Inexpensive imaging sensors tend to introduce disturbances such as scattering noise, dark noise, and readout noise during image acquisition, which is especially challenging for low-cost LFOCDHM. Compared with the incremental update strategy, the averaging operation in the global update strategy effectively suppresses the effect of noise and improves convergence stability. Therefore, the global update strategy is more suitable for LFOCDHM, although it converges slowly.</p>
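The difference between the two strategies can be sketched in a few lines. This is a minimal illustration, not the authors' code: `prop` is an assumed generic propagation operator standing in for the angular spectrum method, and all names are our own.

```python
import numpy as np

def update_once(u, hologram, wavelength, z, prop):
    """One amplitude-constraint update at a single wavelength (sketch).

    prop(field, wavelength, z) is an assumed angular-spectrum propagator.
    """
    field = prop(u, wavelength, z)                            # object -> sensor plane
    field = np.sqrt(hologram) * np.exp(1j * np.angle(field))  # impose measured amplitude
    return prop(field, wavelength, -z)                        # sensor -> object plane

def incremental_iteration(u, holograms, wavelengths, z, prop):
    # Each wavelength refines the result of the previous one, so errors accumulate.
    for I_n, lam_n in zip(holograms, wavelengths):
        u = update_once(u, I_n, lam_n, z, prop)
    return u

def global_iteration(u, holograms, wavelengths, z, prop):
    # All wavelengths start from the same estimate; averaging suppresses noise.
    updates = [update_once(u, I_n, lam_n, z, prop)
               for I_n, lam_n in zip(holograms, wavelengths)]
    return np.mean(updates, axis=0)
```

The averaging step in `global_iteration` is exactly what makes the global strategy robust to per-wavelength noise at the cost of slower convergence.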
</sec>
<sec id="s2-4">
<title>2.4 Vector Extrapolation Acceleration Method</title>
<p>Acceleration methods can compensate for the slow convergence of the global update strategy. Compared with other acceleration methods, the vector extrapolation acceleration method is nonparametric: it requires only a small amount of information from the iterative algorithm to improve both convergence speed and convergence accuracy. This method derives a correction step and step-length adjustment information from previous iterations. Since IPR aims to recover the missing phase, we chose the vector extrapolation acceleration method to update the phase. This process in the <inline-formula id="inf53">
<mml:math id="m62">
<mml:mrow>
<mml:msup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mtext>th</mml:mtext>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> iteration can be expressed as:<disp-formula id="e10">
<mml:math id="m63">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3c6;</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mi>&#x3c6;</mml:mi>
<mml:mi>k</mml:mi>
</mml:msup>
<mml:mo>&#x2b;</mml:mo>
<mml:msup>
<mml:mi>&#x3b1;</mml:mi>
<mml:mi>k</mml:mi>
</mml:msup>
<mml:msup>
<mml:mi>h</mml:mi>
<mml:mi>k</mml:mi>
</mml:msup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(10)</label>
</disp-formula>where<disp-formula id="e11">
<mml:math id="m64">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msup>
<mml:mi>h</mml:mi>
<mml:mi>k</mml:mi>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mi>&#x3c6;</mml:mi>
<mml:mi>k</mml:mi>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:mi>&#x3c6;</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(11)</label>
</disp-formula>
<disp-formula id="e12">
<mml:math id="m65">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msup>
<mml:mi>&#x3b1;</mml:mi>
<mml:mi>k</mml:mi>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>g</mml:mi>
<mml:mi>k</mml:mi>
</mml:msup>
<mml:msup>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
<mml:mrow>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
<mml:msup>
<mml:mi>g</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(12)</label>
</disp-formula>
<disp-formula id="e13">
<mml:math id="m66">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:msup>
<mml:mi>g</mml:mi>
<mml:mi>k</mml:mi>
</mml:msup>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>&#x3c6;</mml:mi>
<mml:msup>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:mi>&#x3c6;</mml:mi>
<mml:mi>k</mml:mi>
</mml:msup>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(13)</label>
</disp-formula>
</p>
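Equations (10)–(13) can be collected into a single update step. The following is a minimal sketch; the function name and the small `eps` guard against a zero denominator are our own additions, not part of the published method.

```python
import numpy as np

def extrapolate_phase(phi_k, phi_km1, g_k, g_km1, eps=1e-12):
    """Vector-extrapolated phase update, Eqs. (10)-(13) (sketch).

    phi_k, phi_km1 : phase estimates of iterations k and k-1
    g_k, g_km1     : correction terms g = phi' - phi of the two iterations
    """
    h_k = phi_k - phi_km1                                          # Eq. (11): previous step
    alpha = np.sum(g_k * g_km1) / (np.sum(g_km1 * g_km1) + eps)    # Eq. (12): step length
    return phi_k + alpha * h_k                                     # Eq. (10): extrapolation
```

Because `alpha` is computed from the iterates themselves, no step-size parameter needs to be tuned, which is the nonparametric property emphasized above.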
</sec>
</sec>
<sec id="s3">
<title>3 Performance Analysis of Multi-Wavelength Phase Retrieval</title>
<sec id="s3-1">
<title>3.1 Numerical Results</title>
<p>The algorithmic details of MWPREGV are shown in <xref ref-type="fig" rid="F4">Figure 4</xref> as follows: (1) initializing the first estimate <inline-formula id="inf54">
<mml:math id="m67">
<mml:mrow>
<mml:msup>
<mml:mi>u</mml:mi>
<mml:mn>0</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> with the average of all holograms back-propagated to the object plane with the angular spectrum method; (2) forward-propagating the <inline-formula id="inf55">
<mml:math id="m68">
<mml:mrow>
<mml:msup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mtext>th</mml:mtext>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> estimation of <inline-formula id="inf56">
<mml:math id="m69">
<mml:mrow>
<mml:msubsup>
<mml:mi>u</mml:mi>
<mml:mi>n</mml:mi>
<mml:mi>k</mml:mi>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> to the recording plane with the angular spectrum method and replacing the computed amplitude distribution on the recording plane with the modulus of the recorded holograms while retaining the computed phase; (3) back-propagating this synthesized complex amplitude distribution to the object plane with the angular spectrum method and updating the computed amplitude distribution on the object plane based on the energy constraint to obtain the updated complex amplitude distribution <inline-formula id="inf57">
<mml:math id="m70">
<mml:mrow>
<mml:msubsup>
<mml:mi>u</mml:mi>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula>; (4) averaging the <inline-formula id="inf58">
<mml:math id="m71">
<mml:mrow>
<mml:msubsup>
<mml:mi>u</mml:mi>
<mml:mi>n</mml:mi>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msubsup>
</mml:mrow>
</mml:math>
</inline-formula> to obtain a globally updated complex amplitude distribution <inline-formula id="inf59">
<mml:math id="m72">
<mml:mrow>
<mml:msup>
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>; (5) updating the phase distribution <inline-formula id="inf60">
<mml:math id="m73">
<mml:mrow>
<mml:msup>
<mml:mi>&#x3c6;</mml:mi>
<mml:mi>k</mml:mi>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> based on the globally updated phase distribution <inline-formula id="inf61">
<mml:math id="m74">
<mml:mrow>
<mml:mi>&#x3c6;</mml:mi>
<mml:msup>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> by the vector extrapolation acceleration method while retaining the globally updated amplitude distribution <inline-formula id="inf62">
<mml:math id="m75">
<mml:mrow>
<mml:mi>A</mml:mi>
<mml:msup>
<mml:mo>&#x2032;</mml:mo>
<mml:mi>k</mml:mi>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> and thus the result <inline-formula id="inf63">
<mml:math id="m76">
<mml:mrow>
<mml:msup>
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mi>k</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> of the <inline-formula id="inf64">
<mml:math id="m77">
<mml:mrow>
<mml:msup>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mtext>th</mml:mtext>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> iteration is obtained; and (6) repeating steps (2) to (5) until the reconstruction accuracy <inline-formula id="inf65">
<mml:math id="m78">
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> meets the given requirement <inline-formula id="inf66">
<mml:math id="m79">
<mml:mi>&#x3b5;</mml:mi>
</mml:math>
</inline-formula> or the number of iterations <inline-formula id="inf67">
<mml:math id="m80">
<mml:mi>k</mml:mi>
</mml:math>
</inline-formula> reaches the maximum number <inline-formula id="inf68">
<mml:math id="m81">
<mml:mrow>
<mml:msub>
<mml:mi>k</mml:mi>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mi>a</mml:mi>
<mml:mi>x</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
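Steps (2) and (3) rely on angular spectrum propagation between the object and recording planes. Below is a minimal single-wavelength sketch of such a propagator, assuming a square field with pixel pitch `dx`; evanescent components are simply discarded. It is an illustration under these assumptions, not the authors' implementation.

```python
import numpy as np

def angular_spectrum(field, wavelength, z, dx):
    """Propagate a complex field over distance z with the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                     # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # axial wavenumber
    H = np.exp(1j * kz * z) * (arg > 0)              # transfer function, evanescent cut
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

With this propagator, forward propagation with `+z` and back-propagation with `-z` are exact inverses for the propagating components, which is what steps (2) and (3) exploit.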
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption>
<p>Flowchart of the MWPREGV for LFOCDHM.</p>
</caption>
<graphic xlink:href="fphot-03-865666-g004.tif"/>
</fig>
<p>To produce sufficient diffraction variations, a virtual object is illuminated by <inline-formula id="inf69">
<mml:math id="m82">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mrow>
<mml:mn>1,2,3</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>623</mml:mn>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mi mathvariant="normal">nm,</mml:mi>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mn>523</mml:mn>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mi mathvariant="normal">nm,</mml:mi>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mn>460</mml:mn>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mi mathvariant="normal">nm</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, and its amplitude and phase under <inline-formula id="inf70">
<mml:math id="m83">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> illumination are shown in <xref ref-type="fig" rid="F5">Figure 5</xref>. The other parameters are listed as follows: (1) the imaging size is 1.1 &#xd7; 1.1&#xa0;mm<sup>2</sup> (512 &#xd7; 512 pixels); (2) the distance from the light source to the sample is 100&#xa0;mm, and the distance from the sample to the sensor is 1&#xa0;mm. To quantify the reconstruction quality, we use the relative error (RE) as the reconstruction accuracy <inline-formula id="inf71">
<mml:math id="m84">
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, which is defined as<disp-formula id="e14">
<mml:math id="m85">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi mathvariant="normal">R</mml:mi>
<mml:mi mathvariant="normal">E</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>&#x2016;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>&#x2016;</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>&#x2016;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
<mml:mo>&#x2016;</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(14)</label>
</disp-formula>where <inline-formula id="inf72">
<mml:math id="m86">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>&#x2016;</mml:mo>
<mml:mo>&#xb7;</mml:mo>
<mml:mo>&#x2016;</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> denotes the Euclidean norm, <inline-formula id="inf73">
<mml:math id="m87">
<mml:mrow>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> denotes the estimated value of <inline-formula id="inf74">
<mml:math id="m88">
<mml:mi>u</mml:mi>
</mml:math>
</inline-formula>, and <inline-formula id="inf75">
<mml:math id="m89">
<mml:mrow>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> denotes the ground truth value of <inline-formula id="inf76">
<mml:math id="m90">
<mml:mi>u</mml:mi>
</mml:math>
</inline-formula>.</p>
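Equation (14) is a direct ratio of squared Euclidean norms and can be transcribed in one line. A sketch, using `numpy.linalg.norm` for the Euclidean norm:

```python
import numpy as np

def relative_error(u_est, u_true):
    """Relative error of Eq. (14): squared distance over squared norm of the truth."""
    return np.linalg.norm(u_est - u_true) ** 2 / np.linalg.norm(u_true) ** 2
```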
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption>
<p>Simulated virtual objects with complex amplitude.</p>
</caption>
<graphic xlink:href="fphot-03-865666-g005.tif"/>
</fig>
<p>We first compared the performance of MWPR and MWPR with an energy constraint (MWPRE). When the distance from the virtual object to the sensor is 30&#xa0;mm, MWPR eliminates most of the twin image after 50 iterations, as shown in <xref ref-type="fig" rid="F6">Figure 6A</xref>. When the distance from the virtual object to the sensor is 1&#xa0;mm, MWPR only partially eliminates the twin image after 50 iterations due to insufficient diffraction variation, resulting in a large deviation between <inline-formula id="inf77">
<mml:math id="m91">
<mml:mrow>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mi>e</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>t</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf78">
<mml:math id="m92">
<mml:mrow>
<mml:msub>
<mml:mi>u</mml:mi>
<mml:mrow>
<mml:mi>t</mml:mi>
<mml:mi>r</mml:mi>
<mml:mi>u</mml:mi>
<mml:mi>e</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, as shown in <xref ref-type="fig" rid="F6">Figure 6B</xref>. In contrast, MWPRE almost completely eliminates the twin image after the same number of iterations for the same diffraction variation, as shown in <xref ref-type="fig" rid="F6">Figure 6C</xref>, and it achieves better convergence accuracy and a faster convergence speed, as shown in <xref ref-type="fig" rid="F6">Figure 6D</xref>.</p>
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption>
<p>Reconstruction by <bold>(A)</bold> MWPR when the distance from the virtual object to sensor is 30&#xa0;mm. Reconstruction by <bold>(B)</bold> MWPR and <bold>(C)</bold> MWPRE when the distance from the virtual object to sensor is 1&#xa0;mm. <bold>(D)</bold> Convergence performance of MWPR and MWPRE.</p>
</caption>
<graphic xlink:href="fphot-03-865666-g006.tif"/>
</fig>
<p>Next, we compared the convergence behavior of the incremental and global update strategies on MWPRE (denoted MWPREI and MWPREG). MWPREI converges much faster than MWPREG and achieves better convergence accuracy when ideal, noise-free holograms and background images are used, as shown in <xref ref-type="fig" rid="F7">Figure 7A</xref>. Nevertheless, when white Gaussian noise is added to produce holograms and background images with a signal-to-noise ratio of 30&#xa0;dB, MWPREG achieves better convergence accuracy, as expected, as shown in <xref ref-type="fig" rid="F7">Figure 7B</xref>. Therefore, the global update strategy is more suitable for LFOCDHM, which is subject to substantial noise interference.</p>
<fig id="F7" position="float">
<label>FIGURE 7</label>
<caption>
<p>Convergence behavior of MWPREI and MWPREG with <bold>(A)</bold> noiseless and <bold>(B)</bold> noisy data.</p>
</caption>
<graphic xlink:href="fphot-03-865666-g007.tif"/>
</fig>
<p>Finally, we tested the impact of the acceleration algorithm on MWPREG, as shown in <xref ref-type="fig" rid="F8">Figure 8</xref>. Compared with MWPREG, MWPREGV achieves more efficient twin-image elimination and faster convergence, compensating for the slow convergence of the global update strategy without tedious parameter calibration.</p>
<fig id="F8" position="float">
<label>FIGURE 8</label>
<caption>
<p>Reconstruction by <bold>(A)</bold> MWPREG and <bold>(B)</bold> MWPREGV. <bold>(C)</bold> Convergence performance of MWPREG and MWPREGV.</p>
</caption>
<graphic xlink:href="fphot-03-865666-g008.tif"/>
</fig>
</sec>
<sec id="s3-2">
<title>3.2 Validation on Experimental Data</title>
<p>We tested the proposed method on ant mounts, housefly leg mounts, and bee wing mounts. The experimental setup is shown in <xref ref-type="fig" rid="F1">Figure 1</xref>. An RGB LED (LZ4-04DCA, <inline-formula id="inf79">
<mml:math id="m93">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bb;</mml:mi>
<mml:mrow>
<mml:mn>1,2,3</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>623</mml:mn>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mi mathvariant="normal">nm</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mn>523</mml:mn>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mi mathvariant="normal">nm,</mml:mi>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mn>460</mml:mn>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mi mathvariant="normal">nm</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>) is utilized as the illumination source to avoid scattering noise, and a CMOS sensor with a pixel size of 2.2&#xa0;&#x3bc;m (2592 &#xd7; 1944, MT9P031) is utilized to record the hologram. The distance from the light source to the sample is 100&#xa0;mm, and the distance from the sample to the sensor is 1.2&#xa0;mm. Since the ground truth value of <inline-formula id="inf80">
<mml:math id="m94">
<mml:mi>u</mml:mi>
</mml:math>
</inline-formula> is unknown in the experiment, the <inline-formula id="inf81">
<mml:math id="m95">
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>s</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula> is used to evaluate the speed of convergence rather than the quality of the reconstruction, which is given by:<disp-formula id="equ1">
<mml:math id="m96">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>L</mml:mi>
<mml:mi>o</mml:mi>
<mml:mi>s</mml:mi>
<mml:mi>s</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mrow>
<mml:mo>&#x2016;</mml:mo>
<mml:mrow>
<mml:mrow>
<mml:mo>&#x7c;</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>H</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>u</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>&#x7c;</mml:mo>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mrow>
<mml:msqrt>
<mml:mrow>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:msqrt>
</mml:mrow>
</mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mo>&#x2016;</mml:mo>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(15)</label>
</disp-formula>
</p>
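Equation (15) compares the propagated amplitude with the measured one. A minimal sketch, where `forward` stands in for the propagation H<sub>1</sub> to the sensor plane at the first wavelength (an assumed callable, not the authors' API):

```python
import numpy as np

def data_loss(u, I1, forward):
    """Data-fidelity loss of Eq. (15).

    u       : current object-plane estimate
    I1      : hologram intensity recorded at the first wavelength
    forward : assumed propagation operator H1 to the sensor plane
    """
    return np.linalg.norm(np.abs(forward(u)) - np.sqrt(I1)) ** 2
```

Unlike the RE of Eq. (14), this loss needs no ground truth, which is why it is used to track convergence speed in the experiments.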
<p>
<xref ref-type="fig" rid="F9">Figure 9B</xref> shows the full-FOV (&#x223c;27.78&#xa0;mm<sup>2</sup>) hologram of the LFOCDHM captured at the 523&#xa0;nm wavelength, which offers a larger field of view than the microscope images taken with a &#xd7;4 objective lens shown in <xref ref-type="fig" rid="F9">Figure 9A</xref>. <xref ref-type="fig" rid="F9">Figure 9C</xref> shows the reconstructed amplitude distribution obtained by MWPR after 15 iterations, indicating that only partial elimination of the twin image is achieved. In contrast, MWPRE eliminates most of the twin image after 15 iterations, as shown in <xref ref-type="fig" rid="F9">Figure 9D</xref>. However, the reconstruction by MWPRE is still unsatisfactory due to noise interference and falls far short of the microscope images taken with a &#xd7;4 objective lens. Under the same conditions, MWPREGV achieves a more efficient elimination effect after the same number of iterations, making the reconstruction comparable to the microscope images taken with a &#xd7;4 objective lens, as shown in <xref ref-type="fig" rid="F9">Figure 9E</xref>. Moreover, as shown in <xref ref-type="fig" rid="F9">Figure 9F</xref>, MWPREGV converges to a stable value at a faster rate than MWPREG.</p>
<fig id="F9" position="float">
<label>FIGURE 9</label>
<caption>
<p>
<bold>(A)</bold> Microscope images taken with a &#xd7;4 objective lens. <bold>(B)</bold> Full-FOV raw hologram captured at the 523&#xa0;nm wavelength. The reconstructed amplitude by <bold>(C)</bold> MWPR, <bold>(D)</bold> MWPRE, and <bold>(E)</bold> MWPREGV. <bold>(F)</bold> Convergence performance of MWPREG and MWPREGV.</p>
</caption>
<graphic xlink:href="fphot-03-865666-g009.tif"/>
</fig>
</sec>
</sec>
<sec id="s4">
<title>4 Conclusion</title>
<p>In this study, we propose an MWPR based on an energy constraint, a global update strategy, and vector extrapolation acceleration for LFOCDHM, termed MWPREGV. MWPREGV achieves efficient twin-image elimination by imposing an energy constraint on the object plane, while the combination of the global update strategy and vector extrapolation acceleration improves the stability and robustness of the reconstruction against noise yet retains a fast convergence speed. More importantly, MWPREGV avoids complex parameter calibration, which simplifies the operation of LFOCDHM. Simulation and experimental results show that LFOCDHM with MWPREGV achieves clear, stable, and fast imaging under three-wavelength illumination. Compared with a microscope with a &#xd7;4 objective lens, LFOCDHM with MWPREGV achieves nearly the same imaging quality with a larger field of view at a lower cost, and we believe it has more potential application directions in the future.</p>
</sec>
</body>
<back>
<sec id="s5">
<title>Data Availability Statement</title>
<p>The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.</p>
</sec>
<sec id="s6">
<title>Author Contributions</title>
<p>QW, JM, and PS performed the research and wrote the manuscript. All authors have read and approved the content of the manuscript.</p>
</sec>
<sec sec-type="funding" id="s7">
<title>Funding</title>
<p>This work was supported by Shenzhen general research fund (JCYJ20190813172405231 and JCYJ20200109143031287).</p>
</sec>
<sec sec-type="COI-statement" id="s8">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s9">
<title>Publisher&#x2019;s Note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bao</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Situ</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Pedrini</surname>
<given-names>G.</given-names>
</name>
<name>
<surname>Osten</surname>
<given-names>W.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>Lensless Phase Microscopy Using Phase Retrieval with Multiple Illumination Wavelengths</article-title>. <source>Appl. Opt.</source> <volume>51</volume> (<issue>22</issue>), <fpage>5486</fpage>. <pub-id pub-id-type="doi">10.1364/ao.51.005486</pub-id> </citation>
</ref>
<ref id="B2">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Biggs</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Mark</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>1997</year>). <article-title>Acceleration of Iterative Image Restoration Algorithms</article-title>. <source>Appl. Opt.</source> <volume>36</volume> (<issue>8</issue>), <fpage>1766</fpage>. <pub-id pub-id-type="doi">10.1364/ao.36.001766</pub-id> </citation>
</ref>
<ref id="B3">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bishara</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Sikora</surname>
<given-names>U.</given-names>
</name>
<name>
<surname>Mudanyali</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Su</surname>
<given-names>T.-W.</given-names>
</name>
<name>
<surname>Yaglidere</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Luckhart</surname>
<given-names>S.</given-names>
</name>
<etal/>
</person-group> (<year>2011</year>). <article-title>Holographic Pixel Super-resolution in Portable Lensless On-Chip Microscopy Using a Fiber-Optic Array</article-title>. <source>Lab. Chip</source> <volume>11</volume> (<issue>7</issue>), <fpage>1276</fpage>&#x2013;<lpage>1279</lpage>. <pub-id pub-id-type="doi">10.1039/c0lc00684j</pub-id> </citation>
</ref>
<ref id="B4">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Fienup</surname>
<given-names>J. R.</given-names>
</name>
</person-group> (<year>1982</year>). <article-title>Phase Retrieval Algorithms: A Comparison</article-title>. <source>Appl. Opt.</source> <volume>21</volume> (<issue>15</issue>), <fpage>2758</fpage>&#x2013;<lpage>2769</lpage>. </citation>
</ref>
<ref id="B5">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gao</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Cao</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2021</year>). <article-title>Generalized Optimization Framework for Pixel Super-resolution Imaging in Digital Holography</article-title>. <source>Opt. Express</source> <volume>29</volume> (<issue>18</issue>), <fpage>28805</fpage>&#x2013;<lpage>28823</lpage>. <pub-id pub-id-type="doi">10.1364/OE.434449</pub-id> </citation>
</ref>
<ref id="B6">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gerchberg</surname>
<given-names>R. W.</given-names>
</name>
</person-group> (<year>1972</year>). <article-title>A Practical Algorithm for the Determination of Phase from Image and Diffraction Plane Pictures</article-title>. <source>Optik</source> <volume>35</volume>, <fpage>237</fpage>&#x2013;<lpage>250</lpage>. </citation>
</ref>
<ref id="B7">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Greenbaum</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Feizi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Chung</surname>
<given-names>P. L.</given-names>
</name>
<name>
<surname>Luo</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Kandukuri</surname>
<given-names>S. R.</given-names>
</name>
<etal/>
</person-group> (<year>2014</year>). <article-title>Wide-field Computational Imaging of Pathology Slides Using Lens-free On-Chip Microscopy</article-title>. <source>Sci. Transl Med.</source> <volume>6</volume> (<issue>267</issue>), <fpage>267ra175</fpage>. <pub-id pub-id-type="doi">10.1126/scitranslmed.3009850</pub-id> </citation>
</ref>
<ref id="B8">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Greenbaum</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Ozcan</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2012</year>). <article-title>Maskless Imaging of Dense Samples Using Pixel Super-resolution Based Multi-Height Lensfree On-Chip Microscopy</article-title>. <source>Opt. Express</source> <volume>20</volume> (<issue>3</issue>), <fpage>3129</fpage>&#x2013;<lpage>3143</lpage>. <pub-id pub-id-type="doi">10.1364/oe.20.003129</pub-id> </citation>
</ref>
<ref id="B9">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Guo</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Tan</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>Z.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Enhancing Imaging Contrast via Weighted Feedback for Iterative Multi-Image Phase Retrieval</article-title>. <source>J. Biomed. Opt.</source> <volume>23</volume> (<issue>1</issue>), <fpage>1</fpage>&#x2013;<lpage>10</lpage>. <pub-id pub-id-type="doi">10.1117/1.JBO.23.1.016015</pub-id> </citation>
</ref>
<ref id="B10">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Latychevskaia</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Fink</surname>
<given-names>H.-W.</given-names>
</name>
</person-group> (<year>2007</year>). <article-title>Solution to the Twin Image Problem in Holography</article-title>. <source>Phys. Rev. Lett.</source> <volume>98</volume> (<issue>23</issue>), <fpage>233901</fpage>. <pub-id pub-id-type="doi">10.1103/PhysRevLett.98.233901</pub-id> </citation>
</ref>
<ref id="B11">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Latychevskaia</surname>
<given-names>T.</given-names>
</name>
</person-group> (<year>2019</year>). <article-title>Iterative Phase Retrieval for Digital Holography: Tutorial</article-title>. <source>J. Opt. Soc. Am. A.</source> <volume>36</volume> (<issue>12</issue>), <fpage>D31</fpage>&#x2013;<lpage>D40</lpage>. <pub-id pub-id-type="doi">10.1364/JOSAA.36.000D31</pub-id> </citation>
</ref>
<ref id="B12">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Li</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Xiao</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Pan</surname>
<given-names>F.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Phase Retrieval Method from Multiple-Wavelength Holograms for In-Line Holography</article-title>. <source>Optik</source> <volume>127</volume> (<issue>1</issue>), <fpage>90</fpage>&#x2013;<lpage>95</lpage>. <pub-id pub-id-type="doi">10.1016/j.ijleo.2015.10.017</pub-id> </citation>
</ref>
<ref id="B13">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Liu</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Guo</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Tan</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Wu</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Pan</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>Iterative Phase-Amplitude Retrieval with Multiple Intensity Images at Output Plane of Gyrator Transforms</article-title>. <source>J. Opt.</source> <volume>17</volume> (<issue>2</issue>), <fpage>025701</fpage>. <pub-id pub-id-type="doi">10.1088/2040-8978/17/2/025701</pub-id> </citation>
</ref>
<ref id="B14">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Noom</surname>
<given-names>D. W. E.</given-names>
</name>
<name>
<surname>Boonzajer Flaes</surname>
<given-names>D. E.</given-names>
</name>
<name>
<surname>Labordus</surname>
<given-names>E.</given-names>
</name>
<name>
<surname>Eikema</surname>
<given-names>K. S. E.</given-names>
</name>
<name>
<surname>Witte</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>High-speed Multi-Wavelength Fresnel Diffraction Imaging</article-title>. <source>Opt. Express</source> <volume>22</volume> (<issue>25</issue>), <fpage>30504</fpage>&#x2013;<lpage>30511</lpage>. <pub-id pub-id-type="doi">10.1364/OE.22.030504</pub-id> </citation>
</ref>
<ref id="B15">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ozcan</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>McLeod</surname>
<given-names>E.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Lensless Imaging and Sensing</article-title>. <source>Annu. Rev. Biomed. Eng.</source> <volume>18</volume>, <fpage>77</fpage>&#x2013;<lpage>102</lpage>. <pub-id pub-id-type="doi">10.1146/annurev-bioeng-092515-010849</pub-id> </citation>
</ref>
<ref id="B16">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sanz</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Picazo-Bueno</surname>
<given-names>J. A.</given-names>
</name>
<name>
<surname>Garc&#xed;a</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Mic&#xf3;</surname>
<given-names>V.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>Improved Quantitative Phase Imaging in Lensless Microscopy by Single-Shot Multi-Wavelength Illumination Using a Fast Convergence Algorithm</article-title>. <source>Opt. Express</source> <volume>23</volume> (<issue>16</issue>), <fpage>21352</fpage>&#x2013;<lpage>21365</lpage>. <pub-id pub-id-type="doi">10.1364/OE.23.021352</pub-id> </citation>
</ref>
<ref id="B17">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wu</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Lu</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Q.</given-names>
</name>
<etal/>
</person-group> (<year>2021</year>). <article-title>Wavelength-scanning Lensfree On-Chip Microscopy for Wide-Field Pixel-Super-Resolved Quantitative Phase Imaging</article-title>. <source>Opt. Lett.</source> <volume>46</volume> (<issue>9</issue>), <fpage>2023</fpage>&#x2013;<lpage>2026</lpage>. <pub-id pub-id-type="doi">10.1364/OL.421869</pub-id> </citation>
</ref>
<ref id="B18">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Luo</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Ozcan</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Demosaiced Pixel Super-resolution for Multiplexed Holographic Color Imaging</article-title>. <source>Sci. Rep.</source> <volume>6</volume>, <fpage>28601</fpage>. <pub-id pub-id-type="doi">10.1038/srep28601</pub-id> </citation>
</ref>
<ref id="B19">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Li</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zuo</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Adaptive Pixel-Super-Resolved Lensfree In-Line Digital Holography for Wide-Field On-Chip Microscopy</article-title>. <source>Sci. Rep.</source> <volume>7</volume> (<issue>1</issue>), <fpage>11777</fpage>. <pub-id pub-id-type="doi">10.1038/s41598-017-11715-x</pub-id> </citation>
</ref>
<ref id="B20">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Q.</given-names>
</name>
<name>
<surname>Zuo</surname>
<given-names>C.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Resolution Analysis in a Lens-free On-Chip Digital Holographic Microscope</article-title>. <source>IEEE Trans. Comput. Imaging</source> <volume>6</volume>, <fpage>697</fpage>&#x2013;<lpage>710</lpage>. <pub-id pub-id-type="doi">10.1109/TCI.2020.2964247</pub-id> </citation>
</ref>
<ref id="B21">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zuo</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Q.</given-names>
</name>
</person-group> (<year>2016</year>). <article-title>Adaptive Step-Size Strategy for Noise-Robust Fourier Ptychographic Microscopy</article-title>. <source>Opt. Express</source> <volume>24</volume> (<issue>18</issue>), <fpage>20724</fpage>&#x2013;<lpage>20744</lpage>. <pub-id pub-id-type="doi">10.1364/OE.24.020724</pub-id> </citation>
</ref>
<ref id="B22">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zuo</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Hu</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Q.</given-names>
</name>
</person-group> (<year>2015</year>). <article-title>Lensless Phase Microscopy and Diffraction Tomography with Multi-Angle and Multi-Wavelength Illuminations Using a LED Matrix</article-title>. <source>Opt. Express</source> <volume>23</volume> (<issue>11</issue>), <fpage>14314</fpage>&#x2013;<lpage>14328</lpage>. <pub-id pub-id-type="doi">10.1364/OE.23.014314</pub-id> </citation>
</ref>
</ref-list>
</back>
</article>