<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="research-article" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Artif. Intell.</journal-id>
<journal-title>Frontiers in Artificial Intelligence</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Artif. Intell.</abbrev-journal-title>
<issn pub-type="epub">2624-8212</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">642731</article-id>
<article-id pub-id-type="doi">10.3389/frai.2021.642731</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Artificial Intelligence</subject>
<subj-group>
<subject>Original Research</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Evaluation of MRI Denoising Methods Using Unsupervised Learning</article-title>
<alt-title alt-title-type="left-running-head">Moreno L&#xf3;pez et&#x20;al.</alt-title>
<alt-title alt-title-type="right-running-head">MRI Denoising Using Unsupervised Learning</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Moreno L&#xf3;pez</surname>
<given-names>Marc</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/664008/overview"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Frederick</surname>
<given-names>Joshua M.</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Ventura</surname>
<given-names>Jonathan</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/920344/overview"/>
</contrib>
</contrib-group>
<aff id="aff1">
<label>
<sup>1</sup>
</label>Department of Computer Science, University of Colorado Colorado Springs, <addr-line>Colorado Springs</addr-line>, <addr-line>CO</addr-line>, <country>United&#x20;States</country>
</aff>
<aff id="aff2">
<label>
<sup>2</sup>
</label>Department of Computer Science and Software Engineering, California Polytechnic State University, <addr-line>San Luis Obispo</addr-line>, <addr-line>CA</addr-line>, <country>United&#x20;States</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/824805/overview">Shuai Li</ext-link>, Swansea University, United&#x20;Kingdom</p>
</fn>
<fn fn-type="edited-by">
<p>
<bold>Reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1029783/overview">Shivanand Sharanappa Gornale</ext-link>, Rani Channamma University, India</p>
<p>
<ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/1292641/overview">Ameer Tamoor Khan</ext-link>, Hong Kong Polytechnic University, China</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Marc Moreno L&#xf3;pez, <email>mmorenol@uccs.edu</email>
</corresp>
<fn fn-type="other">
<p>This article was submitted to Medicine and Public Health, a section of the journal Frontiers in Artificial Intelligence</p>
</fn>
</author-notes>
<pub-date pub-type="epub">
<day>04</day>
<month>06</month>
<year>2021</year>
</pub-date>
<pub-date pub-type="collection">
<year>2021</year>
</pub-date>
<volume>4</volume>
<elocation-id>642731</elocation-id>
<history>
<date date-type="received">
<day>16</day>
<month>12</month>
<year>2020</year>
</date>
<date date-type="accepted">
<day>17</day>
<month>05</month>
<year>2021</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2021 Moreno L&#xf3;pez, Frederick and Ventura.</copyright-statement>
<copyright-year>2021</copyright-year>
<copyright-holder>Moreno L&#xf3;pez, Frederick and Ventura</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these&#x20;terms.</p>
</license>
</permissions>
<abstract>
<p>In this paper we evaluate two unsupervised approaches to denoising Magnetic Resonance Images (MRI) in the complex image space, using the raw information that k-space holds. The first method is based on Stein&#x2019;s Unbiased Risk Estimator, while the second is based on a blindspot network, which limits the network&#x2019;s receptive field. Both methods are tested on two datasets, one containing real knee MRI and the other synthetic brain MRI. These datasets contain complex image space information that is used for denoising. Both networks are compared against a state-of-the-art algorithm, Non-Local Means (NLM), using quantitative and qualitative measures. On most metrics and qualitative measures, both networks outperformed NLM, proving to be reliable denoising methods.</p>
</abstract>
<kwd-group>
<kwd>deep learning</kwd>
<kwd>denoising</kwd>
<kwd>k-Space</kwd>
<kwd>MRI</kwd>
<kwd>unsupervised</kwd>
</kwd-group>
<contract-num rid="cn001">1R15GM128166-01</contract-num>
<contract-sponsor id="cn001">National Institutes of Health<named-content content-type="fundref-id">10.13039/100000002</named-content>
</contract-sponsor>
</article-meta>
</front>
<body>
<sec id="s1">
<title>1 Introduction</title>
<p>Magnetic Resonance Imaging (MRI) is one of the most widely used imaging techniques, as it provides detailed information about organs and tissues in a completely non-invasive way. In MRI, the data needed to generate images is sampled directly from the spatial frequency domain; however, the quality of this data can be degraded by several thermal noise sources and artifacts. Noise in MRI is of major consequence, as it can mislead clinicians and result in inaccurate diagnoses. In addition to visually corrupting the recovered images, noise is also an obstacle to quantitative imaging on the MRI. The utility of MRI decreases if a region or specific tissue suffers from a low signal-to-noise ratio. Thus, there is a need for an efficient MRI reconstruction process, in which denoising methods are applied to noisy images to improve both qualitative and quantitative measures of&#x20;MRI.</p>
<p>Additionally, in the case of <italic>in vivo</italic> MRI, noise is inherent to the acquisition process. When imaging a living subject, there are multiple noise factors. All other factors aside, the MR machine has an innate thermal noise component when acquiring an image. Another source of thermal noise is inversely proportional to the amount of time the subject stays inside the MR machine, and while in the machine the subject&#x2019;s movements also contribute to the noise. Finally, the patient&#x2019;s body temperature, together with the thermal factor of the MR machine, is another key element, especially since long exposure inside the MR machine can lead to an increase in body temperature, <xref ref-type="bibr" rid="B29">web (2017)</xref>.</p>
<p>Thus, when training an MRI denoiser, no ground truth is available for the training procedure. Likewise, due to the previously discussed subject movement, the two independent noisy samples required by denoising strategies such as that of <xref ref-type="bibr" rid="B17">Lehtinen et&#x20;al. (2018)</xref> cannot reasonably be obtained. Either synthetic data must be generated for supervised learning, or unsupervised and self-supervised strategies must be employed. As such, we evaluate self-supervised solutions to MRI denoising. Deep self-supervised image denoisers have seen recent success on general image denoising tasks and provide robust denoisers without requiring access to clean images. Self-supervised denoisers generally underperform supervised techniques, but they arise naturally in cases like MRI, where purely supervised learning is infeasible.</p>
<p>While deep learning has seen success in many areas, there is a lack of methods focused on denoising MRI. Additionally, many traditional techniques denoise MRI in the magnitude space, discarding the innate spatial frequency information that MRI contains. Most available MRI denoising methods use a supervised approach with the original MRI as ground truth. We wanted to explore an unsupervised approach using the complex image space, where no ground truth data is needed. Therefore, we compare two unsupervised denoising approaches that denoise MRI in the spatial frequency space against more classical and widely used denoising methods.</p>
</sec>
<sec id="s2">
<title>2 Materials and Methods</title>
<sec id="s2-1">
<title>2.1 Related Work</title>
<p>Previous attempts at MRI denoising can be grouped into three categories: traditional methods, supervised learning, and unsupervised learning.</p>
<sec id="s2-1-1">
<title>2.1.1 Traditional Methods</title>
<p>Traditional MRI denoising techniques are generally based on filtering, transformations, or statistical methods such as <xref ref-type="bibr" rid="B18">Mohan et&#x20;al. (2014)</xref>. Three of the most widely-used methods currently are bilateral filtering by <xref ref-type="bibr" rid="B24">Tomasi and Manduchi (1998)</xref>, non-local means by <xref ref-type="bibr" rid="B4">Buades et&#x20;al. (2005)</xref>, and BM3D by <xref ref-type="bibr" rid="B7">Dabov et&#x20;al. (2007)</xref>.</p>
<p>The bilateral filter presented by <xref ref-type="bibr" rid="B24">Tomasi and Manduchi (1998)</xref> is an edge-preserving, non-iterative method. When applied to an image, it uses a low-pass denoising kernel that adapts to the spatial distribution of pixel values in the original image, which helps preserve edges while denoising. In the presence of sharp transitions, the kernel is weighted according to the transition. This behavior is modeled by a convolution of the image intensity values with a non-linear weighting function.</p>
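As an illustration, a naive NumPy sketch of a bilateral filter (the window size, sigma values, and toy image below are our own illustrative choices, not settings from the paper):

```python
import numpy as np

def bilateral_filter(img, win=5, sigma_spatial=2.0, sigma_range=0.1):
    """Naive bilateral filter: each output pixel is a weighted mean of its
    neighbors, with weights that fall off with both spatial distance and
    intensity difference, so strong edges are preserved."""
    r = win // 2
    pad = np.pad(img, r, mode="reflect")
    # spatial (domain) kernel, fixed for every pixel
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_spatial**2))
    out = np.empty_like(img)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + win, j:j + win]
            # range kernel: down-weight neighbors with different intensities
            rng_w = np.exp(-(patch - img[i, j])**2 / (2 * sigma_range**2))
            w = spatial * rng_w
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```

On a noisy step edge, this reduces the noise on each flat region while leaving the transition sharp, since neighbors across the edge receive near-zero range weight.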
<p>Non-local means (NLM), <xref ref-type="bibr" rid="B4">Buades et&#x20;al. (2005)</xref>, exploits the spatial self-similarity of natural images, using the redundancy among neighboring pixels to remove noise. The filter&#x2019;s simplicity lies in using those similarities to find patches elsewhere in the image that resemble the patch being denoised, a process known as neighborhood filtering. NLM assigns confidence weights based on each patch&#x2019;s similarity to the original patch and its distance from the center of the observed patch. The main issue with NLM is that, since it relies on a large search space, it can create a computational bottleneck.</p>
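For concreteness, a minimal NumPy sketch of the NLM idea (patch size, search window, and the filtering parameter <italic>h</italic> are illustrative choices, not the parameters used in the paper's NLM baseline):

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Sketch of non-local means: each pixel becomes a weighted average of
    pixels in a search window, weighted by patch similarity rather than
    spatial proximity alone."""
    pr, sr = patch // 2, search // 2
    pad = np.pad(img, pr + sr, mode="reflect")
    H, W = img.shape
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            ci, cj = i + pr + sr, j + pr + sr          # center in padded image
            ref = pad[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = pad[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    # weight decays with patch dissimilarity
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-d2 / h**2)
                    wsum += w
                    acc += w * pad[ni, nj]
            out[i, j] = acc / wsum
    return out
```

The nested loops over every pixel and every offset in the search window make the computational bottleneck mentioned above explicit: the cost grows with the product of the image size and the search window area.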
<p>BM3D, <xref ref-type="bibr" rid="B7">Dabov et&#x20;al. (2007)</xref>, is a robust algorithm with several tunable parameters for achieving the best denoising. It extends NLM in the sense that it also uses spatial similarities within the image. It starts by searching for patches with intensities similar to the patch being denoised, builds a 3D matrix from the aggregated patches, and applies a 3D transform. To remove high-frequency noise, the transform space is filtered and thresholded. Finally, a denoised 3D block is obtained by inverting the transform. To recover the original array, each patch is assigned a weight based on its variance and&#x20;distance.</p>
</sec>
<sec id="s2-1-2">
<title>2.1.2 Supervised Learning</title>
<p>One of the most well-known approaches to supervised denoising, DnCNN, is presented by <xref ref-type="bibr" rid="B34">Zhang et&#x20;al. (2017)</xref>. Their method uses a feed-forward Convolutional Neural Network (CNN) with residual modules and batch normalization to improve both speed and performance, which makes their network unique. Notably, the network does not need to know the noise level, so it can perform blind Gaussian denoising.</p>
<p>
<xref ref-type="bibr" rid="B2">Bermudez et&#x20;al. (2018)</xref> implemented an autoencoder with skip connections, testing it on a T1-weighted brain MRI dataset from healthy subjects with added Gaussian noise. <xref ref-type="bibr" rid="B1">Benou et&#x20;al. (2017)</xref> worked on spatio-temporal denoising of brain MRI using ensembles of deep neural networks, each trained on a different SNR variation. In this way, they generate multiple hypotheses and then use a classification network to select the most likely one to produce a clean output curve. This method outperformed both the dynamic NLM method of <xref ref-type="bibr" rid="B9">Gal et&#x20;al. (2010)</xref> and the stacked denoising autoencoders of <xref ref-type="bibr" rid="B26">Vincent et&#x20;al. (2010)</xref>. An interesting approach is presented by <xref ref-type="bibr" rid="B10">Jiang et&#x20;al. (2018)</xref>, who use a multi-channel DnCNN to remove Rician rather than Gaussian noise from magnitude MRI. They test their network for both known and unknown noise levels, which allows them to create a more general model. Finally, <xref ref-type="bibr" rid="B25">Tripathi and Bag (2020)</xref> present a CNN with residual learning to denoise synthetic brain MRI. They use five different clean synthetic magnitude datasets and add Rician noise to them. They also perform blind denoising, where the network is tested at a different noise level than it was trained on. Their blind denoising test yields an interesting result: a network trained on higher noise levels and tested on lower levels performs better than one trained and tested on low&#x20;noise.</p>
</sec>
<sec id="s2-1-3">
<title>2.1.3 Unsupervised Learning</title>
<p>For unsupervised image denoising, a novel method is presented by <xref ref-type="bibr" rid="B32">Xu et&#x20;al. (2020)</xref>, who use corrupted test images as their ground truth &#x201c;clean&#x201d; images. To train their network they use synthetic images consisting of small alterations of the corrupted test image: they add more noise to the test image and show that, if the alteration introduces only a small amount of noise, the network is still capable of denoising the corrupted image and producing a clean output. Given their training methodology, which trains an image-specific network for each image to be denoised, their approach is not well suited for MRI denoising, due to the number of images contained in an MRI volume; the denoising process would be too time-consuming.</p>
<p>One of the most effective models for unsupervised denoising is presented by <xref ref-type="bibr" rid="B22">Soltanayev and Chun (2018)</xref> and is based on Stein&#x2019;s unbiased risk estimator (SURE). The SURE estimator, presented by <xref ref-type="bibr" rid="B23">Stein (1981)</xref>, is an unbiased estimator of the MSE. Its only drawback is that it requires an analytical form; when none is available, <xref ref-type="bibr" rid="B19">Ramani et&#x20;al. (2008)</xref> proposed a Monte-Carlo-based SURE, MC-SURE. The work of <xref ref-type="bibr" rid="B22">Soltanayev and Chun (2018)</xref> overcomes previous shortcomings by making the Monte-Carlo approximation available for deep neural network models. Since it requires no noiseless ground truth data, deep neural networks can be trained for denoising in an unsupervised manner.</p>
<p>The Noise2Noise (N2N) model by <xref ref-type="bibr" rid="B17">Lehtinen et&#x20;al. (2018)</xref> saw success in denoising images by learning to predict one noisy image from another, training on independent pairs of noisy images. The result is a model that predicts the expected value of the noisy distribution for each pixel. For many real noise models (Gaussian, Poisson, etc.), this expected value is the clean signal.</p>
<p>Building upon this, Noise2Void (N2V), by <xref ref-type="bibr" rid="B13">Krull et&#x20;al. (2018)</xref>, removes the need for two independent samples and instead learns to denoise an image in a fully self-supervised way. In place of a second independent sample, N2V learns to predict each pixel from its receptive field, excluding the pixel itself.</p>
<p>Using this strategy, Noise2Self developed a general framework for this type of denoising problem in higher-dimensional spaces, and <xref ref-type="bibr" rid="B16">Laine et&#x20;al. (2019)</xref> denoted this form of network a &#x201c;blindspot&#x201d; network and provided several improvements.</p>
<p>Despite all the progress in unsupervised denoising in other areas, relatively little work has been done on unsupervised MRI denoising. One example is <xref ref-type="bibr" rid="B8">Eun et&#x20;al. (2020)</xref>, who introduce a cycle generative adversarial network (CycleGAN) to denoise compressed sensing MRI. We therefore wanted to explore this path further, given the potential that unsupervised learning has shown in other fields and the lack of clean ground truth data when working with&#x20;MRI.</p>
</sec>
</sec>
<sec id="s2-2">
<title>2.2 Background</title>
<sec id="s2-2-1">
<title>2.2.1&#x20;K-Space</title>
<p>In MRI terminology, k-space is the 2D or 3D Fourier transform of the measured MR image. When measuring an MRI, complex values are sampled using a pulse sequence of radio-frequency and gradient pulses. At the end of the scan, the data is mathematically processed to produce the final image; k-space therefore holds the raw data before reconstruction. K-space can be seen as an array of numbers representing spatial frequencies in the&#x20;MRI.</p>
<p>To transition from k-space to the complex image space, we apply an inverse fast Fourier transform, and vice versa. Even though the two spaces look different, the information they contain is exactly the same. In k-space, the axes represent spatial frequencies instead of positions, and the points plotted in this space do not correspond one-to-one to pixels of the image in the spatial domain. Every point in k-space contains phase and spatial frequency information that contributes to every pixel of the image, as seen in <xref ref-type="fig" rid="F1">Figure&#x20;1</xref>.</p>
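The equivalence of the two spaces can be verified with NumPy's FFT routines (the complex image here is synthetic; `fftshift` only serves the conventional centered display of k-space and does not change the information content):

```python
import numpy as np

# k-space <-> complex image space: an invertible 2D Fourier transform pair.
rng = np.random.default_rng(42)
img = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))

# forward: image -> k-space (fftshift moves low frequencies to the center)
kspace = np.fft.fftshift(np.fft.fft2(img))

# inverse: k-space -> image; the round trip recovers the image exactly
recovered = np.fft.ifft2(np.fft.ifftshift(kspace))

assert np.allclose(img, recovered)  # both spaces hold the same information
```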
<fig id="F1" position="float">
<label>FIGURE 1</label>
<caption>
<p>Representation of how points translate between k-space and complex image space.</p>
</caption>
<graphic xlink:href="frai-04-642731-g001.tif"/>
</fig>
<p>In MRI, the thermal noise that deteriorates the k-space is Gaussian. This Gaussian noise model can be defined as <inline-formula id="inf1">
<mml:math id="m1">
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mi>x</mml:mi>
<mml:mo>&#x2b;</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
</mml:math>
</inline-formula>, where <italic>x</italic> is the original MRI signal and <italic>n</italic> is Gaussian noise. Even after applying the inverse fast Fourier transform, the noise remains Gaussian. If we converted the complex MRI to magnitude MRI, the noise would become Rician. This is why we explore Gaussian denoising of complex-valued data and avoid dealing with Rician noise in the magnitude&#x20;space.</p>
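A small NumPy sketch of this noise model (the zero "clean" k-space and the noise level are illustrative; note that with zero signal, the magnitude noise follows the Rayleigh distribution, the zero-signal special case of the Rician distribution):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
x = np.zeros((n, n), dtype=complex)          # stand-in clean k-space
sigma = 0.05
noise = sigma * (rng.standard_normal((n, n))
                 + 1j * rng.standard_normal((n, n)))
y = x + noise                                 # noisy k-space, y = x + n

# The inverse FFT is linear (unitary with norm="ortho"), so the noise in
# the complex image space is still Gaussian with the same sigma.
img_noise = np.fft.ifft2(noise, norm="ortho")
assert abs(img_noise.real.std() - sigma) < 0.01

# Taking the magnitude is non-linear: the noise stops being zero-mean
# Gaussian and becomes Rician (Rayleigh here, since the signal is zero).
mag = np.abs(np.fft.ifft2(y, norm="ortho"))
assert mag.mean() > 0   # the magnitude noise is positively biased
```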
</sec>
<sec id="s2-2-2">
<title>2.2.2 SURE Estimator</title>
<p>When training a network, a gradient-based optimization algorithm such as stochastic gradient descent (SGD) <xref ref-type="bibr" rid="B3">Bottou (1999)</xref>, momentum, or the Adam optimizer <xref ref-type="bibr" rid="B11">Kingma and Ba (2015)</xref> is used to minimize the loss. In our case, we use the Mean Squared Error (MSE) <xref ref-type="bibr" rid="B30">web (2020a)</xref> to quantify the amount of noise present in the image.<disp-formula id="e1">
<mml:math id="m2">
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>M</mml:mi>
</mml:mfrac>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>M</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>&#x2016;</mml:mo>
<mml:mrow>
<mml:mi>h</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi>y</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>;</mml:mo>
<mml:mi>&#x3b8;</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:msup>
<mml:mi>x</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo>&#x2016;</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
<label>(1)</label>
</disp-formula>where <italic>M</italic> is the number of samples in one batch of data. The main issue with <xref ref-type="disp-formula" rid="e1">Eq. 1</xref> is that, since we are working in an unsupervised setting, we do not have access to the ground truth <italic>x</italic>. Therefore, an estimator of the MSE must be used: the SURE estimator presented in <xref ref-type="disp-formula" rid="e2">Eq. 2</xref>
<disp-formula id="e2">
<mml:math id="m3">
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>M</mml:mi>
</mml:mfrac>
<mml:mstyle displaystyle="true">
<mml:munderover>
<mml:mo>&#x2211;</mml:mo>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>M</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>&#x2016;</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi mathvariant="bold-italic">y</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="bold-italic">h</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi mathvariant="bold-italic">y</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>;</mml:mo>
<mml:mi mathvariant="bold-italic">&#x3b8;</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>&#x2016;</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>K</mml:mi>
<mml:msup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>&#x2b;</mml:mo>
<mml:mn>2</mml:mn>
<mml:msup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>i</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>K</mml:mi>
</mml:munderover>
<mml:mfrac>
<mml:mrow>
<mml:mo>&#x2202;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">h</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi mathvariant="bold-italic">y</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>;</mml:mo>
<mml:mi mathvariant="bold-italic">&#x3b8;</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2202;</mml:mo>
<mml:msub>
<mml:mi mathvariant="bold-italic">y</mml:mi>
<mml:mi>i</mml:mi>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mstyle>
</mml:mrow>
</mml:math>
<label>(2)</label>
</disp-formula>noting that no noiseless ground truth data were used in <xref ref-type="disp-formula" rid="e2">Eq.&#x20;2</xref>.</p>
<p>The remaining problem with the SURE estimator is that the last divergence term is intractable. However, it can be approximated using the Monte-Carlo SURE estimator of <xref ref-type="bibr" rid="B19">Ramani et&#x20;al. (2008)</xref>. The final risk estimator, which will be used as the loss function, is<disp-formula id="e3">
<mml:math id="m4">
<mml:mrow>
<mml:mfrac>
<mml:mn>1</mml:mn>
<mml:mi>M</mml:mi>
</mml:mfrac>
<mml:munderover>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>j</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
<mml:mi>M</mml:mi>
</mml:munderover>
<mml:mrow>
<mml:mo>{</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi mathvariant="bold-italic">y</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="bold-italic">h</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi mathvariant="bold-italic">y</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>;</mml:mo>
<mml:mi mathvariant="bold-italic">&#x3b8;</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>&#x2212;</mml:mo>
<mml:mi>K</mml:mi>
<mml:msup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
<mml:mo>&#x2b;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mi mathvariant="italic">&#x3f5;</mml:mi>
</mml:mfrac>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">n</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mtext>t</mml:mtext>
</mml:msup>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mi mathvariant="bold-italic">h</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi mathvariant="bold-italic">y</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>&#x2b;</mml:mo>
<mml:mi mathvariant="italic">&#x3f5;</mml:mi>
<mml:msup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">n</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>;</mml:mo>
<mml:mi mathvariant="bold-italic">&#x3b8;</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mi mathvariant="bold-italic">h</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msup>
<mml:mi mathvariant="bold-italic">y</mml:mi>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
<mml:mo>;</mml:mo>
<mml:mi mathvariant="bold-italic">&#x3b8;</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>}</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(3)</label>
</disp-formula>where <inline-formula id="inf2">
<mml:math id="m5">
<mml:mi>&#x3f5;</mml:mi>
</mml:math>
</inline-formula> is a small fixed positive number and <inline-formula id="inf3">
<mml:math id="m6">
<mml:mrow>
<mml:msup>
<mml:mrow>
<mml:mover accent="true">
<mml:mi mathvariant="bold-italic">n</mml:mi>
<mml:mo>&#x2dc;</mml:mo>
</mml:mover>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mi>j</mml:mi>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> is a single realization from the standard normal distribution for each training sample&#x20;<italic>j</italic>.</p>
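The MC-SURE estimate can be checked numerically with a toy denoiser (the signal, the linear shrinkage denoiser <code>h</code>, the shrinkage factor, and the value of the perturbation size are our own illustrative choices; <code>h</code> merely stands in for the network):

```python
import numpy as np

rng = np.random.default_rng(1)
K = 10_000
sigma = 0.1
x = np.sin(np.linspace(0, 20, K))            # "clean" signal (unknown in practice)
y = x + sigma * rng.standard_normal(K)       # observed noisy data

def h(y, a=0.8):
    # toy denoiser: simple linear shrinkage toward zero
    return a * y

# Monte-Carlo SURE: approximate the intractable divergence term with a
# single random probe n_tilde and a small perturbation eps.
eps = 1e-4
n_tilde = rng.standard_normal(K)
div = n_tilde @ (h(y + eps * n_tilde) - h(y)) / eps   # ~ sum_i dh_i/dy_i
sure = np.sum((y - h(y)) ** 2) - K * sigma**2 + 2 * sigma**2 * div

# SURE tracks the true squared error without ever touching x.
true_se = np.sum((h(y) - x) ** 2)
assert abs(sure - true_se) / true_se < 0.1
```

Because the estimator only needs the noisy data `y`, the noise level `sigma`, and two evaluations of the denoiser, it can serve directly as a training loss when no clean target exists.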
</sec>
<sec id="s2-2-3">
<title>2.2.3 Blindspot Network</title>
<p>
<xref ref-type="bibr" rid="B16">Laine et&#x20;al. (2019)</xref> provide an improved blindspot architecture and denoising procedure. The blindspot architecture combines multiple branches, each of which restricts its receptive field to a half-plane that does not contain the center pixel. The four branches are then combined using <inline-formula id="inf4">
<mml:math id="m7">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:mn>1</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> convolutions. This design allows the receptive field to be extended efficiently and arbitrarily in every direction while still excluding the center&#x20;pixel.
<p>In N2V, the center pixel&#x2019;s value is not exploited, to prevent the model from simply learning to output that value. However, applying Bayesian reasoning to the denoising task, we have for a particular noisy pixel <italic>y</italic> and corresponding clean signal <italic>x</italic>
<disp-formula id="e4">
<mml:math id="m8">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mo>&#x7c;</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mi>y</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mtext>&#x3a9;</mml:mtext>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x221d;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mo>&#x7c;</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mo>&#x7c;</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:msub>
<mml:mtext>&#x3a9;</mml:mtext>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(4)</label>
</disp-formula>where <inline-formula id="inf5">
<mml:math id="m9">
<mml:mrow>
<mml:msub>
<mml:mtext>&#x3a9;</mml:mtext>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> is the context given by the receptive field of the pixel <italic>y</italic>. Thus, using a blindspot architecture to model a Gaussian prior <inline-formula id="inf6">
<mml:math id="m10">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mo>&#x7c;</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:msub>
<mml:mtext>&#x3a9;</mml:mtext>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>, the posterior mean <inline-formula id="inf7">
<mml:math id="m11">
<mml:mrow>
<mml:msub>
<mml:mi mathvariant="script">E</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mo>&#x7c;</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mi>y</mml:mi>
<mml:mo>,</mml:mo>
<mml:mtext>&#x3a9;</mml:mtext>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula> has a closed-form solution for many noise models. This allows the previously unexploited center pixel data to be used at test time. In the case of MRI, with its Gaussian noise model, the posterior mean can be computed analytically.</p>
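For the Gaussian case, the closed-form posterior mean can be sketched as follows (function and variable names are ours; in the actual method the prior mean <code>mu</code> and variance <code>s2</code> would be predicted per pixel by the blindspot network from the context alone):

```python
import numpy as np

def posterior_mean(y, mu, s2, sigma2):
    """Closed-form posterior mean for a Gaussian prior and Gaussian noise.

    Combining a context-only prior x ~ N(mu, s2) with the noisy center
    pixel observation y ~ N(x, sigma2) gives
        E[x | y, Omega_y] = (s2 * y + sigma2 * mu) / (s2 + sigma2),
    a precision-weighted average of the observation and the prior mean.
    """
    return (s2 * y + sigma2 * mu) / (s2 + sigma2)

# Sanity checks on limiting behavior:
# a confident prior (s2 -> 0) ignores the noisy observation ...
assert np.isclose(posterior_mean(5.0, 1.0, 1e-12, 1.0), 1.0, atol=1e-6)
# ... a noiseless observation (sigma2 -> 0) is returned unchanged ...
assert np.isclose(posterior_mean(5.0, 1.0, 1.0, 1e-12), 5.0, atol=1e-6)
# ... and equal variances split the difference.
assert posterior_mean(2.0, 0.0, 1.0, 1.0) == 1.0
```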
</sec>
<sec id="s2-2-4">
<title>2.2.4 Datasets</title>
<sec id="s2-2-4-1">
<title>2.2.4.1 Knee MRI</title>
<p>The Center for Advanced Imaging Innovation and Research (CAI<sup>2</sup>R), in the Department of Radiology at New York University (NYU) School of Medicine and NYU Langone Health, released two MRI datasets, <xref ref-type="bibr" rid="B33">Zbontar et&#x20;al. (2018)</xref>, <xref ref-type="bibr" rid="B12">Zbontar et&#x20;al. (2020)</xref>, to support work on rapid image acquisition and advanced image reconstruction. The deidentified datasets consist of knee and brain scans containing raw k-space data. For this experiment, we used single-coil data only, as it is the most widely used modality and its data size is smaller than that of the multi-coil data.</p>
<p>The knee single-coil dataset contains 973 training subjects and 199 validation subjects. According to the project website, the fully sampled knee MRIs were obtained on 3 and 1.5 Tesla magnets. The raw dataset includes coronal proton density-weighted images with and without fat suppression. NYU fastMRI investigators provided data but did not participate in the analysis or writing of this report. A listing of NYU fastMRI investigators, subject to updates, can be found at <xref ref-type="bibr" rid="B31">web (2020b)</xref>.</p>
<p>Note that all knee MRIs contain noise that varies from subject to subject.</p>
</sec>
<sec id="s2-2-4-2">
<title>2.2.4.2 Brainweb</title>
<p>Most of today&#x2019;s image analysis methods expect a ground truth, even if only for validation. In the case of MRI, noise is inherent to the <italic>in vivo</italic> acquisition process, so no truly noise-free MR dataset exists. The Brainweb project provides a practical solution through its Simulated Brain Database (SBD) <xref ref-type="bibr" rid="B5">Cocosco et&#x20;al. (1997)</xref>; <xref ref-type="bibr" rid="B28">web (1998)</xref>; <xref ref-type="bibr" rid="B15">Kwan et&#x20;al. (1999)</xref>; <xref ref-type="bibr" rid="B14">Kwan et&#x20;al. (1996)</xref>; <xref ref-type="bibr" rid="B6">Collins et&#x20;al. (1998)</xref>, in which an MRI simulator is used to create realistic MRI data volumes. In addition to a predefined magnitude image dataset, the Brainweb simulator itself is exposed to allow custom simulations.</p>
<p>Using the custom simulator, we acquired raw frequency-space data for varied simulator parameters. This includes data generated for all combinations of the no, mild, moderate, and severe multiple sclerosis (MS) lesion anatomical models with the six available parameter templates. These six templates are generated by combining the AI and ICBM protocols with either T1, T2, or proton density (PD) weighting. For our purposes, we use only T1 and T2. Altogether, this allowed the generation of 16 brain MR volumes simulated from a realistic parameter set; 12 subjects were used for training and 4 for testing. Additionally, the custom simulator allows a noise level to be added; however, as we treat this data as ground truth, we did not use this feature. For all Brainweb experiments, we performed cross-validation to ensure the validity of the results.</p>
<p>Since our blindspot network expects square input, each individual slice of the MR volumes was zero-padded in k-space to have matching dimensions.</p>
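This padding step can be sketched as follows, assuming each slice's k-space is already centered (the function name and the symmetric-split convention are our assumptions):

```python
import numpy as np

def pad_kspace_to_square(kspace):
    """Zero-pad a centered 2-D k-space slice so both dimensions equal the
    larger one. Padding in k-space interpolates the reconstructed image
    without changing its field of view."""
    h, w = kspace.shape
    size = max(h, w)
    pad_h, pad_w = size - h, size - w
    # Split the padding as evenly as possible on both sides of each axis.
    return np.pad(kspace, ((pad_h // 2, pad_h - pad_h // 2),
                           (pad_w // 2, pad_w - pad_w // 2)))
```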
</sec>
</sec>
</sec>
<sec id="s2-3">
<title>2.3 Training</title>
<p>All models were trained and tested using a single NVIDIA GeForce GTX Titan X with 12 GB of memory.</p>
<sec id="s2-3-1">
<title>2.3.1 SURE Model</title>
<p>The gradient of <xref ref-type="disp-formula" rid="e3">Eq. 3</xref> can be calculated automatically by a deep learning framework during training. Therefore, we use <xref ref-type="disp-formula" rid="e3">Eq. 3</xref> as the cost function for a basic U-Net architecture, <xref ref-type="bibr" rid="B20">Ronneberger et&#x20;al. (2015)</xref>, with five convolutional layers on both&#x20;sides.</p>
<p>To train the SURE estimator in 2D, we use a U-Net of depth 5, a convolution kernel size of 3, and 48 initial feature maps. A LeakyReLU is applied after each convolutional layer, except for the last one, where no activation function is used. We train the network in batches of 10 for 300 epochs, using the Adam optimizer with an initial learning rate of <inline-formula id="inf8">
<mml:math id="m12">
<mml:mrow>
<mml:mn>3</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>4</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>. The data, both training and testing, is center cropped to 320&#x20;&#xd7; 320 for knee MRI and 192&#x20;&#xd7; 192 for brain MRI, using all available slices for&#x20;both.</p>
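Eq. 3 is not reproduced in this section; purely as an illustration, a standard Monte-Carlo estimator of SURE for Gaussian noise, with the divergence term approximated by a single random probe, can be sketched as follows. This is our own sketch under those assumptions and not necessarily the exact form of the paper's cost function:

```python
import numpy as np

def mc_sure(denoiser, y, sigma, eps=1e-3, seed=None):
    """Monte-Carlo SURE estimate of the per-pixel MSE of denoiser(y) under
    i.i.d. Gaussian noise with standard deviation sigma. The divergence of
    the denoiser is approximated with one random probe b."""
    rng = np.random.default_rng(seed)
    n = y.size
    f_y = denoiser(y)
    b = rng.standard_normal(y.shape)
    # Directional-derivative approximation of the divergence term.
    div = np.sum(b * (denoiser(y + eps * b) - f_y)) / eps
    return np.sum((f_y - y) ** 2) / n - sigma ** 2 + 2 * sigma ** 2 * div / n
```

For the identity denoiser `f(y) = y`, the estimate approaches `sigma ** 2`, which is indeed the true MSE of leaving the noisy image untouched.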
</sec>
<sec id="s2-3-2">
<title>2.3.2 Blindspot Model</title>
<p>Because MRI contains large signal-free regions and the noise standard deviation is shared across all pixels, many techniques exist to estimate the noise standard deviation <italic>&#x3c3;</italic>, <xref ref-type="bibr" rid="B21">Sardy et&#x20;al. (2001)</xref>. Thus, we use a blindspot architecture with knowledge of <italic>&#x3c3;</italic>, and our prior becomes <inline-formula id="inf9">
<mml:math id="m13">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mo>&#x7c;</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:msub>
<mml:mtext>&#x3a9;</mml:mtext>
<mml:mi>y</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</inline-formula>. This modifies <xref ref-type="disp-formula" rid="e4">Eq. 4</xref> in training to<disp-formula id="equ1">
<mml:math id="m14">
<mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mo>&#x7c;</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mi>y</mml:mi>
<mml:mo>,</mml:mo>
<mml:msub>
<mml:mtext>&#x3a9;</mml:mtext>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x221d;</mml:mo>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>y</mml:mi>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mo>&#x7c;</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mi>x</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mi>p</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mtext>&#x2009;</mml:mtext>
<mml:mo>&#x7c;</mml:mo>
<mml:mtext>&#x2009;</mml:mtext>
<mml:msub>
<mml:mtext>&#x3a9;</mml:mtext>
<mml:mi>y</mml:mi>
</mml:msub>
<mml:mo>,</mml:mo>
<mml:mi>&#x3c3;</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
</disp-formula>We train a 5-layer-deep blindspot network in batches of 5 for 300 epochs. The convolution kernel has a size of 3, and there are 48 initial feature maps. No activation function is used. We use the Adam optimizer with an initial learning rate of <inline-formula id="inf10">
<mml:math id="m15">
<mml:mrow>
<mml:mn>3</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>4</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>. The learning rate is reduced if the validation loss has not decreased after ten epochs. The data, both training and testing, is center cropped to 320&#x20;&#xd7; 320 for knee MRI and 192&#x20;&#xd7; 192 for brain MRI, using all available slices for&#x20;both. We used the same blindspot network and U-Net architecture as <xref ref-type="bibr" rid="B16">Laine et&#x20;al. (2019)</xref>; please refer there for a more detailed description of the network architecture.</p>
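The reduce-on-plateau schedule mentioned above can be sketched in a few lines. The halving factor of 0.5 is our assumption; the text only states that the rate is reduced after ten stagnant epochs:

```python
class ReduceOnPlateau:
    """Reduce the learning rate by `factor` when the validation loss has not
    improved for more than `patience` consecutive epochs."""
    def __init__(self, lr, patience=10, factor=0.5):
        self.lr, self.patience, self.factor = lr, patience, factor
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        if val_loss < self.best:      # improvement: remember it, reset counter
            self.best = val_loss
            self.bad_epochs = 0
        else:                         # stagnation: count, decay past patience
            self.bad_epochs += 1
            if self.bad_epochs > self.patience:
                self.lr *= self.factor
                self.bad_epochs = 0
        return self.lr
```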
</sec>
</sec>
</sec>
<sec id="s3">
<title>3 Results</title>
<p>For both datasets, different levels of noise were added to the original images to enable a quantitative comparison with NLM. Since both models assume Gaussian noise, we add only Gaussian noise to the images.</p>
<p>For the knee single-coil dataset, we started by adding noise with <inline-formula id="inf11">
<mml:math id="m16">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>. Then, we followed with twice the amount of noise with <inline-formula id="inf12">
<mml:math id="m17">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> to test both algorithms with an elevated amount of noise. Finally, the average background noise, <inline-formula id="inf13">
<mml:math id="m18">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>8.2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>6</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>, was calculated over all images and used for the last test. The three levels of noise can be seen in <xref ref-type="fig" rid="F2">Figure&#x20;2</xref>. Since the data comprises very small values, a scale factor is needed; this factor is calculated using the maximum value found in the dataset as a reference. For both networks, a scale factor of 500 was&#x20;used.</p>
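The scaling step can be sketched as follows. The exact rule mapping the dataset maximum to the factor of 500 is not stated, so the `target_max` parameter is our assumption:

```python
import numpy as np

def scale_factor(volumes, target_max=1.0):
    """Derive one global multiplicative scale from the dataset's maximum
    magnitude, so every input lands in a comparable numeric range."""
    data_max = max(float(np.abs(v).max()) for v in volumes)
    return target_max / data_max
```

For example, a dataset whose largest magnitude is 0.002 yields a factor of 500.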
<fig id="F2" position="float">
<label>FIGURE 2</label>
<caption>
<p>Different levels of noise. <bold>(A)</bold> Low level <inline-formula id="inf57">
<mml:math id="m67">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>8.2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>6</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>. <bold>(B)</bold> Medium level <inline-formula id="inf58">
<mml:math id="m68">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>. <bold>(C)</bold> High level <inline-formula id="inf59">
<mml:math id="m69">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
</caption>
<graphic xlink:href="frai-04-642731-g002.tif"/>
</fig>
<p>For the Brainweb dataset, we likewise added three levels of noise to examine how the networks behave: low-level noise with <inline-formula id="inf14">
<mml:math id="m19">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>50</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, medium-level noise with <inline-formula id="inf15">
<mml:math id="m20">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>100</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula> and high-level noise with <inline-formula id="inf16">
<mml:math id="m21">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>200</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>. In this case, since the data has much larger values, a higher sigma is used. The three levels of noise can be seen in <xref ref-type="fig" rid="F3">Figure&#x20;3</xref>. Note that the data has to be scaled as well, especially for the SURE network, which is highly sensitive to the input scale. For the Brainweb dataset, we scaled all input by a factor of 1/25,000. While the blindspot network produced good results even without the scaling factor, it performed slightly better with scaling.</p>
<fig id="F3" position="float">
<label>FIGURE 3</label>
<caption>
<p>Different levels of noise. <bold>(A)</bold> Low level <inline-formula id="inf60">
<mml:math id="m70">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>50</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>. <bold>(B)</bold> Medium level <inline-formula id="inf61">
<mml:math id="m71">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>100</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>. <bold>(C)</bold> High level <inline-formula id="inf62">
<mml:math id="m72">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>200</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>.</p>
</caption>
<graphic xlink:href="frai-04-642731-g003.tif"/>
</fig>
<p>To evaluate the proposed algorithm, three quantitative measures were used for the first three tests. Throughout all tests, a qualitative assessment, based on our perception of the images, is also used.</p>
<p>The three quantitative measures used are peak signal-to-noise ratio (PSNR), mean squared error (MSE) <xref ref-type="bibr" rid="B30">web (2020a)</xref>, and the structural similarity index measure (SSIM) <xref ref-type="bibr" rid="B27">Wang et&#x20;al. (2004)</xref>. MSE and PSNR are commonly used to assess image compression quality, while SSIM measures the structural similarity between two images.</p>
<p>MSE represents the cumulative squared error between a distorted image and the original. The lower the value of MSE, the lower the error. MSE can be defined as<disp-formula id="e5">
<mml:math id="m22">
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>E</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mstyle displaystyle="true">
<mml:mo>&#x2211;</mml:mo>
</mml:mstyle>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:msub>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:msub>
<mml:mi>I</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>m</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>n</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mo>&#x2217;</mml:mo>
<mml:mi>N</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
<label>(5)</label>
</disp-formula>where <italic>M</italic> and <italic>N</italic> are the number of rows and columns in the input&#x20;image.</p>
<p>PSNR computes the peak signal-to-noise ratio between two images. This ratio is used as a quality measurement between the original and a compressed or reconstructed image. The higher the PSNR, the better the quality of the image. PSNR can be defined as<disp-formula id="e6">
<mml:math id="m23">
<mml:mrow>
<mml:mi>P</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>N</mml:mi>
<mml:mi>R</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>10</mml:mn>
<mml:msub>
<mml:mrow>
<mml:mtext>log</mml:mtext>
</mml:mrow>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:msub>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mfrac>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>A</mml:mi>
<mml:msup>
<mml:mi>X</mml:mi>
<mml:mn>2</mml:mn>
</mml:msup>
</mml:mrow>
<mml:mrow>
<mml:mi>M</mml:mi>
<mml:mi>S</mml:mi>
<mml:mi>E</mml:mi>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:math>
<label>(6)</label>
</disp-formula>where MAX is the maximum achievable value in the input image data&#x20;type.</p>
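Eqs. 5 and 6 translate directly into code; a minimal NumPy version (the function names are ours):

```python
import numpy as np

def mse(img1, img2):
    """Eq. 5: mean of squared pixel differences over the M x N image."""
    return float(np.mean((img1 - img2) ** 2))

def psnr(img1, img2, data_max):
    """Eq. 6: 10 * log10(MAX^2 / MSE), where MAX (data_max) is the peak
    value achievable by the image data type."""
    return 10 * np.log10(data_max ** 2 / mse(img1, img2))
```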
<p>SSIM is a method for measuring the similarity between two images. The SSIM index can be viewed as a quality measure of one of the images being compared, with the other image regarded as the ground&#x20;truth.</p>
<p>The main difference between SSIM and PSNR or MSE is that SSIM quantifies the change in structural information, while PSNR and MSE estimate absolute errors. Structural information, such as luminance and contrast, builds on the fact that pixels have strong inter-dependencies, especially when they are spatially&#x20;close.</p>
<p>The overall index is a multiplicative combination of the three terms and can be written as follows:<disp-formula id="e7">
<mml:math id="m24">
<mml:mrow>
<mml:mtext>SSIM</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mi>&#x3b1;</mml:mi>
</mml:msup>
<mml:mo>&#x22c5;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mi>&#x3b2;</mml:mi>
</mml:msup>
<mml:mo>&#x22c5;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mrow>
<mml:mo>[</mml:mo>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mo>]</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mi>&#x3b3;</mml:mi>
</mml:msup>
</mml:mrow>
</mml:math>
<label>(7)</label>
</disp-formula>where<disp-formula id="e8">
<mml:math id="m25">
<mml:mrow>
<mml:mtable>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>l</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3bc;</mml:mi>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msubsup>
<mml:mi>&#x3bc;</mml:mi>
<mml:mi>y</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>c</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>y</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfrac>
<mml:mo>,</mml:mo>
</mml:mrow>
</mml:mtd>
</mml:mtr>
<mml:mtr>
<mml:mtd>
<mml:mrow>
<mml:mi>s</mml:mi>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:mtd>
</mml:mtr>
</mml:mtable>
</mml:mrow>
</mml:math>
<label>(8)</label>
</disp-formula>where <inline-formula id="inf17">
<mml:math id="m26">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf18">
<mml:math id="m27">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf19">
<mml:math id="m28">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula>, <inline-formula id="inf20">
<mml:math id="m29">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf21">
<mml:math id="m30">
<mml:mrow>
<mml:msub>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:msub>
</mml:mrow>
</mml:math>
</inline-formula> are the local means, standard deviations, and cross-covariance for images <italic>x</italic>, <italic>y</italic>. If <italic>&#x3b1;</italic> &#x3d; <italic>&#x3b2;</italic> &#x3d; <italic>&#x3b3;</italic> &#x3d; 1, and <inline-formula id="inf22">
<mml:math id="m31">
<mml:mrow>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>3</mml:mn>
</mml:msub>
<mml:mo>&#x3d;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
<mml:mo>/</mml:mo>
<mml:mn>2</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, the index simplifies to:<disp-formula id="e9">
<mml:math id="m32">
<mml:mrow>
<mml:mtext>SSIM</mml:mtext>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mo>,</mml:mo>
<mml:mi>y</mml:mi>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mo>&#x3d;</mml:mo>
<mml:mfrac>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mi>x</mml:mi>
</mml:msub>
<mml:msub>
<mml:mi>&#x3bc;</mml:mi>
<mml:mi>y</mml:mi>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:msub>
<mml:mi>&#x3c3;</mml:mi>
<mml:mrow>
<mml:mi>x</mml:mi>
<mml:mi>y</mml:mi>
</mml:mrow>
</mml:msub>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
<mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3bc;</mml:mi>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msubsup>
<mml:mi>&#x3bc;</mml:mi>
<mml:mi>y</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>1</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
<mml:mrow>
<mml:mo>(</mml:mo>
<mml:mrow>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>x</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msubsup>
<mml:mi>&#x3c3;</mml:mi>
<mml:mi>y</mml:mi>
<mml:mn>2</mml:mn>
</mml:msubsup>
<mml:mo>&#x2b;</mml:mo>
<mml:msub>
<mml:mi>C</mml:mi>
<mml:mn>2</mml:mn>
</mml:msub>
</mml:mrow>
<mml:mo>)</mml:mo>
</mml:mrow>
</mml:mrow>
</mml:mfrac>
</mml:mrow>
</mml:math>
<label>(9)</label>
</disp-formula>For all the results presented here, an optimal <italic>h</italic> parameter for the NLM algorithm was determined beforehand and set to <italic>h</italic>&#x20;&#x3d; 0.71. The patch size was set to 5&#x20;&#xd7; 5 with a patch distance of&#x20;6.</p>
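With scikit-image, an NLM baseline with these settings can be reproduced roughly as follows; the noisy test slice here is synthetic and stands in for an MRI magnitude image:

```python
import numpy as np
from skimage.restoration import denoise_nl_means

# Synthetic noisy slice for illustration only.
rng = np.random.default_rng(0)
noisy = rng.normal(0.5, 0.1, size=(64, 64))

# The tuned NLM settings from the text: h = 0.71, 5x5 patches, patch distance 6.
denoised = denoise_nl_means(noisy, h=0.71, patch_size=5, patch_distance=6)
```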
<p>The same tests were performed for both the SURE network and the blindspot network; see <xref ref-type="table" rid="T1">Table&#x20;1</xref> and <xref ref-type="table" rid="T2">Table&#x20;2</xref>, respectively. For each evaluation metric, the best-scoring algorithm is highlighted in&#x20;bold.</p>
<table-wrap id="T1" position="float">
<label>TABLE 1</label>
<caption>
<p>Test results knee single-coil dataset.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">
<italic>&#x3c3;</italic>
</th>
<th align="center">Noisy MSE</th>
<th align="center">SURE MSE</th>
<th align="center">Blindspot MSE</th>
<th align="center">NLM MSE</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">
<inline-formula id="inf23">
<mml:math id="m33">
<mml:mrow>
<mml:mn>8.2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>6</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf24">
<mml:math id="m34">
<mml:mrow>
<mml:mn>6.5954</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>11</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf25">
<mml:math id="m35">
<mml:mrow>
<mml:mn>3.6943</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>11</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf26">
<mml:math id="m36">
<mml:mrow>
<mml:mn>3.9075</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>11</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf27">
<mml:math id="m37">
<mml:mrow>
<mml:mn>3.9826</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>11</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf28">
<mml:math id="m38">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf29">
<mml:math id="m39">
<mml:mrow>
<mml:mn>9.8777</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>11</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf30">
<mml:math id="m40">
<mml:mrow>
<mml:mn>4.7123</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>11</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf31">
<mml:math id="m41">
<mml:mrow>
<mml:mn>4.8734</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>11</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf32">
<mml:math id="m42">
<mml:mrow>
<mml:mn>4.9732</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>11</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf33">
<mml:math id="m43">
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf34">
<mml:math id="m44">
<mml:mrow>
<mml:mn>4.2101</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>10</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf35">
<mml:math id="m45">
<mml:mrow>
<mml:mn>9.0616</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>11</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf36">
<mml:math id="m46">
<mml:mrow>
<mml:mn>8.7264</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>11</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="center">
<inline-formula id="inf37">
<mml:math id="m47">
<mml:mrow>
<mml:mn>9.0004</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>11</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
</tr>
</tbody>
</table>
<table>
<thead valign="top">
<tr>
<th align="left">
<italic>&#x3c3;</italic>
</th>
<th align="center">Noisy PSNR</th>
<th align="center">SURE PSNR</th>
<th align="center">Blindspot PSNR</th>
<th align="center">NLM PSNR</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">
<inline-formula id="inf38">
<mml:math id="m48">
<mml:mrow>
<mml:mn>8.2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>6</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="char" char=".">28.266</td>
<td align="char" char=".">30.866</td>
<td align="char" char=".">30.626</td>
<td align="char" char=".">30.555</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf39">
<mml:math id="m49">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="char" char=".">26.512</td>
<td align="char" char=".">29.846</td>
<td align="char" char=".">29.692</td>
<td align="char" char=".">29.610</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf40">
<mml:math id="m50">
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="char" char=".">20.226</td>
<td align="char" char=".">27.196</td>
<td align="char" char=".">27.329</td>
<td align="char" char=".">27.235</td>
</tr>
</tbody>
</table>
<table>
<thead valign="top">
<tr>
<th align="left">
<italic>&#x3c3;</italic>
</th>
<th align="center">Noisy SSIM</th>
<th align="center">SURE SSIM</th>
<th align="center">Blindspot SSIM</th>
<th align="center">NLM SSIM</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">
<inline-formula id="inf41">
<mml:math id="m51">
<mml:mrow>
<mml:mn>8.2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>6</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="char" char=".">0.7238</td>
<td align="char" char=".">0.7795</td>
<td align="char" char=".">0.7708</td>
<td align="char" char=".">0.7661</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf42">
<mml:math id="m52">
<mml:mrow>
<mml:mn>1</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="char" char=".">0.6487</td>
<td align="char" char=".">0.7284</td>
<td align="char" char=".">0.7215</td>
<td align="char" char=".">0.7119</td>
</tr>
<tr>
<td align="left">
<inline-formula id="inf43">
<mml:math id="m53">
<mml:mrow>
<mml:mn>2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>
</td>
<td align="char" char=".">0.3628</td>
<td align="char" char=".">0.5579</td>
<td align="char" char=".">0.5605</td>
<td align="char" char=".">0.5653</td>
</tr>
</tbody>
</table>
</table-wrap>
<table-wrap id="T2" position="float">
<label>TABLE 2</label>
<caption>
<p>Test results for the Brainweb dataset.</p>
</caption>
<table>
<thead valign="top">
<tr>
<th align="left">
<italic>&#x3c3;</italic>
</th>
<th align="center">Noisy MSE</th>
<th align="center">SURE MSE</th>
<th align="center">Blindspot MSE</th>
<th align="center">NLM MSE</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">50</td>
<td align="char" char=".">2,981.044</td>
<td align="char" char=".">1,281.977</td>
<td align="char" char=".">1,259.961</td>
<td align="char" char=".">1,322.726</td>
</tr>
<tr>
<td align="left">100</td>
<td align="char" char=".">12,332.774</td>
<td align="char" char=".">3,508.540</td>
<td align="char" char=".">2,758.001</td>
<td align="char" char=".">4,059.259</td>
</tr>
<tr>
<td align="left">200</td>
<td align="char" char=".">50,639.730</td>
<td align="char" char=".">9,150.021</td>
<td align="char" char=".">7,245.904</td>
<td align="char" char=".">11,578.606</td>
</tr>
</tbody>
</table>
<table>
<thead valign="top">
<tr>
<th align="left">
<italic>&#x3c3;</italic>
</th>
<th align="center">Noisy PSNR</th>
<th align="center">SURE PSNR</th>
<th align="center">Blindspot PSNR</th>
<th align="center">NLM PSNR</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">50</td>
<td align="char" char=".">33.524</td>
<td align="char" char=".">38.015</td>
<td align="char" char=".">38.012</td>
<td align="char" char=".">37.781</td>
</tr>
<tr>
<td align="left">100</td>
<td align="char" char=".">27.361</td>
<td align="char" char=".">34.036</td>
<td align="char" char=".">35.240</td>
<td align="char" char=".">33.166</td>
</tr>
<tr>
<td align="left">200</td>
<td align="char" char=".">21.227</td>
<td align="char" char=".">30.301</td>
<td align="char" char=".">31.429</td>
<td align="char" char=".">28.753</td>
</tr>
</tbody>
</table>
<table>
<thead valign="top">
<tr>
<th align="left">
<italic>&#x3c3;</italic>
</th>
<th align="center">Noisy SSIM</th>
<th align="center">SURE SSIM</th>
<th align="center">Blindspot SSIM</th>
<th align="center">NLM SSIM</th>
</tr>
</thead>
<tbody valign="top">
<tr>
<td align="left">50</td>
<td align="char" char=".">0.7663</td>
<td align="char" char=".">0.8971</td>
<td align="char" char=".">0.8977</td>
<td align="char" char=".">0.8790</td>
</tr>
<tr>
<td align="left">100</td>
<td align="char" char=".">0.6314</td>
<td align="char" char=".">0.8466</td>
<td align="char" char=".">0.9066</td>
<td align="char" char=".">0.8014</td>
</tr>
<tr>
<td align="left">200</td>
<td align="char" char=".">0.4710</td>
<td align="char" char=".">0.7829</td>
<td align="char" char=".">0.8409</td>
<td align="char" char=".">0.6996</td>
</tr>
</tbody>
</table>
</table-wrap>
</sec>
<sec id="s4">
<title>4 Discussion</title>
<p>As seen in <xref ref-type="table" rid="T1">Table&#x20;1</xref> for the knee data, the SURE network outperforms NLM and blindspot for both <inline-formula id="inf44">
<mml:math id="m54">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf45">
<mml:math id="m55">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>8.2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>6</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>. In both cases, its MSE is lower and its PSNR and SSIM are higher than those of NLM and blindspot. Note how, in the case of <inline-formula id="inf46">
<mml:math id="m56">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>, NLM does better than the SURE network but worse than blindspot, except on SSIM. Since this noise level is unrealistically high, data under such conditions would rarely be encountered in practice.</p>
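The quantitative comparison above relies on three standard image-quality metrics. A minimal sketch of how they can be computed, assuming 2D magnitude slices stored as NumPy arrays and using scikit-image's metric functions (these are scikit-image's APIs, not code from this study, and the toy data below is synthetic):

```python
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

def score(clean, denoised, data_range):
    """Return the (MSE, PSNR, SSIM) triple reported in Tables 1 and 2."""
    return (mean_squared_error(clean, denoised),
            peak_signal_noise_ratio(clean, denoised, data_range=data_range),
            structural_similarity(clean, denoised, data_range=data_range))

# Toy example: a noisy copy of a synthetic "slice" in [0, 1].
rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=(64, 64))
noisy = clean + rng.normal(0.0, 0.05, size=clean.shape)
mse, psnr, ssim = score(clean, noisy, data_range=1.0)
```

Note that PSNR is a deterministic function of MSE and the data range (PSNR = 10 log10(range² / MSE)), which is why the MSE and PSNR tables always rank the methods identically; SSIM can rank them differently, as seen at the highest knee noise level.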
<p>We can also see that the blindspot network outperforms NLM at all noise levels, except on SSIM at <inline-formula id="inf47">
<mml:math id="m57">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>. Compared to SURE, it performs worse for <inline-formula id="inf48">
<mml:math id="m58">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula> and <inline-formula id="inf49">
<mml:math id="m59">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>8.2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>6</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>. Note, however, that in the case of <inline-formula id="inf50">
<mml:math id="m60">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>, blindspot outperforms both SURE and NLM on every metric except SSIM, where NLM leads. This diverges from the results previously seen in the complex image space, where, at high noise levels, NLM was overall better than both blindspot and&#x20;SURE.</p>
<p>For the Brainweb dataset, both networks present better results in all scoring metrics than NLM. The best overall performing network is the blindspot network, edging out the SURE network, except in one case, PSNR for <inline-formula id="inf51">
<mml:math id="m61">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>50</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>, where SURE is slightly better than blindspot. Again, we believe that both networks do better than NLM even in the presence of high amounts of noise because there is no background noise at all in the original images; the networks therefore only need to remove the added&#x20;noise.</p>
<p>Another comparison can be made qualitatively, by inspecting the images and comparing all outputs. Using <xref ref-type="fig" rid="F4">Figures 4</xref>&#x2013;<xref ref-type="fig" rid="F6">6</xref> as references, at first glance NLM removes more noise, but at the cost of degrading the edges and tissue pixels. NLM excels at removing background noise yet performs worse on tissue, which is a problem because we want to preserve the tissue structure as much as possible; in some cases it also introduces artifacts that interfere with the tissue pixels. The SURE network denoises well while better preserving the tissue. In terms of edge preservation, NLM again shows an undesired effect, leaving edges looking worse than in the original&#x20;image.</p>
<fig id="F4" position="float">
<label>FIGURE 4</label>
<caption>
<p>Example of denoised knee MRI for <inline-formula id="inf52">
<mml:math id="m62">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>8.2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>6</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>. The example image is the middle slice from one of the subjects; the PSNR of each method for this subject is listed below. <bold>(A)</bold> Original image, no noise added&#x2014;<bold>(B)</bold> Noisy image&#x2014;<bold>(C)</bold> SURE PSNR &#x3d; 37.092&#x2014;<bold>(D)</bold> Blindspot PSNR &#x3d; 37.317&#x2014;<bold>(E)</bold> NLM PSNR &#x3d; 36.350.</p>
</caption>
<graphic xlink:href="frai-04-642731-g004.tif"/>
</fig>
<fig id="F5" position="float">
<label>FIGURE 5</label>
<caption>
<p>Example of denoised knee MRI for <inline-formula id="inf53">
<mml:math id="m63">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>1</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>. The example image is the middle slice from one of the subjects; the PSNR of each method for this subject is listed below. <bold>(A)</bold> Original image, no noise added&#x2014;<bold>(B)</bold> Noisy image&#x2014;<bold>(C)</bold> SURE PSNR &#x3d; 30.800&#x2014;<bold>(D)</bold> Blindspot PSNR &#x3d; 30.953&#x2014;<bold>(E)</bold> NLM PSNR &#x3d; 30.189.</p>
</caption>
<graphic xlink:href="frai-04-642731-g005.tif"/>
</fig>
<fig id="F6" position="float">
<label>FIGURE 6</label>
<caption>
<p>Example of denoised knee MRI for <inline-formula id="inf54">
<mml:math id="m64">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>2</mml:mn>
<mml:mo>&#xd7;</mml:mo>
<mml:msup>
<mml:mrow>
<mml:mn>10</mml:mn>
</mml:mrow>
<mml:mrow>
<mml:mo>&#x2212;</mml:mo>
<mml:mn>5</mml:mn>
</mml:mrow>
</mml:msup>
</mml:mrow>
</mml:math>
</inline-formula>. The example image is the middle slice from one of the subjects; the PSNR of each method for this subject is listed below. <bold>(A)</bold> Original image, no noise added&#x2014;<bold>(B)</bold> Noisy image&#x2014;<bold>(C)</bold> SURE PSNR &#x3d; 23.823&#x2014;<bold>(D)</bold> Blindspot PSNR &#x3d; 23.931&#x2014;<bold>(E)</bold> NLM PSNR &#x3d; 24.086.</p>
</caption>
<graphic xlink:href="frai-04-642731-g006.tif"/>
</fig>
<p>The qualitative results for the Brainweb dataset tell the same story. As shown in <xref ref-type="fig" rid="F7">Figures 7</xref>, <xref ref-type="fig" rid="F8">8</xref>, <xref ref-type="fig" rid="F9">9</xref>, NLM still presents an undesired effect on the images, which can be costly: a closer look reveals tissue details that NLM removes completely, along with the artifacts it introduces. This is clearly visible in <xref ref-type="fig" rid="F10">Figures 10</xref>,&#x20;<xref ref-type="fig" rid="F11">11</xref>.</p>
<fig id="F7" position="float">
<label>FIGURE 7</label>
<caption>
<p>Example of denoised brain MRI for <inline-formula id="inf63">
<mml:math id="m73">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>50</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>. The example image is the middle slice from one of the subjects; the PSNR of each method for this subject is listed below. <bold>(A)</bold> Original image, no noise added&#x2014;<bold>(B)</bold> Noisy image&#x2014;<bold>(C)</bold> SURE PSNR &#x3d; 43.883&#x2014;<bold>(D)</bold> Blindspot PSNR &#x3d; 44.731&#x2014;<bold>(E)</bold> NLM PSNR &#x3d; 43.000.</p>
</caption>
<graphic xlink:href="frai-04-642731-g007.tif"/>
</fig>
<fig id="F8" position="float">
<label>FIGURE 8</label>
<caption>
<p>Example of denoised brain MRI for <inline-formula id="inf55">
<mml:math id="m65">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>100</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>. The example image is the middle slice from one of the subjects; the PSNR of each method for this subject is listed below. <bold>(A)</bold> Original image, no noise added&#x2014;<bold>(B)</bold> Noisy image&#x2014;<bold>(C)</bold> SURE PSNR &#x3d; 38.130&#x2014;<bold>(D)</bold> Blindspot PSNR &#x3d; 39.072&#x2014;<bold>(E)</bold> NLM PSNR &#x3d; 37.108.</p>
</caption>
<graphic xlink:href="frai-04-642731-g008.tif"/>
</fig>
<fig id="F9" position="float">
<label>FIGURE 9</label>
<caption>
<p>Example of denoised brain MRI for <inline-formula id="inf56">
<mml:math id="m66">
<mml:mrow>
<mml:mi>&#x3c3;</mml:mi>
<mml:mo>&#x3d;</mml:mo>
<mml:mn>200</mml:mn>
</mml:mrow>
</mml:math>
</inline-formula>. The example image is the middle slice from one of the subjects; the PSNR of each method for this subject is listed below. <bold>(A)</bold> Original image, no noise added&#x2014;<bold>(B)</bold> Noisy image&#x2014;<bold>(C)</bold> SURE PSNR &#x3d; 29.610&#x2014;<bold>(D)</bold> Blindspot PSNR &#x3d; 30.904&#x2014;<bold>(E)</bold> NLM PSNR &#x3d; 26.616.</p>
</caption>
<graphic xlink:href="frai-04-642731-g009.tif"/>
</fig>
<fig id="F10" position="float">
<label>FIGURE 10</label>
<caption>
<p>
<bold>(A)</bold> Original close-up. No noise added. <bold>(B)</bold> NLM denoised close-up. <bold>(C)</bold> SURE network denoised close-up. <bold>(D)</bold> Blindspot denoised close-up. Observe how all three algorithms do a good job at denoising, but NLM introduces undesired artifacts.</p>
</caption>
<graphic xlink:href="frai-04-642731-g010.tif"/>
</fig>
<fig id="F11" position="float">
<label>FIGURE 11</label>
<caption>
<p>
<bold>(A)</bold> Original close-up. No noise added. <bold>(B)</bold> NLM denoised close-up. <bold>(C)</bold> SURE network denoised close-up. <bold>(D)</bold> Blindspot denoised close-up. Observe how NLM completely removes some of the tissue, while SURE and blindspot remove less noise but do a better job of maintaining the tissue&#x2019;s structure without introducing any artifacts.</p>
</caption>
<graphic xlink:href="frai-04-642731-g011.tif"/>
</fig>
<p>Having seen that both networks outperform NLM in most categories, the next step was to work with the original images from the knee dataset, without adding any extra noise. In this test no quantitative measure can be used, since there is no clean reference image to compare against; therefore, only qualitative measures will be&#x20;used.</p>
<p>As seen in <xref ref-type="fig" rid="F12">Figures 12</xref>, <xref ref-type="fig" rid="F13">13</xref>, both networks have mixed results. They still do a better job at preserving the edges and tissue, but sometimes struggle to remove noise from parts of the image without any tissue. Several circumstances explain this. First, during training there is no ground truth to compare against, which can lead to over-training and over-fitting. Second, the noise inherent to the images might not be Gaussian. The earlier results for both datasets support this: the SURE and blindspot networks were outperformed only in the presence of high levels of noise on the knee dataset, whereas under the same high-noise conditions on the Brainweb dataset both networks outperformed NLM. The background noise in the knee dataset therefore has a negative effect on the networks, which might indicate that it is not truly Gaussian. This discrepancy in the type of noise might also render the calculated <italic>&#x3c3;</italic> irrelevant and misleading, since <italic>&#x3c3;</italic> is used by both networks. Despite all of this, the networks are competitive with NLM in most&#x20;cases.</p>
<fig id="F12" position="float">
<label>FIGURE 12</label>
<caption>
<p>Example 1 of denoised knee MRI without adding any noise. The example image is the middle slice from one of the subjects. <bold>(A)</bold> Original image, no noise&#x2014;<bold>(B)</bold> SURE denoised image&#x2014;<bold>(C)</bold> Blindspot denoised image&#x2014;<bold>(D)</bold> NLM denoised image.</p>
</caption>
<graphic xlink:href="frai-04-642731-g012.tif"/>
</fig>
<fig id="F13" position="float">
<label>FIGURE 13</label>
<caption>
<p>Example 2 of denoised knee MRI without adding any noise. The example image is the middle slice from one of the subjects. <bold>(A)</bold> Original image, no noise&#x2014;<bold>(B)</bold> SURE denoised image&#x2014;<bold>(C)</bold> Blindspot denoised image&#x2014;<bold>(D)</bold> NLM denoised image.</p>
</caption>
<graphic xlink:href="frai-04-642731-g013.tif"/>
</fig>
</sec>
<sec id="s5">
<title>5 Conclusion</title>
<p>We evaluated two unsupervised approaches for denoising magnetic resonance images (MRI): one based on Stein&#x2019;s Unbiased Risk Estimator (SURE) and one based on a blindspot network. Working in the complex image space innate to MRI, we tested a real dataset of knee MRI and a synthetic dataset of brain MRI. Both networks were compared against Non-Local Means (NLM) using quantitative and qualitative measures, and both outperformed NLM on all scoring metrics except in the presence of exceptionally high levels of noise. One interesting direction that we would like to explore is 3D denoising with both networks; this is especially compelling for the blindspot network, since it will require exploring a 3D receptive&#x20;field.</p>
</sec>
</body>
<back>
<sec id="s6">
<title>Data Availability Statement</title>
<p>The data analyzed in this study is subject to the following licenses/restrictions: a personalized download code must be requested from the dataset owners. Requests to access these datasets should be directed to <email>fastmri@med.nyu.edu</email>.</p>
</sec>
<sec id="s7">
<title>Ethics Statement</title>
<p>Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.</p>
</sec>
<sec id="s8">
<title>Author Contributions</title>
<p>MML implemented and ran the experiments and contributed to the manuscript. JF helped organize the datasets and contributed to the manuscript. JV conceived of the project, managed the work and contributed to the manuscript.</p>
</sec>
<sec id="s9">
<title>Funding</title>
<p>This work was supported by the Balsells Foundation and the National Institutes of Health Grant No. 1R15GM128166-01.</p>
</sec>
<sec sec-type="COI-statement" id="s10">
<title>Conflict of Interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Benou</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Veksler</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Friedman</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Riklin Raviv</surname>
<given-names>T.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Ensemble of Expert Deep Neural Networks for Spatio-Temporal Denoising of Contrast-Enhanced MRI Sequences</article-title>. <source>Med. Image Anal.</source> <volume>42</volume>, <fpage>145</fpage>&#x2013;<lpage>159</lpage>. <pub-id pub-id-type="doi">10.1016/j.media.2017.07.006</pub-id>
(<comment>Accessed November 19, 2020</comment>) </citation>
</ref>
<ref id="B2">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Bermudez</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Plassard</surname>
<given-names>A. J.</given-names>
</name>
<name>
<surname>Davis</surname>
<given-names>T. L.</given-names>
</name>
<name>
<surname>Newton</surname>
<given-names>A. T.</given-names>
</name>
<name>
<surname>Resnick</surname>
<given-names>S. M.</given-names>
</name>
<name>
<surname>Landman</surname>
<given-names>B. A.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Learning Implicit Brain MRI Manifolds with Deep Learning</article-title>. <source>Proc. SPIE Int. Soc. Opt. Eng.</source> <volume>10574</volume>, <fpage>408</fpage>&#x2013;<lpage>414</lpage>. <pub-id pub-id-type="doi">10.1117/12.2293515</pub-id>
(<comment>Accessed November 19, 2020</comment>) </citation>
</ref>
<ref id="B3">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Bottou</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>1999</year>). <source>On-Line Learning and Stochastic Approximations</source>. <publisher-loc>USA</publisher-loc>: <publisher-name>Cambridge University Press</publisher-name>, <fpage>9</fpage>&#x2013;<lpage>42</lpage>. <pub-id pub-id-type="doi">10.1017/cbo9780511569920.003</pub-id> </citation>
</ref>
<ref id="B4">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Buades</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Coll</surname>
<given-names>B.</given-names>
</name>
<name>
<surname>Morel</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2005</year>). &#x201c;<article-title>A Non-local Algorithm for Image Denoising</article-title>,&#x201d; in <conf-name>2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR&#x2019;05)</conf-name>, <fpage>60</fpage>&#x2013;<lpage>65</lpage>. </citation>
</ref>
<ref id="B5">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Cocosco</surname>
<given-names>C. A.</given-names>
</name>
<name>
<surname>Kollokian</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Kwan</surname>
<given-names>R. K.-S.</given-names>
</name>
<name>
<surname>Evans</surname>
<given-names>A. C.</given-names>
</name>
</person-group> (<year>1997</year>). <article-title>Brainweb: Online Interface to a 3D MRI Simulated Brain Database</article-title>. <source>NeuroImage</source> <volume>5</volume>, <fpage>4</fpage>. <pub-id pub-id-type="doi">10.1016/S1053-8119(97)80018-3</pub-id> </citation>
</ref>
<ref id="B6">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Collins</surname>
<given-names>D. L.</given-names>
</name>
<name>
<surname>Zijdenbos</surname>
<given-names>A. P.</given-names>
</name>
<name>
<surname>Kollokian</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Sled</surname>
<given-names>J.&#x20;G.</given-names>
</name>
<name>
<surname>Kabani</surname>
<given-names>N. J.</given-names>
</name>
<name>
<surname>Holmes</surname>
<given-names>C. J.</given-names>
</name>
<etal/>
</person-group> (<year>1998</year>). <article-title>Design and Construction of a Realistic Digital Brain Phantom</article-title>. <source>IEEE Trans. Med. Imaging</source> <volume>17</volume>, <fpage>463</fpage>&#x2013;<lpage>468</lpage>. <pub-id pub-id-type="doi">10.1109/42.712135</pub-id> </citation>
</ref>
<ref id="B7">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Dabov</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Foi</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Katkovnik</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Egiazarian</surname>
<given-names>K.</given-names>
</name>
</person-group> (<year>2007</year>). <article-title>Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering</article-title>. <source>IEEE Trans. Image Process.</source> <volume>16</volume>, <fpage>2080</fpage>&#x2013;<lpage>2095</lpage>. <pub-id pub-id-type="doi">10.1109/TIP.2007.901238</pub-id>
(<comment>Accessed November 19, 2020</comment>) </citation>
</ref>
<ref id="B8">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Eun</surname>
<given-names>D.-i.</given-names>
</name>
<name>
<surname>Jang</surname>
<given-names>R.</given-names>
</name>
<name>
<surname>Ha</surname>
<given-names>W. S.</given-names>
</name>
<name>
<surname>Lee</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Jung</surname>
<given-names>S. C.</given-names>
</name>
<name>
<surname>Kim</surname>
<given-names>N.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>Deep-learning-based Image Quality Enhancement of Compressed Sensing Magnetic Resonance Imaging of Vessel wall: Comparison of Self-Supervised and Unsupervised Approaches</article-title>. <source>Sci. Rep.</source> <volume>10</volume>, <fpage>13950</fpage>. <pub-id pub-id-type="doi">10.1038/s41598-020-69932-w</pub-id>
(<comment>Accessed November 28, 2020</comment>) </citation>
</ref>
<ref id="B9">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Gal</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Mehnert</surname>
<given-names>A. J.&#x20;H.</given-names>
</name>
<name>
<surname>Bradley</surname>
<given-names>A. P.</given-names>
</name>
<name>
<surname>McMahon</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Kennedy</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Crozier</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>Denoising of Dynamic Contrast-Enhanced MR Images Using Dynamic Nonlocal Means</article-title>. <source>IEEE Trans. Med. Imaging</source> <volume>29</volume>, <fpage>302</fpage>&#x2013;<lpage>310</lpage>. <pub-id pub-id-type="doi">10.1109/TMI.2009.2026575</pub-id>
(<comment>Accessed November 19, 2020</comment>) </citation>
</ref>
<ref id="B10">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Jiang</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Dou</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Vosters</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>X.</given-names>
</name>
<name>
<surname>Sun</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Tan</surname>
<given-names>T.</given-names>
</name>
</person-group> (<year>2018</year>). <article-title>Denoising of 3D Magnetic Resonance Images with Multi-Channel Residual Learning of Convolutional Neural Network</article-title>. <source>Jpn. J.&#x20;Radiol.</source> <volume>36</volume>, <fpage>566</fpage>&#x2013;<lpage>574</lpage>. <pub-id pub-id-type="doi">10.1007/s11604-018-0758-8</pub-id>
(<comment>Accessed November 28, 2020</comment>) </citation>
</ref>
<ref id="B11">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Kingma</surname>
<given-names>D. P.</given-names>
</name>
<name>
<surname>Ba</surname>
<given-names>J.</given-names>
</name>
</person-group> (<year>2015</year>). &#x201c;<article-title>Adam: a Method for Stochastic Optimization</article-title>,&#x201d; in <conf-name>3rd International Conference on Learning Representations, ICLR 2015</conf-name> (<publisher-loc>San Diego, CA, USA</publisher-loc>: <publisher-name>Conference Track Proceedings</publisher-name>). </citation>
</ref>
<ref id="B12">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Knoll</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Zbontar</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Sriram</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Muckley</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Bruno</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Defazio</surname>
<given-names>A.</given-names>
</name>
<etal/>
</person-group> (<year>2020</year>). <article-title>fastMRI: a Publicly Available Raw K-Space and Dicom Dataset of Knee Images for Accelerated MR Image Reconstruction Using Machine Learning</article-title>. <source>Radiol. Artif. Intell.</source> <volume>2</volume>, <fpage>e190007</fpage>. <pub-id pub-id-type="doi">10.1148/ryai.2020190007</pub-id>
(<comment>Accessed November 27, 2020</comment>) </citation>
</ref>
<ref id="B13">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Krull</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Buchholz</surname>
<given-names>T. O.</given-names>
</name>
<name>
<surname>Jug</surname>
<given-names>F.</given-names>
</name>
</person-group> (<year>2018</year>). <source>Noise2void - Learning Denoising from Single Noisy Images</source>. <publisher-name>CoRR abs/1811</publisher-name>, <fpage>10980</fpage>.</citation>
</ref>
<ref id="B14">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kwan</surname>
<given-names>R. K.-S.</given-names>
</name>
<name>
<surname>Evans</surname>
<given-names>A. C.</given-names>
</name>
<name>
<surname>Pike</surname>
<given-names>G. B.</given-names>
</name>
</person-group> (<year>1996</year>). <article-title>An Extensible MRI Simulator for post-processing Evaluation</article-title>. <source>Visualization Biomed. Comput.</source> <volume>VBC&#x2019;96</volume>, <fpage>135</fpage>&#x2013;<lpage>140</lpage>. <pub-id pub-id-type="doi">10.1007/BFb0046947</pub-id> </citation>
</ref>
<ref id="B15">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Kwan</surname>
<given-names>R. K.-S.</given-names>
</name>
<name>
<surname>Evans</surname>
<given-names>A. C.</given-names>
</name>
<name>
<surname>Pike</surname>
<given-names>G. B.</given-names>
</name>
</person-group> (<year>1999</year>). <article-title>MRI Simulation-Based Evaluation of Image-Processing and Classification Methods</article-title>. <source>IEEE Trans. Med. Imaging</source> <volume>18</volume>, <fpage>1085</fpage>&#x2013;<lpage>1097</lpage>. <pub-id pub-id-type="doi">10.1109/42.816072</pub-id> </citation>
</ref>
<ref id="B16">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Laine</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Karras</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Lehtinen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Aila</surname>
<given-names>T.</given-names>
</name>
</person-group> (<year>2019</year>). &#x201c;<article-title>High-quality Self-Supervised Deep Image Denoising</article-title>,&#x201d; in <source>Advances in Neural Information Processing Systems</source> (<publisher-name>Curran Associates, Inc.</publisher-name>), <fpage>6970</fpage>&#x2013;<lpage>6980</lpage>. (<comment>Accessed November 19, 2020</comment>) </citation>
</ref>
<ref id="B17">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Lehtinen</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Munkberg</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Hasselgren</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Laine</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Karras</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Aittala</surname>
<given-names>M.</given-names>
</name>
<etal/>
</person-group> (<year>2018</year>). &#x201c;<article-title>Noise2Noise: Learning Image Restoration without Clean Data</article-title>,&#x201d; in <conf-name>Proceedings of the 35th International Conference on Machine Learning</conf-name>, <fpage>2965</fpage>&#x2013;<lpage>2974</lpage>. (<comment>Accessed November 27, 2020</comment>) </citation>
</ref>
<ref id="B18">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Mohan</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Krishnaveni</surname>
<given-names>V.</given-names>
</name>
<name>
<surname>Guo</surname>
<given-names>Y.</given-names>
</name>
</person-group> (<year>2014</year>). <article-title>A Survey on the Magnetic Resonance Image Denoising Methods</article-title>. <source>Biomed. Signal Process. Control.</source> <volume>9</volume>, <fpage>56</fpage>&#x2013;<lpage>69</lpage>. <pub-id pub-id-type="doi">10.1016/j.bspc.2013.10.007</pub-id>
(<comment>Accessed November 19, 2020</comment>) </citation>
</ref>
<ref id="B19">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Ramani</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Blu</surname>
<given-names>T.</given-names>
</name>
<name>
<surname>Unser</surname>
<given-names>M.</given-names>
</name>
</person-group> (<year>2008</year>). <article-title>Monte-Carlo SURE: A Black-Box Optimization of Regularization Parameters for General Denoising Algorithms</article-title>. <source>IEEE Trans. Image Process.</source> <volume>17</volume>, <fpage>1540</fpage>&#x2013;<lpage>1554</lpage>. <pub-id pub-id-type="doi">10.1109/TIP.2008.2001404</pub-id>
(<comment>Accessed November 19, 2020</comment>) </citation>
</ref>
<ref id="B20">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Ronneberger</surname>
<given-names>O.</given-names>
</name>
<name>
<surname>Fischer</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Brox</surname>
<given-names>T.</given-names>
</name>
</person-group> (<year>2015</year>). &#x201c;<article-title>U-net: Convolutional Networks for Biomedical Image Segmentation</article-title>,&#x201d; in <source>Medical Image Computing and Computer-Assisted Intervention</source> (<publisher-name>MICCAI</publisher-name>), <fpage>234</fpage>&#x2013;<lpage>241</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-24574-4_28</pub-id>
(<comment>Accessed November 27, 2020</comment>) </citation>
</ref>
<ref id="B21">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Sardy</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Tseng</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Bruce</surname>
<given-names>A.</given-names>
</name>
</person-group> (<year>2001</year>). <article-title>Robust Wavelet Denoising</article-title>. <source>IEEE Trans. Signal. Process.</source> <volume>49</volume>, <fpage>1146</fpage>&#x2013;<lpage>1152</lpage>. <pub-id pub-id-type="doi">10.1109/78.923297</pub-id>
(<comment>Accessed November 28, 2020</comment>) </citation>
</ref>
<ref id="B22">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Soltanayev</surname>
<given-names>S.</given-names>
</name>
<name>
<surname>Chun</surname>
<given-names>S. Y.</given-names>
</name>
</person-group> (<year>2018</year>). &#x201c;<article-title>Training Deep Learning Based Denoisers without Ground Truth Data</article-title>,&#x201d; in <source>Advances in Neural Information Processing Systems</source> (<publisher-name>Curran Associates, Inc.</publisher-name>), <fpage>3257</fpage>&#x2013;<lpage>3267</lpage>. <pub-id pub-id-type="doi">10.5555/3327144.3327246</pub-id>
(<comment>Accessed November 19, 2020</comment>) </citation>
</ref>
<ref id="B23">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Stein</surname>
<given-names>C. M.</given-names>
</name>
</person-group> (<year>1981</year>). <article-title>Estimation of the Mean of a Multivariate Normal Distribution</article-title>. <source>Ann. Statist.</source> <volume>9</volume>, <fpage>1135</fpage>&#x2013;<lpage>1151</lpage>. <pub-id pub-id-type="doi">10.1214/aos/1176345632</pub-id>
(<comment>Accessed November 19, 2020</comment>) </citation>
</ref>
<ref id="B24">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Tomasi</surname>
<given-names>C.</given-names>
</name>
<name>
<surname>Manduchi</surname>
<given-names>R.</given-names>
</name>
</person-group> (<year>1998</year>). &#x201c;<article-title>Bilateral Filtering for Gray and Color Images</article-title>,&#x201d; in <conf-name>Proceedings of the Sixth International Conference on Computer Vision (ICCV &#x2019;98)</conf-name> (<publisher-name>IEEE Computer Society</publisher-name>), <fpage>839</fpage>. </citation>
</ref>
<ref id="B25">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Tripathi</surname>
<given-names>P. C.</given-names>
</name>
<name>
<surname>Bag</surname>
<given-names>S.</given-names>
</name>
</person-group> (<year>2020</year>). <article-title>CNN-DMRI: A Convolutional Neural Network for Denoising of Magnetic Resonance Images</article-title>. <source>Pattern Recognit. Lett.</source> <volume>135</volume>, <fpage>57</fpage>&#x2013;<lpage>63</lpage>. <pub-id pub-id-type="doi">10.1016/j.patrec.2020.03.036</pub-id>
(<comment>Accessed November 28, 2020</comment>) </citation>
</ref>
<ref id="B26">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Vincent</surname>
<given-names>P.</given-names>
</name>
<name>
<surname>Larochelle</surname>
<given-names>H.</given-names>
</name>
<name>
<surname>Lajoie</surname>
<given-names>I.</given-names>
</name>
<name>
<surname>Bengio</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Manzagol</surname>
<given-names>P. A.</given-names>
</name>
</person-group> (<year>2010</year>). <article-title>Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion</article-title>. <source>J.&#x20;Mach. Learn. Res.</source> <volume>11</volume>, <fpage>3371</fpage>&#x2013;<lpage>3408</lpage>. <pub-id pub-id-type="doi">10.5555/1756006.1953039</pub-id>
(<comment>Accessed November 19, 2020</comment>) </citation>
</ref>
<ref id="B27">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Wang</surname>
<given-names>Z.</given-names>
</name>
<name>
<surname>Bovik</surname>
<given-names>A. C.</given-names>
</name>
<name>
<surname>Sheikh</surname>
<given-names>H. R.</given-names>
</name>
<name>
<surname>Simoncelli</surname>
<given-names>E. P.</given-names>
</name>
</person-group> (<year>2004</year>). <article-title>Image Quality Assessment: From Error Visibility to Structural Similarity</article-title>. <source>IEEE Trans. Image Process.</source> <volume>13</volume>, <fpage>600</fpage>&#x2013;<lpage>612</lpage>. <pub-id pub-id-type="doi">10.1109/TIP.2003.819861</pub-id> (<comment>Accessed November 19, 2020</comment>) </citation>
</ref>
<ref id="B28">
<citation citation-type="book">
<collab>web</collab> (<year>1998</year>). <source>Online</source> (<comment>Accessed November 19, 2020</comment>)</citation>
</ref>
<ref id="B29">
<citation citation-type="book">
<collab>web</collab> (<year>2017</year>). <source>Online</source> (<comment>Accessed November 28, 2020</comment>)</citation>
</ref>
<ref id="B30">
<citation citation-type="book">
<collab>web</collab> (<year>2020a</year>). <source>Online</source> (<comment>Accessed November 19, 2020</comment>)</citation>
</ref>
<ref id="B31">
<citation citation-type="book">
<collab>web</collab> (<year>2020b</year>). <source>Online</source> (<comment>Accessed November 28, 2020</comment>)</citation>
</ref>
<ref id="B32">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Xu</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Huang</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Cheng</surname>
<given-names>M.-M.</given-names>
</name>
<name>
<surname>Liu</surname>
<given-names>L.</given-names>
</name>
<name>
<surname>Zhu</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Xu</surname>
<given-names>Z.</given-names>
</name>
<etal/>
</person-group> (<year>2020</year>). <article-title>Noisy-as-clean: Learning Self-Supervised Denoising from Corrupted Image</article-title>. <source>IEEE Trans. Image Process.</source> <volume>29</volume>, <fpage>9316</fpage>&#x2013;<lpage>9329</lpage>. <pub-id pub-id-type="doi">10.1109/TIP.2020.3026622</pub-id>
(<comment>Accessed November 28, 2020</comment>) </citation>
</ref>
<ref id="B33">
<citation citation-type="book">
<person-group person-group-type="author">
<name>
<surname>Zbontar</surname>
<given-names>J.</given-names>
</name>
<name>
<surname>Knoll</surname>
<given-names>F.</given-names>
</name>
<name>
<surname>Sriram</surname>
<given-names>A.</given-names>
</name>
<name>
<surname>Muckley</surname>
<given-names>M. J.</given-names>
</name>
<name>
<surname>Bruno</surname>
<given-names>M.</given-names>
</name>
<name>
<surname>Defazio</surname>
<given-names>A.</given-names>
</name>
<etal/>
</person-group> (<year>2018</year>). <source>fastMRI: An Open Dataset and Benchmarks for Accelerated MRI</source>.</citation>
</ref>
<ref id="B34">
<citation citation-type="journal">
<person-group person-group-type="author">
<name>
<surname>Zhang</surname>
<given-names>K.</given-names>
</name>
<name>
<surname>Zuo</surname>
<given-names>W.</given-names>
</name>
<name>
<surname>Chen</surname>
<given-names>Y.</given-names>
</name>
<name>
<surname>Meng</surname>
<given-names>D.</given-names>
</name>
<name>
<surname>Zhang</surname>
<given-names>L.</given-names>
</name>
</person-group> (<year>2017</year>). <article-title>Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising</article-title>. <source>IEEE Trans. Image Process.</source> <volume>26</volume>, <fpage>3142</fpage>&#x2013;<lpage>3155</lpage>. <pub-id pub-id-type="doi">10.1109/TIP.2017.2662206</pub-id>
(<comment>Accessed November 28, 2020</comment>) </citation>
</ref>
</ref-list>
</back>
</article>