<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<?covid-19-tdm?>
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="review-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Big Data</journal-id>
<journal-title>Frontiers in Big Data</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Big Data</abbrev-journal-title>
<issn pub-type="epub">2624-909X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fdata.2023.1120989</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Big Data</subject>
<subj-group>
<subject>Review</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>AI-based radiodiagnosis using chest X-rays: A review</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author">
<name><surname>Akhter</surname> <given-names>Yasmeena</given-names></name>
<uri xlink:href="http://loop.frontiersin.org/people/1576967/overview"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Singh</surname> <given-names>Richa</given-names></name>
<uri xlink:href="http://loop.frontiersin.org/people/1048576/overview"/>
</contrib>
<contrib contrib-type="author" corresp="yes">
<name><surname>Vatsa</surname> <given-names>Mayank</given-names></name>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1049931/overview"/>
</contrib>
</contrib-group>
<aff><institution>Indian Institute of Technology Jodhpur</institution>, <addr-line>Jodhpur</addr-line>, <country>India</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited by: Nitesh V. Chawla, University of Notre Dame, United States</p></fn>
<fn fn-type="edited-by"><p>Reviewed by: Mehul S. Raval, Ahmedabad University, India; Ceren Kaya, Bulent Ecevit University, T&#x000FC;rkiye</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Mayank Vatsa <email>mvasta&#x00040;iitj.ac.in</email></corresp>
<fn fn-type="other" id="fn001"><p>This article was submitted to Data Analytics for Social Impact, a section of the journal Frontiers in Big Data</p></fn></author-notes>
<pub-date pub-type="epub">
<day>06</day>
<month>04</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>6</volume>
<elocation-id>1120989</elocation-id>
<history>
<date date-type="received">
<day>10</day>
<month>12</month>
<year>2022</year>
</date>
<date date-type="accepted">
<day>06</day>
<month>01</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2023 Akhter, Singh and Vatsa.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Akhter, Singh and Vatsa</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license> </permissions>
<abstract>
<p>A chest radiograph or chest X-ray (CXR) is a common, fast, non-invasive, and relatively cheap radiological examination method in medical sciences. CXRs can aid in diagnosing many lung ailments such as pneumonia, tuberculosis, pneumoconiosis, COVID-19, and lung cancer. In addition to other radiological examinations, 2 billion CXRs are performed worldwide every year. However, the workforce available to handle this workload in hospitals is limited, particularly in developing and low-income nations. Recent advances in AI, particularly in computer vision, have drawn attention to solving challenging medical image analysis problems. Healthcare is one of the areas where AI/ML-based assistive screening/diagnostic aids can play a crucial part in social welfare. However, they face multiple challenges, such as small sample spaces, data privacy, poor-quality samples, adversarial attacks, and most importantly, model interpretability for reliance on machine intelligence. This paper provides a structured review of CXR-based analysis for different tasks and lung diseases and, in particular, the challenges faced by AI/ML-based diagnostic systems. Further, we provide an overview of existing datasets, evaluation metrics for different tasks, and patents issued. We also present key challenges and open problems in this research domain.</p></abstract>
<kwd-group>
<kwd>chest X-ray</kwd>
<kwd>trusted AI</kwd>
<kwd>interpretable deep learning</kwd>
<kwd>Pneumoconiosis</kwd>
<kwd>tuberculosis</kwd>
<kwd>pneumonia</kwd>
<kwd>COVID-19</kwd>
</kwd-group>
<counts>
<fig-count count="8"/>
<table-count count="8"/>
<equation-count count="10"/>
<ref-count count="242"/>
<page-count count="27"/>
<word-count count="21068"/>
</counts>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="s1">
<title>1. Introduction</title>
<p>Advances in medical technology have enhanced the processes of disease diagnosis, prevention, monitoring, treatment and care. Imaging technologies such as computed tomography (CT), magnetic resonance imaging (MRI), ultrasonography (USG), PET and others, along with digital pathology, make it easier for medical practitioners to assess and treat disorders. <xref ref-type="table" rid="T1">Table 1</xref> provides a comparative overview of the common imaging modalities used in medical sciences.<xref ref-type="fn" rid="fn0001"><sup>1</sup></xref> <xref ref-type="fn" rid="fn0002"><sup>2</sup></xref> Every year, a massive number of investigations are performed across the globe to assess human health for disease diagnosis and treatment, and the data generated by hospitals annually runs into petabytes (IDC, <xref ref-type="bibr" rid="B79">2014</xref>). This &#x02018;big data&#x00027; includes all electronic health records (EHRs), consisting of medical imaging, lab reports, genomics, clinical notes, and financial and operational data (Murphy, <xref ref-type="bibr" rid="B135">2019</xref>). Of the total data generated by hospitals, the largest contribution comes from radiology or imaging data; however, 97% of this data remains unanalyzed or unused (Murphy, <xref ref-type="bibr" rid="B135">2019</xref>).</p>
<table-wrap position="float" id="T1">
<label>Table 1</label>
<caption><p>Comparative analysis of common and widely used imaging modalities for medical applications.</p></caption> 
<table frame="hsides" rules="groups">
<thead>
<tr style="background-color:#919497;color:#ffffff">
<th valign="top" align="left"><bold>Specifications</bold></th>
<th valign="top" align="left"><bold>CT</bold></th>
<th valign="top" align="left"><bold>MRI</bold></th>
<th valign="top" align="left"><bold>X-Ray</bold></th>
<th valign="top" align="left"><bold>PET</bold></th>
<th valign="top" align="left"><bold>SPECT</bold></th>
<th valign="top" align="left"><bold>USG</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Full form</td>
<td valign="top" align="left">Computer Tomography</td>
<td valign="top" align="left">Magnetic Resonance Imaging</td>
<td valign="top" align="left">X-radiation/ R&#x000F6;ntgen radiation</td>
<td valign="top" align="left">Positron Emission Tomography</td>
<td valign="top" align="left">Single Photon Emission Computed Tomography</td>
<td valign="top" align="left">Ultrasound/ Ultrasonography</td>
</tr> <tr>
<td valign="top" align="left">Working principle</td>
<td valign="top" align="left">Uses multiple X-rays at different angles to generate 3D image</td>
<td valign="top" align="left">Uses magnets and pulsed radio waves to elicit a response from water molecules inside the human body</td>
<td valign="top" align="left">An X-ray beam passed through the body is attenuated by denser tissue, casting a shadow of that tissue</td>
<td valign="top" align="left">A radioactive tracer that emits positrons is injected; the emissions are then tracked over time to form a 3D image</td>
<td valign="top" align="left">Same as PET</td>
<td valign="top" align="left">Emits high-frequency sound waves in short pulses; reflections from the area of interest are received by a transducer</td>
</tr> <tr>
<td valign="top" align="left">Usage/ application</td>
<td valign="top" align="left">Recommended for all structures of human body (soft/ bone/blood vessels)</td>
<td valign="top" align="left">Best Suited for soft tissues</td>
<td valign="top" align="left">Recommended for diseased tissues/organs like lungs and bony structures such as teeth, skull etc.</td>
<td valign="top" align="left">Allows tracing of biological processes within the human body</td>
<td valign="top" align="left">Same as PET</td>
<td valign="top" align="left">Best suited for internal organs. Not recommended for bony structures</td>
</tr> <tr>
<td valign="top" align="left">Scanner cost ($)</td>
<td valign="top" align="left">85&#x02013;450 K</td>
<td valign="top" align="left">225&#x02013;500 K&#x0002B;</td>
<td valign="top" align="left">40&#x02013;175 K</td>
<td valign="top" align="left">225&#x02013;750 K</td>
<td valign="top" align="left">400&#x02013;600 K</td>
<td valign="top" align="left">20&#x02013;200 K</td>
</tr> <tr>
<td valign="top" align="left">Radiation exposure</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">None</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">None</td>
</tr> <tr>
<td valign="top" align="left">Per scan cost ($)</td>
<td valign="top" align="left">1,200&#x02013;3,200</td>
<td valign="top" align="left">1,200&#x02013;4,000</td>
<td valign="top" align="left">&#x0007E;70</td>
<td valign="top" align="left">3,000&#x02013;6,000</td>
<td/>
<td valign="top" align="left">100&#x02013;1,000</td>
</tr> <tr>
<td valign="top" align="left">Time of scanning</td>
<td valign="top" align="left">30 s</td>
<td valign="top" align="left">10 min&#x02013;2 h</td>
<td valign="top" align="left">A few seconds</td>
<td valign="top" align="left">2&#x02013;4 h</td>
<td valign="top" align="left">2&#x02013;4 h</td>
<td valign="top" align="left">10&#x02013;15 min</td>
</tr> <tr>
<td valign="top" align="left">Side effect</td>
<td valign="top" align="left">Excessive exposure can lead to cancer</td>
<td/>
<td valign="top" align="left">Prolonged exposure is hazardous</td>
<td valign="top" align="left">Radioactive allergy can occur. Overdue exposure can be dangerous</td>
<td valign="top" align="left">Same as PET</td>
<td valign="top" align="left">Comparatively safer</td>
</tr> <tr>
<td valign="top" align="left">Spatial resolution (mm)</td>
<td valign="top" align="left">0.5&#x02013;1</td>
<td valign="top" align="left">0.2</td>
<td valign="top" align="left">-</td>
<td valign="top" align="left">6&#x02013;10</td>
<td valign="top" align="left">7&#x02013;15</td>
<td valign="top" align="left">0.1&#x02013;1</td>
</tr> <tr>
<td valign="top" align="left">Details of soft/hard tissue</td>
<td valign="top" align="left">Higher-contrast images are generated; ideal for both soft and hard tissue</td>
<td valign="top" align="left">Data with higher details of soft tissues are received</td>
<td valign="top" align="left">Can also be used for soft tissues such as the gall bladder, lungs, etc.</td>
<td valign="top" align="left">Covers biological phenomenon such as drug delivery etc.</td>
<td valign="top" align="left">Allows inspection of the functioning of various body organs; useful in brain disorders, heart problems and bone disorders</td>
<td valign="top" align="left">Soft tissues such as muscles, internal organs etc.</td>
</tr> <tr>
<td valign="top" align="left">Limitations</td>
<td valign="top" align="left">Patients with a large body size may not fit in the scanner</td>
<td valign="top" align="left">Heavy patients may not fit in the scanner. Also, patients with pacemakers or tattoos are advised against the scan</td>
<td valign="top" align="left">Limited to few body parts</td>
<td valign="top" align="left">Kids and pregnant women are not recommended. Expensive.</td>
<td valign="top" align="left">Long scan time, low resolution, higher artifacts rate. Expensive</td>
<td valign="top" align="left">Objects deeper in the body or hidden under bone are not captured. Air spaces also interfere with the scanning process.</td>
</tr></tbody>
</table>
</table-wrap>
<p>Among all the imaging modalities, X-ray is the most common, fast and inexpensive, used to diagnose many disorders of the human body such as fractures and dislocations, and ailments such as cancer, osteoporosis of bones, and chest conditions such as pneumonia, tuberculosis, COVID-19, and many more. It is a non-invasive and painless medical examination in which radiation emitted by an electric device passes through the patient&#x02019;s body to generate a 2D image bearing the impression of internal body structures. It is estimated that more than 3.5 billion diagnostic X-rays are performed annually worldwide (Mitchell, <xref ref-type="bibr" rid="B129">2012</xref>), contributing 40% of the total imaging count per year (WHO, <xref ref-type="bibr" rid="B223">2016</xref>); of these, about 2 billion are CXRs. However, the availability of a trained workforce to handle this workload is limited, particularly in developing and low-income nations. For instance, in some parts of India there is one radiologist for every 100,000 patients, while in the U.S. the ratio is one radiologist for every 10,000 patients.</p>
<p>In recent years, with the unprecedented advancements in deep learning and computer vision, computer-aided diagnosis has started to assist the diagnostic process and ease the workload for doctors. CXR-based analysis with machine learning and deep learning has drawn attention among researchers as a route to easy and reliable solutions for different lung diseases. Many attempts have been made to provide automatic CXR-based diagnosis and to increase the acceptance of AI-based solutions. Currently, many commercial products that have received the CE mark (Europe) and/or FDA clearance (United States) are available for clinical use, for instance, qXR by <ext-link ext-link-type="uri" xlink:href="https://qure.ai">qure.ai</ext-link> (Singh et al., <xref ref-type="bibr" rid="B189">2018</xref>), TIRESYA by Digitec (Kim et al., <xref ref-type="bibr" rid="B100">2017</xref>), Lunit INSIGHT CXR by Lunit (Hwang et al., <xref ref-type="bibr" rid="B76">2019</xref>), Auto lung by Samsung Healthcare (Sim et al., <xref ref-type="bibr" rid="B187">2020</xref>), AI-Rad Companion by Siemens Healthineers (Fischer et al., <xref ref-type="bibr" rid="B51">2020</xref>), CAD4COVID-XRay by Thirona (Murphy et al., <xref ref-type="bibr" rid="B136">2020</xref>) and many more.</p>
<p>Based on the projection, CXRs fall into three categories: posteroanterior (PA), anteroposterior (AP) and lateral (LL). <xref ref-type="fig" rid="F1">Figure 1</xref> showcases CXR samples for the three projections. The PA view is the standard projection, with the X-ray beam traversing the patient from posterior to anterior. The AP view is the opposite alternative, where the X-ray beam passes through the patient&#x02019;s chest from anterior to posterior. A lateral view is usually performed erect as a left lateral (the default). It demonstrates a better anatomical view of the heart and assesses the posterior costophrenic recesses. It is generally done to assess the retrosternal and retrocardiac airspaces.<xref ref-type="fn" rid="fn0003"><sup>3</sup></xref> <xref ref-type="table" rid="T2">Table 2</xref> tabulates the differences between the AP and PA views. Patient alignment can also compromise the assessment of the chest X-ray for different organs such as the heart, mediastinum, tracheal position, and lung appearances. Rotation of the patient can lead to misleading appearances in CXRs, such as in heart size: with a left rotation in a PA CXR, the heart appears enlarged, and vice versa. Moreover, rotation can affect the assessment of soft tissue in CXRs, misleading the impressions in the lungs, for instance at the costophrenic angle.<xref ref-type="fn" rid="fn0004"><sup>4</sup></xref> About 25% of the total CXRs per year face &#x02018;reject rates&#x00027; due to image quality or patient positioning (Little et al., <xref ref-type="bibr" rid="B116">2017</xref>).</p>
<fig id="F1" position="float">
<label>Figure 1</label>
<caption><p>Showcasing the chest-X rays for three projections. <bold>(A)</bold> AP view, <bold>(B)</bold> PA view, and <bold>(C)</bold> Lateral View.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-06-1120989-g0001.tif"/>
</fig>
<table-wrap position="float" id="T2">
<label>Table 2</label>
<caption><p>Illustrates the differences between two common CXR projections.</p></caption> 
<table frame="hsides" rules="groups">
<thead>
<tr style="background-color:#919497;color:#ffffff">
<th valign="top" align="left"><bold>PA view</bold></th>
<th valign="top" align="left"><bold>AP view</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Standard frontal Chest projection</td>
<td valign="top" align="left">Alternative frontal projection to the PA</td>
</tr> <tr>
<td valign="top" align="left">X-ray beam traverses the patient from posterior to anterior</td>
<td valign="top" align="left">X-ray beam traverses the patient from anterior to posterior</td>
</tr> <tr>
<td valign="top" align="left">Requires full inspiration and a standing position from the patient</td>
<td valign="top" align="left">Can be performed with the patient sitting on the bed</td>
</tr> <tr>
<td valign="top" align="left">Best practice to examine lungs, mediastinum and thoracic cavity</td>
<td valign="top" align="left">Best practice for intubated and sick patients</td>
</tr> <tr>
<td valign="top" align="left">Heart size appears normal</td>
<td valign="top" align="left">Heart size appears magnified</td>
</tr> <tr>
<td valign="top" align="left">Images are of higher quality and a better option to assess heart size</td>
<td valign="top" align="left">Not a good option for assessing the size of the heart</td>
</tr></tbody>
</table>
</table-wrap>
<p>In the existing literature, with the release of multiple datasets for different lung diseases, different tasks have been established with CXR data. Below is the list of tasks accomplished for CXR-based analysis using different ML and DL approaches. <xref ref-type="fig" rid="F2">Figure 2</xref> showcases the transition across different tasks for CXR-based image analysis.</p>
<list list-type="bullet">
<list-item><p>Image enhancement: The data collected from hospitals does not always contribute to the detection process because the quality of samples varies. So, before proposing a detection pipeline, authors have applied different CXR enhancement techniques for noise reduction, contrast enhancement, edge detection and more.</p></list-item>
<list-item><p>Segmentation: In CXR, segmenting the ROI usually gives the disease detection pipeline an edge. It removes the irrelevant parts of the CXR, leaving less room for misdiagnosis. Existing work has focused on the segmentation of the lung field, ribs, diseased regions, diaphragm, costophrenic angle and support devices.</p></list-item>
<list-item><p>Image classification: For the CXR datasets, multi-class and multi-label classification tasks have been performed using ML and DL approaches. With datasets such as CheXpert (Irvin et al., <xref ref-type="bibr" rid="B80">2019</xref>), ChestXray14 (Wang et al., <xref ref-type="bibr" rid="B217">2017</xref>) etc., multi-label classification is done. It reflects the different manifestations (local labels) in CXR due to any disease. For instance, Pneumoconiosis can cause multiple manifestations in the lung tissue, such as atelectasis, nodules, fibrosis, emphysema and many more. Similarly, in multi-class, we differentiate CXR into a particular class for diseases. For instance, the detection of pneumonia in CXR is a multi-class problem. We need to distinguish viral, bacterial and COVID-19, representing three classes (types) of pneumonia.</p></list-item>
<list-item><p>Disease localization: It specifies the region within CXR infected by any particular disease. This is generally indicated by a bounding box, dot or circular shape.</p></list-item>
<list-item><p>Image generation: The datasets are generally small and also suffer from class imbalance. To enlarge the training set, approaches beyond affine-transformation-based data augmentation, such as Generative Adversarial Network-based approaches, are used. Analyses are then performed on both the real and synthetic CXRs.</p></list-item>
<list-item><p>Report generation: The generation of reports for a given CXR is one of the recent areas covered in CXR-based image analysis. The task involves reporting all the findings present in CXR in a text file.</p></list-item>
<list-item><p>Model explainability: Given the remarkable performance of deep models, explainability is essential for building trust in machine-intelligence-based decisions. An explanation justifies the decision process, and interpretability helps users understand the mechanism behind algorithmic predictions.</p></list-item>
</list>
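<p>The difference between the multi-label and multi-class settings described above can be made concrete with a minimal NumPy sketch; the finding names, logit values, and 0.5 threshold below are purely illustrative and not taken from any cited system.</p>
<preformat>
```python
import numpy as np

# Hypothetical per-finding logits from a CXR model (names for illustration only).
findings = ["atelectasis", "nodule", "fibrosis", "emphysema"]
logits = np.array([2.1, -0.4, 1.3, -2.0])

# Multi-label: an independent sigmoid per finding; several can be positive at once.
probs = 1.0 / (1.0 + np.exp(-logits))
positive = [f for f, p in zip(findings, probs) if p > 0.5]

# Multi-class: softmax over mutually exclusive classes, e.g. pneumonia subtypes.
classes = ["viral", "bacterial", "covid-19"]
class_logits = np.array([0.2, 1.7, -0.5])
softmax = np.exp(class_logits - class_logits.max())
softmax /= softmax.sum()
predicted = classes[int(np.argmax(softmax))]

print(positive)   # every finding whose sigmoid probability exceeds 0.5
print(predicted)  # the single most likely pneumonia subtype
```
</preformat>
<p>In the multi-label case each finding receives an independent probability, so several findings can be reported for one CXR; in the multi-class case the softmax forces exactly one class to be chosen.</p>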
<fig id="F2" position="float">
<label>Figure 2</label>
<caption><p>Showcasing the transition across different tasks in CXR-based analysis for a given input image.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-06-1120989-g0002.tif"/>
</fig>
<p>The availability of intelligent machine diagnostics for chest X-rays helps reduce the information overload and exhaustion of radiologists by interpreting and reporting radiology scans. Many diseases affect the lungs, including lung cancer, bronchitis, COPD, fibrosis, and many more. The literature review below is based on the publicly available datasets and the work done for these common diseases. <xref ref-type="fig" rid="F3">Figure 3</xref> showcases the different areas for which literature on CXR-based analysis exists.</p>
<fig id="F3" position="float">
<label>Figure 3</label>
<caption><p>Showcasing research problems which have been studied in the literature.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-06-1120989-g0003.tif"/>
</fig>
<p>In this review article, we focus on the use of computer vision, machine learning, and deep learning algorithms for disorders where CXR is a standard medical investigation. We discuss related work on the tasks mentioned above for CXR-based analysis. We further present the literature, in terms of both publications and patents, for widely studied disorders such as TB, pneumonia, pneumoconiosis, COVID-19, and lung cancer. We also discuss the evaluation metrics used to assess the performance of different tasks, as well as the publicly available datasets for various disorders and tasks. <xref ref-type="fig" rid="F4">Figure 4</xref> shows the schematic organization of the paper.</p>
<fig id="F4" position="float">
<label>Figure 4</label>
<caption><p>Illustrating the schematic structure of the paper.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-06-1120989-g0004.tif"/>
</fig>
</sec>
<sec id="s2">
<title>2. Task-based literature review</title>
<p>We first review the different tasks in CXR-based analysis, such as pre-processing, classification, and disease localization.</p>
<sec>
<title>2.1. Image pre-processing</title>
<p>Image pre-processing includes enhancement and segmentation tasks, which are either rule-based/handcrafted or deep learning based.</p>
<sec>
<title>2.1.1. Pre-deep learning based approaches</title>
<p>Sherrier and Johnson (<xref ref-type="bibr" rid="B184">1987</xref>) used a region-based histogram equalization technique to improve the image quality of CXR locally and obtain an enhanced image. Zhang D. et al. (<xref ref-type="bibr" rid="B235">2021</xref>) used a dynamic histogram enhancement technique. Abin et al. (<xref ref-type="bibr" rid="B3">2022</xref>) used different image enhancement techniques, such as Brightness-Preserving Bi-Histogram Equalization (BBHE) (Zadbuke, <xref ref-type="bibr" rid="B232">2012</xref>), Equal Area Dualistic Sub-Image Histogram Equalization (DSIHE) (Yao et al., <xref ref-type="bibr" rid="B228">2016</xref>), and Recursive Mean-Separate Histogram Equalization (RMSHE) (Chen and Ramli, <xref ref-type="bibr" rid="B27">2003</xref>), followed by Particle Swarm Optimization (PSO) (Settles, <xref ref-type="bibr" rid="B180">2005</xref>) to further enhance the CXRs for detecting pneumonia. Soleymanpour et al. (<xref ref-type="bibr" rid="B191">2011</xref>) used adaptive contrast equalization for enhancement and morphological-operation-based region growing to find the lung contour for lung segmentation, followed by an oriented spatial Gabor filter (Gabor, <xref ref-type="bibr" rid="B52">1946</xref>) for rib suppression. Candemir et al. (<xref ref-type="bibr" rid="B19">2013</xref>) used a graph cut optimization method (Boykov and Funka-Lea, <xref ref-type="bibr" rid="B17">2006</xref>) to find the lung boundary. Van Ginneken et al. (<xref ref-type="bibr" rid="B208">2006</xref>) used three approaches, active shape models (Cootes et al., <xref ref-type="bibr" rid="B34">1994</xref>), active appearance models (Cootes et al., <xref ref-type="bibr" rid="B33">2001</xref>), and pixelwise classification, to segment the lung fields in CXRs. Li et al. (<xref ref-type="bibr" rid="B109">2001</xref>) used an edge-detection-based approach, calculating vertical and horizontal derivatives to find the RoI in CXR. Annangi et al. (<xref ref-type="bibr" rid="B8">2010</xref>) used edge detection with an active-contour-based approach for lung segmentation.</p>
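<p>As a concrete illustration of the histogram-based enhancement family discussed above, the following is a minimal NumPy sketch of plain global histogram equalization; the cited region-based, bi-histogram (BBHE) and recursive (RMSHE) variants apply the same cumulative-histogram mapping to sub-regions or sub-histograms. The low-contrast input image is synthetic, not a real radiograph.</p>
<preformat>
```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image.

    Maps each gray level through the normalized cumulative histogram,
    spreading the intensities over the full 0..255 range. Region-based
    and bi-histogram variants apply the same idea to sub-regions or
    sub-histograms of the image.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]            # first nonzero value of the CDF
    total = img.size
    # Standard equalization mapping, scaled back to 0..255.
    lut = np.round((cdf - cdf_min) / max(total - cdf_min, 1) * 255.0)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[img]

# A low-contrast toy "radiograph": intensities squeezed into 100..120.
rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 121, size=(64, 64)).astype(np.uint8)
enhanced = equalize_histogram(low_contrast)
print(low_contrast.min(), low_contrast.max())  # narrow input range
print(enhanced.min(), enhanced.max())          # stretched toward 0..255
```
</preformat>
<p>On the synthetic low-contrast patch, the mapping stretches the narrow 100&#x02013;120 intensity range across the full 8-bit scale.</p>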
</sec>
<sec>
<title>2.1.2. Deep learning based approaches</title>
<p>Abdullah-Al-Wadud et al. (<xref ref-type="bibr" rid="B2">2007</xref>) proposed enhancing the CXR images input to a CNN model for pneumonia detection. Hasegawa et al. (<xref ref-type="bibr" rid="B64">1994</xref>) used a shift-invariant CNN-based approach for lung segmentation. Hwang and Park (<xref ref-type="bibr" rid="B78">2017</xref>) proposed a multi-stage training approach to perform segmentation using atrous convolutions. Hurt et al. (<xref ref-type="bibr" rid="B74">2020</xref>) used UNet-based (Ronneberger et al., <xref ref-type="bibr" rid="B169">2015</xref>) semantic segmentation to extract the lung field and performed pneumonia classification. Li B. et al. (<xref ref-type="bibr" rid="B107">2019</xref>) used the UNet model to segment the lungs, followed by an attention-based CNN for pneumonia classification. Kusakunniran et al. (<xref ref-type="bibr" rid="B103">2021</xref>) and Blain et al. (<xref ref-type="bibr" rid="B16">2021</xref>) used UNet for lung segmentation for COVID-19 detection. Oh et al. (<xref ref-type="bibr" rid="B140">2020</xref>) used an extended fully convolutional DenseNet (J&#x000E9;gou et al., <xref ref-type="bibr" rid="B86">2017</xref>) to perform pixel-wise segmentation of the lung fields in CXR to improve classification performance for COVID-19 detection. Subramanian et al. (<xref ref-type="bibr" rid="B194">2019</xref>) used a UNet-based model to segment out central venous catheters (CVCs) in CXRs. Cao and Zhao (<xref ref-type="bibr" rid="B20">2021</xref>) used a UNet-based semantic segmentation model with variational auto-encoder features in the encoder and decoder and an attention mechanism to perform automatic lung segmentation. Singh et al. (<xref ref-type="bibr" rid="B188">2021</xref>) proposed an approach based on DeepLabV3&#x0002B; (Chen et al., <xref ref-type="bibr" rid="B26">2017b</xref>) with dilated convolutions for lung field segmentation. <xref ref-type="fig" rid="F5">Figures 5A</xref>, <xref ref-type="fig" rid="F5">B</xref> showcase examples of the pre-processing tasks.</p>
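<p>Segmentation outputs such as the UNet-based lung-field masks above are commonly scored by their overlap with ground truth, and the Dice coefficient is a standard choice. The NumPy sketch below uses invented toy 8 x 8 masks for illustration; they are not drawn from any cited dataset.</p>
<preformat>
```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks (1 = lung pixel).

    Dice = 2 * intersection / (size of pred + size of target);
    a value of 1.0 means perfect overlap. This is a usual way
    lung-field segmentations are scored against ground truth.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy 8x8 masks standing in for predicted and ground-truth lung fields.
target = np.zeros((8, 8), dtype=np.uint8)
target[2:6, 1:7] = 1                  # "true" lung region, 24 pixels
pred = np.zeros((8, 8), dtype=np.uint8)
pred[2:6, 2:7] = 1                    # prediction misses one column, 20 pixels

score = dice_coefficient(pred, target)
print(round(float(score), 4))
```
</preformat>
<p>Here the prediction misses one column of the true region, so the Dice score falls somewhat below 1.0, quantifying the segmentation error.</p>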
<fig id="F5" position="float">
<label>Figure 5</label>
<caption><p>Showcases the examples of outputs obtained after tasks such as pre-processing and classification. <bold>(A)</bold> shows output of contrast enhancement. <bold>(B)</bold> shows output of the segmentation task and <bold>(C)</bold> shows the classification pipeline.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-06-1120989-g0005.tif"/>
</fig>
</sec>
<sec>
<title>2.1.3. Patent review</title>
<p>Hong et al. (<xref ref-type="bibr" rid="B68">2009a</xref>) proposed a rule-based method to segment the diaphragm from the CXR. Huo and Zhao (<xref ref-type="bibr" rid="B73">2014</xref>) proposed an edge-detection-based approach to suppress the clavicle bone in CXR. Chandalia and Gupta (<xref ref-type="bibr" rid="B21">2022</xref>) proposed a deep learning-based detection model to determine whether an input image is a CT scan or a CXR. Jiezhi et al. (<xref ref-type="bibr" rid="B87">2018</xref>) proposed a deep learning-based method to assess the quality of an input CXR image.</p>
</sec>
<sec>
<title>2.1.4. Discussion</title>
<p>The above literature shows that pre-deep-learning approaches require well-defined heuristics to enhance or segment the lung region. The major focus has been on noise removal or contrast enhancement and on lung segmentation; however, limited attention has been given to the segmentation of diseased ROIs. The common datasets used for lung segmentation are Montgomery and Shenzhen (Jaeger et al., <xref ref-type="bibr" rid="B83">2014</xref>); however, the number of samples is limited, and no dataset focusing on local findings is available.</p>
</sec>
</sec>
<sec>
<title>2.2. Image classification</title>
<p>This section covers the literature on CXR classification in multi-class and multi-label settings. Input CXR images undergo feature extraction followed by classification, using algorithms that are either rule-based/handcrafted or deep learning based.</p>
<sec>
<title>2.2.1. Pre-deep learning based approaches</title>
<p>Katsuragawa et al. (<xref ref-type="bibr" rid="B94">1988</xref>) developed an automated approach based on the two-dimensional Fourier transform for detecting and characterizing interstitial lung disorders. The approach uses textural information to label a given CXR as normal or abnormal. Ashizawa et al. (<xref ref-type="bibr" rid="B10">1999</xref>) used 16 radiological features from CXR and ten clinical parameters to classify a given CXR into one of 11 interstitial lung diseases using an ANN. A statistically significant improvement was reported over the diagnostic results of radiologists.</p>
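<p>The Fourier-transform-based texture analysis of Katsuragawa et al. can be loosely illustrated with a simple frequency-domain descriptor: the fraction of 2D spectral power lying above a radial frequency cutoff. The NumPy sketch below is only in the spirit of that work; the cutoff, patch size, and test patterns are arbitrary choices for illustration, not the original method.</p>
<preformat>
```python
import numpy as np

def high_frequency_power_ratio(patch, cutoff=0.25):
    """Fraction of 2D spectral power above a radial frequency cutoff.

    A crude frequency-domain texture descriptor: fine, busy texture
    (as in some interstitial patterns) concentrates more power at high
    spatial frequencies than a smooth patch does.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch))) ** 2
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial frequency of each bin, normalized so the Nyquist edge is about 0.5.
    radius = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    high = spectrum[radius > cutoff].sum()
    return float(high / spectrum.sum())

rng = np.random.default_rng(1)
smooth = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))  # gentle gradient patch
busy = rng.standard_normal((32, 32))                  # noise-like texture patch

print(high_frequency_power_ratio(smooth))  # low: power sits near DC
print(high_frequency_power_ratio(busy))    # high: power spread over all bins
```
</preformat>
<p>A classifier in this style would threshold or learn from such descriptors to flag a patch as normal or abnormal texture.</p>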
</sec>
<sec>
<title>2.2.2. Deep learning based approaches</title>
<p>Thian et al. (<xref ref-type="bibr" rid="B203">2021</xref>) combined two large publicly available datasets, ChestXray14 (Wang et al., <xref ref-type="bibr" rid="B217">2017</xref>) and MIMICCXR (Johnson et al., <xref ref-type="bibr" rid="B90">2019</xref>), to train a deep learning model for the detection of pneumothorax and assess its generalizability on six external validation CXR sets independent of the training set.</p>
<p>Homayounieh et al. (<xref ref-type="bibr" rid="B67">2021</xref>) proposed an approach to assess the ability of AI to detect nodules in CXR. The study used an in-house dataset with a deep model pretrained on the ChestXray14 (Wang et al., <xref ref-type="bibr" rid="B217">2017</xref>) and ImageNet datasets for 14-class classification. Lenga et al. (<xref ref-type="bibr" rid="B106">2020</xref>) applied existing continual learning approaches to CXR-based analysis in the medical domain. Zech et al. (<xref ref-type="bibr" rid="B234">2018</xref>) assessed deep models for pneumonia using training data from different institutions. <xref ref-type="fig" rid="F5">Figure 5C</xref> showcases the classification pipeline using CXR for different lung diseases.</p>
</sec>
<sec>
<title>2.2.3. Patent review</title>
<p>Lyman et al. (<xref ref-type="bibr" rid="B122">2019</xref>) proposed a model to differentiate CXRs into normal or abnormal. The model is trained to find any abnormality, such as effusion or emphysema, to classify a given CXR as abnormal. Hong et al. (<xref ref-type="bibr" rid="B69">2009b</xref>) proposed a feature extraction method for detecting nodules in the CXR while reducing false positives. Hong and Shen (<xref ref-type="bibr" rid="B70">2008</xref>) proposed an approach for automatically segmenting the heart region for nodule detection. Guendel et al. (<xref ref-type="bibr" rid="B58">2020</xref>) proposed a deep multitask learning approach to classify a CXR for the different findings present in it; the approach simultaneously performs segmentation along with disease localization. Clarke et al. (<xref ref-type="bibr" rid="B31">2022</xref>) proposed a computer-assisted diagnostic (CAD) method using wavelet transform-based feature extraction for automatically detecting nodules in CXRs. Putha et al. (<xref ref-type="bibr" rid="B153">2022</xref>) proposed a deep learning-based method to predict the risk of lung cancer associated with the characteristics (size, calcification, etc.) of nodules present in the CXR. Doi and Aoyama (<xref ref-type="bibr" rid="B42">2002</xref>) proposed a neural network-based approach to detect the presence of nodules and further classify them as benign or malignant. Lei et al. (<xref ref-type="bibr" rid="B105">2021</xref>) created a cloud-based platform for lung disease detection using CXR. Ting et al. (<xref ref-type="bibr" rid="B205">2021</xref>) proposed a transfer learning approach for detecting lung inflammation from a given CXR. Kang et al. (<xref ref-type="bibr" rid="B93">2019</xref>) proposed a transfer learning-based approach for predicting lung disease in the CXR image. Qiang et al. 
(<xref ref-type="bibr" rid="B155">2020</xref>) proposed a lung disease classification approach that extracts the lung mask, enhances the segmented image, and uses a CNN-based model for feature extraction and classification. Luojie and Jinhua (<xref ref-type="bibr" rid="B121">2018</xref>) proposed a deep learning-based classification of lung disease into 14 different findings. Kai et al. (<xref ref-type="bibr" rid="B91">2019</xref>) proposed a deep learning system to classify the lung lesion in a given CXR. Harding et al. (<xref ref-type="bibr" rid="B62">2015</xref>) proposed an approach for lung segmentation and bone suppression in a given CXR to improve CAD results.</p>
</sec>
<sec>
<title>2.2.4. Discussion</title>
<p>Researchers have generally developed classification algorithms using supervised machine learning approaches. Both multilabel and multiclass classification tasks are studied. Due to the small size and class imbalance of the available datasets, transfer learning is widely used in most research.</p>
</sec>
</sec>
<sec>
<title>2.3. Image generation</title>
<p>This section covers the existing work on the image generation task. This is a new field in which generative models are mostly used to support other tasks, with model performance verified on synthetic and real CXRs for disease detection. <xref ref-type="fig" rid="F6">Figure 6</xref> showcases synthetically generated CXR samples.</p>
<fig id="F6" position="float">
<label>Figure 6</label>
<caption><p>Showcasing synthetically generated chest X-ray images. For a given normal image <bold>(A)</bold>, the approach proposed by Tang et al. (<xref ref-type="bibr" rid="B201">2019b</xref>) generates abnormal images together with predicted segmentation masks for the same input image, resulting in mask-image pairs <bold>(B&#x02013;G)</bold>. Figure is adapted from Tang et al. (<xref ref-type="bibr" rid="B201">2019b</xref>).</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-06-1120989-g0006.tif"/>
</fig>
<p>Tang et al. (<xref ref-type="bibr" rid="B201">2019b</xref>) proposed XLSor, a deep learning model for generating CXRs for data augmentation, together with a criss-cross attention-based segmentation approach. Eslami et al. (<xref ref-type="bibr" rid="B49">2020</xref>) proposed a multi-task GAN-based approach for image-to-image translation, generating bone-suppressed and segmented images using the JSRT dataset. Wang et al. (<xref ref-type="bibr" rid="B212">2018</xref>) proposed a hybrid CNN-based model for CXR classification and image reconstruction. Madani et al. (<xref ref-type="bibr" rid="B124">2018</xref>) used a GAN-based approach to generate and discriminate CXRs for classification tasks. Sundaram and Hulkund (<xref ref-type="bibr" rid="B195">2021</xref>) used a GAN-based approach for data augmentation and evaluated the classification model on synthetically generated and affine transformation-based data from the CheXpert dataset.</p>
<sec>
<title>2.3.1. Discussion</title>
<p>The current work in image generation for CXR has focused on alleviating the data deficiency for training deep models. It is observed that synthetic data generated using GAN-based approaches improves model performance compared to standard data augmentation methods such as rotation and flip.</p>
</sec>
</sec>
<sec>
<title>2.4. Disease localization</title>
<p>Disease localization is the task of identifying diseased regions of interest (ROIs) in a CXR. This allows us to compare the predicted and the actual diseased area in the CXR. Yu et al. (<xref ref-type="bibr" rid="B229">2020</xref>) proposed a multitasking-based approach to simultaneously segment peripherally inserted central catheter (PICC) lines and detect their tips in CXRs. Zhang et al. (<xref ref-type="bibr" rid="B239">2019</xref>) proposed SDSLung, a multitasking-based approach adapted from Mask RCNN (Girshick et al., <xref ref-type="bibr" rid="B53">2014</xref>) for lung field detection and segmentation. Wessel et al. (<xref ref-type="bibr" rid="B221">2019</xref>) proposed a Mask RCNN-based approach for rib detection and segmentation in CXRs. Schultheiss et al. (<xref ref-type="bibr" rid="B178">2020</xref>) used a RetinaNet (Ren et al., <xref ref-type="bibr" rid="B168">2015</xref>) based approach to detect nodules along with lung segmentation in CXRs. Kim et al. (<xref ref-type="bibr" rid="B102">2020</xref>) used Mask RCNN and RetinaNet to assess the effect of input size on nodule and mass detection in CXRs. Takemiya et al. (<xref ref-type="bibr" rid="B198">2019</xref>) proposed a CNN-based approach to perform nodule opacity classification and further used R-CNN to detect the nodules in CXRs. Kim et al. (<xref ref-type="bibr" rid="B101">2019</xref>) compared existing CNN-based object detection models for nodule and mass detection in CXRs. Cho et al. (<xref ref-type="bibr" rid="B28">2020</xref>) used a YOLO (Redmon and Farhadi, <xref ref-type="bibr" rid="B166">2017</xref>) object detection model to detect different findings in CXRs.</p>
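Comparing a predicted disease box against the radiologist-annotated one is typically scored with intersection-over-union (IoU), the standard overlap measure for the detectors listed above. A minimal sketch, assuming boxes in (x1, y1, x2, y2) corner format:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2):
    the overlap score used to compare a predicted disease box against
    the annotated ground-truth box."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)          # overlap area
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction covering half of an annotated nodule box scores 1/3.
assert iou((0, 0, 10, 10), (0, 0, 10, 10)) == 1.0
assert abs(iou((0, 0, 10, 10), (5, 0, 15, 10)) - 1 / 3) < 1e-9
```

Detection benchmarks then count a prediction as correct when its IoU with the ground truth exceeds a threshold (commonly 0.5).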
<sec>
<title>2.4.1. Patent review</title>
<p>Putha et al. (<xref ref-type="bibr" rid="B154">2021</xref>) proposed a deep learning-based system for detecting and localizing infectious diseases in CXR, combined with information from clinical samples of the same patient. Jinpeng et al. (<xref ref-type="bibr" rid="B89">2020</xref>) proposed a weakly-supervised deep learning approach for automatic disease localization using CXRs.</p>
</sec>
<sec>
<title>2.4.2. Discussion</title>
<p>The current work in CXR-based analysis has focused on detecting the lung region in a given CXR or localizing the diseased area with a bounding box. Most works have used object detection algorithms such as YOLO, RCNN, and its variants (Mask RCNN, Faster RCNN).</p>
</sec>
</sec>
<sec>
<title>2.5. Report generation</title>
<p>This section covers the existing work on report generation for CXR image analysis. This is a recent area that combines two domains: Natural Language Processing (NLP) and Computer Vision (CV).</p>
<p>Xue et al. (<xref ref-type="bibr" rid="B226">2018</xref>) proposed a multimodal approach consisting of an LSTM and a CNN with an attention mechanism for coherent report generation. Li X. et al. (<xref ref-type="bibr" rid="B110">2019</xref>) proposed VisPi, a CNN and LSTM-based approach with attention for generating reports in medical imaging. The proposed algorithm performs classification and localization and then finally generates a detailed report. Syeda-Mahmood et al. (<xref ref-type="bibr" rid="B196">2020</xref>) proposed a novel approach to generate reports for fine-grained labels by fine-tuning a model learnt on fine-grained and coarse labels.</p>
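These CNN+LSTM report generators produce text token by token; a common inference strategy is greedy decoding, where the most probable word is emitted at each step and fed back. A minimal sketch with a toy stand-in for the trained decoder (the function names and the tiny vocabulary are illustrative, not from the cited systems):

```python
import numpy as np

def greedy_decode(step_logits, vocab, max_len=5, eos="<eos>"):
    """Greedy decoding loop of the kind used by CNN+LSTM report
    generators: at each step pick the most probable word and feed the
    context back to the decoder until an end token or the length cap."""
    words, state = [], None
    for _ in range(max_len):
        logits, state = step_logits(words, state)   # decoder step
        word = vocab[int(np.argmax(logits))]        # greedy choice
        if word == eos:
            break
        words.append(word)
    return " ".join(words)

# Toy decoder that emits a scripted sequence of word indices.
vocab = ["no", "acute", "findings", "<eos>"]
script = iter([0, 1, 2, 3])
def toy_decoder(words, state):
    return np.eye(4)[next(script)], state

assert greedy_decode(toy_decoder, vocab) == "no acute findings"
```

Real systems replace `toy_decoder` with an LSTM conditioned on CNN image features, and often use beam search instead of pure greedy decoding.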
<sec>
<title>2.5.1. Discussion</title>
<p>This recently explored area requires more attention. In CXR-based analysis, report generation enables <italic>multi-modal learning</italic> using CNNs and sequential models. However, the task is challenging, as a large text corpus paired with the CXR dataset is required, and only a few datasets are available for this task.</p>
</sec>
</sec>
<sec>
<title>2.6. Model explainability</title>
<p>Jang et al. (<xref ref-type="bibr" rid="B85">2020</xref>) trained a CNN on three CXR-based datasets (Asan Medical Center-Seoul National University Bundang Hospital (AMC-SNUBH), NIH, and CheXpert) to assess the robustness of deep models to label noise. The authors added different levels of noise to the labels of these datasets to demonstrate that deep models are sensitive to label noise; for huge datasets, labeling is done using report parsing or NLP, which introduces a certain amount of noise in the CXR labels. Kaviani et al. (<xref ref-type="bibr" rid="B95">2022</xref>) and Li et al. (<xref ref-type="bibr" rid="B112">2021</xref>) reviewed different deep adversarial attacks and defenses in medical imaging. Li and Zhu (<xref ref-type="bibr" rid="B113">2020</xref>) proposed an unsupervised learning approach to detect different adversarial attacks on CXRs and assess the robustness of deep models. Gongye et al. (<xref ref-type="bibr" rid="B54">2020</xref>) studied the effect of different existing adversarial attacks on the performance of a deep model for COVID-19 detection from CXRs. Hirano et al. (<xref ref-type="bibr" rid="B66">2021</xref>) studied the effect of universal adversarial perturbations (UAPs) on deep model-based pneumonia detection and reported performance degradation in the classification of CXRs. Ma et al. (<xref ref-type="bibr" rid="B123">2021</xref>) studied the effect of altering the textural information present in CXRs, which can lead to misdiagnosis. Seyyed-Kalantari et al. (<xref ref-type="bibr" rid="B181">2021</xref>) studied the fairness gaps in existing deep models and datasets for CXR classification. Li et al. (<xref ref-type="bibr" rid="B108">2022</xref>) studied how gender bias affects the performance of different deep models on existing datasets. Rajpurkar et al. 
(<xref ref-type="bibr" rid="B164">2017</xref>) used Class Activation Maps (CAMs) to interpret the model decisions for detecting different findings in CXRs. Pasa et al. (<xref ref-type="bibr" rid="B149">2019</xref>) used a 5-layered CNN-based architecture for detecting TB in CXRs from two publicly available datasets, Shenzhen and Montgomery. The authors used Grad-CAM visualization for model interpretability.</p>
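The CAM technique referenced above weights the final convolutional feature maps by the fully-connected weights of the class of interest and sums over channels. A minimal numpy sketch of that computation (shapes and the toy check are illustrative, not from the cited models):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Class Activation Map: weight the final conv feature maps (C, H, W)
    by the fully-connected weights of one class and sum over channels,
    highlighting the regions that drove the prediction."""
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)                    # keep positive evidence
    return cam / cam.max() if cam.max() > 0 else cam

# Toy check: a channel active only in the top-left corner, with a large
# class weight, should put the CAM peak in that corner.
fmap = np.zeros((2, 4, 4))
fmap[0, 0, 0] = 1.0               # channel 0 fires at (0, 0)
weights = np.array([[5.0, 0.1],   # class 0 relies on channel 0
                    [0.1, 5.0]])
cam = class_activation_map(fmap, weights, class_idx=0)
assert cam.argmax() == 0          # peak at flattened index 0, i.e. (0, 0)
```

In practice the resulting map is upsampled to the input resolution and overlaid on the CXR; Grad-CAM generalizes the idea by replacing the FC weights with gradient-derived channel weights.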
<sec>
<title>2.6.1. Discussion</title>
<p>Work done so far on model interpretability for CXR-based disease detection is based on <italic>post-hoc</italic> approaches such as saliency map or CAM analysis. Explainability of AI-based decisions is a must if machine intelligence is to be relied upon. Healthcare is a challenging domain, where a false positive or false negative can put human lives at risk. There is a need to incorporate inbuilt model explainability to handle noisy or adversarial samples, thus improving the robustness of CXR-based systems. Further, challenges arise from data imbalance and limited model generalizability, as models are usually trained on data from a single hospital. This can result in unfair decisions when sensitive information is learnt from the data. Future work should open more pathways toward robust and fair CXR-based systems, which will further increase the chances of deploying such systems in places with poor healthcare settings.</p>
</sec>
</sec>
</sec>
<sec id="s3">
<title>3. Disease detection based literature</title>
<p>In this section, we present the literature review of commonly addressed lung diseases. Several CXR datasets have been made publicly available, allowing the development of novel approaches for different disease-related tasks. <xref ref-type="fig" rid="F7">Figure 7</xref> showcases samples of CXRs affected with different lung diseases.</p>
<fig id="F7" position="float">
<label>Figure 7</label>
<caption><p>Showcasing chest X-rays affected with different lung disorders. <bold>(A)</bold> Normal, <bold>(B)</bold> Pneumoconiosis, <bold>(C)</bold> TB, <bold>(D)</bold> Pneumonia, <bold>(E)</bold> Bronchitis, <bold>(F)</bold> COPD, <bold>(G)</bold> Fibrosis, and <bold>(H)</bold> COVID-19.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-06-1120989-g0007.tif"/>
</fig>
<sec>
<title>3.1. Tuberculosis</title>
<p>TB is caused by <italic>Mycobacterium tuberculosis</italic>. It is one of the most common causes of lung disease-related mortality worldwide. About 10 million people were affected by TB in 2019 (WHO, <xref ref-type="bibr" rid="B224">2021</xref>). In the year 2013, it took 1.5 million lives (WHO, <xref ref-type="bibr" rid="B222">2013</xref>). TB is curable; however, patient overload in hospitals delays the diagnostic process and treatment. CXR is the common radiological modality used to diagnose TB. Computer-aided diagnosis (CAD)-based TB detection from CXR images will ease the detection process.</p>
<sec>
<title>3.1.1. Pre-deep learning based approaches</title>
<p>Govindarajan and Swaminathan (<xref ref-type="bibr" rid="B56">2021</xref>) used a reaction-diffusion level set method for lung segmentation, followed by local feature descriptors such as Median Robust Extended Local Binary Patterns (Liu et al., <xref ref-type="bibr" rid="B118">2016</xref>), local binary patterns (LBP) (Liu et al., <xref ref-type="bibr" rid="B117">2017</xref>), and Gradient Local Ternary Patterns (Ahmed and Hossain, <xref ref-type="bibr" rid="B4">2013</xref>), with Extreme Learning Machine (ELM) and Online Sequential ELM (OSELM) (Liang et al., <xref ref-type="bibr" rid="B114">2006</xref>) classifiers for detecting TB in CXR images from the Montgomery dataset. Alfadhli et al. (<xref ref-type="bibr" rid="B5">2017</xref>) used speeded-up robust features (SURF) (Bay et al., <xref ref-type="bibr" rid="B14">2008</xref>) for feature detection and performed classification using SVM for TB diagnosis. Jaeger et al. (<xref ref-type="bibr" rid="B83">2014</xref>) collected different handcrafted features, such as the histogram of oriented gradients (HOG) (Dalal and Triggs, <xref ref-type="bibr" rid="B35">2005</xref>), the histogram of intensity, magnitude, shape, and curvature descriptors, and LBP (Ojala et al., <xref ref-type="bibr" rid="B141">1996</xref>), as Set A for detection. They further used edge and color (fuzzy-color and color layout) based features as Set B for image retrieval. Chandra et al. (<xref ref-type="bibr" rid="B22">2020</xref>) used two-level hierarchical features (shape and texture) with SVM for TB classification. Santosh et al. (<xref ref-type="bibr" rid="B177">2016</xref>) used thoracic edge map encoding with PHOG (Opelt et al., <xref ref-type="bibr" rid="B144">2006</xref>) for feature extraction, followed by multilayer perceptron (MLP) based classification of CXRs into TB or normal.</p>
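The handcrafted pipelines above all follow the same shape: reduce the CXR (or lung region) to a texture descriptor, then feed it to a classical classifier. A minimal sketch in that spirit, using an 8-neighbor LBP histogram with a nearest-centroid rule standing in for the SVM/ELM/MLP classifiers; the synthetic textures and all parameters are illustrative, not from the cited works:

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbor Local Binary Pattern histogram: each pixel is coded by
    comparing it with its neighbors, and the histogram of codes serves
    as a texture descriptor (minimal sketch, no interpolation)."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << np.uint8(bit)
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()

# Toy "dataset": smooth gradient images vs. white-noise images.
rng = np.random.default_rng(1)
base = np.add.outer(np.arange(32.0), np.arange(32.0))
flat = [base + 0.1 * rng.normal(size=(32, 32)) for _ in range(4)]
rough = [rng.normal(size=(32, 32)) for _ in range(4)]
cent_flat = np.mean([lbp_histogram(x) for x in flat[:3]], axis=0)
cent_rough = np.mean([lbp_histogram(x) for x in rough[:3]], axis=0)

def predict(img):
    """Nearest-centroid rule over LBP histograms (stand-in for an SVM)."""
    h = lbp_histogram(img)
    return "flat" if np.abs(h - cent_flat).sum() < np.abs(h - cent_rough).sum() else "rough"

assert predict(flat[3]) == "flat" and predict(rough[3]) == "rough"
```

Swapping the nearest-centroid rule for an SVM and the toy textures for segmented lung fields recovers the structure of the cited pipelines.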
</sec>
<sec>
<title>3.1.2. Deep learning based approaches</title>
<p>Duong et al. (<xref ref-type="bibr" rid="B44">2021</xref>) created a dataset of 28,672 images by merging different publicly available datasets (Jaeger et al., <xref ref-type="bibr" rid="B83">2014</xref>; Wang et al., <xref ref-type="bibr" rid="B217">2017</xref>; Chowdhury et al., <xref ref-type="bibr" rid="B30">2020</xref>; Cohen et al., <xref ref-type="bibr" rid="B32">2020</xref>) for three-class classification: TB, pneumonia, and normal. The authors performed deep learning-based classification using an EfficientNet (Tan and Le, <xref ref-type="bibr" rid="B199">2019</xref>) pretrained on the ImageNet (Deng et al., <xref ref-type="bibr" rid="B40">2009</xref>) dataset and a pretrained Vision Transformer (ViT) (Dosovitskiy et al., <xref ref-type="bibr" rid="B43">2020</xref>), and finally developed a hybrid of EfficientNet and ViT. In the proposed hybrid model, the CXR is given as input to the pretrained EfficientNet to generate features, which are then fed to the ViT to obtain the final classification. Ayaz et al. (<xref ref-type="bibr" rid="B11">2021</xref>) proposed a feature ensemble-based approach for TB detection using the Shenzhen and Montgomery datasets. The authors used Gabor filter-based handcrafted features and seven different deep learning architectures to generate the deep features. Dasanayaka and Dissanayake (<xref ref-type="bibr" rid="B37">2021</xref>) proposed a deep learning algorithm comprising data generation using DCGAN (Radford et al., <xref ref-type="bibr" rid="B156">2015</xref>), lung segmentation using UNet (Ronneberger et al., <xref ref-type="bibr" rid="B169">2015</xref>), and a transfer learning-based feature ensemble and classification. The authors used genetic algorithm-based hyperparameter tuning. Msonda et al. 
(<xref ref-type="bibr" rid="B133">2020</xref>) used a deep model with spatial pyramid pooling and analyzed its effect on TB detection using CXR; pooling features at multiple scales added robustness and improved performance. Sahlol et al. (<xref ref-type="bibr" rid="B176">2020</xref>) used Artificial Ecosystem-based Optimization (AEO) (Zhao et al., <xref ref-type="bibr" rid="B241">2020</xref>) as a feature selector on top of the features extracted from a MobileNet pre-trained on the ImageNet dataset. The authors used two publicly available datasets, the Shenzhen and Pediatric Pneumonia CXR (Kermany et al., <xref ref-type="bibr" rid="B97">2018b</xref>) datasets. Rahman et al. (<xref ref-type="bibr" rid="B159">2020b</xref>) used a deep learning approach for CXR segmentation and classification into TB or normal. For segmentation, the authors used two deep models, UNet and modified UNet (Azad et al., <xref ref-type="bibr" rid="B12">2019</xref>). The authors also used different existing visualization techniques, such as SmoothGrad (Smilkov et al., <xref ref-type="bibr" rid="B190">2017</xref>), Grad-CAM (Selvaraju et al., <xref ref-type="bibr" rid="B179">2017</xref>), Grad-CAM&#x0002B;&#x0002B; (Chattopadhay et al., <xref ref-type="bibr" rid="B24">2018</xref>), and Score-CAM (Wang H. et al., <xref ref-type="bibr" rid="B214">2020</xref>), for interpreting the deep models' classification decisions, and nine different deep models for CNN-based classification of CXRs into TB or normal. Rajaraman and Antani (<xref ref-type="bibr" rid="B162">2020</xref>) created three different models for three different lung diseases. The first model was trained and tested on the RSNA pneumonia (Stein et al., <xref ref-type="bibr" rid="B193">2018</xref>), pediatric pneumonia (Kermany et al., <xref ref-type="bibr" rid="B97">2018b</xref>), and Indiana (McDonald et al., <xref ref-type="bibr" rid="B127">2005</xref>) datasets for pneumonia detection. 
The second model was trained and tested for TB detection using the Shenzhen dataset. Finally, the first model was finetuned for TB detection to improve model adaptation to the new task, and majority voting results were reported for TB classification. Rajpurkar et al. (<xref ref-type="bibr" rid="B165">2020</xref>) collected CXRs from HIV-infected patients from two hospitals in South Africa and developed CheXaid, a deep learning algorithm for the detection of TB to assist clinicians with web-based diagnosis. The proposed model consists of a DenseNet121 trained on the CheXpert (Irvin et al., <xref ref-type="bibr" rid="B80">2019</xref>) dataset, and outputs findings (micronodularity, nodularity, pleural effusion, cavitation, and ground-glass opacity) along with the presence or absence of TB in a given CXR. Zhang et al. (<xref ref-type="bibr" rid="B238">2020</xref>) proposed an attention-based CNN model using CBAM, applying channel and spatial attention to focus on the manifestations present in TB CXRs. The authors used different deep models and analyzed the effect of the attention network on detecting TB. <xref ref-type="table" rid="T3">Table 3</xref> summarizes the above work for TB detection using CXRs.</p>
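The majority-voting fusion mentioned above can be sketched in a few lines, assuming binary TB/normal labels and an odd number of voters to avoid ties:

```python
import numpy as np

def majority_vote(predictions):
    """Fuse binary TB/normal predictions from several models by majority
    vote; `predictions` has shape (n_models, n_samples). An odd number
    of voters avoids ties."""
    votes = np.asarray(predictions)
    return (votes.sum(axis=0) > votes.shape[0] / 2).astype(int)

# Three fine-tuned models disagree sample by sample; the majority wins.
model_a = [1, 0, 1]
model_b = [1, 1, 0]
model_c = [0, 1, 1]
assert majority_vote([model_a, model_b, model_c]).tolist() == [1, 1, 1]
```

A common variant is soft voting, which averages the models' predicted probabilities instead of their hard labels before thresholding.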
<table-wrap position="float" id="T3">
<label>Table 3</label>
<caption><p>Review of the literature for TB detection using CXRs based on different feature extraction methods.</p></caption> 
<table frame="hsides" rules="groups">
<thead>
<tr style="background-color:#919497;color:#ffffff">
<th valign="top" align="left"><bold>References</bold></th>
<th valign="top" align="left"><bold>Highlights</bold></th>
<th valign="top" align="left"><bold>Pretraining</bold></th>
<th valign="top" align="left"><bold>Dataset</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Govindarajan and Swaminathan (<xref ref-type="bibr" rid="B56">2021</xref>)</td>
<td valign="top" align="left">Texture-based feature descriptors with ML classifier</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">Montgomery</td>
</tr> <tr>
<td valign="top" align="left">Alfadhli et al. (<xref ref-type="bibr" rid="B5">2017</xref>)</td>
<td valign="top" align="left">Used SURF as feature extractor and SVM as classifier</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">Montgomery</td>
</tr> <tr>
<td valign="top" align="left">Jaeger et al. (<xref ref-type="bibr" rid="B83">2014</xref>)</td>
<td valign="top" align="left">Used texture-based features (LBP, HOG) and statistical feature with ML Classifier</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">Shenzhen, Montgomery</td>
</tr> <tr>
<td valign="top" align="left">Chandra et al. (<xref ref-type="bibr" rid="B22">2020</xref>)</td>
<td valign="top" align="left">Used shape and textural features with SVM</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">Shenzhen, Montgomery</td>
</tr> <tr>
<td valign="top" align="left">Santosh et al. (<xref ref-type="bibr" rid="B177">2016</xref>)</td>
<td valign="top" align="left">Used PHOG as features with MLP as classifier</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">Shenzhen, Montgomery</td>
</tr> <tr>
<td valign="top" align="left">Duong et al. (<xref ref-type="bibr" rid="B44">2021</xref>)</td>
<td valign="top" align="left">Used Pretrained EfficientNet and ViT, and developed a hybrid of two</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Shenzhen, Montgomery, Chestxray14, COVID-CXR (Chowdhury et al., <xref ref-type="bibr" rid="B30">2020</xref>)</td>
</tr> <tr>
<td valign="top" align="left">Ayaz et al. (<xref ref-type="bibr" rid="B11">2021</xref>)</td>
<td valign="top" align="left">Used Feature ensemble of handcrafted and deep features</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Shenzhen, Montgomery</td>
</tr> <tr>
<td valign="top" align="left">Dasanayaka and Dissanayake (<xref ref-type="bibr" rid="B37">2021</xref>)</td>
<td valign="top" align="left">Generated synthetic images, performed segmentation and used feature ensemble for classification</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Shenzhen, Montgomery</td>
</tr> <tr>
<td valign="top" align="left">Msonda et al. (<xref ref-type="bibr" rid="B133">2020</xref>)</td>
<td valign="top" align="left">Used spatial pyramid pooling for deep feature extraction</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Shenzhen, Montgomery, private</td>
</tr> <tr>
<td valign="top" align="left">Sahlol et al. (<xref ref-type="bibr" rid="B176">2020</xref>)</td>
<td valign="top" align="left">Used Meta-heuristic approach for Deep feature selection</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Shenzhen, Montgomery, PedPneumonia</td>
</tr> <tr>
<td valign="top" align="left">Rahman et al. (<xref ref-type="bibr" rid="B159">2020b</xref>)</td>
<td valign="top" align="left">Performed segmentation and used different visualization techniques</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Shenzhen, Montgomery, NIAID TB, RSNA</td>
</tr> <tr>
<td valign="top" align="left">Rajaraman and Antani (<xref ref-type="bibr" rid="B162">2020</xref>)</td>
<td valign="top" align="left">Performed tri-level classification and studied task adaptation</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">RSNA pneumonia, PedPneumonia, Indiana, Shenzhen</td>
</tr> <tr>
<td valign="top" align="left">Rajpurkar et al. (<xref ref-type="bibr" rid="B165">2020</xref>)</td>
<td valign="top" align="left">Developed a web-based system for TB affected HIV patients</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">CheXpert, private</td>
</tr> <tr>
<td valign="top" align="left">Zhang et al. (<xref ref-type="bibr" rid="B238">2020</xref>)</td>
<td valign="top" align="left">Used deep model with Attention based CNN (CBAM) module</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Shenzhen, Montgomery</td>
</tr> <tr>
<td valign="top" align="left">Rahman M. et al. (<xref ref-type="bibr" rid="B157">2021</xref>)</td>
<td valign="top" align="left">Merged publicly available CXR dataset with XGBoost as classifier</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Shenzhen, Montgomery</td>
</tr> <tr>
<td valign="top" align="left">Owais et al. (<xref ref-type="bibr" rid="B145">2020</xref>)</td>
<td valign="top" align="left">Used a feature ensemble by combining low and high level features</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Shenzhen, Montgomery</td>
</tr> <tr>
<td valign="top" align="left">Das et al. (<xref ref-type="bibr" rid="B36">2021</xref>)</td>
<td valign="top" align="left">Modified a pre-trained InceptionV3 for TB classification</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Shenzhen, Montgomery</td>
</tr> <tr>
<td valign="top" align="left">Munadi et al. (<xref ref-type="bibr" rid="B134">2020</xref>)</td>
<td valign="top" align="left">Used enhancement techniques to improve deep classification</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Shenzhen, Montgomery</td>
</tr> <tr>
<td valign="top" align="left">Oloko-Oba and Viriri (<xref ref-type="bibr" rid="B143">2020</xref>)</td>
<td valign="top" align="left">Used deep learning-based pipeline for classification</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Montgomery</td>
</tr> <tr>
<td valign="top" align="left">Ul Abideen et al. (<xref ref-type="bibr" rid="B207">2020</xref>)</td>
<td valign="top" align="left">Proposed the Bayesian CNN to deal with uncertain TB and non-TB cases that have low discernibility.</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Shenzhen, Montgomery</td>
</tr> <tr>
<td valign="top" align="left">Hwang et al. (<xref ref-type="bibr" rid="B77">2016</xref>)</td>
<td valign="top" align="left">Proposed a modified AlexNet-based model for end-to-end training. Also performed cross-database evaluations.</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Shenzhen, Montgomery</td>
</tr> <tr>
<td valign="top" align="left">Gozes and Greenspan (<xref ref-type="bibr" rid="B57">2019</xref>)</td>
<td valign="top" align="left">Proposed MetaChexNet, trained on CXRs and metadata of gender, age and patient positioning. Later, finetuned the model for TB classification</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">ChestXray14, Shenzhen, Montgomery</td>
</tr></tbody>
</table>
<table-wrap-foot>
<p>Pretraining (yes/no) refers to the use of weights of a deep model trained on the ImageNet dataset. Private indicates that the data used is in-house and not publicly released.</p>
</table-wrap-foot>
</table-wrap>
</sec>
<sec>
<title>3.1.3. Patent review</title>
<p>Kaijin (<xref ref-type="bibr" rid="B92">2019</xref>) proposed a deep learning-based approach for segmentation and pulmonary TB detection in CXR images. Venkata Hari (<xref ref-type="bibr" rid="B211">2022</xref>) proposed a deep learning model for detecting TB in chest X-ray images. Chang-soo (<xref ref-type="bibr" rid="B23">2021</xref>) proposed an automatic chest X-ray image reader that reads data from the imaging device, segments the lung region, extracts gray level co-occurrence matrix-based features, and finally discriminates the image as normal, abnormal, or TB. Minhwa et al. (<xref ref-type="bibr" rid="B128">2017</xref>) proposed a CAD-based system for diagnosing and predicting TB in CXR using deep learning.</p>
</sec>
<sec>
<title>3.1.4. Discussion</title>
<p>In most handcrafted approaches, the texture of the CXR is used to define features, followed by an ML classifier. From the above, it is evident that the major focus for TB detection is on two datasets: Shenzhen and Montgomery. However, the two datasets contain fewer than 1,000 samples even when combined. This results in poor generalization and necessitates a pretrained backbone network that is later finetuned; this is why models pretrained on the ImageNet dataset are widely used for TB classification from CXRs. Thus, there is a need for large-scale TB datasets with segmentation masks and disease annotations to achieve model generalizability and interpretability.</p>
</sec>
</sec>
<sec>
<title>3.2. Pneumoconiosis</title>
<p>Pneumoconiosis is a broad term that describes lung diseases among industry workers caused by overexposure to silica, coal, asbestos, and mixed dust. It is an irreversible and progressive occupational disorder prevalent worldwide and is becoming a major cause of death among workers. It is further categorized based on the elements inhaled by the workers, such as silicosis (silica), brown lung (cotton and other fibers), pneumonoultramicroscopicsilicovolcanoconiosis (ash and dust), coal workers' Pneumoconiosis (CWP) or black lung (coal dust), and popcorn lung (diacetyl). People exposed to these substances are at a high risk of developing other lung diseases such as lung cancer, lung collapse, and TB.</p>
<sec>
<title>3.2.1. Pre-deep learning based approaches</title>
<p>Okumura et al. (<xref ref-type="bibr" rid="B142">2011</xref>) proposed a rule-based model for detecting regions of interest (ROIs) with nodule patterns based on the Fourier transform, and an ANN-based approach for other ROIs not covered by the power spectrum analysis. The dataset comprises 11 normal and 12 abnormal cases of Pneumoconiosis, where normal cases were selected from an image database of the Japanese Society of Radiological Technology and abnormal cases were selected randomly from the digital image database. Ledley et al. (<xref ref-type="bibr" rid="B104">1975</xref>) demonstrated the significance of the textural information present in the CXR for detecting the presence of coal workers' Pneumoconiosis (CWP). Hall et al. (<xref ref-type="bibr" rid="B60">1975</xref>) used the textural information present in CXRs and generated features based on spatial and histogram moments for six regions of a given segmented image. The authors performed classification based on maximum likelihood estimation and linear discriminant analysis (LDA), and further performed 4-class profusion classification of a given CXR in CWP workers. Yu et al. (<xref ref-type="bibr" rid="B230">2011</xref>) used active shape modeling to segment the lungs from the CXR. The segmented image is divided into six non-overlapping zones as per the ILO guidance. On top of this, six separate SVM classifiers are built on the histogram and co-occurrence-based features generated from each zone. The authors also generated a chest-level classification by integrating the prediction results of the six regions. The experiments were carried out on a dataset of 850 PA CXRs with 600 normal and 250 abnormal cases collected from Shanghai Pulmonary Hospital, China. Murray et al. (<xref ref-type="bibr" rid="B137">2009</xref>) proposed an amplitude-modulation frequency-modulation (AM-FM) based approach to extract features and used partial least squares for classification. 
The authors extracted AM-FM features for multiple scales and used a classifier for each scale, later combining results from the individual classifiers. The authors performed the experiments on the CXRs collected from the Miners&#x00027; Colfax Medical Center and the Grant&#x00027;s Uranium Miners, Raton, New Mexico, for CWP detection. Xu et al. (<xref ref-type="bibr" rid="B225">2010</xref>) collected a private dataset of 427 CXR images, consisting of 252 and 175 images for normal and Pneumoconiosis, respectively. The authors performed segmentation using an active shape model followed by dividing the image into six sub-regions. For each subregion, five co-occurrence-based features are extracted. A separate SVM is trained for each subregion, followed by the staging of Pneumoconiosis using a separate SVM.</p>
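The zone-wise pipeline shared by Hall et al., Yu et al., and Xu et al. can be sketched as follows. This is a minimal numpy-only illustration: the six-zone split and the histogram-moment features follow the papers' descriptions, but the nearest-class-mean classifier is a stand-in for the SVM/LDA classifiers actually used, and all function names are ours.

```python
import numpy as np

def lung_zones(img):
    """Split a (pre-segmented) lung image into six zones
    (upper/middle/lower x left/right), loosely following ILO guidance."""
    h, w = img.shape
    zones = []
    for r in range(3):
        band = img[r * h // 3:(r + 1) * h // 3]
        zones += [band[:, :w // 2], band[:, w // 2:]]
    return zones

def zone_features(zone, bins=16):
    """Gray-level histogram plus first-order moments, as a crude
    stand-in for the histogram/co-occurrence texture features."""
    hist, _ = np.histogram(zone, bins=bins, range=(0.0, 1.0), density=True)
    return np.concatenate([hist, [zone.mean(), zone.std()]])

class ZoneClassifier:
    """Nearest-class-mean classifier for one zone; a real system would
    train one SVM per zone instead."""
    def fit(self, feats, labels):
        feats, labels = np.asarray(feats), np.asarray(labels)
        self.means = {c: feats[labels == c].mean(axis=0) for c in (0, 1)}
        return self
    def predict(self, f):
        return min(self.means, key=lambda c: np.linalg.norm(f - self.means[c]))

def chest_level_prediction(img, zone_clfs):
    """Fuse the six per-zone decisions by majority vote (0 = normal)."""
    votes = [clf.predict(zone_features(z))
             for z, clf in zip(lung_zones(img), zone_clfs)]
    return int(sum(votes) > len(votes) / 2)
```

The chest-level fusion in the last function mirrors how Yu et al. integrate the six regional predictions into a single decision.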
</sec>
<sec>
<title>3.2.2. Deep learning based approaches</title>
<p>Yang et al. (<xref ref-type="bibr" rid="B227">2021</xref>) proposed a deep learning-based approach for Pneumoconiosis detection. The approach consists of a two-stage pipeline: UNet (Ronneberger et al., <xref ref-type="bibr" rid="B169">2015</xref>) for lung segmentation and a pre-trained ResNet34 for feature extraction on the segmented image. The dataset was collected in-house and includes 1,760 CXR images for two classes: normal and Pneumoconiosis. Zhang L. et al. (<xref ref-type="bibr" rid="B237">2021</xref>) proposed a deep model for screening and staging Pneumoconiosis by dividing a given CXR into six subregions. This was followed by a CNN-based approach to detect the level of opacity in each subregion and, finally, a 4-class classification to determine the normal, I, II, and III stages of Pneumoconiosis for a UNet-segmented image. The results are obtained on in-house data of 805 and 411 subjects for training and testing, respectively. Devnath et al. (<xref ref-type="bibr" rid="B41">2021</xref>) applied the deep transfer learning model CheXNet (Rajpurkar et al., <xref ref-type="bibr" rid="B164">2017</xref>) to a private dataset; multilevel features extracted from CheXNet are fed to different configurations of SVMs. Wang X. et al. (<xref ref-type="bibr" rid="B218">2020</xref>) collected a dataset of 1,881 CXRs, with 923 and 958 samples for Pneumoconiosis and normal, respectively, and used InceptionV3, a deep learning architecture, to detect Pneumoconiosis in a given CXR and to assess the potential of deep learning for evaluating Pneumoconiosis. Wang D. et al. (<xref ref-type="bibr" rid="B213">2020</xref>) generated synthetic data for both normal and Pneumoconiosis classes using CycleGAN (Zhu et al., <xref ref-type="bibr" rid="B242">2017</xref>), followed by a CNN-based classifier. 
The authors proposed a cascaded framework of a pixel classifier for lung segmentation, CycleGAN for generating training images, and a CNN-based classifier. Wang et al. (<xref ref-type="bibr" rid="B219">2021</xref>) collected a set of 5,424 in-house CXRs, including normal and Pneumoconiosis cases belonging to 4 different stages (0&#x02013;3), from imaging centers including sites in Sydney and Wesley Medical Imaging, Queensland, Australia. The authors used ResNet101 (He et al., <xref ref-type="bibr" rid="B65">2016</xref>) for detecting Pneumoconiosis on images segmented by a UNet segmentation model and showed improved results compared to radiologists. Hao et al. (<xref ref-type="bibr" rid="B61">2021</xref>) collected data consisting of 706 images from Chongqing CDC, China, with 142 images positive for Pneumoconiosis. The authors trained two deep learning architectures, ResNet34 and DenseNet53 (Huang et al., <xref ref-type="bibr" rid="B72">2017</xref>), for the classification of CXRs into normal or Pneumoconiosis. <xref ref-type="table" rid="T4">Table 4</xref> summarizes the above work based on the method of feature extraction.</p>
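The two-stage segment-then-classify pattern used by Yang et al. and Wang et al. (2021) can be summarized in a short sketch. The stubs below are assumptions for illustration: a simple intensity threshold stands in for the trained UNet, and summary statistics plus a logistic head stand in for the ResNet backbone; only the pipeline structure (mask the lungs, then classify the masked image) reflects the cited work.

```python
import numpy as np

def segment_lungs(cxr):
    """Stand-in for a trained UNet: returns a binary lung mask.
    Here a naive intensity threshold; a real system runs the
    segmentation network instead."""
    return (cxr > cxr.mean()).astype(cxr.dtype)

def extract_features(masked):
    """Stand-in for a pretrained ResNet backbone: any fixed-length
    descriptor of the masked image."""
    return np.array([masked.mean(), masked.std(), (masked > 0).mean()])

def classify(features, w, b):
    """Logistic head on the extracted features."""
    return 1.0 / (1.0 + np.exp(-(features @ w + b)))

def two_stage_predict(cxr, w, b):
    """Stage 1: segment. Stage 2: classify the masked image."""
    mask = segment_lungs(cxr)
    return classify(extract_features(cxr * mask), w, b)
```

Masking before classification restricts the classifier to the lung fields, which is the rationale the cited papers give for the two-stage design.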
<table-wrap position="float" id="T4">
<label>Table 4</label>
<caption><p>Review of the literature for Pneumoconiosis detection using CXRs.</p></caption> 
<table frame="hsides" rules="groups">
<thead>
<tr style="background-color:#919497;color:#ffffff">
<th valign="top" align="left"><bold>References</bold></th>
<th valign="top" align="left"><bold>Highlights</bold></th>
<th valign="top" align="left"><bold>Pretraining</bold></th>
<th valign="top" align="left"><bold>Dataset</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Okumura et al. (<xref ref-type="bibr" rid="B142">2011</xref>)</td>
<td valign="top" align="left">Used the Fourier transform for nodule-pattern ROIs and neural nets for the remaining ROIs</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">JSRT, Private</td>
</tr> <tr>
<td valign="top" align="left">Hall et al. (<xref ref-type="bibr" rid="B60">1975</xref>)</td>
<td valign="top" align="left">Used textural features for six regions to determine the profusion level</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">Private</td>
</tr> <tr>
<td valign="top" align="left">Yu et al. (<xref ref-type="bibr" rid="B230">2011</xref>)</td>
<td valign="top" align="left">Used active shape model to segment lung, divided each lung into six regions. Features generated from each region are used to train SVM</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">Private</td>
</tr> <tr>
<td valign="top" align="left">Xu et al. (<xref ref-type="bibr" rid="B225">2010</xref>)</td>
<td valign="top" align="left">Used textural features generated from six lung regions with SVM for classification and staging</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">Private</td>
</tr> <tr>
<td valign="top" align="left">Yang et al. (<xref ref-type="bibr" rid="B227">2021</xref>)</td>
<td valign="top" align="left">Two stage pipeline with segmentation followed by classification</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Private</td>
</tr> <tr>
<td valign="top" align="left">Zhang L. et al. (<xref ref-type="bibr" rid="B237">2021</xref>)</td>
<td valign="top" align="left">Used Deep learning for screening and staging based on six lung regions</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Private</td>
</tr> <tr>
<td valign="top" align="left">Devnath et al. (<xref ref-type="bibr" rid="B41">2021</xref>)</td>
<td valign="top" align="left">Used a feature ensemble of multilevel deep features from a model pretrained on CXR data</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">ChestXray14, private</td>
</tr> <tr>
<td valign="top" align="left">Wang X. et al. (<xref ref-type="bibr" rid="B218">2020</xref>)</td>
<td valign="top" align="left">Used InceptionNet for end-to-end classification</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">Private</td>
</tr> <tr>
<td valign="top" align="left">Wang D. et al. (<xref ref-type="bibr" rid="B213">2020</xref>)</td>
<td valign="top" align="left">Generated synthetic CXR samples and trained a CNN with real and synthetic samples</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Chestxray14, Private</td>
</tr> <tr>
<td valign="top" align="left">Wang et al. (<xref ref-type="bibr" rid="B219">2021</xref>)</td>
<td valign="top" align="left">Performed Pneumoconiosis staging on segmented CXR images</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Private</td>
</tr> <tr>
<td valign="top" align="left">Hao et al. (<xref ref-type="bibr" rid="B61">2021</xref>)</td>
<td valign="top" align="left">Used two different deep models with different depths for feature generation, followed by classification</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Private</td>
</tr></tbody>
</table>
<table-wrap-foot>
<p>Pretraining (yes/no) refers to the use of weights of deep models trained on the ImageNet dataset. Private indicates that the data used is in-house and not released publicly.</p>
</table-wrap-foot>
</table-wrap>
</sec>
<sec>
<title>3.2.3. Patent review</title>
<p>Sahadevan (<xref ref-type="bibr" rid="B175">2002</xref>) proposed an approach that uses high-resolution digital CXR images to detect early-stage lung cancer, Pneumoconiosis, and pulmonary diseases. Wanli et al. (<xref ref-type="bibr" rid="B220">2021</xref>) proposed a deep learning-based approach for Pneumoconiosis detection using lung CXR images.</p>
</sec>
<sec>
<title>3.2.4. Discussion</title>
<p>From the above-cited work, it is clear that there is no publicly available dataset for Pneumoconiosis. Current work relies on in-house datasets with few samples. This draws attention to the fact that automatic detection of Pneumoconiosis from CXRs requires publicly available datasets for developing robust, generalizable, and efficient algorithms.</p>
</sec>
</sec>
<sec>
<title>3.3. Pneumonia</title>
<p>Pneumonia is a viral or bacterial infection of the lungs that affects people of all ages, including children. CXRs are widely used to examine the manifestations caused by pneumonia.</p>
<p>Sousa et al. (<xref ref-type="bibr" rid="B192">2014</xref>) compared different machine learning models for the classification of pediatric CXRs into normal or pneumonia. Zhao et al. (<xref ref-type="bibr" rid="B240">2019</xref>) merged four different CXR datasets for pneumonia classification, performed lung and thoracic cavity segmentation using DeepLabv2 (Chen et al., <xref ref-type="bibr" rid="B25">2017a</xref>), and used ResNet50 for pneumonia classification on top of the segmented images. Tang et al. (<xref ref-type="bibr" rid="B200">2019a</xref>) used CycleGAN to generate synthetic data and proposed TUNA-Net to adapt adult pneumonia classification from CXRs to the pediatric setting. Narayanan et al. (<xref ref-type="bibr" rid="B138">2020</xref>) used UNet for lung segmentation followed by a two-level classification: level 1 classifies a given CXR as pneumonia or normal, and level 2 further classifies a pneumonia CXR as either bacterial or viral. Rajaraman et al. (<xref ref-type="bibr" rid="B163">2019</xref>) highlighted different visualization techniques for interpreting CNN-based pneumonia detection using CXRs. Ferreira et al. (<xref ref-type="bibr" rid="B50">2020</xref>) used VGG16 for classifying pediatric CXRs as normal or pneumonia and further classifying the latter as bacterial or viral. Zhang J. et al. (<xref ref-type="bibr" rid="B236">2021</xref>) proposed an EfficientNet-based confidence-aware anomaly detection model to differentiate viral pneumonia, as a one-class classification problem, from non-viral and normal classes. Several studies (Elshennawy and Ibrahim, <xref ref-type="bibr" rid="B48">2020</xref>; Longjiang et al., <xref ref-type="bibr" rid="B119">2020</xref>; Yue et al., <xref ref-type="bibr" rid="B231">2020</xref>) used different deep learning models with transfer learning to perform pneumonia classification using CXRs. Mittal et al. 
(<xref ref-type="bibr" rid="B130">2020</xref>) used an ensemble of a CNN and CapsuleNet (Sabour et al., <xref ref-type="bibr" rid="B173">2017</xref>) for detecting pneumonia from CXR images using the publicly available pediatric dataset (Stein et al., <xref ref-type="bibr" rid="B193">2018</xref>). Rajpurkar et al. (<xref ref-type="bibr" rid="B164">2017</xref>) proposed a pre-trained DenseNet121 model for classifying the 14 findings present in the Chestxray14 dataset and further performed a binary classification to detect pneumonia. <xref ref-type="table" rid="T5">Table 5</xref> summarizes the above work based on the feature extraction methods.</p>
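The ensemble of Mittal et al. combines the outputs of a CNN and a CapsuleNet. A generic way to fuse two or more probabilistic classifiers is weighted soft voting, sketched below; the function name and the equal default weights are our assumptions, not details from the paper.

```python
import numpy as np

def soft_vote(probs, weights=None):
    """Fuse per-class probabilities from several models by weighted
    averaging, then take the argmax class.

    probs: list of (n_samples, n_classes) probability arrays,
           one per model."""
    probs = np.stack(probs)                      # (n_models, n_samples, n_classes)
    if weights is None:
        weights = np.full(len(probs), 1.0 / len(probs))
    fused = np.tensordot(weights, probs, axes=1) # weighted average over models
    return fused.argmax(axis=1)
```

Soft voting lets a confident second model overturn a marginal first-model decision, which is the usual motivation for ensembling heterogeneous architectures.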
<table-wrap position="float" id="T5">
<label>Table 5</label>
<caption><p>Review of the literature for pneumonia detection using CXRs.</p></caption> 
<table frame="hsides" rules="groups">
<thead>
<tr style="background-color:#919497;color:#ffffff">
<th valign="top" align="left"><bold>References</bold></th>
<th valign="top" align="left"><bold>Highlights</bold></th>
<th valign="top" align="left"><bold>Pretraining</bold></th>
<th valign="top" align="left"><bold>Dataset</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Sousa et al. (<xref ref-type="bibr" rid="B192">2014</xref>)</td>
<td valign="top" align="left">Compared different ML classifiers for Pediatric Pneumonia</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">PedPneumonia</td>
</tr> <tr>
<td valign="top" align="left">Zhao et al. (<xref ref-type="bibr" rid="B240">2019</xref>)</td>
<td valign="top" align="left">Used Multiple datasets and performed semantic lung segmentation</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">PedPneumonia, RSNA-Pneumonia, Private</td>
</tr> <tr>
<td valign="top" align="left">Tang et al. (<xref ref-type="bibr" rid="B200">2019a</xref>)</td>
<td valign="top" align="left">Generated synthetic data and trained model for adult pneumonia, and later adapted that for pediatric pneumonia</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">RSNA, PedPneumonia</td>
</tr> <tr>
<td valign="top" align="left">Narayanan et al. (<xref ref-type="bibr" rid="B138">2020</xref>)</td>
<td valign="top" align="left">Lung segmentation followed by two levels of classification</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">PedPneumonia</td>
</tr> <tr>
<td valign="top" align="left">Rajaraman et al. (<xref ref-type="bibr" rid="B163">2019</xref>)</td>
<td valign="top" align="left">Comparison of different visualization techniques for deep model explanation</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">PedPneumonia</td>
</tr> <tr>
<td valign="top" align="left">Ferreira et al. (<xref ref-type="bibr" rid="B50">2020</xref>)</td>
<td valign="top" align="left">Multistage CXR classification: healthy vs. pneumonia, then viral vs. bacterial pneumonia</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">PedPneumonia</td>
</tr> <tr>
<td valign="top" align="left">Zhang J. et al. (<xref ref-type="bibr" rid="B236">2021</xref>)</td>
<td valign="top" align="left">EfficientNet-based confidence-aware anomaly detection model</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">PedPneumonia</td>
</tr> <tr>
<td valign="top" align="left">Mittal et al. (<xref ref-type="bibr" rid="B130">2020</xref>)</td>
<td valign="top" align="left">Used an ensemble of deep model (CNN) and CapsuleNet</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">PedPneumonia</td>
</tr> <tr>
<td valign="top" align="left">Rajpurkar et al. (<xref ref-type="bibr" rid="B164">2017</xref>)</td>
<td valign="top" align="left">Performed multilabel classification with CAM analysis</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Chestxray14</td>
</tr></tbody>
</table>
<table-wrap-foot>
<p>Pretraining (yes/no) refers to the use of weights of a deep model trained on the ImageNet dataset. Private indicates that the data used is in-house and not released publicly.</p>
</table-wrap-foot>
</table-wrap>
<sec>
<title>3.3.1. Patent review</title>
<p>Shaoliang et al. (<xref ref-type="bibr" rid="B183">2020</xref>) proposed a system for pneumonia detection from CXR using deep learning based on transfer learning.</p>
</sec>
<sec>
<title>3.3.2. Discussion</title>
<p>Most of the work centers on the dataset of Stein et al. (<xref ref-type="bibr" rid="B193">2018</xref>) in multi-class settings. However, beyond the dataset challenge, other challenges need to be addressed, including lung segmentation and model interpretability. Transfer learning is widely used to improve generalization for pneumonia detection on CXRs. Pneumonia is a common manifestation of many lung disorders and thus needs to be detected in multilabel settings.</p>
</sec>
</sec>
<sec>
<title>3.4. COVID-19</title>
<p>COVID-19 is caused by the SARS-CoV-2 coronavirus, is prevalent worldwide, and is responsible for the ongoing pandemic and the deaths of more than 6 million people. RT-PCR is the commonly used test to detect the presence of COVID-19; however, CXR offers a rapid method for diagnosis and for detecting the presence of pneumonia-like manifestations in the lungs.</p>
<sec>
<title>3.4.1. Machine learning based approaches</title>
<p>Rajagopal (<xref ref-type="bibr" rid="B161">2021</xref>) used both transfer learning (a pre-trained VGG16) and ML classifiers (SVM, XGBoost) trained on deep features for three-class COVID-19 classification. Jin et al. (<xref ref-type="bibr" rid="B88">2021</xref>) used a pre-trained AlexNet to generate features from CXR images, followed by feature selection and classification using an SVM.</p>
</sec>
<sec>
<title>3.4.2. Deep learning based approaches</title>
<p>Chowdhury et al. (<xref ref-type="bibr" rid="B30">2020</xref>) proposed a dataset by merging publicly available datasets (Wang et al., <xref ref-type="bibr" rid="B217">2017</xref>; Mooney, <xref ref-type="bibr" rid="B132">2018</xref>; Cohen et al., <xref ref-type="bibr" rid="B32">2020</xref>; ISMIR, <xref ref-type="bibr" rid="B82">2020</xref>; Rahman et al., <xref ref-type="bibr" rid="B158">2020a</xref>; Wang L. et al., <xref ref-type="bibr" rid="B215">2020</xref>) for COVID-19 and used eight pretrained CNN models [MobileNetv2, SqueezeNet, ResNet18, ResNet101, DenseNet201, Inceptionv3, ResNet101, CheXNet (Rajpurkar et al., <xref ref-type="bibr" rid="B164">2017</xref>), and VGG19] for three-class classification: normal, viral pneumonia, and COVID-19 pneumonia. Khan et al. (<xref ref-type="bibr" rid="B98">2020</xref>) proposed CoroNet, an XceptionNet-based transfer learning approach trained end-to-end for the classification of CXR images into normal, bacterial pneumonia, viral pneumonia, and COVID-19 using publicly available datasets. Islam et al. (<xref ref-type="bibr" rid="B81">2020</xref>) proposed a CNN-LSTM-based architecture for detecting COVID-19 from CXRs on a dataset of 4,575 images. Pham (<xref ref-type="bibr" rid="B151">2021</xref>) compared the fine-tuning approach with recently developed deep architectures for 2-class and 3-class COVID-19 detection in CXRs on three publicly available datasets. Al-Rakhami et al. (<xref ref-type="bibr" rid="B6">2021</xref>) extracted deep features from pre-trained models and performed classification using an RNN. Duran-Lopez et al. (<xref ref-type="bibr" rid="B45">2020</xref>) proposed COVID-XNet, a CNN-based approach for binary COVID-19 detection from CXR images. Gupta et al. 
(<xref ref-type="bibr" rid="B59">2021</xref>) proposed InstaCovNet-19, stacking different fine-tuned deep models of variable depth to increase model robustness for COVID-19 classification on CXRs. Punn and Agarwal (<xref ref-type="bibr" rid="B152">2021</xref>), Wang N. et al. (<xref ref-type="bibr" rid="B216">2020</xref>), Khasawneh et al. (<xref ref-type="bibr" rid="B99">2021</xref>), Jain et al. (<xref ref-type="bibr" rid="B84">2021</xref>), El Gannour et al. (<xref ref-type="bibr" rid="B47">2020</xref>), Panwar et al. (<xref ref-type="bibr" rid="B147">2020b</xref>), and Panwar et al. (<xref ref-type="bibr" rid="B146">2020a</xref>) used transfer learning-based approaches for differentiating COVID-19 from viral pneumonia and normal CXRs. Abbas (<xref ref-type="bibr" rid="B1">2021</xref>) proposed a CNN-based class decomposition approach, DeTraC, which decomposes classes into subclasses, assigns new labels independent of each other within the datasets by adding a class decomposition layer, and later merges these subsets back to generate the final predictions; the authors evaluated COVID-19 classification from CXR images on publicly available datasets. Gour and Jain (<xref ref-type="bibr" rid="B55">2020</xref>) proposed a stacked CNN-based approach using five submodules from two different deep models, a fine-tuned VGG16 and a 30-layer CNN, whose outputs are combined by logistic regression for three-class COVID-19 classification using CXRs. Malhotra et al. (<xref ref-type="bibr" rid="B126">2022</xref>) proposed COMiT-Net, a deep learning-based multitasking approach for COVID-19 detection from CXRs that simultaneously performs semantic lung segmentation and disease localization to improve model interpretability. Pereira et al. (<xref ref-type="bibr" rid="B150">2020</xref>) combined both handcrafted and deep learning-based features and performed a two-level classification for COVID-19 detection. Rahman T. et al. 
(<xref ref-type="bibr" rid="B160">2021</xref>) compared the effect of different enhancement techniques and lung segmentation on transfer learning-based classification of CXRs as COVID-19, normal, or non-COVID. Li et al. (<xref ref-type="bibr" rid="B111">2020</xref>) developed COVID-MobileXpert, a knowledge distillation-based approach consisting of three models: a large teacher model trained on a large CXR dataset, and two student models, one fine-tuned on a COVID-19 dataset to discriminate COVID-19 pneumonia from normal CXRs, and a small lightweight model to perform on-device screening of CXR snapshots. Ucar and Korkmaz (<xref ref-type="bibr" rid="B206">2020</xref>) proposed Bayes-SqueezeNet, based on a pretrained SqueezeNet and Bayesian optimization, for COVID-19 detection in CXRs. Shi et al. (<xref ref-type="bibr" rid="B185">2021</xref>) proposed a knowledge distillation-based attention method with transfer learning for COVID-19 detection from CT scans and CXRs. Saha et al. (<xref ref-type="bibr" rid="B174">2021</xref>) proposed EMCNet, which extracts deep features from CXRs and trains different machine learning classifiers on them. Mahmud et al. (<xref ref-type="bibr" rid="B125">2020</xref>) proposed CovXNet, which trains a deep model on CXR data at different resolutions (Stacked CovXNet) and later fine-tunes it on COVID-19 and non-COVID-19 CXR data as the target task. <xref ref-type="table" rid="T6">Table 6</xref> summarizes the above work for COVID-19 detection using CXRs.</p>
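Several of the works above (Li et al., 2020; Shi et al., 2021) build on knowledge distillation. A minimal numpy sketch of the standard Hinton-style distillation objective, which these approaches extend, is given below; the temperature and mixing values are illustrative defaults, not the papers' settings.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; T > 1 softens the distribution."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """T^2-scaled KL divergence between the teacher's and student's
    softened distributions, mixed with ordinary cross-entropy on the
    hard labels."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    ce = -np.log(softmax(student_logits)[np.arange(len(labels)), labels] + 1e-12)
    return np.mean(alpha * (T ** 2) * kl + (1.0 - alpha) * ce)
```

Minimizing this loss pushes a small student network (e.g., an on-device screening model) toward the soft predictions of a larger teacher trained on abundant CXR data.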
<table-wrap position="float" id="T6">
<label>Table 6</label>
<caption><p>Review of the literature for COVID-19 detection using CXRs.</p></caption> 
<table frame="hsides" rules="groups">
<thead>
<tr style="background-color:#919497;color:#ffffff">
<th valign="top" align="left"><bold>References</bold></th>
<th valign="top" align="left"><bold>Highlights</bold></th>
<th valign="top" align="left"><bold>Pretraining</bold></th>
<th valign="top" align="left"><bold>Dataset</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Rajagopal (<xref ref-type="bibr" rid="B161">2021</xref>)</td>
<td valign="top" align="left">Combined deep learning and ML classifier</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">PedPneumonia, COVID-CXR, <ext-link ext-link-type="uri" xlink:href="https://github.com/agchung">https://github.com/agchung</ext-link></td>
</tr> <tr>
<td valign="top" align="left">Jin et al. (<xref ref-type="bibr" rid="B88">2021</xref>)</td>
<td valign="top" align="left">Used deep features followed by feature selection and an SVM</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">PedPneumonia, COVID-CXR</td>
</tr> <tr>
<td valign="top" align="left">Chowdhury et al. (<xref ref-type="bibr" rid="B30">2020</xref>)</td>
<td valign="top" align="left">Used deep ensemble feature generation</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Multiple datasets with different disorders</td>
</tr> <tr>
<td valign="top" align="left">Khan et al. (<xref ref-type="bibr" rid="B98">2020</xref>)</td>
<td valign="top" align="left">XceptionNet based end-to-end training</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">PedPneumonia, COVID-CXR, COVIDDGR</td>
</tr> <tr>
<td valign="top" align="left">Islam et al. (<xref ref-type="bibr" rid="B81">2020</xref>)</td>
<td valign="top" align="left">Used a combination of LSTM-CNN-based architecture</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Combination of publicly available data</td>
</tr>
<tr>
<td valign="top" align="left">Pham (<xref ref-type="bibr" rid="B151">2021</xref>)</td>
<td valign="top" align="left">Used a multi-level classification approach for two and three disease classes</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">COVID-CXR, PedPneumonia, COVID-19 (kaggle), ActualMed (github)</td>
</tr> <tr>
<td valign="top" align="left">Al-Rakhami et al. (<xref ref-type="bibr" rid="B6">2021</xref>)</td>
<td valign="top" align="left">Approach combines CNNs with sequential deep model</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Data collected from various available sources</td>
</tr> <tr>
<td valign="top" align="left">Duran-Lopez et al. (<xref ref-type="bibr" rid="B45">2020</xref>)</td>
<td valign="top" align="left">Proposed COVID-XNet, a custom deep learning model for binary classification</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">BIMVC, COVID-CXR</td>
</tr> <tr>
<td valign="top" align="left">Gupta et al. (<xref ref-type="bibr" rid="B59">2021</xref>)</td>
<td valign="top" align="left">Proposed InstaCovNet-19, with ensemble generated from deep features</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">Chowdhury et al. (<xref ref-type="bibr" rid="B30">2020</xref>), COVID-CXR</td>
</tr> <tr>
<td valign="top" align="left">Abbas (<xref ref-type="bibr" rid="B1">2021</xref>)</td>
<td valign="top" align="left">Class decomposition into sub-classes with pre-trained models</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">JSRT, COVID-CXR</td>
</tr> <tr>
<td valign="top" align="left">Gour and Jain (<xref ref-type="bibr" rid="B55">2020</xref>)</td>
<td valign="top" align="left">Submodule stacking from pretrained and customized deep models</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">COVID-CXR, ActualMed, PedPneumonia</td>
</tr> <tr>
<td valign="top" align="left">Malhotra et al. (<xref ref-type="bibr" rid="B126">2022</xref>)</td>
<td valign="top" align="left">Multi-task approach with segmentation, disease classification, and localization</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">CheXpert, Chestxray14, BIMVC-COVID19, Various online sources</td>
</tr> <tr>
<td valign="top" align="left">Pereira et al. (<xref ref-type="bibr" rid="B150">2020</xref>)</td>
<td valign="top" align="left">Feature ensemble of handcrafted and deep features</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">COVID-CXR, Chestxray14, Radiopedia Encyclopedia</td>
</tr> <tr>
<td valign="top" align="left">Rahman T. et al. (<xref ref-type="bibr" rid="B160">2021</xref>)</td>
<td valign="top" align="left">Employed and compared different enhancement techniques for performance improvement</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">PedPneumonia, BIMCV&#x0002B;COVID19</td>
</tr> <tr>
<td valign="top" align="left">Li et al. (<xref ref-type="bibr" rid="B111">2020</xref>)</td>
<td valign="top" align="left">On-device detection approach for CXR snapshots</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">PedPneumonia, COVID-CXR</td>
</tr> <tr>
<td valign="top" align="left">Ucar and Korkmaz (<xref ref-type="bibr" rid="B206">2020</xref>)</td>
<td valign="top" align="left">Used Bayesian optimization with deep models for differentiating Pneumonia</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">PedPneumonia, COVID-CXR</td>
</tr> <tr>
<td valign="top" align="left">Shi et al. (<xref ref-type="bibr" rid="B185">2021</xref>)</td>
<td valign="top" align="left">Knowledge transfer in the form of attention from teacher to student network</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">COVID-CXR, SIRM</td>
</tr> <tr>
<td valign="top" align="left">Saha et al. (<xref ref-type="bibr" rid="B174">2021</xref>)</td>
<td valign="top" align="left">Used deep features with different ML classifiers</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">COVID-CXR, SIRM, PedPneumonia, Chestxray14</td>
</tr> <tr>
<td valign="top" align="left">Mahmud et al. (<xref ref-type="bibr" rid="B125">2020</xref>)</td>
<td valign="top" align="left">Used feature stacking generated from different resolutions</td>
<td valign="top" align="left">Yes</td>
<td valign="top" align="left">PedPneumonia, private</td>
</tr></tbody>
</table>
<table-wrap-foot>
<p>Pretraining (yes/no) refers to the use of weights of a deep model trained on the ImageNet dataset. Private indicates that the data used is in-house and not released publicly.</p>
</table-wrap-foot>
</table-wrap>
</sec>
<sec>
<title>3.4.3. Patent review</title>
<p>Shankar et al. (<xref ref-type="bibr" rid="B182">2022</xref>) proposed a deep learning-based SVM approach for classifying chest X-rays as COVID-19-affected or normal.</p>
</sec>
<sec>
<title>3.4.4. Discussion</title>
<p>The research is very recent, and the reported studies either use datasets with few samples or combine more than one dataset. The CXR data released post-pandemic is collected from multiple centers across the globe. Further, only a few works have incorporated inherent model interpretability. The primary focus remains on the classification task, with comparatively little work on segmentation, report generation, or disease localization.</p>
</sec>
</sec>
</sec>
<sec id="s4">
<title>4. Datasets</title>
<p>Several chest X-ray datasets have been released over the years. These datasets are made available in DICOM, PNG, or JPEG format. Labeling is done either with the help of domain experts or by extracting labels from the associated reports using natural language processing techniques. Moreover, a few datasets also include local labels, i.e., per-sample disease annotations. Some datasets include lung field masks as ground truth for segmentation and associated tasks. In this section, we cover the publicly available datasets used in the literature. Their statistics are summarized in <xref ref-type="table" rid="T7">Table 7</xref>. <xref ref-type="fig" rid="F8">Figure 8</xref> illustrates samples from the different CXR datasets mentioned below.</p>
<list list-type="bullet">
<list-item><p>JSRT: Shiraishi et al. (<xref ref-type="bibr" rid="B186">2000</xref>) introduced the dataset in the year 2000, consisting of 247 images for two classes: malignant and benign. The resolution of each image is 2048 &#x000D7; 2048 pixels. The dataset can be downloaded from <ext-link ext-link-type="uri" xlink:href="http://db.jsrt.or.jp/eng.php">http://db.jsrt.or.jp/eng.php</ext-link>.</p></list-item>
<list-item><p>Open-i: Demner-Fushman et al. (<xref ref-type="bibr" rid="B38">2012</xref>) proposed a chest X-ray dataset consisting of 3,955 samples from 3,955 subjects. Images are available in DICOM format, and the findings are available in reports made available by the radiologists. The dataset was collected from the Indiana Network for Patient Care (McDonald et al., <xref ref-type="bibr" rid="B127">2005</xref>). The dataset can be downloaded from <ext-link ext-link-type="uri" xlink:href="https://openi.nlm.nih.gov/">https://openi.nlm.nih.gov/</ext-link>.</p></list-item>
<list-item><p>NLST: The dataset was collected from the NLST screening trials (Team, <xref ref-type="bibr" rid="B202">2011</xref>). It covers 26,732 subjects with CXRs, and a subset of the dataset is available on request from <ext-link ext-link-type="uri" xlink:href="https://biometry.nci.nih.gov/cdas/learn/nlst/images/">https://biometry.nci.nih.gov/cdas/learn/nlst/images/</ext-link>.</p></list-item>
<list-item><p>Shenzhen: Jaeger et al. (<xref ref-type="bibr" rid="B83">2014</xref>) introduced the dataset in the year 2014; it consists of 662 CXRs belonging to two classes, normal and TB. The data were collected from Shenzhen No. 3 Hospital in Shenzhen, Guangdong province, China, in September 2012. The samples are shared publicly at their original full resolution and include lung segmentation masks. The dataset can be downloaded from <ext-link ext-link-type="uri" xlink:href="https://openi.nlm.nih.gov/imgs/collections/ChinaSet_AllFiles.zip">https://openi.nlm.nih.gov/imgs/collections/ChinaSet_AllFiles.zip</ext-link>.</p></list-item>
<list-item><p>Montgomery: Jaeger et al. (<xref ref-type="bibr" rid="B83">2014</xref>) introduced the dataset in the year 2014; it consists of 138 CXRs belonging to two classes, normal and TB. The data were collected from the tuberculosis control program of the Department of Health and Human Services of Montgomery County, MD, USA. The dataset also includes lung segmentation masks and is shared at the original full resolution. It can be downloaded from <ext-link ext-link-type="uri" xlink:href="https://openi.nlm.nih.gov/imgs/collections/NLM-MontgomeryCXRSet.zip">https://openi.nlm.nih.gov/imgs/collections/NLM-MontgomeryCXRSet.zip</ext-link>.</p></list-item>
<list-item><p>KIT: Ryoo and Kim (<xref ref-type="bibr" rid="B172">2014</xref>) proposed the dataset in the year 2014. It consists of 10,848 DICOM CXRs, of which 7,020 are normal and 3,828 show TB. The data were collected from the Korean Institute of Tuberculosis.</p></list-item>
<list-item><p>Indiana: Demner-Fushman et al. (<xref ref-type="bibr" rid="B39">2016</xref>) introduced the dataset in the year 2015. It was collected from the Indiana University hospital network and includes 3,996 radiology reports with 8,121 associated images. The dataset can be downloaded from <ext-link ext-link-type="uri" xlink:href="https://openi.nlm.nih.gov/">https://openi.nlm.nih.gov/</ext-link>.</p></list-item>
<list-item><p>Chestxray8: Wang et al. (<xref ref-type="bibr" rid="B217">2017</xref>) released the dataset in the year 2017. It includes 108,948 frontal-view CXRs of 32,717 unique patients with eight different findings. The dataset is labeled by parsing, with NLP techniques, the report associated with each sample. The dataset can be downloaded from <ext-link ext-link-type="uri" xlink:href="https://nihcc.app.box.com/v/ChestXray-NIHCC">https://nihcc.app.box.com/v/ChestXray-NIHCC</ext-link>.</p></list-item>
<list-item><p>Chestxray14: Wang et al. (<xref ref-type="bibr" rid="B217">2017</xref>) published the dataset in 2017; it consists of 112,120 CXR samples from 30,805 subjects. The images have a resolution of 1,024 &#x000D7; 1,024 and were collected from the National Institutes of Health (NIH), US. The dataset contains labels for 14 findings, generated automatically from the reports using NLP. It is publicly available and can be downloaded from <ext-link ext-link-type="uri" xlink:href="https://www.kaggle.com/nih-chest-xrays/data">https://www.kaggle.com/nih-chest-xrays/data</ext-link>.</p></list-item>
<list-item><p>RSNA-Pneumonia: This dataset was generated from samples of the Chestxray14 dataset for pneumonia detection. It contains a total of 30,000 CXRs with pneumonia annotations at a 1,024 &#x000D7; 1,024 resolution. The annotations include lung opacities, resulting in samples with three classes: normal, lung opacity, and not normal (Stein et al., <xref ref-type="bibr" rid="B193">2018</xref>). The dataset can be downloaded from <ext-link ext-link-type="uri" xlink:href="https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/data">https://www.kaggle.com/c/rsna-pneumonia-detection-challenge/data</ext-link>.</p></list-item>
<list-item><p>Ped-Pneumonia: Kermany et al. (<xref ref-type="bibr" rid="B96">2018a</xref>) published the dataset in 2018; it consists of 5,856 pediatric CXRs collected from Guangzhou Women and Children&#x00027;s Medical Center, Guangzhou, China. The samples are labeled as viral pneumonia, bacterial pneumonia, or normal. The dataset can be downloaded from <ext-link ext-link-type="uri" xlink:href="https://data.mendeley.com/datasets/rscbjbr9sj/2">https://data.mendeley.com/datasets/rscbjbr9sj/2</ext-link>.</p></list-item>
<list-item><p>CheXpert: Irvin et al. (<xref ref-type="bibr" rid="B80">2019</xref>) published one of the largest chest X-ray datasets in the year 2019, consisting of 224,316 images from a total of 65,240 subjects. The data were collected over almost 15 years at Stanford Hospital, US. The dataset contains labels for the presence, absence, uncertainty, and no mention of 12 abnormalities, along with no finding and the existence of support devices. All these labels are generated automatically from radiology reports using a rule-based labeler (NLP). The dataset can be downloaded from <ext-link ext-link-type="uri" xlink:href="https://stanfordmlgroup.github.io/competitions/chexpert/">https://stanfordmlgroup.github.io/competitions/chexpert/</ext-link>.</p></list-item>
<list-item><p>CXR14-Rad-Labels: This (<xref ref-type="bibr" rid="B204">2020</xref>) introduced the dataset as a subset of Chestxray14, providing four labels for 4,374 studies and 1,709 subjects. The annotations were provided by a cohort of radiologists and are made available along with agreement labels.</p></list-item>
<list-item><p>MIMIC-CXR: Johnson et al. (<xref ref-type="bibr" rid="B90">2019</xref>) published the dataset in the year 2019, with 371,920 CXRs collected from 64,588 subjects admitted to the emergency department of the Beth Israel Deaconess Medical Center. The data were collected over almost five years and are made available in two versions: V1 contains 8-bit grayscale images at full resolution, and V2 contains DICOM images with anonymized radiology reports. The labels are generated automatically by report parsing. The dataset can be downloaded from <ext-link ext-link-type="uri" xlink:href="https://physionet.org/content/mimic-cxr/">https://physionet.org/content/mimic-cxr/</ext-link>.</p></list-item>
<list-item><p>SIIM-ACR: This is a Kaggle challenge dataset for pneumothorax detection and segmentation (Anuar, <xref ref-type="bibr" rid="B9">2019</xref>). Some researchers believe that the data samples are taken from the Chestxray14 dataset; however, no official confirmation has been made. CXRs are available as DICOM images at a 1,024 &#x000D7; 1,024 resolution.</p></list-item>
<list-item><p>Padchest: Bustos et al. (<xref ref-type="bibr" rid="B18">2020</xref>) published the dataset in the year 2020; it consists of 160,868 CXRs from 109,931 studies of 67,000 subjects. The data were collected over almost eight years at the San Juan Hospital, Spain. A set of 27,593 images was labeled by domain experts; for the rest of the data, an RNN was trained to generate labels from the reports.</p></list-item>
<list-item><p>BIMCV: Vay&#x000E1; et al. (<xref ref-type="bibr" rid="B209">2020</xref>) introduced the dataset for COVID-19 in the year 2020. It includes CXRs, CT scans, and laboratory test results collected from the Valencian Region Medical ImageBank (BIMCV), and consists of 3,293 CXRs from 1,305 COVID-19-positive subjects.</p></list-item>
<list-item><p>COVID abnormality annotation for X-Rays (CAAXR): Mittal et al. (<xref ref-type="bibr" rid="B131">2022</xref>) proposed a dataset that adds radiologist annotations to the existing BIMCV-COVID-19&#x0002B; dataset. It contains annotations for different findings, such as atelectasis, consolidation, pleural effusion, edema, and others. CAAXR contains a total of 1,749 images with 3,943 annotations. The dataset can be downloaded from <ext-link ext-link-type="uri" xlink:href="https://osf.io/b35xu/">https://osf.io/b35xu/</ext-link> and <ext-link ext-link-type="uri" xlink:href="http://covbase4all.igib.res.in/">http://covbase4all.igib.res.in/</ext-link>.</p></list-item>
<list-item><p>COVIDDSL: The dataset was released in 2020 for COVID-19 detection (Hospitales, <xref ref-type="bibr" rid="B71">2020</xref>). It was collected from the HM Hospitales group in Spain and includes CXRs from 1,725 subjects, along with detailed laboratory test results, vital signs, etc.</p></list-item>
<list-item><p>COVIDGR: Tabik et al. (<xref ref-type="bibr" rid="B197">2020</xref>) released the dataset, collected from Hospital Universitario Cl&#x000ED;nico San Cecilio, Granada, Spain. It consists of 852 PA CXRs labeled as COVID-19 positive or negative, and also records the severity of COVID-19 for positive cases.</p></list-item>
<list-item><p>COVID-CXR: Cohen et al. (<xref ref-type="bibr" rid="B32">2020</xref>) released the dataset for COVID-19 with a total of 930 CXRs. The samples come from a wide variety of places and were gathered by different methods, including screenshots from research papers. Each sample carries the label mentioned in its source and is available in PNG or JPEG format. The dataset can be downloaded from <ext-link ext-link-type="uri" xlink:href="https://github.com/ieee8023/covid-chestxray-dataset">https://github.com/ieee8023/covid-chestxray-dataset</ext-link>.</p></list-item>
<list-item><p>VinDr-CXR: Nguyen et al. (<xref ref-type="bibr" rid="B139">2020</xref>) proposed the dataset, collected from two major hospitals in Vietnam from 2018 to 2020. It includes 18,000 CXRs: 15,000 samples for training and 3,000 for testing. Annotations were made manually by 17 expert radiologists for 22 local labels and six global labels; each training sample was labeled by three radiologists, while each testing sample was labeled independently by five radiologists. Images in the dataset are available in DICOM format and can be downloaded from <ext-link ext-link-type="uri" xlink:href="https://vindr.ai/datasets/cxr">https://vindr.ai/datasets/cxr</ext-link> after signing the license agreement.</p></list-item>
<list-item><p>Brax: Reis (<xref ref-type="bibr" rid="B167">2022</xref>) introduced the dataset, which includes 40,967 CXRs from 24,959 imaging studies of 19,351 subjects, collected from the Hospital Israelita Albert Einstein, Brazil. The dataset is labeled for 14 radiological findings using report parsing (NLP) and is made available in both DICOM and PNG formats. It can be downloaded from <ext-link ext-link-type="uri" xlink:href="https://physionet.org/content/brax/1.0.0/">https://physionet.org/content/brax/1.0.0/</ext-link>.</p></list-item>
<list-item><p>Belarus: This dataset is used in many papers and consists of 300 CXR images. However, no download link is available, and further details about the dataset are also missing.</p></list-item>
</list>
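Several of the datasets above (e.g., Chestxray8, Chestxray14, CheXpert, MIMIC-CXR, Brax) derive their labels by parsing radiology reports with rule-based NLP. As a rough illustration of this idea only, the toy labeler below matches finding keywords and a naive negation pattern; the keyword lists, the negation cues, and the 20-character look-back window are illustrative assumptions, not the actual rules used by any of these datasets' labelers.

```python
import re

# Illustrative finding keywords (NOT the vocabularies of any real labeler).
FINDINGS = {
    "pneumonia": ["pneumonia"],
    "effusion": ["pleural effusion", "effusion"],
    "cardiomegaly": ["cardiomegaly", "enlarged heart"],
}
# Very naive negation cues that may precede a mention.
NEGATIONS = ["no ", "without ", "negative for "]

def label_report(report: str) -> dict:
    """Assign 1 (present), 0 (negated), or None (not mentioned) per finding."""
    text = report.lower()
    labels = {}
    for finding, keywords in FINDINGS.items():
        labels[finding] = None
        for kw in keywords:
            for m in re.finditer(re.escape(kw), text):
                # Look a few characters back for a negation cue.
                window = text[max(0, m.start() - 20):m.start()]
                negated = any(neg in window for neg in NEGATIONS)
                labels[finding] = 0 if negated else 1
    return labels
```

Real labelers handle uncertainty phrases, sentence boundaries, and much richer vocabularies; even so, this sketch shows why report parsing introduces label noise when a finding is simply never mentioned in the report.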
<table-wrap position="float" id="T7">
<label>Table 7</label>
<caption><p>Illustrates the available CXR datasets in the literature.</p></caption> 
<table frame="hsides" rules="groups">
<thead>
<tr style="background-color:#919497;color:#ffffff">
<th valign="top" align="left"><bold>Name</bold></th>
<th valign="top" align="left"><bold>Number of Images (I)/Patients (P)</bold></th>
<th valign="top" align="left"><bold>View position</bold></th>
<th valign="top" align="left"><bold>Global labels</bold></th>
<th valign="top" align="left"><bold>Local labels</bold></th>
<th valign="top" align="left"><bold>Image format</bold></th>
<th valign="top" align="left"><bold>Labeling method</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">JSRT (Shiraishi et al., <xref ref-type="bibr" rid="B186">2000</xref>)</td>
<td valign="top" align="left">I: 247</td>
<td valign="top" align="left">PA: 247</td>
<td valign="top" align="left">3</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">DICOM</td>
<td valign="top" align="left">Radiologist</td>
</tr> <tr>
<td valign="top" align="left">Open-i (O) (Demner-Fushman et al., <xref ref-type="bibr" rid="B38">2012</xref>)</td>
<td valign="top" align="left">I: 7910</td>
<td valign="top" align="left">PA: 3955, LL: 3955</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">DICOM</td>
<td valign="top" align="left">Radiologist</td>
</tr> <tr>
<td valign="top" align="left">NLST (Team, <xref ref-type="bibr" rid="B202">2011</xref>)</td>
<td valign="top" align="left">I: 5493</td>
<td valign="top" align="left" colspan="5">No public information is available. The dataset was reported by Lu et al. (<xref ref-type="bibr" rid="B120">2019</xref>)</td>
</tr> <tr>
<td valign="top" align="left">Shenzhen (Jaeger et al., <xref ref-type="bibr" rid="B83">2014</xref>)</td>
<td valign="top" align="left">I: 340</td>
<td valign="top" align="left">PA: 340</td>
<td valign="top" align="left">2</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">PNG</td>
<td valign="top" align="left">Radiologist</td>
</tr> <tr>
<td valign="top" align="left">Montgomery (Jaeger et al., <xref ref-type="bibr" rid="B83">2014</xref>)</td>
<td valign="top" align="left">I: 138</td>
<td valign="top" align="left">PA: 138</td>
<td valign="top" align="left">2</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">PNG</td>
<td valign="top" align="left">Radiologist</td>
</tr> <tr>
<td valign="top" align="left">Indiana (Demner-Fushman et al., <xref ref-type="bibr" rid="B39">2016</xref>)</td>
<td valign="top" align="left">I: 7466</td>
<td valign="top" align="left">PA: 3807, LL: 3659</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">Radiology reports</td>
</tr> <tr>
<td valign="top" align="left">Chestxray8 (Wang et al., <xref ref-type="bibr" rid="B217">2017</xref>)</td>
<td valign="top" align="left">I: 108K&#x0002B;, P: 32,717</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">8</td>
<td valign="top" align="left">PNG</td>
<td valign="top" align="left">Report parsing</td>
</tr> <tr>
<td valign="top" align="left">Chestxray14 (Wang et al., <xref ref-type="bibr" rid="B217">2017</xref>)</td>
<td valign="top" align="left">I: 112K, P: 31K</td>
<td valign="top" align="left">PA: 67K, AP: 45K</td>
<td valign="top" align="left">No</td>
<td valign="top" align="left">14</td>
<td valign="top" align="left">PNG</td>
<td valign="top" align="left">Report parsing</td>
</tr> <tr>
<td valign="top" align="left">RSNA-Pneumonia (Stein et al., <xref ref-type="bibr" rid="B193">2018</xref>)</td>
<td valign="top" align="left">I: 30K</td>
<td valign="top" align="left">PA: 16K, AP: 14K</td>
<td valign="top" align="left">1</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">DICOM</td>
<td valign="top" align="left">Radiologist</td>
</tr> <tr>
<td valign="top" align="left">Ped-Pneumonia (Kermany et al., <xref ref-type="bibr" rid="B96">2018a</xref>)</td>
<td valign="top" align="left">I: 5856</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">2</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">JPEG</td>
<td valign="top" align="left">Radiologist</td>
</tr> <tr>
<td valign="top" align="left">CheXpert (Irvin et al., <xref ref-type="bibr" rid="B80">2019</xref>)</td>
<td valign="top" align="left">P: 65K, I: 224K</td>
<td valign="top" align="left">PA: 29K, AP: 16K, LL: 32K</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">14</td>
<td valign="top" align="left">JPEG</td>
<td valign="top" align="left">Report parsing Cohort of Radiologists</td>
</tr> <tr>
<td valign="top" align="left">CXR14-Rad-Labels (This, <xref ref-type="bibr" rid="B204">2020</xref>)</td>
<td valign="top" align="left">P: 1709, I: 4374</td>
<td valign="top" align="left">AP: 3244, PA: 1132</td>
<td valign="top" align="left">4</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">PNG</td>
<td valign="top" align="left">Radiologist</td>
</tr> <tr>
<td valign="top" align="left">MIMIC-CXR (Johnson et al., <xref ref-type="bibr" rid="B90">2019</xref>)</td>
<td valign="top" align="left">P: 65K, I: 372K</td>
<td valign="top" align="left">PA&#x0002B;AP: 250K, LL: 122K</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">14</td>
<td valign="top" align="left">JPEG(V1) DICOM(V2)</td>
<td valign="top" align="left">Report Parsing</td>
</tr> <tr>
<td valign="top" align="left">SIIM-ACR (Anuar, <xref ref-type="bibr" rid="B9">2019</xref>)</td>
<td valign="top" align="left">I: 16K, P: 16K</td>
<td valign="top" align="left">PA: 11K, AP: 4799</td>
<td valign="top" align="left">1</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">DICOM</td>
<td valign="top" align="left">Radiologist</td>
</tr> <tr>
<td valign="top" align="left">Padchest (Bustos et al., <xref ref-type="bibr" rid="B18">2020</xref>)</td>
<td valign="top" align="left">P: 67K, I: 160K</td>
<td valign="top" align="left">PA: 96K, AP: 20K, LL: 51K</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">193</td>
<td valign="top" align="left">DICOM</td>
<td valign="top" align="left">Report parsing Radiologist Interpretation of reports</td>
</tr> <tr>
<td valign="top" align="left">BIMCV (Vay&#x000E1; et al., <xref ref-type="bibr" rid="B209">2020</xref>)</td>
<td valign="top" align="left">P: 9129, I: 25,554</td>
<td valign="top" align="left">PA: 8,748, AP: 10,469, LL: 6,337</td>
<td valign="top" align="left">1</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">PNG</td>
<td valign="top" align="left">Laboratory Reports</td>
</tr> <tr>
<td valign="top" align="left">CAAXR (Mittal et al., <xref ref-type="bibr" rid="B131">2022</xref>)</td>
<td valign="top" align="left">P: 1749, I: 1749</td>
<td valign="top" align="left">Not mentioned</td>
<td valign="top" align="left">1</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">PNG</td>
<td valign="top" align="left">Cohort of radiologists</td>
</tr> <tr>
<td valign="top" align="left">COVIDSSL (Hospitales, <xref ref-type="bibr" rid="B71">2020</xref>)</td>
<td valign="top" align="left">P: 1,725</td>
<td valign="top" align="left">Mostly AP</td>
<td valign="top" align="left">1</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">DICOM</td>
<td valign="top" align="left">Laboratory Reports</td>
</tr> <tr>
<td valign="top" align="left">COVIDGR (Tabik et al., <xref ref-type="bibr" rid="B197">2020</xref>)</td>
<td valign="top" align="left">I: 852</td>
<td valign="top" align="left">PA: 852</td>
<td valign="top" align="left">2</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">JPEG</td>
<td valign="top" align="left">Radiologist</td>
</tr> <tr>
<td valign="top" align="left">COVID-CXR (Cohen et al., <xref ref-type="bibr" rid="B32">2020</xref>)</td>
<td valign="top" align="left">I: 866, P: 449</td>
<td valign="top" align="left">PA: 344, AP: 438, LL: 84</td>
<td valign="top" align="left">4</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">JPEG</td>
<td valign="top" align="left">Radiologist</td>
</tr> <tr>
<td valign="top" align="left">VinDr-CXR (Nguyen et al., <xref ref-type="bibr" rid="B139">2020</xref>)</td>
<td valign="top" align="left">I: 18K</td>
<td valign="top" align="left">PA: 18K</td>
<td valign="top" align="left">6</td>
<td valign="top" align="left">22</td>
<td valign="top" align="left">DICOM</td>
<td valign="top" align="left">Radiologist</td>
</tr> <tr>
<td valign="top" align="left">Brax (Reis, <xref ref-type="bibr" rid="B167">2022</xref>)</td>
<td valign="top" align="left">P: 19,351, I: 40,967</td>
<td valign="top" align="left">Numbers are not mentioned</td>
<td valign="top" align="left">N/A</td>
<td valign="top" align="left">14</td>
<td valign="top" align="left">DICOM &#x0002B; PNG</td>
<td valign="top" align="left">Report parsing</td>
</tr> <tr>
<td valign="top" align="left">Belarus (Rosenthal et al., <xref ref-type="bibr" rid="B170">2017</xref>)</td>
<td valign="top" align="left">I: 306, P:169</td>
<td valign="top" align="left" colspan="5">No other information is available</td>
</tr></tbody>
</table>
<table-wrap-foot>
<p>The table presents the description of each dataset: the number of images and patients, the available image format, the view position, and the labeling (annotation) method. A global label refers to the single label assigned to an image in a multiclass setting, while local labels refer to multiple labels assigned to a single image for the different findings present in a multilabel setting. PA stands for posterior-anterior view, AP stands for anterior-posterior view, and LL stands for lateral view. K refers to 1,000. N/A stands for not available.</p>
</table-wrap-foot>
</table-wrap>
<fig id="F8" position="float">
<label>Figure 8</label>
<caption><p>Showcases sample CXRs from different datasets. The samples belong to Shenzhen <bold>(A, B)</bold>, Montgomery <bold>(C, D)</bold>, JSRT <bold>(E, F)</bold>, Chestxray14 <bold>(G, H)</bold>, VinDr-CXR <bold>(I, J)</bold>, CheXpert <bold>(K, L)</bold>, RSNA Pneumonia <bold>(M, N)</bold>, Covid-CXR <bold>(O, P)</bold>, PedPneumonia <bold>(Q, R)</bold>, and MIMIC-CXR <bold>(S, T)</bold>. The samples across the different datasets exhibit a wide variety in quality, contrast, brightness, and original image size.</p></caption>
<graphic mimetype="image" mime-subtype="tiff" xlink:href="fdata-06-1120989-g0008.tif"/>
</fig>
<sec>
<title>4.1. Discussion</title>
<p>Generating large datasets in the medical domain is always challenging due to data privacy concerns and the need for expert annotators. While several existing datasets have enabled different research threads in CXR-based image analysis for disorders such as TB and pneumonia, the number of annotated samples in these datasets is insufficient for modern deep learning-based algorithm development. Further, local ground-truth labeling plays an important role in disease classification and detection, and improves explainability. Existing datasets generally lack variability in terms of sensors and demographics, and for many thoracic disorders, such as pneumoconiosis, COPD, and lung cancer, publicly available datasets are lacking altogether. On the other hand, datasets for the recent COVID-19 pandemic are collected from different hospitals across the globe but contain fewer samples and limited labels. Only a few datasets, for instance Chestxray14 and CheXpert, have associated local labels. These labels are generated using report parsing, which results in high label noise: labels may be missed when the corresponding findings are absent from the radiology reports on which the report parser (an NLP algorithm) is designed. This draws attention to the need to handle the labeling process carefully when releasing datasets, to avoid propagating errors into deep model training.</p>
</sec>
</sec>
<sec id="s5">
<title>5. Evaluation metrics</title>
<p>This section covers different metrics used to evaluate the proposed approach in the existing literature. <xref ref-type="table" rid="T8">Table 8</xref> summarizes the various metrics that are used to evaluate different tasks in CXR-based image analysis.</p>
<table-wrap position="float" id="T8">
<label>Table 8</label>
<caption><p>Summarizes the metrics used for assessing the performance of different tasks performed by an ML/DL model.</p></caption> 
<table frame="hsides" rules="groups">
<thead>
<tr style="background-color:&#x00023;919498;color:&#x00023;ffffff">
<th valign="top" align="left"><bold>Image enhancement</bold></th>
<th valign="top" align="left" colspan="2"><bold>PSNR</bold></th>
<th valign="top" align="left" colspan="2"><bold>SSIM</bold></th>
<th valign="top" align="left" colspan="2"><bold>MSE</bold></th>
<th valign="top" align="left" colspan="2"><bold>MAXERR</bold></th>
<th valign="top" align="left"><bold>L2rat</bold></th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Segmentation</td>
<td valign="top" align="left">Intersection over Union (IOU)</td>
<td/>
<td/>
<td valign="top" align="left">Dice Coefficient</td>
<td/>
<td valign="top" align="left">Pixel accuracy</td>
<td/>
</tr> <tr>
<td valign="top" align="left">Classification</td>
<td valign="top" align="left">Sensitivity</td>
<td valign="top" align="left">Specificity</td>
<td valign="top" align="left">Accuracy</td>
<td valign="top" align="left" colspan="2">Precision</td>
<td valign="top" align="left" colspan="2">F1-score</td>
<td valign="top" align="left" colspan="2">AUC-ROC Curve</td>
</tr> <tr>
<td valign="top" align="left">Fairness</td>
<td valign="top" align="left">Demographic Parity</td>
<td valign="top" align="left">Equalized odds</td>
<td valign="top" align="left">Degree of bias</td>
<td valign="top" align="left">Disparate impact</td>
<td valign="top" align="left">Predictive Rate Parity</td>
<td valign="top" align="left">Equal opportunity</td>
<td valign="top" align="left">Treatment Equality</td>
<td valign="top" align="left">Individual Fairness</td>
<td valign="top" align="left">Counterfactual fairness</td>
</tr> <tr>
<td valign="top" align="left">Image captioning</td>
<td valign="top" align="left" colspan="2">BLEU</td>
<td valign="top" align="left" colspan="2">METEOR</td>
<td valign="top" align="left" colspan="2">ROGUE-L</td>
<td valign="top" align="left">CIDEr</td>
<td valign="top" align="left" colspan="2">SPICE</td>
</tr></tbody>
</table>
</table-wrap>
<sec>
<title>5.1. Image enhancement task</title>
<p>To assess the quality of images produced by different enhancement techniques, the difference between the original and enhanced image is calculated using the following metrics.</p>
<list list-type="bullet">
<list-item><p>Peak signal to noise ratio (PSNR): It is a quality assessment metric, expressed as the ratio of the maximum possible power of a signal to the power of the corrupting noise.</p></list-item>
<list-item><p>Structural Similarity Index (SSIM): It is a quality measure used to compare the structural similarity between two images.</p></list-item>
<list-item><p>Mean squared error (MSE): It is a quality assessment measure, defined as the average of the squared errors between the enhanced and original images.</p></list-item>
<list-item><p>MAXERR: It is the maximum absolute squared error between the enhanced image and the original image of equal size (Huynh-Thu and Ghanbari, <xref ref-type="bibr" rid="B75">2008</xref>).</p></list-item>
<list-item><p>L2rat: It is defined as the ratio of the squared norm of the enhanced image to that of the original image (Huynh-Thu and Ghanbari, <xref ref-type="bibr" rid="B75">2008</xref>).</p></list-item>
</list>
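As a minimal sketch of these definitions, the snippet below computes MSE, PSNR, MAXERR, and L2rat for two grayscale images represented as flat pixel sequences (SSIM involves local means, variances, and covariances and is omitted for brevity). The 8-bit peak value of 255 is an assumption for typical CXR exports; real implementations operate on 2-D arrays, but the arithmetic is identical.

```python
import math

def mse(original, enhanced):
    """Mean squared error between two equally sized pixel sequences."""
    return sum((o - e) ** 2 for o, e in zip(original, enhanced)) / len(original)

def psnr(original, enhanced, peak=255.0):
    """Peak signal-to-noise ratio in decibels (assumes 8-bit pixel range)."""
    err = mse(original, enhanced)
    if err == 0:
        return math.inf  # identical images: noise power is zero
    return 10.0 * math.log10(peak ** 2 / err)

def maxerr(original, enhanced):
    """Maximum squared error over all pixel positions."""
    return max((o - e) ** 2 for o, e in zip(original, enhanced))

def l2rat(original, enhanced):
    """Ratio of squared L2 norms: enhanced image over original image."""
    return sum(e ** 2 for e in enhanced) / sum(o ** 2 for o in original)
```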
</sec>
<sec>
<title>5.2. Segmentation task</title>
<p>Segmentation approaches aim to find the ROI in a given image. To evaluate segmentation algorithms, the predicted mask is compared with the ground-truth mask using the following performance metrics:</p>
<list list-type="bullet">
<list-item><p>Intersection over Union (IOU): It is also called the Jaccard Index. It is defined as the ratio of the area of intersection to the area of union between the predicted mask and the ground-truth mask. The IOU value lies between 0 (no overlap) and 1 (complete overlap); values above 0.5 are generally considered acceptable. It is defined as:</p></list-item>
</list>
<disp-formula id="E1"><mml:math id="M1"><mml:mi>I</mml:mi><mml:mi>O</mml:mi><mml:mi>U</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>d</mml:mi><mml:mi>i</mml:mi><mml:mi>c</mml:mi><mml:mi>t</mml:mi><mml:mi>e</mml:mi><mml:mi>d</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>k</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>a</mml:mi><mml:mo>&#x02229;</mml:mo><mml:mi>g</mml:mi><mml:mi>r</mml:mi><mml:mi>o</mml:mi><mml:mi>u</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi><mml:mi>t</mml:mi><mml:mi>r</mml:mi><mml:mi>u</mml:mi><mml:mi>t</mml:mi><mml:mi>h</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>k</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>a</mml:mi></mml:mrow><mml:mrow><mml:mi>p</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>d</mml:mi><mml:mi>i</mml:mi><mml:mi>c</mml:mi><mml:mi>t</mml:mi><mml:mi>e</mml:mi><mml:mi>d</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>k</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>a</mml:mi><mml:mo>&#x0222A;</mml:mo><mml:mi>g</mml:mi><mml:mi>r</mml:mi><mml:mi>o</mml:mi><mml:mi>u</mml:mi><mml:mi>n</mml:mi><mml:mi>d</mml:mi><mml:mi>t</mml:mi><mml:mi>r</mml:mi><mml:mi>u</mml:mi><mml:mi>t</mml:mi><mml:mi>h</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>m</mml:mi><mml:mi>a</mml:mi><mml:mi>s</mml:mi><mml:mi>k</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>a</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>a</mml:mi></mml:mrow></mml:mfrac></mml:math></disp-formula>
<list list-type="bullet">
<list-item><p>Dice Coefficient: It is equivalent to the F1 score. It is defined as the ratio of twice the area of overlap between the predicted mask and the ground-truth mask to the total number of pixels in both masks, and is closely related to the IOU. Mathematically, it is defined as:</p></list-item>
</list>
<disp-formula id="E2"><mml:math id="M2"><mml:mi>D</mml:mi><mml:mi>i</mml:mi><mml:mi>c</mml:mi><mml:mi>e</mml:mi><mml:mi>C</mml:mi><mml:mi>o</mml:mi><mml:mi>e</mml:mi><mml:mi>f</mml:mi><mml:mi>f</mml:mi><mml:mi>i</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>t</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mo stretchy='false'>(</mml:mo><mml:mn>2</mml:mn><mml:mo>&#x02217;</mml:mo><mml:mi>A</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>a</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>o</mml:mi><mml:mi>v</mml:mi><mml:mi>e</mml:mi><mml:mi>r</mml:mi><mml:mi>l</mml:mi><mml:mi>a</mml:mi><mml:mi>p</mml:mi><mml:mo stretchy='false'>)</mml:mo></mml:mrow><mml:mrow><mml:mi>s</mml:mi><mml:mi>u</mml:mi><mml:mi>m</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>o</mml:mi><mml:mi>f</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>p</mml:mi><mml:mi>i</mml:mi><mml:mi>x</mml:mi><mml:mi>e</mml:mi><mml:mi>l</mml:mi><mml:mi>s</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>m</mml:mi><mml:mi>b</mml:mi><mml:mi>i</mml:mi><mml:mi>n</mml:mi><mml:mi>e</mml:mi><mml:mi>d</mml:mi></mml:mrow></mml:mfrac></mml:math></disp-formula>
<list list-type="bullet">
<list-item><p>Pixel accuracy: It is another metric for evaluating semantic segmentation, defined as the percentage of pixels that are correctly classified, i.e., the ratio of correctly classified pixels to the total number of pixels. It can give misleading results when classes are imbalanced. For a binary image, it is defined as:</p></list-item>
</list>
<disp-formula id="E3"><mml:math id="M3"><mml:mrow><mml:mtext>Pixel Accuracy</mml:mtext><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mtext>True Positive</mml:mtext><mml:mo>+</mml:mo><mml:mtext>True Negative</mml:mtext></mml:mrow><mml:mrow><mml:mtext>True Positive</mml:mtext><mml:mo>+</mml:mo><mml:mtext>True Negative</mml:mtext><mml:mo>+</mml:mo><mml:mtext>False Positive</mml:mtext><mml:mo>+</mml:mo><mml:mtext>False Negative</mml:mtext></mml:mrow></mml:mfrac></mml:mrow></mml:math></disp-formula>
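<p>As a minimal illustrative sketch of the pixel accuracy formula above, assuming binary segmentation masks flattened to 0/1 pixel labels of equal, non-zero length (the function name is our own):</p>

```python
# Pixel accuracy for a binary segmentation mask: correctly labeled
# pixels (true positives plus true negatives) over all pixels.

def pixel_accuracy(pred_mask, gt_mask):
    """Both masks are flat lists of 0/1 pixel labels of equal length."""
    correct = sum(1 for p, g in zip(pred_mask, gt_mask) if p == g)
    return correct / len(gt_mask)

# 4 of the 5 pixels agree, so the pixel accuracy is 0.8.
acc = pixel_accuracy([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
```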
</sec>
<sec>
<title>5.3. Classification task</title>
<p>To evaluate the ML model for the classification task, the following metrics are widely used in the literature.</p>
<list list-type="bullet">
<list-item><p>Sensitivity: also known as recall, it is the proportion of actual positive samples that are correctly identified as positive. It indicates what percentage of truly disease-affected patients the model detects. Mathematically, it is defined as:</p></list-item>
</list>
<disp-formula id="E4"><mml:math id="M4"><mml:mtable columnalign="left"><mml:mtr><mml:mtd><mml:mrow><mml:mi>S</mml:mi><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>y</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>R</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>l</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>T</mml:mi><mml:mi>r</mml:mi><mml:mi>u</mml:mi><mml:mi>e</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mi>P</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi><mml:mi>r</mml:mi><mml:mi>u</mml:mi><mml:mi>e</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mi>P</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>s</mml:mi><mml:mi>e</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mi>N</mml:mi><mml:mi>e</mml:mi><mml:mi>g</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi></mml:mrow></mml:mfrac><mml:mtext>&#x000A0;</mml:mtext></mml:mrow></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<list list-type="bullet">
<list-item><p>Specificity: also known as the true negative rate, it is the proportion of actual negative samples that are correctly identified as negative. It indicates what percentage of disease-negative patients the model correctly rules out. Mathematically, it can be defined as:</p></list-item>
</list>
<disp-formula id="E5"><mml:math id="M5"><mml:mtable columnalign='left'><mml:mtr><mml:mtd><mml:mi>S</mml:mi><mml:mi>p</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>f</mml:mi><mml:mi>i</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>y</mml:mi><mml:mo stretchy='false'>(</mml:mo><mml:mi>T</mml:mi><mml:mi>r</mml:mi><mml:mi>u</mml:mi><mml:mi>e</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mi>N</mml:mi><mml:mi>e</mml:mi><mml:mi>g</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi><mml:mi>e</mml:mi><mml:mtext>&#x000A0;</mml:mtext><mml:mi>R</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:mi>e</mml:mi><mml:mo stretchy='false'>)</mml:mo><mml:mo>=</mml:mo></mml:mtd></mml:mtr><mml:mtr><mml:mtd><mml:mfrac><mml:mrow><mml:mi>T</mml:mi><mml:mi>r</mml:mi><mml:mi>u</mml:mi><mml:mi>e</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>N</mml:mi><mml:mi>e</mml:mi><mml:mi>g</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi><mml:mi>r</mml:mi><mml:mi>u</mml:mi><mml:mi>e</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>N</mml:mi><mml:mi>e</mml:mi><mml:mi>g</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>s</mml:mi><mml:mi>e</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>P</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi><mml:mi>e</mml:mi><mml:mi>s</mml:mi></mml:mrow></mml:mfrac></mml:mtd></mml:mtr></mml:mtable></mml:math></disp-formula>
<list list-type="bullet">
<list-item><p>Accuracy: It is defined as the fraction of correctly classified samples out of the total number of samples, i.e., how often the model predicts the class labels correctly. However, it can be misleading on imbalanced data, so class-wise accuracy is often preferred over overall accuracy.</p></list-item>
</list>
<disp-formula id="E6"><mml:math id="M6"><mml:mrow><mml:mtext>Accuracy</mml:mtext><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mtext>True Positives</mml:mtext><mml:mo>+</mml:mo><mml:mtext>True Negatives</mml:mtext></mml:mrow><mml:mrow><mml:mtext>True Positives</mml:mtext><mml:mo>+</mml:mo><mml:mtext>True Negatives</mml:mtext><mml:mo>+</mml:mo><mml:mtext>False Positives</mml:mtext><mml:mo>+</mml:mo><mml:mtext>False Negatives</mml:mtext></mml:mrow></mml:mfrac></mml:mrow></mml:math></disp-formula>
<list list-type="bullet">
<list-item><p>Precision: also known as positive predictive value, it is the fraction of predicted positive samples that are actually positive. It emphasizes how many of the samples predicted as positive (e.g., TB positive) truly have the disease. It is mainly used when false positives are of greater concern than false negatives. Mathematically, it is defined as:</p></list-item>
</list>
<disp-formula id="E7"><mml:math id="M7"><mml:mi>P</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>T</mml:mi><mml:mi>r</mml:mi><mml:mi>u</mml:mi><mml:mi>e</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>P</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi><mml:mi>r</mml:mi><mml:mi>u</mml:mi><mml:mi>e</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>P</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi><mml:mi>e</mml:mi><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>s</mml:mi><mml:mi>e</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>P</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:mfrac></mml:math></disp-formula>
<list list-type="bullet">
<list-item><p>F1-score: It is defined as the harmonic mean of precision and recall and reaches its maximum value when precision and recall are equal. It is especially useful when false positives and false negatives are of equal concern. Mathematically, it is defined as:</p></list-item>
</list>
<disp-formula id="E8"><mml:math id="M8"><mml:mi>F</mml:mi><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:mi>s</mml:mi><mml:mi>c</mml:mi><mml:mi>o</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mo>=</mml:mo><mml:mn>2</mml:mn><mml:mo>&#x02217;</mml:mo><mml:mfrac><mml:mrow><mml:mi>P</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi><mml:mo>&#x02217;</mml:mo><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>l</mml:mi></mml:mrow><mml:mrow><mml:mi>P</mml:mi><mml:mi>r</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>o</mml:mi><mml:mi>n</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mo>+</mml:mo><mml:mi>R</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>l</mml:mi></mml:mrow></mml:mfrac></mml:math></disp-formula>
<list list-type="bullet">
<list-item><p>AUC-ROC Curve: It measures the model&#x00027;s ability to separate negative-class samples from positive-class samples across different thresholds. For each threshold, the True Positive Rate (TPR) is plotted against the corresponding False Positive Rate (FPR). For example, it is not necessary to fix a single threshold such as 0.5 and classify a patient as disease-positive if the score is &#x0003E;0.5 and negative if it is &#x0003C;0.5; instead, a set of thresholds is swept to find an optimal operating point at which the model best separates positive and negative patients.</p></list-item>
</list>
<disp-formula id="E9"><mml:math id="M9"><mml:mi>T</mml:mi><mml:mi>P</mml:mi><mml:mi>R</mml:mi><mml:mo>=</mml:mo><mml:mi>S</mml:mi><mml:mi>e</mml:mi><mml:mi>n</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>y</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>T</mml:mi><mml:mi>r</mml:mi><mml:mi>u</mml:mi><mml:mi>e</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>P</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mi>T</mml:mi><mml:mi>r</mml:mi><mml:mi>u</mml:mi><mml:mi>e</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>P</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi><mml:mi>e</mml:mi><mml:mo>+</mml:mo><mml:mi>F</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>s</mml:mi><mml:mi>e</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>N</mml:mi><mml:mi>e</mml:mi><mml:mi>g</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:mfrac></mml:math></disp-formula>
<disp-formula id="E10"><mml:math id="M10"><mml:mi>F</mml:mi><mml:mi>P</mml:mi><mml:mi>R</mml:mi><mml:mo>=</mml:mo><mml:mn>1</mml:mn><mml:mo>&#x02212;</mml:mo><mml:mi>S</mml:mi><mml:mi>p</mml:mi><mml:mi>e</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>f</mml:mi><mml:mi>i</mml:mi><mml:mi>c</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>y</mml:mi><mml:mo>=</mml:mo><mml:mfrac><mml:mrow><mml:mi>F</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>s</mml:mi><mml:mi>e</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>P</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi><mml:mi>e</mml:mi></mml:mrow><mml:mrow><mml:mi>F</mml:mi><mml:mi>a</mml:mi><mml:mi>l</mml:mi><mml:mi>s</mml:mi><mml:mi>e</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>P</mml:mi><mml:mi>o</mml:mi><mml:mi>s</mml:mi><mml:mi>i</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi><mml:mi>e</mml:mi><mml:mo>+</mml:mo><mml:mi>T</mml:mi><mml:mi>r</mml:mi><mml:mi>u</mml:mi><mml:mi>e</mml:mi><mml:mtext>&#x0205F;</mml:mtext><mml:mi>N</mml:mi><mml:mi>e</mml:mi><mml:mi>g</mml:mi><mml:mi>a</mml:mi><mml:mi>t</mml:mi><mml:mi>i</mml:mi><mml:mi>v</mml:mi><mml:mi>e</mml:mi></mml:mrow></mml:mfrac></mml:math></disp-formula>
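<p>As a minimal illustration, the classification metrics above can be computed directly from confusion-matrix counts; the function name and the example counts below are illustrative only:</p>

```python
# Classification metrics from raw confusion-matrix counts, following
# the formulas above for a binary disease label.

def classification_metrics(tp, tn, fp, fn):
    """Return the metrics defined above as a dict, given counts of
    true positives, true negatives, false positives, false negatives."""
    sensitivity = tp / (tp + fn)           # recall / true positive rate
    specificity = tn / (tn + fp)           # true negative rate
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)             # positive predictive value
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    fpr = 1 - specificity                  # x-axis of the ROC curve
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "precision": precision,
            "f1": f1, "fpr": fpr}

# Example: 80 true positives, 90 true negatives, 10 false positives,
# and 20 false negatives.
m = classification_metrics(80, 90, 10, 20)
```

<p>Sweeping the decision threshold and recording (FPR, TPR) pairs from such counts traces out the ROC curve described above.</p>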
</sec>
<sec>
<title>5.4. Fairness metrics</title>
<p>DL models are black boxes and may behave differently across protected attributes such as age, gender, race, or socio-economic status. Fair or bias-free decisions show no affinity of the model toward any individual or subgroup in the population based on inherent characteristics. To evaluate whether a deep model exhibits disparities across subgroups, fairness metrics indicate whether its decisions are fair with respect to the protected attributes. They allow us to avoid ill-treatment of any subgroup after the model is deployed in real-world settings.</p>
<p>To assess the model performance for different protected attributes in the population, the following are a few fairness metrics used in the literature for measuring bias or assessing the fairness of AI Systems.</p>
<list list-type="bullet">
<list-item><p>Demographic parity: It requires that the probability of being classified with the favorable label be independent of group membership (protected or unprotected). It is also known as Statistical Parity (Zafar et al., <xref ref-type="bibr" rid="B233">2017</xref>). For a disease classification problem, demographic parity is satisfied if samples are assigned the favorable label at the same rate irrespective of whether the patient is male or female.</p></list-item>
<list-item><p>Equalized odds: It requires that both the false-positive and true-positive rates be the same for protected and unprotected groups. It is also known as Separation or Positive Rate Parity (Zafar et al., <xref ref-type="bibr" rid="B233">2017</xref>). For a disease classification problem, even if all the disease-positive patients in the training data are male and all the female samples are normal, equalized odds is satisfied if, at test time, the model classifies or misclassifies positive samples at the same rate irrespective of whether the patient is male or female.</p></list-item>
<list-item><p>Degree of bias: It is defined as the standard deviation of classification accuracy across different subgroups of a demographic group.</p></list-item>
<list-item><p>Disparate impact: It is the ratio of the probabilities of being classified with the favorable label between protected and unprotected groups; a ratio close to one indicates fairness. For instance, in a disease classification problem, a model that favors males over females exhibits disparate impact.</p></list-item>
<list-item><p>Predictive rate parity: It is defined as the fraction of correct positive predictions that is the same for protected and unprotected groups (Chouldechova, <xref ref-type="bibr" rid="B29">2017</xref>). For example, the predictive parity rate for the disease classification is achieved if the precision for both subgroups (e.g., male and female) is the same. Predictive rate parity is also known as predictive parity.</p></list-item>
<list-item><p>Equal opportunity: It requires the true positive rate to be the same between protected and unprotected groups (Hardt et al., <xref ref-type="bibr" rid="B63">2016</xref>). For example, in a disease classification problem where all the disease-positive patients in the training data are male and all the female samples are normal, equal opportunity is achieved if the model still predicts positive samples equally well irrespective of whether they are male or female (the protected attribute).</p></list-item>
<list-item><p>Treatment equality: It is satisfied if protected and unprotected groups have an equal ratio of false negatives to false positives (Berk et al., <xref ref-type="bibr" rid="B15">2021</xref>).</p></list-item>
<list-item><p>Individual fairness: It requires that similar individuals be treated similarly (Dwork et al., <xref ref-type="bibr" rid="B46">2012</xref>). For instance, individual fairness is satisfied if samples from two different individuals with the same disease severity are treated identically by the model during disease classification.</p></list-item>
<list-item><p>Counterfactual fairness: It considers a model to be fair for a particular individual or group if its prediction in the real world is the same as that in the counterfactual world where the individual(s) had belonged to a different demographic group. It provides a way to check the possible way to interpret the causes of bias and the impact of replacing only the sensitive attributes (Russell et al., <xref ref-type="bibr" rid="B171">2017</xref>).</p></list-item>
</list>
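<p>As an illustrative sketch (not a standard library implementation), two of the group fairness criteria above can be computed from per-sample predictions, labels, and a binary protected attribute; the function names and the 0/1 group encoding are our own assumptions, and both groups are assumed non-empty:</p>

```python
# Demographic parity gap: difference in favorable-label rates between
# the two groups. Equal opportunity gap: difference in true positive
# rates between the two groups. A gap of zero means the criterion holds.

def rate(preds, mask):
    """Mean prediction over the samples selected by the boolean mask."""
    sel = [p for p, m in zip(preds, mask) if m]
    return sum(sel) / len(sel)

def demographic_parity_gap(y_pred, group):
    g0 = [g == 0 for g in group]
    g1 = [g == 1 for g in group]
    return abs(rate(y_pred, g0) - rate(y_pred, g1))

def equal_opportunity_gap(y_pred, y_true, group):
    pos0 = [t == 1 and g == 0 for t, g in zip(y_true, group)]
    pos1 = [t == 1 and g == 1 for t, g in zip(y_true, group)]
    return abs(rate(y_pred, pos0) - rate(y_pred, pos1))
```

<p>The degree-of-bias metric above can be obtained analogously by computing per-subgroup accuracy and taking the standard deviation across subgroups.</p>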
</sec>
<sec>
<title>5.5. Report generation</title>
<p>To evaluate the report/caption generation for images, the following are the widely used evaluation metrics. All these metrics find the similarity (n-gram matching) solely between the ground truth and predicted captions without taking the image into consideration.</p>
<list list-type="bullet">
<list-item><p>BLEU: Bilingual Evaluation Understudy measures the quality of generated sentences through the similarity between the predicted and reference captions, based on the <italic>n-gram</italic> matching rule. Its value lies between 0 and 1 (Papineni et al., <xref ref-type="bibr" rid="B148">2002</xref>). It is based on the n-gram co-occurrence frequency between the reference and predicted captions.</p></list-item>
<list-item><p>METEOR: Metric for Evaluating Translation with Explicit Ordering calculates precision and recall for the query image caption and then takes their harmonic mean (Banerjee and Lavie, <xref ref-type="bibr" rid="B13">2005</xref>). Unlike BLEU, it performs word-to-word matching and calculates recall over exact word matches.</p></list-item>
<list-item><p>ROUGE-L: Recall-Oriented Understudy for Gisting Evaluation evaluates the co-occurrence of <italic>n-tuples</italic> and is used to measure the fluency of machine translation (Lin and Hovy, <xref ref-type="bibr" rid="B115">2003</xref>). It uses dynamic programming to find the longest common subsequence between the reference and predicted captions and uses it to compute a recall-based similarity between the two. The higher the ROUGE-L value, the better the model; however, it does not consider grammatical accuracy or the semantic level of the description.</p></list-item>
<list-item><p>CIDEr: Consensus-based Image Description Evaluation calculates the similarity between the reference and predicted captions by treating each sentence as a document. The cosine similarity between their term frequency-inverse document frequency (TF-IDF) vectors is calculated, and the final result is obtained by averaging the similarity over tuples of different lengths (Vedantam et al., <xref ref-type="bibr" rid="B210">2015</xref>).</p></list-item>
<list-item><p>SPICE: Semantic Propositional Image Caption Evaluation uses a graph-based semantic representation to encode the objects, attributes, and relationships in the description sentence and evaluates the description at the semantic level (Anderson et al., <xref ref-type="bibr" rid="B7">2016</xref>). It faces challenges with repetitive sentences; however, it produces scores with a high correlation with human judgment.</p></list-item>
</list>
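<p>As an illustrative sketch of the n-gram matching idea behind BLEU (not the full metric, which combines several n-gram orders with a brevity penalty), clipped n-gram precision can be computed as follows; the tokenized report snippets are hypothetical examples:</p>

```python
# Clipped n-gram precision: predicted n-grams that also occur in the
# reference, with each n-gram's count clipped by its reference count.
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def clipped_ngram_precision(reference, candidate, n):
    ref_counts = Counter(ngrams(reference, n))
    cand_counts = Counter(ngrams(candidate, n))
    overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
    total = sum(cand_counts.values())
    return overlap / total if total else 0.0

# Hypothetical reference and generated report sentences.
ref = "no acute cardiopulmonary abnormality".split()
cand = "no acute cardiopulmonary disease".split()
p1 = clipped_ngram_precision(ref, cand, 1)  # unigram precision
```

<p>Here three of the four candidate unigrams appear in the reference, so the unigram precision is 0.75; higher-order n-grams penalize word-order differences more strongly.</p>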
</sec>
</sec>
<sec id="s6">
<title>6. Open problems</title>
<p>Based on the literature review, here we present the open challenges in AI-based CXR analysis that require attention from the research community.</p>
<list list-type="bullet">
<list-item><p>Unavailability of data: Due to the lack of publicly available datasets for many lung diseases, such as the detection of Pneumoconiosis from CXRs, it is challenging to build large-scale models for different lung diseases. In addition, many datasets come from a few specific countries, such as the USA. To build generalizable models, it is important to create large-scale datasets with diversity.</p></list-item>
<list-item><p>Small sample size problem and interoperability: Much existing work relies on small, in-house collections of chest X-ray samples, whereas developing a robust and generalizable deep learning model requires a large amount of training data. The available datasets are very small compared to those for general object detection problems (for instance, the ImageNet dataset). Since scanners vary across locations, deep models need to be invariant to scanner- and site-specific characteristics rather than learning shortcuts from a specific portion of the dataset, particularly for datasets collected from different hospitals.</p></list-item>
<list-item><p>Multilabel and limited label problem: A chest X-ray of a patient suffering from Pneumoconiosis or TB may show multiple manifestations, such as nodules, emphysema, tissue scarring, and fibrosis, which results in a multilabel problem. On top of the limited accessible data, data labeling is also a challenge and requires detailed input from domain experts. Chest diseases mainly affect the lung fields; however, ground-truth masks for segmenting CXRs are scarce in the literature. Domain experts such as chest radiologists and pulmonologists must be consulted for data annotation and labeling, and collaboration with more hospitals, radiologists, and pulmonologists should be encouraged.</p></list-item>
<list-item><p>Low-quality images: The collected data may not always be of high quality. Samples also suffer from alignment problems, which sometimes need to be fixed. Handling noisy data poses another challenge for algorithm design. A robust AI-based pipeline is needed to handle noise and image registration for lung disease detection.</p></list-item>
<list-item><p>Lung disease correlation and co-occurrence: Pneumoconiosis and related diseases, such as TB, share similar pathology, often resulting in misdiagnosis. Two diseases can also co-occur in the same patient, for instance, Silicotuberculosis (silicosis and TB). A similar problem arises with pneumonia and its three variants: viral, bacterial, and COVID-19.</p></list-item>
<list-item><p>Trusted AI: Building trust in machine intelligence, especially for medical diagnoses, is crucial. Data bias among different demographics and sensors can result in inaccurate diagnostic decisions. Moreover, data privacy for accessing any patient data is of utmost priority. In addition, incorporating algorithmic explainability is a significant task to handle. Explainability in models can play an essential role in developing automated disease detection solutions to ease the workload in hospitals, decrease the chances of misdiagnoses, and encourage building trust in the diagnostic assistants. In particular, deep models face data bias and adversarial attacks in machine intelligence-based prediction. To harness the efficacy of deep models for automatic disease detection using CXRs, there is a need to build trustable systems with high fairness, interpretability and robustness.</p></list-item>
</list>
</sec>
<sec sec-type="conclusions" id="s7">
<title>7. Conclusion</title>
<p>CXR-based image analysis is used to detect diseases such as TB, Pneumonia, Pneumoconiosis, and COVID-19. This paper presents a detailed literature survey of AI-based CXR analysis tasks, such as enhancement, segmentation, detection, classification, and image and report generation, along with different models for detecting the associated diseases. We also summarize the datasets and metrics used in the literature as well as the open problems in this domain. It is our assertion that there is vast scope for improving automatic and efficient algorithm development for CXR-based image analysis. The advent of AI/ML techniques, particularly deep learning models, offers scope for responsible, interpretable, privacy-friendly digital assistance for thoracic disorders and for addressing several open problems and challenges. Furthermore, novel CXR datasets must be prepared and released to encourage the development of novel approaches for various disorders.</p>
</sec>
<sec sec-type="author-contributions" id="s8">
<title>Author contributions</title>
<p>YA conducted the literature review. YA, RS, and MV wrote the paper. All authors finalized the manuscript.</p>
</sec>
</body>
<back>
<sec sec-type="funding-information" id="s9">
<title>Funding</title>
<p>This research was supported by a grant from MIETY, the Government of India. MV is partially supported through the Swarnajayanti Fellowship.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s10">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<fn-group>
<fn id="fn0001"><p><sup>1</sup><ext-link ext-link-type="uri" xlink:href="https://www.healthimages.com/mri-vs-ct-scan/">https://www.healthimages.com/mri-vs-ct-scan/</ext-link></p></fn>
<fn id="fn0002"><p><sup>2</sup><ext-link ext-link-type="uri" xlink:href="https://www.nibib.nih.gov/science-education/science-topics/magnetic-resonance-imaging-mri">https://www.nibib.nih.gov/science-education/science-topics/magnetic-resonance-imaging-mri</ext-link></p></fn>
<fn id="fn0003"><p><sup>3</sup><ext-link ext-link-type="uri" xlink:href="https://radiopaedia.org/articles/chest-radiograph">https://radiopaedia.org/articles/chest-radiograph</ext-link></p></fn>
<fn id="fn0004"><p><sup>4</sup><ext-link ext-link-type="uri" xlink:href="https://www.radiologymasterclass.co.uk/tutorials/chest/chest_quality/chest_xray_quality_rotation">https://www.radiologymasterclass.co.uk/tutorials/chest/chest_quality/chest_xray_quality_rotation</ext-link></p></fn>
</fn-group>
<ref-list>
<title>References</title>
<ref id="B1">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abbas</surname> <given-names>A.</given-names></name> <name><surname>Abdelsamea</surname> <given-names>M. M.</given-names></name> <name><surname>Gaber</surname> <given-names>M. M.</given-names></name></person-group> (<year>2021</year>). <article-title>Classification of covid-19 in chest x-ray images using detrac deep convolutional neural network</article-title>. <source>Appl. Intell</source>. <volume>51</volume>, <fpage>854</fpage>&#x02013;<lpage>864</lpage>. <pub-id pub-id-type="doi">10.1101/2020.03.30.20047456</pub-id><pub-id pub-id-type="pmid">34764548</pub-id></citation></ref>
<ref id="B2">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abdullah-Al-Wadud</surname> <given-names>M.</given-names></name> <name><surname>Kabir</surname> <given-names>M. H.</given-names></name> <name><surname>Dewan</surname> <given-names>M. A. A.</given-names></name> <name><surname>Chae</surname> <given-names>O.</given-names></name></person-group> (<year>2007</year>). <article-title>A dynamic histogram equalization for image contrast enhancement</article-title>. <source>IEEE Trans. Consum. Electron</source>. <volume>53</volume>, <fpage>593</fpage>&#x02013;<lpage>600</lpage>. <pub-id pub-id-type="doi">10.1109/TCE.2007.381734</pub-id></citation>
</ref>
<ref id="B3">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Abin</surname> <given-names>D.</given-names></name> <name><surname>Thepade</surname> <given-names>S. D.</given-names></name> <name><surname>Mankar</surname> <given-names>H.</given-names></name> <name><surname>Raut</surname> <given-names>S.</given-names></name> <name><surname>Yadav</surname> <given-names>A.</given-names></name></person-group> (<year>2022</year>). <article-title>&#x0201C;Blending of contrast enhancement techniques for chest x-ray pneumonia images,&#x0201D;</article-title> in <source>2022 International Conference on Electronics and Renewable Systems (ICEARS)</source>, 981&#x02013;985. <pub-id pub-id-type="doi">10.1109/ICEARS53579.2022.9752286</pub-id></citation>
</ref>
<ref id="B4">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ahmed</surname> <given-names>F.</given-names></name> <name><surname>Hossain</surname> <given-names>E.</given-names></name></person-group> (<year>2013</year>). <article-title>Automated facial expression recognition using gradient-based ternary texture patterns</article-title>. <source>Chin. J. Eng</source>. 2013, 831747. <pub-id pub-id-type="doi">10.1155/2013/831747</pub-id></citation>
</ref>
<ref id="B5">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Alfadhli</surname> <given-names>F. H. O.</given-names></name> <name><surname>Mand</surname> <given-names>A. A.</given-names></name> <name><surname>Sayeed</surname> <given-names>M. S.</given-names></name> <name><surname>Sim</surname> <given-names>K. S.</given-names></name> <name><surname>Al-Shabi</surname> <given-names>M.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;Classification of tuberculosis with surf spatial pyramid features,&#x0201D;</article-title> in <source>2017 International Conference on Robotics, Automation and Sciences (ICORAS)</source> (<publisher-loc>Melaka</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>5</lpage>.</citation>
</ref>
<ref id="B6">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Al-Rakhami</surname> <given-names>M. S.</given-names></name> <name><surname>Islam</surname> <given-names>M. M.</given-names></name> <name><surname>Islam</surname> <given-names>M. Z.</given-names></name> <name><surname>Asraf</surname> <given-names>A.</given-names></name> <name><surname>Sodhro</surname> <given-names>A. H.</given-names></name> <name><surname>Ding</surname> <given-names>W.</given-names></name></person-group> (<year>2021</year>). <article-title>Diagnosis of COVID-19 from x-rays using combined cnn-rnn architecture with transfer learning</article-title>. <source>MedRxiv</source>, 2020&#x02013;08. <pub-id pub-id-type="doi">10.1101/2020.08.24.20181339</pub-id><pub-id pub-id-type="pmid">36772394</pub-id></citation></ref>
<ref id="B7">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Anderson</surname> <given-names>P.</given-names></name> <name><surname>Fernando</surname> <given-names>B.</given-names></name> <name><surname>Johnson</surname> <given-names>M.</given-names></name> <name><surname>Gould</surname> <given-names>S.</given-names></name></person-group> (<year>2016</year>). <article-title>&#x0201C;SPICE: semantic propositional image caption evaluation,&#x0201D;</article-title> in <source>Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11&#x02013;14, 2016, Proceedings, Part V, Vol. 9909</source> (<publisher-loc>Springer</publisher-loc>), <fpage>382</fpage>&#x02013;<lpage>398</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-46454-1_24</pub-id></citation>
</ref>
<ref id="B8">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Annangi</surname> <given-names>P.</given-names></name> <name><surname>Thiruvenkadam</surname> <given-names>S.</given-names></name> <name><surname>Raja</surname> <given-names>A.</given-names></name> <name><surname>Xu</surname> <given-names>H.</given-names></name> <name><surname>Sun</surname> <given-names>X.</given-names></name> <name><surname>Mao</surname> <given-names>L.</given-names></name></person-group> (<year>2010</year>). <article-title>&#x0201C;A region based active contour method for x-ray lung segmentation using prior shape and low level features,&#x0201D;</article-title> in <source>2010 IEEE International Symposium on Biomedical Imaging: From Nano to Macro</source> (<publisher-loc>Rotterdam</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>892</fpage>&#x02013;<lpage>895</lpage>.</citation>
</ref>
<ref id="B9">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Anuar</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). SIIM-ACR Pneumothorax Segmentation. Available online at: <ext-link ext-link-type="uri" xlink:href="https://github.com/sneddy/pneumothorax-segmentation">https://github.com/sneddy/pneumothorax-segmentation</ext-link></citation>
</ref>
<ref id="B10">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ashizawa</surname> <given-names>K.</given-names></name> <name><surname>MacMahon</surname> <given-names>H.</given-names></name> <name><surname>Ishida</surname> <given-names>T.</given-names></name> <name><surname>Nakamura</surname> <given-names>K.</given-names></name> <name><surname>Vyborny</surname> <given-names>C. J.</given-names></name> <name><surname>Katsuragawa</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>1999</year>). <article-title>Effect of an artificial neural network on radiologists&#x00027; performance in the differential diagnosis of interstitial lung disease using chest radiographs</article-title>. <source>AJR Am. J. Roentgenol</source>. <volume>172</volume>, <fpage>1311</fpage>&#x02013;<lpage>1315</lpage>. <pub-id pub-id-type="doi">10.2214/ajr.172.5.10227508</pub-id><pub-id pub-id-type="pmid">10227508</pub-id></citation></ref>
<ref id="B11">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ayaz</surname> <given-names>M.</given-names></name> <name><surname>Shaukat</surname> <given-names>F.</given-names></name> <name><surname>Raja</surname> <given-names>G.</given-names></name></person-group> (<year>2021</year>). <article-title>Ensemble learning based automatic detection of tuberculosis in chest x-ray images using hybrid feature descriptors</article-title>. <source>Phys. Eng. Sci. Med</source>. <volume>44</volume>, <fpage>183</fpage>&#x02013;<lpage>194</lpage>. <pub-id pub-id-type="doi">10.1007/s13246-020-00966-0</pub-id><pub-id pub-id-type="pmid">33459996</pub-id></citation></ref>
<ref id="B12">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Azad</surname> <given-names>R.</given-names></name> <name><surname>Asadi-Aghbolaghi</surname> <given-names>M.</given-names></name> <name><surname>Fathy</surname> <given-names>M.</given-names></name> <name><surname>Escalera</surname> <given-names>S.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;Bi-directional ConvLSTM U-Net with densely connected convolutions,&#x0201D;</article-title> in <source>Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops</source> (<publisher-loc>Seoul</publisher-loc>: <publisher-name>IEEE</publisher-name>).</citation>
</ref>
<ref id="B13">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Banerjee</surname> <given-names>S.</given-names></name> <name><surname>Lavie</surname> <given-names>A.</given-names></name></person-group> (<year>2005</year>). <article-title>&#x0201C;METEOR: an automatic metric for MT evaluation with improved correlation with human judgments,&#x0201D;</article-title> in <source>Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization&#x00040;ACL 2005</source> (<publisher-loc>Ann Arbor, MI</publisher-loc>: <publisher-name>Association for Computational Linguistics</publisher-name>), <fpage>65</fpage>&#x02013;<lpage>72</lpage>.</citation>
</ref>
<ref id="B14">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bay</surname> <given-names>H.</given-names></name> <name><surname>Ess</surname> <given-names>A.</given-names></name> <name><surname>Tuytelaars</surname> <given-names>T.</given-names></name> <name><surname>Gool</surname> <given-names>L. V.</given-names></name></person-group> (<year>2008</year>). <article-title>Speeded-up robust features (SURF)</article-title>. <source>Comput. Vis. Image Underst.</source> <volume>110</volume>, <fpage>346</fpage>&#x02013;<lpage>359</lpage>. <pub-id pub-id-type="doi">10.1016/j.cviu.2007.09.014</pub-id></citation>
</ref>
<ref id="B15">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Berk</surname> <given-names>R.</given-names></name> <name><surname>Heidari</surname> <given-names>H.</given-names></name> <name><surname>Jabbari</surname> <given-names>S.</given-names></name> <name><surname>Kearns</surname> <given-names>M.</given-names></name> <name><surname>Roth</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>Fairness in criminal justice risk assessments: the state of the art</article-title>. <source>Sociol. Methods Res</source>. <volume>50</volume>, <fpage>3</fpage>&#x02013;<lpage>44</lpage>. <pub-id pub-id-type="doi">10.1177/0049124118782533</pub-id></citation>
</ref>
<ref id="B16">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Blain</surname> <given-names>M.</given-names></name> <name><surname>Kassin</surname> <given-names>M. T.</given-names></name> <name><surname>Varble</surname> <given-names>N.</given-names></name> <name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Xu</surname> <given-names>Z.</given-names></name> <name><surname>Xu</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Determination of disease severity in COVID-19 patients using deep learning in chest x-ray images</article-title>. <source>Diagn. Intervent. Radiol</source>. 27, 20. <pub-id pub-id-type="doi">10.5152/dir.2020.20205</pub-id><pub-id pub-id-type="pmid">32815519</pub-id></citation></ref>
<ref id="B17">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Boykov</surname> <given-names>Y.</given-names></name> <name><surname>Funka-Lea</surname> <given-names>G.</given-names></name></person-group> (<year>2006</year>). <article-title>Graph cuts and efficient N-D image segmentation</article-title>. <source>Int. J. Comput Vis</source>. <volume>70</volume>, <fpage>109</fpage>&#x02013;<lpage>131</lpage>. <pub-id pub-id-type="doi">10.1007/s11263-006-7934-5</pub-id></citation>
</ref>
<ref id="B18">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Bustos</surname> <given-names>A.</given-names></name> <name><surname>Pertusa</surname> <given-names>A.</given-names></name> <name><surname>Salinas</surname> <given-names>J.-M.</given-names></name> <name><surname>de la Iglesia-Vay&#x000E1;</surname> <given-names>M.</given-names></name></person-group> (<year>2020</year>). <article-title>PadChest: a large chest x-ray image dataset with multi-label annotated reports</article-title>. <source>Med. Image Anal</source>. 66, 101797. <pub-id pub-id-type="doi">10.1016/j.media.2020.101797</pub-id><pub-id pub-id-type="pmid">32877839</pub-id></citation></ref>
<ref id="B19">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Candemir</surname> <given-names>S.</given-names></name> <name><surname>Jaeger</surname> <given-names>S.</given-names></name> <name><surname>Palaniappan</surname> <given-names>K.</given-names></name> <name><surname>Musco</surname> <given-names>J. P.</given-names></name> <name><surname>Singh</surname> <given-names>R. K.</given-names></name> <name><surname>Xue</surname> <given-names>Z.</given-names></name> <etal/></person-group>. (<year>2013</year>). <article-title>Lung segmentation in chest radiographs using anatomical atlases with nonrigid registration</article-title>. <source>IEEE Trans. Med. Imaging</source> <volume>33</volume>, <fpage>577</fpage>&#x02013;<lpage>590</lpage>. <pub-id pub-id-type="doi">10.1109/TMI.2013.2290491</pub-id><pub-id pub-id-type="pmid">24239990</pub-id></citation></ref>
<ref id="B20">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cao</surname> <given-names>F.</given-names></name> <name><surname>Zhao</surname> <given-names>H.</given-names></name></person-group> (<year>2021</year>). <article-title>Automatic lung segmentation algorithm on chest x-ray images based on fusion variational auto-encoder and three-terminal attention mechanism</article-title>. <source>Symmetry</source> <volume>13</volume>, <fpage>814</fpage>. <pub-id pub-id-type="doi">10.3390/sym13050814</pub-id></citation>
</ref>
<ref id="B21">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chandalia</surname> <given-names>A.</given-names></name> <name><surname>Gupta</surname> <given-names>H.</given-names></name></person-group> (<year>2022</year>). <source>Deep Learning Method to Detect Chest X-ray or CT Scan Images Based on Hybrid YOLO Model</source>. Indian Patent Application No. IN202223019813.</citation>
</ref>
<ref id="B22">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chandra</surname> <given-names>T. B.</given-names></name> <name><surname>Verma</surname> <given-names>K.</given-names></name> <name><surname>Singh</surname> <given-names>B. K.</given-names></name> <name><surname>Jain</surname> <given-names>D.</given-names></name> <name><surname>Netam</surname> <given-names>S. S.</given-names></name></person-group> (<year>2020</year>). <article-title>Automatic detection of tuberculosis related abnormalities in chest x-ray images using hierarchical feature extraction scheme</article-title>. <source>Expert. Syst. Appl</source>. 158, 113514. <pub-id pub-id-type="doi">10.1016/j.eswa.2020.113514</pub-id></citation>
</ref>
<ref id="B23">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chang-soo</surname> <given-names>P.</given-names></name></person-group> (<year>2021</year>). <source>Apparatus for Diagnosis of Chest X-ray Employing Artificial Intelligence</source>. Korean Patent No. KR20210048010A.</citation>
</ref>
<ref id="B24">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chattopadhay</surname> <given-names>A.</given-names></name> <name><surname>Sarkar</surname> <given-names>A.</given-names></name> <name><surname>Howlader</surname> <given-names>P.</given-names></name> <name><surname>Balasubramanian</surname> <given-names>V. N.</given-names></name></person-group> (<year>2018</year>). <article-title>&#x0201C;Grad-CAM&#x0002B;&#x0002B;: generalized gradient-based visual explanations for deep convolutional networks,&#x0201D;</article-title> in <source>2018 IEEE Winter Conference on Applications of Computer Vision (WACV)</source> (<publisher-loc>Lake Tahoe, NV</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>839</fpage>&#x02013;<lpage>847</lpage>.<pub-id pub-id-type="pmid">36050386</pub-id></citation></ref>
<ref id="B25">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>L.-C.</given-names></name> <name><surname>Papandreou</surname> <given-names>G.</given-names></name> <name><surname>Kokkinos</surname> <given-names>I.</given-names></name> <name><surname>Murphy</surname> <given-names>K.</given-names></name> <name><surname>Yuille</surname> <given-names>A. L.</given-names></name></person-group> (<year>2017a</year>). <article-title>DeepLab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs</article-title>. <source>IEEE Trans. Pattern Anal. Mach. Intell</source>. <volume>40</volume>, <fpage>834</fpage>&#x02013;<lpage>848</lpage>. <pub-id pub-id-type="doi">10.1109/TPAMI.2017.2699184</pub-id><pub-id pub-id-type="pmid">28463186</pub-id></citation></ref>
<ref id="B26">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>L.-C.</given-names></name> <name><surname>Papandreou</surname> <given-names>G.</given-names></name> <name><surname>Schroff</surname> <given-names>F.</given-names></name> <name><surname>Adam</surname> <given-names>H.</given-names></name></person-group> (<year>2017b</year>). <article-title>Rethinking atrous convolution for semantic image segmentation</article-title>. <source>arXiv preprint</source> arXiv:1706.05587. <pub-id pub-id-type="doi">10.48550/arXiv.1706.05587</pub-id></citation>
</ref>
<ref id="B27">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chen</surname> <given-names>S.-D.</given-names></name> <name><surname>Ramli</surname> <given-names>A. R.</given-names></name></person-group> (<year>2003</year>). <article-title>Contrast enhancement using recursive mean-separate histogram equalization for scalable brightness preservation</article-title>. <source>IEEE Trans. Consum. Electron</source>. <volume>49</volume>, <fpage>1301</fpage>&#x02013;<lpage>1309</lpage>. <pub-id pub-id-type="doi">10.1109/TCE.2003.1261233</pub-id></citation>
</ref>
<ref id="B28">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cho</surname> <given-names>Y.</given-names></name> <name><surname>Kim</surname> <given-names>Y.-G.</given-names></name> <name><surname>Lee</surname> <given-names>S. M.</given-names></name> <name><surname>Seo</surname> <given-names>J. B.</given-names></name> <name><surname>Kim</surname> <given-names>N.</given-names></name></person-group> (<year>2020</year>). <article-title>Reproducibility of abnormality detection on chest radiographs using convolutional neural network in paired radiographs obtained within a short-term interval</article-title>. <source>Sci. Rep</source>. <volume>10</volume>, <fpage>1</fpage>&#x02013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.1038/s41598-020-74626-4</pub-id><pub-id pub-id-type="pmid">33060837</pub-id></citation></ref>
<ref id="B29">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chouldechova</surname> <given-names>A.</given-names></name></person-group> (<year>2017</year>). <article-title>Fair prediction with disparate impact: a study of bias in recidivism prediction instruments</article-title>. <source>Big Data</source> <volume>5</volume>, <fpage>153</fpage>&#x02013;<lpage>163</lpage>. <pub-id pub-id-type="doi">10.1089/big.2016.0047</pub-id><pub-id pub-id-type="pmid">28632438</pub-id></citation></ref>
<ref id="B30">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Chowdhury</surname> <given-names>M. E. H.</given-names></name> <name><surname>Rahman</surname> <given-names>T.</given-names></name> <name><surname>Khandakar</surname> <given-names>A.</given-names></name> <name><surname>Mazhar</surname> <given-names>R.</given-names></name> <name><surname>Kadir</surname> <given-names>M. A.</given-names></name> <name><surname>Mahbub</surname> <given-names>Z. B.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Can AI help in screening viral and COVID-19 pneumonia?</article-title> <source>IEEE Access</source> <volume>8</volume>, <fpage>132665</fpage>&#x02013;<lpage>132676</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2020.3010287</pub-id></citation>
</ref>
<ref id="B31">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Clarke</surname> <given-names>L. P.</given-names></name> <name><surname>Qian</surname> <given-names>W.</given-names></name> <name><surname>Mao</surname> <given-names>F.</given-names></name></person-group> (<year>2022</year>). <source>Computer-Assisted Method and Apparatus for the Detection of Lung Nodules</source>. U.S. Patent No. US20220180514.</citation>
</ref>
<ref id="B32">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cohen</surname> <given-names>J. P.</given-names></name> <name><surname>Morrison</surname> <given-names>P.</given-names></name> <name><surname>Dao</surname> <given-names>L.</given-names></name> <name><surname>Roth</surname> <given-names>K.</given-names></name> <name><surname>Duong</surname> <given-names>T. Q.</given-names></name> <name><surname>Ghassemi</surname> <given-names>M.</given-names></name></person-group> (<year>2020</year>). <article-title>COVID-19 image data collection: prospective predictions are the future</article-title>. <source>arXiv preprint</source> arXiv:2006.11988. <pub-id pub-id-type="doi">10.48550/arXiv.2006.11988</pub-id></citation>
</ref>
<ref id="B33">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cootes</surname> <given-names>T. F.</given-names></name> <name><surname>Edwards</surname> <given-names>G. J.</given-names></name> <name><surname>Taylor</surname> <given-names>C. J.</given-names></name></person-group> (<year>2001</year>). <article-title>Active appearance models</article-title>. <source>IEEE Trans. Pattern Anal. Mach. Intell</source>. <volume>23</volume>, <fpage>681</fpage>&#x02013;<lpage>685</lpage>. <pub-id pub-id-type="doi">10.1109/34.927467</pub-id></citation>
</ref>
<ref id="B34">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Cootes</surname> <given-names>T. F.</given-names></name> <name><surname>Hill</surname> <given-names>A.</given-names></name> <name><surname>Taylor</surname> <given-names>C. J.</given-names></name> <name><surname>Haslam</surname> <given-names>J.</given-names></name></person-group> (<year>1994</year>). <article-title>Use of active shape models for locating structures in medical images</article-title>. <source>Image Vis. Comput</source>. <volume>12</volume>, <fpage>355</fpage>&#x02013;<lpage>365</lpage>. <pub-id pub-id-type="doi">10.1016/0262-8856(94)90060-4</pub-id></citation>
</ref>
<ref id="B35">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dalal</surname> <given-names>N.</given-names></name> <name><surname>Triggs</surname> <given-names>B.</given-names></name></person-group> (<year>2005</year>). <article-title>&#x0201C;Histograms of oriented gradients for human detection,&#x0201D;</article-title> in <source>2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR&#x00027;05), Vol. 1</source> (<publisher-loc>San Diego, CA</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>886</fpage>&#x02013;<lpage>893</lpage>.</citation>
</ref>
<ref id="B36">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Das</surname> <given-names>D.</given-names></name> <name><surname>Santosh</surname> <given-names>K.</given-names></name> <name><surname>Pal</surname> <given-names>U.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;Inception-based deep learning architecture for tuberculosis screening using chest x-rays,&#x0201D;</article-title> in <source>2020 25th International Conference on Pattern Recognition (ICPR)</source> (<publisher-loc>Milan</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>3612</fpage>&#x02013;<lpage>3619</lpage>.</citation>
</ref>
<ref id="B37">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dasanayaka</surname> <given-names>C.</given-names></name> <name><surname>Dissanayake</surname> <given-names>M. B.</given-names></name></person-group> (<year>2021</year>). <article-title>Deep learning methods for screening pulmonary tuberculosis using chest x-rays</article-title>. <source>Comput. Methods Biomech. Biomed. Eng</source>. <volume>9</volume>, <fpage>39</fpage>&#x02013;<lpage>49</lpage>. <pub-id pub-id-type="doi">10.1080/21681163.2020.1808532</pub-id><pub-id pub-id-type="pmid">31479448</pub-id></citation></ref>
<ref id="B38">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Demner-Fushman</surname> <given-names>D.</given-names></name> <name><surname>Antani</surname> <given-names>S.</given-names></name> <name><surname>Simpson</surname> <given-names>M.</given-names></name> <name><surname>Thoma</surname> <given-names>G. R.</given-names></name></person-group> (<year>2012</year>). <article-title>Design and development of a multimodal biomedical information retrieval system</article-title>. <source>J. Comput. Sci. Eng</source>. <volume>6</volume>, <fpage>168</fpage>&#x02013;<lpage>177</lpage>. <pub-id pub-id-type="doi">10.5626/JCSE.2012.6.2.168</pub-id></citation>
</ref>
<ref id="B39">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Demner-Fushman</surname> <given-names>D.</given-names></name> <name><surname>Kohli</surname> <given-names>M. D.</given-names></name> <name><surname>Rosenman</surname> <given-names>M. B.</given-names></name> <name><surname>Shooshan</surname> <given-names>S. E.</given-names></name> <name><surname>Rodriguez</surname> <given-names>L.</given-names></name> <name><surname>Antani</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2016</year>). <article-title>Preparing a collection of radiology examinations for distribution and retrieval</article-title>. <source>J. Am. Med. Inform. Assoc</source>. <volume>23</volume>, <fpage>304</fpage>&#x02013;<lpage>310</lpage>. <pub-id pub-id-type="doi">10.1093/jamia/ocv080</pub-id><pub-id pub-id-type="pmid">26133894</pub-id></citation></ref>
<ref id="B40">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Deng</surname> <given-names>J.</given-names></name> <name><surname>Dong</surname> <given-names>W.</given-names></name> <name><surname>Socher</surname> <given-names>R.</given-names></name> <name><surname>Li</surname> <given-names>L.-J.</given-names></name> <name><surname>Li</surname> <given-names>K.</given-names></name> <name><surname>Fei-Fei</surname> <given-names>L.</given-names></name></person-group> (<year>2009</year>). <article-title>&#x0201C;ImageNet: a large-scale hierarchical image database,&#x0201D;</article-title> in <source>2009 IEEE Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Miami, FL</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>248</fpage>&#x02013;<lpage>255</lpage>.<pub-id pub-id-type="pmid">26886976</pub-id></citation></ref>
<ref id="B41">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Devnath</surname> <given-names>L.</given-names></name> <name><surname>Luo</surname> <given-names>S.</given-names></name> <name><surname>Summons</surname> <given-names>P.</given-names></name> <name><surname>Wang</surname> <given-names>D.</given-names></name></person-group> (<year>2021</year>). <article-title>Automated detection of pneumoconiosis with multilevel deep features learned from chest x-ray radiographs</article-title>. <source>Comput. Biol. Med</source>. 129, 104125. <pub-id pub-id-type="doi">10.1016/j.compbiomed.2020.104125</pub-id><pub-id pub-id-type="pmid">33310394</pub-id></citation></ref>
<ref id="B42">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Doi</surname> <given-names>K.</given-names></name> <name><surname>Aoyama</surname> <given-names>M.</given-names></name></person-group> (<year>2002</year>). <article-title>Automated computerized scheme for distinction between benign and malignant solitary pulmonary nodules on chest images</article-title>. <source>Med. Phys</source>. <volume>29</volume>, <fpage>701</fpage>&#x02013;<lpage>708</lpage>. <pub-id pub-id-type="doi">10.1118/1.1469630</pub-id><pub-id pub-id-type="pmid">12033565</pub-id></citation></ref>
<ref id="B43">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dosovitskiy</surname> <given-names>A.</given-names></name> <name><surname>Beyer</surname> <given-names>L.</given-names></name> <name><surname>Kolesnikov</surname> <given-names>A.</given-names></name> <name><surname>Weissenborn</surname> <given-names>D.</given-names></name> <name><surname>Zhai</surname> <given-names>X.</given-names></name> <name><surname>Unterthiner</surname> <given-names>T.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>An image is worth 16x16 words: transformers for image recognition at scale</article-title>. <source>arXiv preprint</source> arXiv:2010.11929. <pub-id pub-id-type="doi">10.48550/arXiv.2010.11929</pub-id></citation>
</ref>
<ref id="B44">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Duong</surname> <given-names>L. T.</given-names></name> <name><surname>Le</surname> <given-names>N. H.</given-names></name> <name><surname>Tran</surname> <given-names>T. B.</given-names></name> <name><surname>Ngo</surname> <given-names>V. M.</given-names></name> <name><surname>Nguyen</surname> <given-names>P. T.</given-names></name></person-group> (<year>2021</year>). <article-title>Detection of tuberculosis from chest x-ray images: boosting the performance with vision transformer and transfer learning</article-title>. <source>Expert. Syst. Appl</source>. 184, 115519. <pub-id pub-id-type="doi">10.1016/j.eswa.2021.115519</pub-id></citation>
</ref>
<ref id="B45">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Duran-Lopez</surname> <given-names>L.</given-names></name> <name><surname>Dominguez-Morales</surname> <given-names>J. P.</given-names></name> <name><surname>Corral-Jaime</surname> <given-names>J.</given-names></name> <name><surname>Vicente-Diaz</surname> <given-names>S.</given-names></name> <name><surname>Linares-Barranco</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>COVID-XNet: a custom deep learning system to diagnose and locate COVID-19 in chest x-ray images</article-title>. <source>Appl. Sci</source>. 10, 5683. <pub-id pub-id-type="doi">10.3390/app10165683</pub-id></citation>
</ref>
<ref id="B46">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Dwork</surname> <given-names>C.</given-names></name> <name><surname>Hardt</surname> <given-names>M.</given-names></name> <name><surname>Pitassi</surname> <given-names>T.</given-names></name> <name><surname>Reingold</surname> <given-names>O.</given-names></name> <name><surname>Zemel</surname> <given-names>R.</given-names></name></person-group> (<year>2012</year>). <article-title>&#x0201C;Fairness through awareness,&#x0201D;</article-title> in <source>Innovations in Theoretical Computer Science 2012</source> (<publisher-loc>Cambridge, MA</publisher-loc>: <publisher-name>ACM</publisher-name>), <fpage>214</fpage>&#x02013;<lpage>226</lpage>. <pub-id pub-id-type="doi">10.1145/2090236.2090255</pub-id></citation>
</ref>
<ref id="B47">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>El Gannour</surname> <given-names>O.</given-names></name> <name><surname>Hamida</surname> <given-names>S.</given-names></name> <name><surname>Cherradi</surname> <given-names>B.</given-names></name> <name><surname>Raihani</surname> <given-names>A.</given-names></name> <name><surname>Moujahid</surname> <given-names>H.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;Performance evaluation of transfer learning technique for automatic detection of patients with COVID-19 on x-ray images,&#x0201D;</article-title> in <source>2020 IEEE 2nd International Conference on Electronics, Control, Optimization and Computer Science (ICECOCS)</source> (<publisher-loc>Kenitra</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>6</lpage>.</citation>
</ref>
<ref id="B48">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Elshennawy</surname> <given-names>N. M.</given-names></name> <name><surname>Ibrahim</surname> <given-names>D. M.</given-names></name></person-group> (<year>2020</year>). <article-title>Deep-pneumonia framework using deep learning models based on chest x-ray images</article-title>. <source>Diagnostics</source> <volume>10</volume>, <fpage>649</fpage>. <pub-id pub-id-type="doi">10.3390/diagnostics10090649</pub-id><pub-id pub-id-type="pmid">32872384</pub-id></citation></ref>
<ref id="B49">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Eslami</surname> <given-names>M.</given-names></name> <name><surname>Tabarestani</surname> <given-names>S.</given-names></name> <name><surname>Albarqouni</surname> <given-names>S.</given-names></name> <name><surname>Adeli</surname> <given-names>E.</given-names></name> <name><surname>Navab</surname> <given-names>N.</given-names></name> <name><surname>Adjouadi</surname> <given-names>M.</given-names></name></person-group> (<year>2020</year>). <article-title>Image-to-images translation for multi-task organ segmentation and bone suppression in chest x-ray radiography</article-title>. <source>IEEE Trans. Med. Imaging</source> <volume>39</volume>, <fpage>2553</fpage>&#x02013;<lpage>2565</lpage>. <pub-id pub-id-type="doi">10.1109/TMI.2020.2974159</pub-id><pub-id pub-id-type="pmid">32078541</pub-id></citation></ref>
<ref id="B50">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ferreira</surname> <given-names>J. R.</given-names></name> <name><surname>Cardenas</surname> <given-names>D. A. C.</given-names></name> <name><surname>Moreno</surname> <given-names>R. A.</given-names></name> <name><surname>de S&#x000E1; Rebelo</surname> <given-names>M. D. F.</given-names></name> <name><surname>Krieger</surname> <given-names>J. E.</given-names></name> <name><surname>Gutierrez</surname> <given-names>M. A.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;Multi-view ensemble convolutional neural network to improve classification of pneumonia in low contrast chest x-ray images,&#x0201D;</article-title> in <source>2020 42nd Annual International Conference of the IEEE Engineering in Medicine &#x00026; Biology Society (EMBC)</source> (<publisher-loc>Montreal, QC</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1238</fpage>&#x02013;<lpage>1241</lpage>.<pub-id pub-id-type="pmid">33018211</pub-id></citation></ref>
<ref id="B51">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Fischer</surname> <given-names>A. M.</given-names></name> <name><surname>Varga-Szemes</surname> <given-names>A.</given-names></name> <name><surname>Martin</surname> <given-names>S. S.</given-names></name> <name><surname>Sperl</surname> <given-names>J. I.</given-names></name> <name><surname>Sahbaee</surname> <given-names>P.</given-names></name> <name><surname>Neumann</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Artificial intelligence-based fully automated per lobe segmentation and emphysema-quantification based on chest computed tomography compared with global initiative for chronic obstructive lung disease severity of smokers</article-title>. <source>J. Thorac. Imaging</source> <volume>35</volume>, <fpage>S28</fpage>&#x02013;<lpage>S34</lpage>. <pub-id pub-id-type="doi">10.1097/RTI.0000000000000500</pub-id><pub-id pub-id-type="pmid">32235188</pub-id></citation></ref>
<ref id="B52">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gabor</surname> <given-names>D.</given-names></name></person-group> (<year>1946</year>). <article-title>Theory of communication. Part 1: the analysis of information</article-title>. <source>J. Inst. Electr. Eng. III</source> <volume>93</volume>, <fpage>429</fpage>&#x02013;<lpage>441</lpage>. <pub-id pub-id-type="doi">10.1049/ji-3-2.1946.0074</pub-id></citation>
</ref>
<ref id="B53">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Girshick</surname> <given-names>R.</given-names></name> <name><surname>Donahue</surname> <given-names>J.</given-names></name> <name><surname>Darrell</surname> <given-names>T.</given-names></name> <name><surname>Malik</surname> <given-names>J.</given-names></name></person-group> (<year>2014</year>). <article-title>&#x0201C;Rich feature hierarchies for accurate object detection and semantic segmentation,&#x0201D;</article-title> in <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Columbus, OH</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>580</fpage>&#x02013;<lpage>587</lpage>.</citation>
</ref>
<ref id="B54">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gongye</surname> <given-names>C.</given-names></name> <name><surname>Li</surname> <given-names>H.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Sabbagh</surname> <given-names>M.</given-names></name> <name><surname>Yuan</surname> <given-names>G.</given-names></name> <name><surname>Lin</surname> <given-names>X.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>&#x0201C;New passive and active attacks on deep neural networks in medical applications,&#x0201D;</article-title> in <source>IEEE/ACM International Conference On Computer Aided Design, ICCAD 2020</source> (<publisher-loc>San Diego, CA</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1145/3400302.3418782</pub-id></citation>
</ref>
<ref id="B55">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gour</surname> <given-names>M.</given-names></name> <name><surname>Jain</surname> <given-names>S.</given-names></name></person-group> (<year>2020</year>). <article-title>Stacked convolutional neural network for diagnosis of COVID-19 disease from x-ray images</article-title>. <source>arXiv preprint</source> arXiv:2006.13817. <pub-id pub-id-type="doi">10.48550/arXiv.2006.13817</pub-id></citation>
</ref>
<ref id="B56">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Govindarajan</surname> <given-names>S.</given-names></name> <name><surname>Swaminathan</surname> <given-names>R.</given-names></name></person-group> (<year>2021</year>). <article-title>Extreme learning machine based differentiation of pulmonary tuberculosis in chest radiographs using integrated local feature descriptors</article-title>. <source>Comput. Methods Programs Biomed</source>. <volume>204</volume>, <fpage>106058</fpage>. <pub-id pub-id-type="doi">10.1016/j.cmpb.2021.106058</pub-id><pub-id pub-id-type="pmid">33789212</pub-id></citation></ref>
<ref id="B57">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gozes</surname> <given-names>O.</given-names></name> <name><surname>Greenspan</surname> <given-names>H.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;Deep feature learning from a hospital-scale chest x-ray dataset with application to TB detection on a small-scale dataset,&#x0201D;</article-title> in <source>2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)</source> (<publisher-loc>Berlin</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>4076</fpage>&#x02013;<lpage>4079</lpage>. <pub-id pub-id-type="pmid">31946767</pub-id></citation></ref>
<ref id="B58">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Guendel</surname> <given-names>S.</given-names></name> <name><surname>Ghesu</surname> <given-names>F.-C.</given-names></name> <name><surname>Gibson</surname> <given-names>E.</given-names></name> <name><surname>Grbic</surname> <given-names>S.</given-names></name> <name><surname>Georgescu</surname> <given-names>B.</given-names></name> <name><surname>Comaniciu</surname> <given-names>D.</given-names></name></person-group> (<year>2020</year>). <article-title>Multi-task learning for chest x-ray abnormality classification</article-title>. <source>arXiv preprint</source> arXiv:1905.06362. <pub-id pub-id-type="doi">10.48550/arXiv.1905.06362</pub-id></citation></ref>
<ref id="B59">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Gupta</surname> <given-names>A.</given-names></name> <name><surname>Gupta</surname> <given-names>S.</given-names></name> <name><surname>Katarya</surname> <given-names>R.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Instacovnet-19: A deep learning classification model for the detection of COVID-19 patients using chest x-ray</article-title>. <source>Appl. Soft. Comput</source>. <volume>99</volume>, <fpage>106859</fpage>. <pub-id pub-id-type="doi">10.1016/j.asoc.2020.106859</pub-id><pub-id pub-id-type="pmid">33162872</pub-id></citation></ref>
<ref id="B60">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hall</surname> <given-names>E. L.</given-names></name> <name><surname>Crawford</surname> <given-names>W. O.</given-names></name> <name><surname>Roberts</surname> <given-names>F. E.</given-names></name></person-group> (<year>1975</year>). <article-title>Computer classification of pneumoconiosis from radiographs of coal workers</article-title>. <source>IEEE Trans. Biomed. Eng</source>. BME-<volume>22</volume>, <fpage>518</fpage>&#x02013;<lpage>527</lpage>. <pub-id pub-id-type="doi">10.1109/TBME.1975.324475</pub-id><pub-id pub-id-type="pmid">1102429</pub-id></citation></ref>
<ref id="B61">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hao</surname> <given-names>C.</given-names></name> <name><surname>Jin</surname> <given-names>N.</given-names></name> <name><surname>Qiu</surname> <given-names>C.</given-names></name> <name><surname>Ba</surname> <given-names>K.</given-names></name> <name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Zhang</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Balanced convolutional neural networks for pneumoconiosis detection</article-title>. <source>Int. J. Environ. Res. Public Health</source> <volume>18</volume>, <fpage>9091</fpage>. <pub-id pub-id-type="doi">10.3390/ijerph18179091</pub-id><pub-id pub-id-type="pmid">34501684</pub-id></citation></ref>
<ref id="B62">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Harding</surname> <given-names>D. S.</given-names></name> <name><surname>Kamalakannan</surname> <given-names>S. K.</given-names></name> <name><surname>Katsuhara</surname> <given-names>S.</given-names></name> <name><surname>Pike</surname> <given-names>J. H.</given-names></name> <name><surname>Sabir</surname> <given-names>M. F.</given-names></name> <etal/></person-group>. (<year>2015</year>). <source>Lung Segmentation and Bone Suppression Techniques for Radiographic Images</source>. Patent No. WO2015157067.</citation>
</ref>
<ref id="B63">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hardt</surname> <given-names>M.</given-names></name> <name><surname>Price</surname> <given-names>E.</given-names></name> <name><surname>Srebro</surname> <given-names>N.</given-names></name></person-group> (<year>2016</year>). <article-title>&#x0201C;Equality of opportunity in supervised learning,&#x0201D;</article-title> in <source>Advances in Neural Information Processing Systems, Vol. 29</source>.</citation>
</ref>
<ref id="B64">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hasegawa</surname> <given-names>A.</given-names></name> <name><surname>Lo</surname> <given-names>S.-C. B.</given-names></name> <name><surname>Freedman</surname> <given-names>M. T.</given-names></name> <name><surname>Mun</surname> <given-names>S. K.</given-names></name></person-group> (<year>1994</year>). <article-title>Convolution neural-network-based detection of lung structures</article-title>. <source>Med. Imaging</source> <volume>2167</volume>, <fpage>654</fpage>&#x02013;<lpage>662</lpage>. <pub-id pub-id-type="doi">10.1117/12.175101</pub-id></citation>
</ref>
<ref id="B65">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>He</surname> <given-names>K.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Ren</surname> <given-names>S.</given-names></name> <name><surname>Sun</surname> <given-names>J.</given-names></name></person-group> (<year>2016</year>). <article-title>&#x0201C;Deep residual learning for image recognition,&#x0201D;</article-title> in <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Las Vegas, NV</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>770</fpage>&#x02013;<lpage>778</lpage>.<pub-id pub-id-type="pmid">32166560</pub-id></citation></ref>
<ref id="B66">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hirano</surname> <given-names>H.</given-names></name> <name><surname>Minagi</surname> <given-names>A.</given-names></name> <name><surname>Takemoto</surname> <given-names>K.</given-names></name></person-group> (<year>2021</year>). <article-title>Universal adversarial attacks on deep neural networks for medical image classification</article-title>. <source>BMC Med. Imaging</source> <volume>21</volume>, <fpage>1</fpage>&#x02013;<lpage>13</lpage>. <pub-id pub-id-type="doi">10.1186/s12880-020-00530-y</pub-id><pub-id pub-id-type="pmid">35200740</pub-id></citation></ref>
<ref id="B67">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Homayounieh</surname> <given-names>F.</given-names></name> <name><surname>Digumarthy</surname> <given-names>S.</given-names></name> <name><surname>Ebrahimian</surname> <given-names>S.</given-names></name> <name><surname>Rueckel</surname> <given-names>J.</given-names></name> <name><surname>Hoppe</surname> <given-names>B. F.</given-names></name> <name><surname>Sabel</surname> <given-names>B. O.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>An artificial intelligence-based chest x-ray model on human nodule detection accuracy from a multicenter study</article-title>. <source>JAMA Netw. Open</source> <volume>4</volume>, <fpage>e2141096</fpage>. <pub-id pub-id-type="doi">10.1001/jamanetworkopen.2021.41096</pub-id><pub-id pub-id-type="pmid">34964851</pub-id></citation></ref>
<ref id="B68">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hong</surname> <given-names>L.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Shen</surname> <given-names>H.</given-names></name></person-group> (<year>2009a</year>). <source>Method and System for Diaphragm Segmentation in Chest x-ray Radiographs</source>. U.S. Patent No. US20090087072.</citation>
</ref>
<ref id="B69">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hong</surname> <given-names>L.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Shen</surname> <given-names>H.</given-names></name></person-group> (<year>2009b</year>). <source>Method and System for Nodule Feature Extraction Using Background Contextual Information in Chest x-ray Images</source>. U.S. Patent No. US20090103797.</citation>
</ref>
<ref id="B70">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hong</surname> <given-names>L.</given-names></name> <name><surname>Shen</surname> <given-names>H.</given-names></name></person-group> (<year>2008</year>). <source>Method and System for Locating Opaque Regions in Chest x-ray Radiographs</source>. U.S. Patent No. US20080181481.</citation>
</ref>
<ref id="B71">
<citation citation-type="web"><person-group person-group-type="author"><collab>HM Hospitales</collab></person-group> (<year>2020</year>). <source>Covid Data Save Lives</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.hmhospitales.com/coronavirus/covid-data-save-lives">https://www.hmhospitales.com/coronavirus/covid-data-save-lives</ext-link></citation>
</ref>
<ref id="B72">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huang</surname> <given-names>G.</given-names></name> <name><surname>Liu</surname> <given-names>Z.</given-names></name> <name><surname>Van Der Maaten</surname> <given-names>L.</given-names></name> <name><surname>Weinberger</surname> <given-names>K. Q.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;Densely connected convolutional networks,&#x0201D;</article-title> in <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Honolulu, HI</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>4700</fpage>&#x02013;<lpage>4708</lpage>.</citation>
</ref>
<ref id="B73">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huo</surname> <given-names>Z.</given-names></name> <name><surname>Zhao</surname> <given-names>H.</given-names></name></person-group> (<year>2014</year>). <source>Clavicle Suppression in Radiographic Images.</source> U.S. Patent No. US20140140603.</citation>
</ref>
<ref id="B74">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hurt</surname> <given-names>B.</given-names></name> <name><surname>Yen</surname> <given-names>A.</given-names></name> <name><surname>Kligerman</surname> <given-names>S.</given-names></name> <name><surname>Hsiao</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>Augmenting interpretation of chest radiographs with deep learning probability maps</article-title>. <source>J. Thorac Imaging</source> <volume>35</volume>, <fpage>285</fpage>. <pub-id pub-id-type="doi">10.1097/RTI.0000000000000505</pub-id><pub-id pub-id-type="pmid">32205817</pub-id></citation></ref>
<ref id="B75">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Huynh-Thu</surname> <given-names>Q.</given-names></name> <name><surname>Ghanbari</surname> <given-names>M.</given-names></name></person-group> (<year>2008</year>). <article-title>Scope of validity of psnr in image/video quality assessment</article-title>. <source>Electron Lett</source>. <volume>44</volume>, <fpage>800</fpage>&#x02013;<lpage>801</lpage>. <pub-id pub-id-type="doi">10.1049/el:20080522</pub-id></citation>
</ref>
<ref id="B76">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hwang</surname> <given-names>E. J.</given-names></name> <name><surname>Park</surname> <given-names>S.</given-names></name> <name><surname>Jin</surname> <given-names>K.-N.</given-names></name> <name><surname>Im Kim</surname> <given-names>J.</given-names></name> <name><surname>Choi</surname> <given-names>S. Y.</given-names></name> <name><surname>Lee</surname> <given-names>J. H.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Development and validation of a deep learning-based automated detection algorithm for major thoracic diseases on chest radiographs</article-title>. <source>JAMA Netw. Open</source> <volume>2</volume>, <fpage>e191095</fpage>. <pub-id pub-id-type="doi">10.1001/jamanetworkopen.2019.1095</pub-id><pub-id pub-id-type="pmid">30901052</pub-id></citation></ref>
<ref id="B77">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hwang</surname> <given-names>S.</given-names></name> <name><surname>Kim</surname> <given-names>H.-E.</given-names></name> <name><surname>Jeong</surname> <given-names>J.</given-names></name> <name><surname>Kim</surname> <given-names>H.-J.</given-names></name></person-group> (<year>2016</year>). <article-title>A novel approach for tuberculosis screening based on deep convolutional neural networks</article-title>. <source>Med. Imaging</source> <volume>9785</volume>, <fpage>750</fpage>&#x02013;<lpage>757</lpage>. <pub-id pub-id-type="doi">10.1117/12.2216198</pub-id></citation>
</ref>
<ref id="B78">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Hwang</surname> <given-names>S.</given-names></name> <name><surname>Park</surname> <given-names>S.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;Accurate lung segmentation via network-wise training of convolutional networks,&#x0201D;</article-title> in <source>Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support - Third International Workshop, DLMIA 2017, and 7th International Workshop, ML-CDS 2017, Held in Conjunction with MICCAI 2017, Proceedings</source> (<publisher-loc>Quebec, QC</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>92</fpage>&#x02013;<lpage>99</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-67558-9_11</pub-id></citation>
</ref>
<ref id="B79">
<citation citation-type="journal"><person-group person-group-type="author"><collab>IDC</collab></person-group> (<year>2014</year>). <source>The Digital Universe-Driving Data Growth in Healthcare</source>. <publisher-loc>Framingham, MA</publisher-loc>: <publisher-name>IDC</publisher-name>.</citation>
</ref>
<ref id="B80">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Irvin</surname> <given-names>J.</given-names></name> <name><surname>Rajpurkar</surname> <given-names>P.</given-names></name> <name><surname>Ko</surname> <given-names>M.</given-names></name> <name><surname>Yu</surname> <given-names>Y.</given-names></name> <name><surname>Ciurea-Ilcus</surname> <given-names>S.</given-names></name> <name><surname>Chute</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Chexpert: a large chest radiograph dataset with uncertainty labels and expert comparison</article-title>. <source>Proc. AAAI Conf. Artif. Intell</source>. <volume>33</volume>, <fpage>590</fpage>&#x02013;<lpage>597</lpage>. <pub-id pub-id-type="doi">10.1609/aaai.v33i01.3301590</pub-id></citation>
</ref>
<ref id="B81">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Islam</surname> <given-names>M. Z.</given-names></name> <name><surname>Islam</surname> <given-names>M. M.</given-names></name> <name><surname>Asraf</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>A combined deep cnn-lstm network for the detection of novel coronavirus (COVID-19) using x-ray images</article-title>. <source>Inf. Med. Unlocked</source> <volume>20</volume>, <fpage>100412</fpage>. <pub-id pub-id-type="doi">10.1016/j.imu.2020.100412</pub-id><pub-id pub-id-type="pmid">32835084</pub-id></citation></ref>
<ref id="B82">
<citation citation-type="web"><person-group person-group-type="author"><collab>ISMIR</collab></person-group> (<year>2020</year>). <source>Italian Society of Medical and Interventional Radiology COVID-19 Database</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.sirm.org/category/senza-categoria/covid-19/">https://www.sirm.org/category/senza-categoria/covid-19/</ext-link></citation>
</ref>
<ref id="B83">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jaeger</surname> <given-names>S.</given-names></name> <name><surname>Candemir</surname> <given-names>S.</given-names></name> <name><surname>Antani</surname> <given-names>S.</given-names></name> <name><surname>W&#x000E1;ng</surname> <given-names>Y.-X. J.</given-names></name> <name><surname>Lu</surname> <given-names>P.-X.</given-names></name> <name><surname>Thoma</surname> <given-names>G.</given-names></name></person-group> (<year>2014</year>). <article-title>Two public chest x-ray datasets for computer-aided screening of pulmonary diseases</article-title>. <source>Quant Imaging Med. Surg</source>. <volume>4</volume>, <fpage>475</fpage>. <pub-id pub-id-type="doi">10.3978/j.issn.2223-4292.2014.11.20</pub-id><pub-id pub-id-type="pmid">25525580</pub-id></citation></ref>
<ref id="B84">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jain</surname> <given-names>R.</given-names></name> <name><surname>Gupta</surname> <given-names>M.</given-names></name> <name><surname>Taneja</surname> <given-names>S.</given-names></name> <name><surname>Hemanth</surname> <given-names>D. J.</given-names></name></person-group> (<year>2021</year>). <article-title>Deep learning based detection and analysis of COVID-19 on chest x-ray images</article-title>. <source>Appl. Intell</source>. <volume>51</volume>, <fpage>1690</fpage>&#x02013;<lpage>1700</lpage>. <pub-id pub-id-type="doi">10.1007/s10489-020-01902-1</pub-id><pub-id pub-id-type="pmid">34764553</pub-id></citation></ref>
<ref id="B85">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jang</surname> <given-names>R.</given-names></name> <name><surname>Kim</surname> <given-names>N.</given-names></name> <name><surname>Jang</surname> <given-names>M.</given-names></name> <name><surname>Lee</surname> <given-names>K. H.</given-names></name> <name><surname>Lee</surname> <given-names>S. M.</given-names></name> <name><surname>Lee</surname> <given-names>K. H.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Assessment of the robustness of convolutional neural networks in labeling noise by using chest x-ray images from multiple centers</article-title>. <source>JMIR Med. Inform</source>. <volume>8</volume>, <fpage>e18089</fpage>. <pub-id pub-id-type="doi">10.2196/18089</pub-id><pub-id pub-id-type="pmid">32749222</pub-id></citation></ref>
<ref id="B86">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>J&#x000E9;gou</surname> <given-names>S.</given-names></name> <name><surname>Drozdzal</surname> <given-names>M.</given-names></name> <name><surname>Vazquez</surname> <given-names>D.</given-names></name> <name><surname>Romero</surname> <given-names>A.</given-names></name> <name><surname>Bengio</surname> <given-names>Y.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;The one hundred layers tiramisu: Fully convolutional densenets for semantic segmentation,&#x0201D;</article-title> in <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops</source> (<publisher-loc>Honolulu, HI</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>11</fpage>&#x02013;<lpage>19</lpage>.</citation>
</ref>
<ref id="B87">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jiezhi</surname> <given-names>Z.</given-names></name> <name><surname>Zaiwen</surname> <given-names>G.</given-names></name> <name><surname>Hengze</surname> <given-names>Z.</given-names></name> <name><surname>Yiqiang</surname> <given-names>Z.</given-names></name></person-group> (<year>2018</year>). <source>X-ray Chest Radiography Image Quality Determination Method and Device</source>. Patent No. CN113052795.</citation>
</ref>
<ref id="B88">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jin</surname> <given-names>W.</given-names></name> <name><surname>Dong</surname> <given-names>S.</given-names></name> <name><surname>Dong</surname> <given-names>C.</given-names></name> <name><surname>Ye</surname> <given-names>X.</given-names></name></person-group> (<year>2021</year>). <article-title>Hybrid ensemble model for differential diagnosis between COVID-19 and common viral pneumonia by chest x-ray radiograph</article-title>. <source>Comput. Biol. Med</source>. <volume>131</volume>, <fpage>104252</fpage>. <pub-id pub-id-type="doi">10.1016/j.compbiomed.2021.104252</pub-id><pub-id pub-id-type="pmid">33610001</pub-id></citation></ref>
<ref id="B89">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Jinpeng</surname> <given-names>L.</given-names></name> <name><surname>Jie</surname> <given-names>W.</given-names></name> <name><surname>Ting</surname> <given-names>C.</given-names></name></person-group> (<year>2020</year>). <source>X-ray Lung Disease Automatic Positioning Method Based on Weak Supervised Learning</source>. Patent No. CN112116571.</citation>
</ref>
<ref id="B90">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Johnson</surname> <given-names>A. E.</given-names></name> <name><surname>Pollard</surname> <given-names>T. J.</given-names></name> <name><surname>Berkowitz</surname> <given-names>S. J.</given-names></name> <name><surname>Greenbaum</surname> <given-names>N. R.</given-names></name> <name><surname>Lungren</surname> <given-names>M. P.</given-names></name> <name><surname>Deng</surname> <given-names>C.-Y.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Mimic-cxr, a de-identified publicly available database of chest radiographs with free-text reports</article-title>. <source>Sci. Data</source> <volume>6</volume>, <fpage>1</fpage>&#x02013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1038/s41597-019-0322-0</pub-id><pub-id pub-id-type="pmid">31831740</pub-id></citation></ref>
<ref id="B91">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kai</surname> <given-names>K.</given-names></name> <name><surname>Masaomi</surname> <given-names>T.</given-names></name> <name><surname>Kenji</surname> <given-names>F.</given-names></name></person-group> (<year>2019</year>). <source>Lesion Detection Method Using Artificial Intelligence, and System Therefor</source>. Patent No. JP2019154943.</citation>
</ref>
<ref id="B92">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kaijin</surname> <given-names>X.</given-names></name></person-group> (<year>2019</year>). <source>DR Image Pulmonary Tuberculosis Intelligent Segmentation and Detection Method Based on Deep Learning</source>. Patent No. CN110782441.</citation>
</ref>
<ref id="B93">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kang</surname> <given-names>Z.</given-names></name> <name><surname>Rui</surname> <given-names>H.</given-names></name> <name><surname>Lianghong</surname> <given-names>Z.</given-names></name></person-group> (<year>2019</year>). <source>Deep Learning-Based Diagnosis and Referral of Diseases and Disorders</source>. Patent No. WO2019157214.</citation>
</ref>
<ref id="B94">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Katsuragawa</surname> <given-names>S.</given-names></name> <name><surname>Doi</surname> <given-names>K.</given-names></name> <name><surname>MacMahon</surname> <given-names>H.</given-names></name></person-group> (<year>1988</year>). <article-title>Image feature analysis and computer-aided diagnosis in digital radiography: detection and characterization of interstitial lung disease in digital chest radiographs</article-title>. <source>Med. Phys</source>. <volume>15</volume>, <fpage>311</fpage>&#x02013;<lpage>319</lpage>. <pub-id pub-id-type="doi">10.1118/1.596224</pub-id><pub-id pub-id-type="pmid">3405134</pub-id></citation></ref>
<ref id="B95">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kaviani</surname> <given-names>S.</given-names></name> <name><surname>Han</surname> <given-names>K. J.</given-names></name> <name><surname>Sohn</surname> <given-names>I.</given-names></name></person-group> (<year>2022</year>). <article-title>Adversarial attacks and defenses on AI in medical imaging informatics: a survey</article-title>. <source>Expert Syst. Appl</source>. <volume>2022</volume>, <fpage>116815</fpage>. <pub-id pub-id-type="doi">10.1016/j.eswa.2022.116815</pub-id></citation>
</ref>
<ref id="B96">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kermany</surname> <given-names>D. S.</given-names></name> <name><surname>Zhang</surname> <given-names>K.</given-names></name> <name><surname>Goldbaum</surname> <given-names>M.</given-names></name></person-group> (<year>2018a</year>). <source>Labeled Optical Coherence Tomography (OCT) and Chest X-ray Images for Classification.</source> <pub-id pub-id-type="doi">10.17632/rscbjbr9sj.2</pub-id></citation>
</ref>
<ref id="B97">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kermany</surname> <given-names>D. S.</given-names></name> <name><surname>Goldbaum</surname> <given-names>M.</given-names></name> <name><surname>Cai</surname> <given-names>W.</given-names></name> <name><surname>Valentim</surname> <given-names>C. C.</given-names></name> <name><surname>Liang</surname> <given-names>H.</given-names></name> <name><surname>Baxter</surname> <given-names>S. L.</given-names></name> <etal/></person-group>. (<year>2018b</year>). <article-title>Identifying medical diagnoses and treatable diseases by image-based deep learning</article-title>. <source>Cell</source> <volume>172</volume>, <fpage>1122</fpage>&#x02013;<lpage>1131</lpage>. <pub-id pub-id-type="doi">10.1016/j.cell.2018.02.010</pub-id><pub-id pub-id-type="pmid">29474911</pub-id></citation></ref>
<ref id="B98">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Khan</surname> <given-names>A. I.</given-names></name> <name><surname>Shah</surname> <given-names>J. L.</given-names></name> <name><surname>Bhat</surname> <given-names>M. M.</given-names></name></person-group> (<year>2020</year>). <article-title>Coronet: A deep neural network for detection and diagnosis of COVID-19 from chest x-ray images</article-title>. <source>Comput. Methods Programs Biomed</source>. <volume>196</volume>, <fpage>105581</fpage>. <pub-id pub-id-type="doi">10.1016/j.cmpb.2020.105581</pub-id><pub-id pub-id-type="pmid">32534344</pub-id></citation></ref>
<ref id="B99">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Khasawneh</surname> <given-names>N.</given-names></name> <name><surname>Fraiwan</surname> <given-names>M.</given-names></name> <name><surname>Fraiwan</surname> <given-names>L.</given-names></name> <name><surname>Khassawneh</surname> <given-names>B.</given-names></name> <name><surname>Ibnian</surname> <given-names>A.</given-names></name></person-group> (<year>2021</year>). <article-title>Detection of COVID-19 from chest x-ray images using deep convolutional neural networks</article-title>. <source>Sensors</source> <volume>21</volume>, <fpage>5940</fpage>. <pub-id pub-id-type="doi">10.3390/s21175940</pub-id><pub-id pub-id-type="pmid">34502829</pub-id></citation></ref>
<ref id="B100">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>J. R.</given-names></name> <name><surname>Shim</surname> <given-names>W. H.</given-names></name> <name><surname>Yoon</surname> <given-names>H. M.</given-names></name> <name><surname>Hong</surname> <given-names>S. H.</given-names></name> <name><surname>Lee</surname> <given-names>J. S.</given-names></name> <name><surname>Cho</surname> <given-names>Y. A.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>Computerized bone age estimation using deep learning based program: evaluation of the accuracy and efficiency</article-title>. <source>Am. J. Roentgenol</source>. <volume>209</volume>, <fpage>1374</fpage>&#x02013;<lpage>1380</lpage>. <pub-id pub-id-type="doi">10.2214/AJR.17.18224</pub-id><pub-id pub-id-type="pmid">28898126</pub-id></citation></ref>
<ref id="B101">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>Y.-G.</given-names></name> <name><surname>Cho</surname> <given-names>Y.</given-names></name> <name><surname>Wu</surname> <given-names>C.-J.</given-names></name> <name><surname>Park</surname> <given-names>S.</given-names></name> <name><surname>Jung</surname> <given-names>K.-H.</given-names></name> <name><surname>Seo</surname> <given-names>J. B.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Short-term reproducibility of pulmonary nodule and mass detection in chest radiographs: comparison among radiologists and four different computer-aided detections with convolutional neural net</article-title>. <source>Sci. Rep</source>. <volume>9</volume>, <fpage>1</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1038/s41598-019-55373-7</pub-id><pub-id pub-id-type="pmid">31822774</pub-id></citation></ref>
<ref id="B102">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kim</surname> <given-names>Y.-G.</given-names></name> <name><surname>Lee</surname> <given-names>S. M.</given-names></name> <name><surname>Lee</surname> <given-names>K. H.</given-names></name> <name><surname>Jang</surname> <given-names>R.</given-names></name> <name><surname>Seo</surname> <given-names>J. B.</given-names></name> <name><surname>Kim</surname> <given-names>N.</given-names></name></person-group> (<year>2020</year>). <article-title>Optimal matrix size of chest radiographs for computer-aided detection on lung nodule or mass with deep learning</article-title>. <source>Eur. Radiol</source>. <volume>30</volume>, <fpage>4943</fpage>&#x02013;<lpage>4951</lpage>. <pub-id pub-id-type="doi">10.1007/s00330-020-06892-9</pub-id><pub-id pub-id-type="pmid">32350657</pub-id></citation></ref>
<ref id="B103">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Kusakunniran</surname> <given-names>W.</given-names></name> <name><surname>Karnjanapreechakorn</surname> <given-names>S.</given-names></name> <name><surname>Siriapisith</surname> <given-names>T.</given-names></name> <name><surname>Borwarnginn</surname> <given-names>P.</given-names></name> <name><surname>Sutassananon</surname> <given-names>K.</given-names></name> <name><surname>Tongdee</surname> <given-names>T.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Covid-19 detection and heatmap generation in chest x-ray images</article-title>. <source>J. Med. Imaging</source> <volume>8</volume>, <fpage>014001</fpage>. <pub-id pub-id-type="doi">10.1117/1.JMI.8.S1.014001</pub-id><pub-id pub-id-type="pmid">33457446</pub-id></citation></ref>
<ref id="B104">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ledley</surname> <given-names>R. S.</given-names></name> <name><surname>Huang</surname> <given-names>H.</given-names></name> <name><surname>Rotolo</surname> <given-names>L. S.</given-names></name></person-group> (<year>1975</year>). <article-title>A texture analysis method in classification of coal workers&#x00027; pneumoconiosis</article-title>. <source>Comput. Biol. Med</source>. <volume>5</volume>, <fpage>53</fpage>&#x02013;<lpage>67</lpage>. <pub-id pub-id-type="doi">10.1016/0010-4825(75)90018-9</pub-id><pub-id pub-id-type="pmid">1098844</pub-id></citation></ref>
<ref id="B105">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lei</surname> <given-names>R.</given-names></name> <name><surname>Xiaobao</surname> <given-names>W.</given-names></name> <name><surname>Tianshi</surname> <given-names>X.</given-names></name></person-group> (<year>2021</year>). <source>Lung Disease Auxiliary Diagnosis Cloud Platform Based on Deep Learning</source>. Chinese Patent No. CN113192625.</citation>
</ref>
<ref id="B106">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lenga</surname> <given-names>M.</given-names></name> <name><surname>Schulz</surname> <given-names>H.</given-names></name> <name><surname>Saalbach</surname> <given-names>A.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;Continual learning for domain adaptation in chest x-ray classification,&#x0201D;</article-title> in <source>International Conference on Medical Imaging with Deep Learning, MIDL 2020, Vol. 121</source> (<publisher-loc>Montreal, QC</publisher-loc>: <publisher-name>PMLR</publisher-name>), <fpage>413</fpage>&#x02013;<lpage>423</lpage>.</citation>
</ref>
<ref id="B107">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>B.</given-names></name> <name><surname>Kang</surname> <given-names>G.</given-names></name> <name><surname>Cheng</surname> <given-names>K.</given-names></name> <name><surname>Zhang</surname> <given-names>N.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;Attention-guided convolutional neural network for detecting pneumonia on chest x-rays,&#x0201D;</article-title> in <source>2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)</source> (<publisher-loc>Berlin</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>4851</fpage>&#x02013;<lpage>4854</lpage>.<pub-id pub-id-type="pmid">31946947</pub-id></citation></ref>
<ref id="B108">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>D.</given-names></name> <name><surname>Lin</surname> <given-names>C. T.</given-names></name> <name><surname>Sulam</surname> <given-names>J.</given-names></name> <name><surname>Yi</surname> <given-names>P. H.</given-names></name></person-group> (<year>2022</year>). <article-title>Deep learning prediction of sex on chest radiographs: a potential contributor to biased algorithms</article-title>. <source>Emerg. Radiol</source>. <volume>29</volume>, <fpage>365</fpage>&#x02013;<lpage>370</lpage>. <pub-id pub-id-type="doi">10.1007/s10140-022-02019-3</pub-id><pub-id pub-id-type="pmid">35006495</pub-id></citation></ref>
<ref id="B109">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>L.</given-names></name> <name><surname>Zheng</surname> <given-names>Y.</given-names></name> <name><surname>Kallergi</surname> <given-names>M.</given-names></name> <name><surname>Clark</surname> <given-names>R. A.</given-names></name></person-group> (<year>2001</year>). <article-title>Improved method for automatic identification of lung regions on chest radiographs</article-title>. <source>Acad. Radiol</source>. <volume>8</volume>, <fpage>629</fpage>&#x02013;<lpage>638</lpage>. <pub-id pub-id-type="doi">10.1016/S1076-6332(03)80688-8</pub-id><pub-id pub-id-type="pmid">11450964</pub-id></citation></ref>
<ref id="B110">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>X.</given-names></name> <name><surname>Cao</surname> <given-names>R.</given-names></name> <name><surname>Zhu</surname> <given-names>D.</given-names></name></person-group> (<year>2019</year>). <article-title>Vispi: automatic visual perception and interpretation of chest x-rays</article-title>. <source>arXiv preprint</source> arXiv:1906.05190. <pub-id pub-id-type="doi">10.48550/arXiv.1906.05190</pub-id></citation>
</ref>
<ref id="B111">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>X.</given-names></name> <name><surname>Li</surname> <given-names>C.</given-names></name> <name><surname>Zhu</surname> <given-names>D.</given-names></name></person-group> (<year>2020</year>). <article-title>Covid-mobilexpert: on-device COVID-19 screening using snapshots of chest x-ray</article-title>. <source>arXiv preprint</source> arXiv:2004.03042. <pub-id pub-id-type="doi">10.48550/arXiv.2004.03042</pub-id></citation>
</ref>
<ref id="B112">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>X.</given-names></name> <name><surname>Pan</surname> <given-names>D.</given-names></name> <name><surname>Zhu</surname> <given-names>D.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;Defending against adversarial attacks on medical imaging ai system, classification or detection?&#x0201D;</article-title> in <source>2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)</source> (<publisher-loc>Nice</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1677</fpage>&#x02013;<lpage>1681</lpage>.</citation>
</ref>
<ref id="B113">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Li</surname> <given-names>X.</given-names></name> <name><surname>Zhu</surname> <given-names>D.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;Robust detection of adversarial attacks on medical images,&#x0201D;</article-title> in <source>2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI)</source> (<publisher-loc>Iowa City, IA</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1154</fpage>&#x02013;<lpage>1158</lpage>.</citation>
</ref>
<ref id="B114">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liang</surname> <given-names>N.-Y.</given-names></name> <name><surname>Huang</surname> <given-names>G.-B.</given-names></name> <name><surname>Saratchandran</surname> <given-names>P.</given-names></name> <name><surname>Sundararajan</surname> <given-names>N.</given-names></name></person-group> (<year>2006</year>). <article-title>A fast and accurate online sequential learning algorithm for feedforward networks</article-title>. <source>IEEE Trans. Neural Netw</source>. <volume>17</volume>, <fpage>1411</fpage>&#x02013;<lpage>1423</lpage>. <pub-id pub-id-type="doi">10.1109/TNN.2006.880583</pub-id><pub-id pub-id-type="pmid">17131657</pub-id></citation></ref>
<ref id="B115">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lin</surname> <given-names>C.-Y.</given-names></name> <name><surname>Hovy</surname> <given-names>E.</given-names></name></person-group> (<year>2003</year>). <article-title>&#x0201C;Automatic evaluation of summaries using n-gram co-occurrence statistics,&#x0201D;</article-title> in <source>Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics</source>, eds. M. A. Hearst and M. Ostendorf (Edmonton: The Association for Computational Linguistics).</citation>
</ref>
<ref id="B116">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Little</surname> <given-names>K. J.</given-names></name> <name><surname>Reiser</surname> <given-names>I.</given-names></name> <name><surname>Liu</surname> <given-names>L.</given-names></name> <name><surname>Kinsey</surname> <given-names>T.</given-names></name> <name><surname>S&#x000E1;nchez</surname> <given-names>A. A.</given-names></name> <name><surname>Haas</surname> <given-names>K.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>Unified database for rejected image analysis across multiple vendors in radiography</article-title>. <source>J. Am. College Radiol</source>. <volume>14</volume>, <fpage>208</fpage>&#x02013;<lpage>216</lpage>. <pub-id pub-id-type="doi">10.1016/j.jacr.2016.07.011</pub-id><pub-id pub-id-type="pmid">27663061</pub-id></citation></ref>
<ref id="B117">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>L.</given-names></name> <name><surname>Fieguth</surname> <given-names>P.</given-names></name> <name><surname>Guo</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Pietik&#x000E4;inen</surname> <given-names>M.</given-names></name></person-group> (<year>2017</year>). <article-title>Local binary features for texture classification: taxonomy and experimental study</article-title>. <source>Pattern Recognit</source>. <volume>62</volume>, <fpage>135</fpage>&#x02013;<lpage>160</lpage>. <pub-id pub-id-type="doi">10.1016/j.patcog.2016.08.032</pub-id></citation>
</ref>
<ref id="B118">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Liu</surname> <given-names>L.</given-names></name> <name><surname>Lao</surname> <given-names>S.</given-names></name> <name><surname>Fieguth</surname> <given-names>P. W.</given-names></name> <name><surname>Guo</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Pietik&#x000E4;inen</surname> <given-names>M.</given-names></name></person-group> (<year>2016</year>). <article-title>Median robust extended local binary pattern for texture classification</article-title>. <source>IEEE Trans. Image Process</source>. <volume>25</volume>, <fpage>1368</fpage>&#x02013;<lpage>1381</lpage>. <pub-id pub-id-type="doi">10.1109/TIP.2016.2522378</pub-id><pub-id pub-id-type="pmid">26829791</pub-id></citation></ref>
<ref id="B119">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Longjiang</surname> <given-names>E.</given-names></name> <name><surname>Zhao</surname> <given-names>B.</given-names></name> <name><surname>Liu</surname> <given-names>H.</given-names></name> <name><surname>Zheng</surname> <given-names>C.</given-names></name> <name><surname>Song</surname> <given-names>X.</given-names></name> <name><surname>Cai</surname> <given-names>Y.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Image-based deep learning in diagnosing the etiology of pneumonia on pediatric chest x-rays</article-title>. <source>Pediatr. Pulmonol.</source> <volume>56</volume>, <fpage>1036</fpage>&#x02013;<lpage>1044</lpage>. <pub-id pub-id-type="doi">10.1002/ppul.25229</pub-id><pub-id pub-id-type="pmid">33331678</pub-id></citation></ref>
<ref id="B120">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lu</surname> <given-names>M. T.</given-names></name> <name><surname>Ivanov</surname> <given-names>A.</given-names></name> <name><surname>Mayrhofer</surname> <given-names>T.</given-names></name> <name><surname>Hosny</surname> <given-names>A.</given-names></name> <name><surname>Aerts</surname> <given-names>H. J.</given-names></name> <name><surname>Hoffmann</surname> <given-names>U.</given-names></name></person-group> (<year>2019</year>). <article-title>Deep learning to assess long-term mortality from chest radiographs</article-title>. <source>JAMA Netw. Open</source> <volume>2</volume>, <fpage>e197416</fpage>. <pub-id pub-id-type="doi">10.1001/jamanetworkopen.2019.7416</pub-id><pub-id pub-id-type="pmid">31322692</pub-id></citation></ref>
<ref id="B121">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Luojie</surname> <given-names>L.</given-names></name> <name><surname>Jinhua</surname> <given-names>M.</given-names></name></person-group> (<year>2018</year>). <source>A Lung Disease Detection Method Based on Deep Learning</source>. Chinese Patent No. CN109598719.</citation>
</ref>
<ref id="B122">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Lyman</surname> <given-names>K.</given-names></name> <name><surname>Bernard</surname> <given-names>D.</given-names></name> <name><surname>Li Yao</surname> <given-names>D. A.</given-names></name> <name><surname>Covington</surname> <given-names>B.</given-names></name> <name><surname>Upton</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <source>Chest x-ray Differential Diagnosis System</source>. U.S. Patent No. US20190066835.</citation>
</ref>
<ref id="B123">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ma</surname> <given-names>X.</given-names></name> <name><surname>Niu</surname> <given-names>Y.</given-names></name> <name><surname>Gu</surname> <given-names>L.</given-names></name> <name><surname>Wang</surname> <given-names>Y.</given-names></name> <name><surname>Zhao</surname> <given-names>Y.</given-names></name> <name><surname>Bailey</surname> <given-names>J.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Understanding adversarial attacks on deep learning based medical image analysis systems</article-title>. <source>Pattern Recognit</source>. <volume>110</volume>, <fpage>107332</fpage>. <pub-id pub-id-type="doi">10.1016/j.patcog.2020.107332</pub-id></citation>
</ref>
<ref id="B124">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Madani</surname> <given-names>A.</given-names></name> <name><surname>Moradi</surname> <given-names>M.</given-names></name> <name><surname>Karargyris</surname> <given-names>A.</given-names></name> <name><surname>Syeda-Mahmood</surname> <given-names>T.</given-names></name></person-group> (<year>2018</year>). <article-title>&#x0201C;Semi-supervised learning with generative adversarial networks for chest x-ray classification with ability of data domain adaptation,&#x0201D;</article-title> in <source>2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018)</source> (<publisher-loc>Washington, DC</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1038</fpage>&#x02013;<lpage>1042</lpage>.</citation>
</ref>
<ref id="B125">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mahmud</surname> <given-names>T.</given-names></name> <name><surname>Rahman</surname> <given-names>M. A.</given-names></name> <name><surname>Fattah</surname> <given-names>S. A.</given-names></name></person-group> (<year>2020</year>). <article-title>Covxnet: a multi-dilation convolutional neural network for automatic COVID-19 and other pneumonia detection from chest x-ray images with transferable multi-receptive feature optimization</article-title>. <source>Comput. Biol. Med</source>. <volume>122</volume>, <fpage>103869</fpage>. <pub-id pub-id-type="doi">10.1016/j.compbiomed.2020.103869</pub-id><pub-id pub-id-type="pmid">32658740</pub-id></citation></ref>
<ref id="B126">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Malhotra</surname> <given-names>A.</given-names></name> <name><surname>Mittal</surname> <given-names>S.</given-names></name> <name><surname>Majumdar</surname> <given-names>P.</given-names></name> <name><surname>Chhabra</surname> <given-names>S.</given-names></name> <name><surname>Thakral</surname> <given-names>K.</given-names></name> <name><surname>Vatsa</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>Multi-task driven explainable diagnosis of COVID-19 using chest x-ray images</article-title>. <source>Pattern Recognit</source>. <volume>122</volume>, <fpage>108243</fpage>. <pub-id pub-id-type="doi">10.1016/j.patcog.2021.108243</pub-id><pub-id pub-id-type="pmid">34456368</pub-id></citation></ref>
<ref id="B127">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>McDonald</surname> <given-names>C. J.</given-names></name> <name><surname>Overhage</surname> <given-names>J. M.</given-names></name> <name><surname>Barnes</surname> <given-names>M.</given-names></name> <name><surname>Schadow</surname> <given-names>G.</given-names></name> <name><surname>Blevins</surname> <given-names>L.</given-names></name> <name><surname>Dexter</surname> <given-names>P. R.</given-names></name> <etal/></person-group>. (<year>2005</year>). <article-title>The indiana network for patient care: a working local health information infrastructure</article-title>. <source>Health Aff</source>. <volume>24</volume>, <fpage>1214</fpage>&#x02013;<lpage>1220</lpage>. <pub-id pub-id-type="doi">10.1377/hlthaff.24.5.1214</pub-id><pub-id pub-id-type="pmid">16162565</pub-id></citation></ref>
<ref id="B128">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Minhwa</surname> <given-names>L.</given-names></name> <name><surname>Hyoeun</surname> <given-names>K.</given-names></name> <name><surname>Sangheum</surname> <given-names>H.</given-names></name> <name><surname>Seungwook</surname> <given-names>P.</given-names></name> <name><surname>Jungin</surname> <given-names>L.</given-names></name> <name><surname>Minhong</surname> <given-names>J.</given-names></name></person-group> (<year>2017</year>). <source>System for Automatic Diagnosis and Prognosis of Tuberculosis by Cad-Based Digital x-ray</source>. International Patent Application No. WO2017069596.</citation>
</ref>
<ref id="B129">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Mitchell</surname> <given-names>C.</given-names></name></person-group> (<year>2012</year>). <source>World Radiography Day: Two-Thirds of the World&#x00027;s Population has No Access to Diagnostic Imaging</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://tinyurl.com/2p952776">https://tinyurl.com/2p952776</ext-link></citation>
</ref>
<ref id="B130">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mittal</surname> <given-names>A.</given-names></name> <name><surname>Kumar</surname> <given-names>D.</given-names></name> <name><surname>Mittal</surname> <given-names>M.</given-names></name> <name><surname>Saba</surname> <given-names>T.</given-names></name> <name><surname>Abunadi</surname> <given-names>I.</given-names></name> <name><surname>Rehman</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Detecting pneumonia using convolutions and dynamic capsule routing for chest x-ray images</article-title>. <source>Sensors</source> <volume>20</volume>, <fpage>1068</fpage>. <pub-id pub-id-type="doi">10.3390/s20041068</pub-id><pub-id pub-id-type="pmid">32075339</pub-id></citation></ref>
<ref id="B131">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mittal</surname> <given-names>S.</given-names></name> <name><surname>Venugopal</surname> <given-names>V. K.</given-names></name> <name><surname>Agarwal</surname> <given-names>V. K.</given-names></name> <name><surname>Malhotra</surname> <given-names>M.</given-names></name> <name><surname>Chatha</surname> <given-names>J. S.</given-names></name> <name><surname>Kapur</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2022</year>). <article-title>A novel abnormality annotation database for covid-19 affected frontal lung x-rays</article-title>. <source>PLoS ONE</source> <volume>17</volume>, <fpage>e0271931</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0271931</pub-id><pub-id pub-id-type="pmid">36240175</pub-id></citation></ref>
<ref id="B132">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Mooney</surname> <given-names>P.</given-names></name></person-group> (<year>2018</year>). <source>Chest x-ray Images (Pneumonia)</source>. Kaggle, March.</citation>
</ref>
<ref id="B133">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Msonda</surname> <given-names>P.</given-names></name> <name><surname>Uymaz</surname> <given-names>S. A.</given-names></name> <name><surname>Karaa&#x0011F;a&#x000E7;</surname> <given-names>S. S.</given-names></name></person-group> (<year>2020</year>). <article-title>Spatial pyramid pooling in deep convolutional networks for automatic tuberculosis diagnosis</article-title>. <source>Traitement du Signal</source> <volume>2020</volume>, <fpage>370620</fpage>. <pub-id pub-id-type="doi">10.18280/ts.370620</pub-id></citation>
</ref>
<ref id="B134">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Munadi</surname> <given-names>K.</given-names></name> <name><surname>Muchtar</surname> <given-names>K.</given-names></name> <name><surname>Maulina</surname> <given-names>N.</given-names></name> <name><surname>Pradhan</surname> <given-names>B.</given-names></name></person-group> (<year>2020</year>). <article-title>Image enhancement for tuberculosis detection using deep learning</article-title>. <source>IEEE Access</source> <volume>8</volume>, <fpage>217897</fpage>&#x02013;<lpage>217907</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2020.3041867</pub-id></citation>
</ref>
<ref id="B135">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Murphy</surname> <given-names>K.</given-names></name></person-group> (<year>2019</year>). <article-title>How data will improve healthcare without adding staff or beds</article-title>. <source>Glob. Innov. Index.</source> <volume>2019</volume>, <fpage>129</fpage>&#x02013;<lpage>134</lpage>.</citation>
</ref>
<ref id="B136">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Murphy</surname> <given-names>K.</given-names></name> <name><surname>Smits</surname> <given-names>H.</given-names></name> <name><surname>Knoops</surname> <given-names>A. J.</given-names></name> <name><surname>Korst</surname> <given-names>M. B.</given-names></name> <name><surname>Samson</surname> <given-names>T.</given-names></name> <name><surname>Scholten</surname> <given-names>E. T.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>COVID-19 on chest radiographs: a multireader evaluation of an artificial intelligence system</article-title>. <source>Radiology</source> <volume>296</volume>, <fpage>E166</fpage>. <pub-id pub-id-type="doi">10.1148/radiol.2020201874</pub-id><pub-id pub-id-type="pmid">32384019</pub-id></citation></ref>
<ref id="B137">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Murray</surname> <given-names>V.</given-names></name> <name><surname>Pattichis</surname> <given-names>M. S.</given-names></name> <name><surname>Davis</surname> <given-names>H.</given-names></name> <name><surname>Barriga</surname> <given-names>E. S.</given-names></name> <name><surname>Soliz</surname> <given-names>P.</given-names></name></person-group> (<year>2009</year>). <article-title>&#x0201C;Multiscale am-fm analysis of pneumoconiosis x-ray images,&#x0201D;</article-title> in <source>2009 16th IEEE International Conference on Image Processing (ICIP)</source> (<publisher-loc>Cairo</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>4201</fpage>&#x02013;<lpage>4204</lpage>.</citation>
</ref>
<ref id="B138">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Narayanan</surname> <given-names>B. N.</given-names></name> <name><surname>Davuluru</surname> <given-names>V. S. P.</given-names></name> <name><surname>Hardie</surname> <given-names>R. C.</given-names></name></person-group> (<year>2020</year>). <article-title>Two-stage deep learning architecture for pneumonia detection and its diagnosis in chest radiographs</article-title>. <source>Med. Imaging</source> <volume>11318</volume>, <fpage>130</fpage>&#x02013;<lpage>139</lpage>. <pub-id pub-id-type="doi">10.1117/12.2547635</pub-id></citation>
</ref>
<ref id="B139">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Nguyen</surname> <given-names>H. Q.</given-names></name> <name><surname>Lam</surname> <given-names>K.</given-names></name> <name><surname>Le</surname> <given-names>L. T.</given-names></name> <name><surname>Pham</surname> <given-names>H. H.</given-names></name> <name><surname>Tran</surname> <given-names>D. Q.</given-names></name> <name><surname>Nguyen</surname> <given-names>D. B.</given-names></name> <etal/></person-group>. (<year>2020</year>). <source>Vindr-cxr: An Open Dataset of Chest X-rays with Radiologist&#x00027;s Annotations.</source> <pub-id pub-id-type="doi">10.48550/ARXIV.2012.15029</pub-id><pub-id pub-id-type="pmid">35858929</pub-id></citation></ref>
<ref id="B140">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oh</surname> <given-names>Y.</given-names></name> <name><surname>Park</surname> <given-names>S.</given-names></name> <name><surname>Ye</surname> <given-names>J. C.</given-names></name></person-group> (<year>2020</year>). <article-title>Deep learning covid-19 features on cxr using limited training data sets</article-title>. <source>IEEE Trans. Med. Imaging</source> <volume>39</volume>, <fpage>2688</fpage>&#x02013;<lpage>2700</lpage>. <pub-id pub-id-type="doi">10.1109/TMI.2020.2993291</pub-id><pub-id pub-id-type="pmid">32396075</pub-id></citation></ref>
<ref id="B141">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ojala</surname> <given-names>T.</given-names></name> <name><surname>Pietik&#x000E4;inen</surname> <given-names>M.</given-names></name> <name><surname>Harwood</surname> <given-names>D.</given-names></name></person-group> (<year>1996</year>). <article-title>A comparative study of texture measures with classification based on featured distributions</article-title>. <source>Pattern Recognit</source>. <volume>29</volume>, <fpage>51</fpage>&#x02013;<lpage>59</lpage>. <pub-id pub-id-type="doi">10.1016/0031-3203(95)00067-4</pub-id></citation>
</ref>
<ref id="B142">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Okumura</surname> <given-names>E.</given-names></name> <name><surname>Kawashita</surname> <given-names>I.</given-names></name> <name><surname>Ishida</surname> <given-names>T.</given-names></name></person-group> (<year>2011</year>). <article-title>Computerized analysis of pneumoconiosis in digital chest radiography: effect of artificial neural network trained with power spectra</article-title>. <source>J. Digit. Imaging</source> <volume>24</volume>, <fpage>1126</fpage>&#x02013;<lpage>1132</lpage>. <pub-id pub-id-type="doi">10.1007/s10278-010-9357-7</pub-id><pub-id pub-id-type="pmid">21153856</pub-id></citation></ref>
<ref id="B143">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Oloko-Oba</surname> <given-names>M.</given-names></name> <name><surname>Viriri</surname> <given-names>S.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;Tuberculosis abnormality detection in chest x-rays: A deep learning approach,&#x0201D;</article-title> in <source>Computer Vision and Graphics: International Conference, ICCVG 2020, Warsaw, Poland, September 14&#x02013;16, 2020, Proceedings</source> (<publisher-loc>Berlin, Heidelberg</publisher-loc>: <publisher-name>Springer-Verlag</publisher-name>), <fpage>121</fpage>&#x02013;<lpage>132</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-59006-2_11</pub-id></citation>
</ref>
<ref id="B144">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Opelt</surname> <given-names>A.</given-names></name> <name><surname>Pinz</surname> <given-names>A.</given-names></name> <name><surname>Zisserman</surname> <given-names>A.</given-names></name></person-group> (<year>2006</year>). <article-title>&#x0201C;Incremental learning of object detectors using a visual shape alphabet,&#x0201D;</article-title> in <source>2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR&#x00027;06), Vol. 1</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>3</fpage>&#x02013;<lpage>10</lpage>.</citation>
</ref>
<ref id="B145">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Owais</surname> <given-names>M.</given-names></name> <name><surname>Arsalan</surname> <given-names>M.</given-names></name> <name><surname>Mahmood</surname> <given-names>T.</given-names></name> <name><surname>Kim</surname> <given-names>Y. H.</given-names></name> <name><surname>Park</surname> <given-names>K. R.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Comprehensive computer-aided decision support framework to diagnose tuberculosis from chest x-ray images: data mining study</article-title>. <source>JMIR Med. Inform</source>. <volume>8</volume>, <fpage>e21790</fpage>. <pub-id pub-id-type="doi">10.2196/21790</pub-id><pub-id pub-id-type="pmid">33284119</pub-id></citation></ref>
<ref id="B146">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Panwar</surname> <given-names>H.</given-names></name> <name><surname>Gupta</surname> <given-names>P.</given-names></name> <name><surname>Siddiqui</surname> <given-names>M. K.</given-names></name> <name><surname>Morales-Menendez</surname> <given-names>R.</given-names></name> <name><surname>Bhardwaj</surname> <given-names>P.</given-names></name> <name><surname>Singh</surname> <given-names>V.</given-names></name></person-group> (<year>2020a</year>). <article-title>A deep learning and grad-cam based color visualization approach for fast detection of COVID-19 cases using chest x-ray and ct-scan images</article-title>. <source>Chaos Solitons Fractals</source> <volume>140</volume>, <fpage>110190</fpage>. <pub-id pub-id-type="doi">10.1016/j.chaos.2020.110190</pub-id><pub-id pub-id-type="pmid">32836918</pub-id></citation></ref>
<ref id="B147">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Panwar</surname> <given-names>H.</given-names></name> <name><surname>Gupta</surname> <given-names>P.</given-names></name> <name><surname>Siddiqui</surname> <given-names>M. K.</given-names></name> <name><surname>Morales-Menendez</surname> <given-names>R.</given-names></name> <name><surname>Singh</surname> <given-names>V.</given-names></name></person-group> (<year>2020b</year>). <article-title>Application of deep learning for fast detection of COVID-19 in x-rays using ncovnet</article-title>. <source>Chaos Solitons Fractals</source> <volume>138</volume>, <fpage>109944</fpage>. <pub-id pub-id-type="doi">10.1016/j.chaos.2020.109944</pub-id><pub-id pub-id-type="pmid">32536759</pub-id></citation></ref>
<ref id="B148">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Papineni</surname> <given-names>K.</given-names></name> <name><surname>Roukos</surname> <given-names>S.</given-names></name> <name><surname>Ward</surname> <given-names>T.</given-names></name> <name><surname>Zhu</surname> <given-names>W.-J.</given-names></name></person-group> (<year>2002</year>). <article-title>&#x0201C;Bleu: a method for automatic evaluation of machine translation,&#x0201D;</article-title> in <source>Proceedings of the 40th annual meeting of the Association for Computational Linguistics</source> (<publisher-loc>Philadelphia, PA</publisher-loc>: <publisher-name>Association for Computational Linguistics</publisher-name>), <fpage>311</fpage>&#x02013;<lpage>318</lpage>. <pub-id pub-id-type="doi">10.3115/1073083.1073135</pub-id></citation>
</ref>
<ref id="B149">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pasa</surname> <given-names>F.</given-names></name> <name><surname>Golkov</surname> <given-names>V.</given-names></name> <name><surname>Pfeiffer</surname> <given-names>F.</given-names></name> <name><surname>Cremers</surname> <given-names>D.</given-names></name> <name><surname>Pfeiffer</surname> <given-names>D.</given-names></name></person-group> (<year>2019</year>). <article-title>Efficient deep network architectures for fast chest x-ray tuberculosis screening and visualization</article-title>. <source>Sci. Rep</source>. <volume>9</volume>, <fpage>1</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1038/s41598-019-42557-4</pub-id><pub-id pub-id-type="pmid">31000728</pub-id></citation></ref>
<ref id="B150">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pereira</surname> <given-names>R. M.</given-names></name> <name><surname>Bertolini</surname> <given-names>D.</given-names></name> <name><surname>Teixeira</surname> <given-names>L. O.</given-names></name> <name><surname>Silla</surname> <given-names>C. N.</given-names></name> <name><surname>Costa</surname> <given-names>Y. M. G.</given-names></name></person-group> (<year>2020</year>). <article-title>COVID-19 identification in chest x-ray images on flat and hierarchical classification scenarios</article-title>. <source>Comput. Methods Programs Biomed</source>. <volume>194</volume>, <fpage>105532</fpage>. <pub-id pub-id-type="doi">10.1016/j.cmpb.2020.105532</pub-id><pub-id pub-id-type="pmid">32446037</pub-id></citation></ref>
<ref id="B151">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Pham</surname> <given-names>T. D.</given-names></name></person-group> (<year>2021</year>). <article-title>Classification of COVID-19 chest x-rays with deep learning: new models or fine tuning?</article-title> <source>Health Inf. Sci. Syst</source>. <volume>9</volume>, <fpage>1</fpage>&#x02013;<lpage>11</lpage>. <pub-id pub-id-type="doi">10.1007/s13755-020-00135-3</pub-id><pub-id pub-id-type="pmid">33235710</pub-id></citation></ref>
<ref id="B152">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Punn</surname> <given-names>N. S.</given-names></name> <name><surname>Agarwal</surname> <given-names>S.</given-names></name></person-group> (<year>2021</year>). <article-title>Automated diagnosis of COVID-19 with limited posteroanterior chest x-ray images using fine-tuned deep neural networks</article-title>. <source>Appl. Intell</source>. <volume>51</volume>, <fpage>2689</fpage>&#x02013;<lpage>2702</lpage>. <pub-id pub-id-type="doi">10.1007/s10489-020-01900-3</pub-id><pub-id pub-id-type="pmid">34764554</pub-id></citation></ref>
<ref id="B153">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Putha</surname> <given-names>P.</given-names></name> <name><surname>Tadepalli</surname> <given-names>M.</given-names></name> <name><surname>Reddy</surname> <given-names>B.</given-names></name> <name><surname>Raj</surname> <given-names>T.</given-names></name> <name><surname>Jagirdar</surname> <given-names>A.</given-names></name> <name><surname>Pooja Rao</surname> <given-names>A. P. W.</given-names></name></person-group> (<year>2022</year>). <source>Predicting Lung Cancer Risk</source>. U.S. Patent No US11276173.</citation>
</ref>
<ref id="B154">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Putha</surname> <given-names>P.</given-names></name> <name><surname>Tadepalli</surname> <given-names>M.</given-names></name> <name><surname>Reddy</surname> <given-names>B.</given-names></name> <name><surname>Raj</surname> <given-names>T.</given-names></name> <name><surname>Jagirdar</surname> <given-names>A.</given-names></name> <name><surname>Rao</surname> <given-names>P.</given-names></name> <etal/></person-group>. (<year>2021</year>). <source>Systems and Methods for Detection of Infectious Respiratory Diseases</source>. U.S. Patent No US20210327055.</citation>
</ref>
<ref id="B155">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Qiang</surname> <given-names>D.</given-names></name> <name><surname>Zebin</surname> <given-names>G.</given-names></name> <name><surname>Yuchen</surname> <given-names>G.</given-names></name> <name><surname>Nie</surname> <given-names>F.</given-names></name> <name><surname>Chao</surname> <given-names>T.</given-names></name></person-group> (<year>2020</year>). <source>Lung Disease Classification Method and Device, and Equipment</source>. Chinese Patent No CN111667469.</citation>
</ref>
<ref id="B156">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Radford</surname> <given-names>A.</given-names></name> <name><surname>Metz</surname> <given-names>L.</given-names></name> <name><surname>Chintala</surname> <given-names>S.</given-names></name></person-group> (<year>2015</year>). <article-title>Unsupervised representation learning with deep convolutional generative adversarial networks</article-title>. <source>arXiv preprint</source> arXiv:1511.06434. <pub-id pub-id-type="doi">10.48550/arXiv.1511.06434</pub-id></citation>
</ref>
<ref id="B157">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rahman</surname> <given-names>M.</given-names></name> <name><surname>Cao</surname> <given-names>Y.</given-names></name> <name><surname>Sun</surname> <given-names>X.</given-names></name> <name><surname>Li</surname> <given-names>B.</given-names></name> <name><surname>Hao</surname> <given-names>Y.</given-names></name></person-group> (<year>2021</year>). <article-title>Deep pre-trained networks as a feature extractor with xgboost to detect tuberculosis from chest x-ray</article-title>. <source>Comput. Electr. Eng</source>. <volume>93</volume>, <fpage>107252</fpage>. <pub-id pub-id-type="doi">10.1016/j.compeleceng.2021.107252</pub-id></citation>
</ref>
<ref id="B158">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Rahman</surname> <given-names>T.</given-names></name> <name><surname>Chowdhury</surname> <given-names>M.</given-names></name> <name><surname>Khandakar</surname> <given-names>A.</given-names></name></person-group> (<year>2020a</year>). <source>COVID-19 Radiography Database. Kaggle</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.kaggle.com/tawsifurrahman/covid19-radiography-database">https://www.kaggle.com/tawsifurrahman/covid19-radiography-database</ext-link></citation>
</ref>
<ref id="B159">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rahman</surname> <given-names>T.</given-names></name> <name><surname>Khandakar</surname> <given-names>A.</given-names></name> <name><surname>Kadir</surname> <given-names>M. A.</given-names></name> <name><surname>Islam</surname> <given-names>K. R.</given-names></name> <name><surname>Islam</surname> <given-names>K. F.</given-names></name> <name><surname>Mazhar</surname> <given-names>R.</given-names></name> <etal/></person-group>. (<year>2020b</year>). <article-title>Reliable tuberculosis detection using chest x-ray with deep learning, segmentation and visualization</article-title>. <source>IEEE Access</source> <volume>8</volume>, <fpage>191586</fpage>&#x02013;<lpage>191601</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2020.3031384</pub-id></citation>
</ref>
<ref id="B160">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rahman</surname> <given-names>T.</given-names></name> <name><surname>Khandakar</surname> <given-names>A.</given-names></name> <name><surname>Qiblawey</surname> <given-names>Y.</given-names></name> <name><surname>Tahir</surname> <given-names>A.</given-names></name> <name><surname>Kiranyaz</surname> <given-names>S.</given-names></name> <name><surname>Kashem</surname> <given-names>S. B. A.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Exploring the effect of image enhancement techniques on COVID-19 detection using chest x-ray images</article-title>. <source>Comput. Biol. Med</source>. <volume>132</volume>, <fpage>104319</fpage>. <pub-id pub-id-type="doi">10.1016/j.compbiomed.2021.104319</pub-id><pub-id pub-id-type="pmid">33799220</pub-id></citation></ref>
<ref id="B161">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rajagopal</surname> <given-names>R.</given-names></name></person-group> (<year>2021</year>). <article-title>Comparative analysis of COVID-19 x-ray images classification using convolutional neural network, transfer learning, and machine learning classifiers using deep features</article-title>. <source>Pattern Recognit. Image Anal</source>. <volume>31</volume>, <fpage>313</fpage>&#x02013;<lpage>322</lpage>. <pub-id pub-id-type="doi">10.1134/S1054661821020140</pub-id></citation>
</ref>
<ref id="B162">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rajaraman</surname> <given-names>S.</given-names></name> <name><surname>Antani</surname> <given-names>S. K.</given-names></name></person-group> (<year>2020</year>). <article-title>Modality-specific deep learning model ensembles toward improving TB detection in chest radiographs</article-title>. <source>IEEE Access</source> <volume>8</volume>, <fpage>27318</fpage>&#x02013;<lpage>27326</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2020.2971257</pub-id><pub-id pub-id-type="pmid">32257736</pub-id></citation></ref>
<ref id="B163">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rajaraman</surname> <given-names>S.</given-names></name> <name><surname>Candemir</surname> <given-names>S.</given-names></name> <name><surname>Thoma</surname> <given-names>G.</given-names></name> <name><surname>Antani</surname> <given-names>S.</given-names></name></person-group> (<year>2019</year>). <article-title>Visualizing and explaining deep learning predictions for pneumonia detection in pediatric chest radiographs</article-title>. <source>Med. Imaging</source> <volume>10950</volume>, <fpage>200</fpage>&#x02013;<lpage>211</lpage>. <pub-id pub-id-type="doi">10.1117/12.2512752</pub-id></citation>
</ref>
<ref id="B164">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rajpurkar</surname> <given-names>P.</given-names></name> <name><surname>Irvin</surname> <given-names>J.</given-names></name> <name><surname>Zhu</surname> <given-names>K.</given-names></name> <name><surname>Yang</surname> <given-names>B.</given-names></name> <name><surname>Mehta</surname> <given-names>H.</given-names></name> <name><surname>Duan</surname> <given-names>T.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>Chexnet: radiologist-level pneumonia detection on chest x-rays with deep learning</article-title>. <source>arXiv preprint</source> arXiv:1711.05225. <pub-id pub-id-type="doi">10.48550/arXiv.1711.05225</pub-id></citation>
</ref>
<ref id="B165">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rajpurkar</surname> <given-names>P.</given-names></name> <name><surname>O&#x00027;Connell</surname> <given-names>C.</given-names></name> <name><surname>Schechter</surname> <given-names>A.</given-names></name> <name><surname>Asnani</surname> <given-names>N.</given-names></name> <name><surname>Li</surname> <given-names>J.</given-names></name> <name><surname>Kiani</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Chexaid: deep learning assistance for physician diagnosis of tuberculosis using chest x-rays in patients with hiv</article-title>. <source>NPJ Digital Med</source>. <volume>3</volume>, <fpage>1</fpage>&#x02013;<lpage>8</lpage>. <pub-id pub-id-type="doi">10.1038/s41746-020-00322-2</pub-id><pub-id pub-id-type="pmid">32964138</pub-id></citation></ref>
<ref id="B166">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Redmon</surname> <given-names>J.</given-names></name> <name><surname>Farhadi</surname> <given-names>A.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;Yolo9000: better, faster, stronger,&#x0201D;</article-title> in <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Honolulu, HI</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>7263</fpage>&#x02013;<lpage>7271</lpage>.</citation>
</ref>
<ref id="B167">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Reis</surname> <given-names>E. P.</given-names></name></person-group> (<year>2022</year>). <source>Brax, a Brazilian Labeled Chest x-ray Dataset</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://physionet.org/content/brax/1.0.0/">https://physionet.org/content/brax/1.0.0/</ext-link><pub-id pub-id-type="pmid">35948551</pub-id></citation></ref>
<ref id="B168">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ren</surname> <given-names>S.</given-names></name> <name><surname>He</surname> <given-names>K.</given-names></name> <name><surname>Girshick</surname> <given-names>R.</given-names></name> <name><surname>Sun</surname> <given-names>J.</given-names></name></person-group> (<year>2015</year>). <article-title>&#x0201C;Faster r-cnn: towards real-time object detection with region proposal networks,&#x0201D;</article-title> in <source>Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015</source> (<publisher-loc>Montreal, QC</publisher-loc>), <fpage>91</fpage>&#x02013;<lpage>99</lpage>. <pub-id pub-id-type="pmid">27295650</pub-id></citation></ref>
<ref id="B169">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ronneberger</surname> <given-names>O.</given-names></name> <name><surname>Fischer</surname> <given-names>P.</given-names></name> <name><surname>Brox</surname> <given-names>T.</given-names></name></person-group> (<year>2015</year>). <article-title>&#x0201C;U-net: convolutional networks for biomedical image segmentation,&#x0201D;</article-title> in <source>Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015 &#x02013; 18th International Conference, Munich, Germany, October 5&#x02013;9, 2015, Proceedings, Part III</source> (<publisher-name>Springer</publisher-name>), <fpage>234</fpage>&#x02013;<lpage>241</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-319-24574-4_28</pub-id></citation>
</ref>
<ref id="B170">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Rosenthal</surname> <given-names>A.</given-names></name> <name><surname>Gabrielian</surname> <given-names>A.</given-names></name> <name><surname>Engle</surname> <given-names>E.</given-names></name> <name><surname>Hurt</surname> <given-names>D. E.</given-names></name> <name><surname>Alexandru</surname> <given-names>S.</given-names></name> <name><surname>Crudu</surname> <given-names>V.</given-names></name> <etal/></person-group>. (<year>2017</year>). <article-title>The TB portals: an open-access, web-based platform for global drug-resistant-tuberculosis data sharing and analysis</article-title>. <source>J. Clin. Microbiol</source>. <volume>55</volume>, <fpage>3267</fpage>&#x02013;<lpage>3282</lpage>. <pub-id pub-id-type="doi">10.1128/JCM.01013-17</pub-id><pub-id pub-id-type="pmid">28904183</pub-id></citation></ref>
<ref id="B171">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Russell</surname> <given-names>C.</given-names></name> <name><surname>Kusner</surname> <given-names>M. J.</given-names></name> <name><surname>Loftus</surname> <given-names>J.</given-names></name> <name><surname>Silva</surname> <given-names>R.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;When worlds collide: integrating different counterfactual assumptions in fairness,&#x0201D;</article-title> in <source>Advances in Neural Information Processing Systems, Vol. 30</source>.</citation>
</ref>
<ref id="B172">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ryoo</surname> <given-names>S.</given-names></name> <name><surname>Kim</surname> <given-names>H. J.</given-names></name></person-group> (<year>2014</year>). <article-title>Activities of the Korean Institute of Tuberculosis</article-title>. <source>Osong Public Health Res. Perspect</source>. <volume>5</volume>, <fpage>S43</fpage>&#x02013;<lpage>S49</lpage>. <pub-id pub-id-type="doi">10.1016/j.phrp.2014.10.007</pub-id><pub-id pub-id-type="pmid">25861580</pub-id></citation></ref>
<ref id="B173">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sabour</surname> <given-names>S.</given-names></name> <name><surname>Frosst</surname> <given-names>N.</given-names></name> <name><surname>Hinton</surname> <given-names>G. E.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;Dynamic routing between capsules,&#x0201D;</article-title> in <source>Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017</source> (<publisher-loc>Long Beach, CA</publisher-loc>), <fpage>3856</fpage>&#x02013;<lpage>3866</lpage>.</citation>
</ref>
<ref id="B174">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Saha</surname> <given-names>P.</given-names></name> <name><surname>Sadi</surname> <given-names>M. S.</given-names></name> <name><surname>Islam</surname> <given-names>M. M.</given-names></name></person-group> (<year>2021</year>). <article-title>Emcnet: automated COVID-19 diagnosis from x-ray images using convolutional neural network and ensemble of machine learning classifiers</article-title>. <source>Inform. Med. Unlocked</source> <volume>22</volume>, <fpage>100505</fpage>. <pub-id pub-id-type="doi">10.1016/j.imu.2020.100505</pub-id><pub-id pub-id-type="pmid">33363252</pub-id></citation></ref>
<ref id="B175">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sahadevan</surname> <given-names>V.</given-names></name></person-group> (<year>2002</year>). <source>High Resolution Digitized Image Analysis of Chest x-rays for Diagnosis of Difficult to Visualize Evolving Very Early Stage Lung Cancer, Pneumoconiosis and Pulmonary Diseases</source>. U.S. Patent No US20020094119.</citation>
</ref>
<ref id="B176">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sahlol</surname> <given-names>A. T.</given-names></name> <name><surname>Abd Elaziz</surname> <given-names>M.</given-names></name> <name><surname>Tariq Jamal</surname> <given-names>A.</given-names></name> <name><surname>Dama&#x00161;evi&#x0010D;ius</surname> <given-names>R.</given-names></name> <name><surname>Farouk Hassan</surname> <given-names>O.</given-names></name></person-group> (<year>2020</year>). <article-title>A novel method for detection of tuberculosis in chest radiographs using artificial ecosystem-based optimisation of deep neural network features</article-title>. <source>Symmetry</source> <volume>12</volume>, <fpage>1146</fpage>. <pub-id pub-id-type="doi">10.3390/sym12071146</pub-id></citation>
</ref>
<ref id="B177">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Santosh</surname> <given-names>K.</given-names></name> <name><surname>Vajda</surname> <given-names>S.</given-names></name> <name><surname>Antani</surname> <given-names>S.</given-names></name> <name><surname>Thoma</surname> <given-names>G. R.</given-names></name></person-group> (<year>2016</year>). <article-title>Edge map analysis in chest x-rays for automatic pulmonary abnormality screening</article-title>. <source>Int. J. Comput. Assist. Radiol. Surg</source>. <volume>11</volume>, <fpage>1637</fpage>&#x02013;<lpage>1646</lpage>. <pub-id pub-id-type="doi">10.1007/s11548-016-1359-6</pub-id><pub-id pub-id-type="pmid">26995600</pub-id></citation></ref>
<ref id="B178">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Schultheiss</surname> <given-names>M.</given-names></name> <name><surname>Schober</surname> <given-names>S. A.</given-names></name> <name><surname>Lodde</surname> <given-names>M.</given-names></name> <name><surname>Bodden</surname> <given-names>J.</given-names></name> <name><surname>Aichele</surname> <given-names>J.</given-names></name> <name><surname>M&#x000FC;ller-Leisse</surname> <given-names>C.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>A robust convolutional neural network for lung nodule detection in the presence of foreign bodies</article-title>. <source>Sci. Rep</source>. <volume>10</volume>, <fpage>1</fpage>&#x02013;<lpage>9</lpage>. <pub-id pub-id-type="doi">10.1038/s41598-020-69789-z</pub-id><pub-id pub-id-type="pmid">32737389</pub-id></citation></ref>
<ref id="B179">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Selvaraju</surname> <given-names>R. R.</given-names></name> <name><surname>Cogswell</surname> <given-names>M.</given-names></name> <name><surname>Das</surname> <given-names>A.</given-names></name> <name><surname>Vedantam</surname> <given-names>R.</given-names></name> <name><surname>Parikh</surname> <given-names>D.</given-names></name> <name><surname>Batra</surname> <given-names>D.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;Grad-cam: visual explanations from deep networks via gradient-based localization,&#x0201D;</article-title> in <source>Proceedings of the IEEE International Conference on Computer Vision</source> (<publisher-loc>Venice</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>618</fpage>&#x02013;<lpage>626</lpage>.</citation>
</ref>
<ref id="B180">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Settles</surname> <given-names>M.</given-names></name></person-group> (<year>2005</year>). <source>An Introduction to Particle Swarm Optimization</source>. <publisher-loc>Idaho</publisher-loc>: <publisher-name>Department of Computer Science, University of Idaho</publisher-name>.</citation>
</ref>
<ref id="B181">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Seyyed-Kalantari</surname> <given-names>L.</given-names></name> <name><surname>Liu</surname> <given-names>G.</given-names></name> <name><surname>McDermott</surname> <given-names>M.</given-names></name> <name><surname>Chen</surname> <given-names>I. Y.</given-names></name> <name><surname>Ghassemi</surname> <given-names>M.</given-names></name></person-group> (<year>2021</year>). <article-title>&#x0201C;Chexclusion: fairness gaps in deep chest x-ray classifiers,&#x0201D;</article-title> in <source>BIOCOMPUTING 2021: Proceedings of the Pacific Symposium</source> (<publisher-loc>Kohala Coast, HI</publisher-loc>: <publisher-name>World Scientific</publisher-name>), <fpage>232</fpage>&#x02013;<lpage>243</lpage>. <pub-id pub-id-type="doi">10.1142/9789811232701_0022</pub-id><pub-id pub-id-type="pmid">33691020</pub-id></citation></ref>
<ref id="B182">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shankar</surname> <given-names>S.</given-names></name> <name><surname>Devi</surname> <given-names>M. R. J</given-names></name> <name><surname>Ananthi</surname> <given-names>S.</given-names></name> <name><surname>Lokes</surname> <given-names>M. R. S.</given-names></name> <name><surname>K</surname> <given-names>V.</given-names></name> <etal/></person-group>. (<year>2022</year>). <source>AI in Imaging Data Acquisition, Segmentation and Diagnosis for COVID-19</source>. Indian Patent No IN202241024227.</citation>
</ref>
<ref id="B183">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shaoliang</surname> <given-names>P.</given-names></name> <name><surname>Xiongjun</surname> <given-names>Z.</given-names></name> <name><surname>Xiaoqi</surname> <given-names>W.</given-names></name> <name><surname>Deshan</surname> <given-names>Z.</given-names></name> <name><surname>Li</surname> <given-names>L.</given-names></name> <name><surname>Yingjie</surname> <given-names>J. C.</given-names></name></person-group> (<year>2020</year>). <source>Multidirectional x-ray Chest Radiography Pneumonia Diagnosis Method Based on Deep Learning</source>. Chinese Patent No CN111951246B.</citation>
</ref>
<ref id="B184">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sherrier</surname> <given-names>R. H.</given-names></name> <name><surname>Johnson</surname> <given-names>G.</given-names></name></person-group> (<year>1987</year>). <article-title>Regionally adaptive histogram equalization of the chest</article-title>. <source>IEEE Trans. Med. Imaging</source> <volume>6</volume>, <fpage>1</fpage>&#x02013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1109/TMI.1987.4307791</pub-id><pub-id pub-id-type="pmid">18230420</pub-id></citation></ref>
<ref id="B185">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shi</surname> <given-names>W.</given-names></name> <name><surname>Tong</surname> <given-names>L.</given-names></name> <name><surname>Zhu</surname> <given-names>Y.</given-names></name> <name><surname>Wang</surname> <given-names>M. D.</given-names></name></person-group> (<year>2021</year>). <article-title>COVID-19 automatic diagnosis with radiographic imaging: explainable attention transfer deep neural networks</article-title>. <source>IEEE J. Biomed. Health Inform</source>. <volume>25</volume>, <fpage>2376</fpage>&#x02013;<lpage>2387</lpage>. <pub-id pub-id-type="doi">10.1109/JBHI.2021.3074893</pub-id><pub-id pub-id-type="pmid">33882010</pub-id></citation></ref>
<ref id="B186">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Shiraishi</surname> <given-names>J.</given-names></name> <name><surname>Katsuragawa</surname> <given-names>S.</given-names></name> <name><surname>Ikezoe</surname> <given-names>J.</given-names></name> <name><surname>Matsumoto</surname> <given-names>T.</given-names></name> <name><surname>Kobayashi</surname> <given-names>T.</given-names></name> <name><surname>Komatsu</surname> <given-names>K.-i.</given-names></name> <etal/></person-group>. (<year>2000</year>). <article-title>Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists&#x00027; detection of pulmonary nodules</article-title>. <source>Am. J. Roentgenol</source>. <volume>174</volume>, <fpage>71</fpage>&#x02013;<lpage>74</lpage>. <pub-id pub-id-type="doi">10.2214/ajr.174.1.1740071</pub-id><pub-id pub-id-type="pmid">10628457</pub-id></citation></ref>
<ref id="B187">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sim</surname> <given-names>Y.</given-names></name> <name><surname>Chung</surname> <given-names>M. J.</given-names></name> <name><surname>Kotter</surname> <given-names>E.</given-names></name> <name><surname>Yune</surname> <given-names>S.</given-names></name> <name><surname>Kim</surname> <given-names>M.</given-names></name> <name><surname>Do</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Deep convolutional neural network-based software improves radiologist detection of malignant lung nodules on chest radiographs</article-title>. <source>Radiology</source> <volume>294</volume>, <fpage>199</fpage>&#x02013;<lpage>209</lpage>. <pub-id pub-id-type="doi">10.1148/radiol.2019182465</pub-id><pub-id pub-id-type="pmid">31714194</pub-id></citation></ref>
<ref id="B188">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Singh</surname> <given-names>A.</given-names></name> <name><surname>Lall</surname> <given-names>B.</given-names></name> <name><surname>Panigrahi</surname> <given-names>B. K.</given-names></name> <name><surname>Agrawal</surname> <given-names>A.</given-names></name> <name><surname>Agrawal</surname> <given-names>A.</given-names></name> <name><surname>Thangakunam</surname> <given-names>B.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Deep lf-net: semantic lung segmentation from Indian chest radiographs including severely unhealthy images</article-title>. <source>Biomed. Signal Process. Control</source> <volume>68</volume>, <fpage>102666</fpage>. <pub-id pub-id-type="doi">10.1016/j.bspc.2021.102666</pub-id></citation>
</ref>
<ref id="B189">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Singh</surname> <given-names>R.</given-names></name> <name><surname>Kalra</surname> <given-names>M. K.</given-names></name> <name><surname>Nitiwarangkul</surname> <given-names>C.</given-names></name> <name><surname>Patti</surname> <given-names>J. A.</given-names></name> <name><surname>Homayounieh</surname> <given-names>F.</given-names></name> <name><surname>Padole</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>Deep learning in chest radiography: detection of findings and presence of change</article-title>. <source>PLoS ONE</source> <volume>13</volume>, <fpage>e0204155</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pone.0204155</pub-id><pub-id pub-id-type="pmid">30286097</pub-id></citation></ref>
<ref id="B190">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Smilkov</surname> <given-names>D.</given-names></name> <name><surname>Thorat</surname> <given-names>N.</given-names></name> <name><surname>Kim</surname> <given-names>B.</given-names></name> <name><surname>Vi&#x000E9;gas</surname> <given-names>F.</given-names></name> <name><surname>Wattenberg</surname> <given-names>M.</given-names></name></person-group> (<year>2017</year>). <article-title>Smoothgrad: removing noise by adding noise</article-title>. <source>arXiv preprint</source> arXiv:1706.03825. <pub-id pub-id-type="doi">10.48550/arXiv.1706.03825</pub-id></citation>
</ref>
<ref id="B191">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Soleymanpour</surname> <given-names>E.</given-names></name> <name><surname>Pourreza</surname> <given-names>H. R.</given-names></name> <name><surname>Yazdi</surname> <given-names>M. S.</given-names></name> <etal/></person-group>. (<year>2011</year>). <article-title>Fully automatic lung segmentation and rib suppression methods to improve nodule detection in chest radiographs</article-title>. <source>J. Med. Signals Sens</source>. <volume>1</volume>, <fpage>191</fpage>. <pub-id pub-id-type="doi">10.4103/2228-7477.95412</pub-id><pub-id pub-id-type="pmid">22606675</pub-id></citation></ref>
<ref id="B192">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sousa</surname> <given-names>R. T.</given-names></name> <name><surname>Marques</surname> <given-names>O.</given-names></name> <name><surname>Curado</surname> <given-names>G. T.</given-names></name> <name><surname>Costa</surname> <given-names>R. M. D.</given-names></name> <name><surname>Soares</surname> <given-names>A. S.</given-names></name> <etal/></person-group>. (<year>2014</year>). <article-title>&#x0201C;Evaluation of classifiers to a childhood pneumonia computer-aided diagnosis system,&#x0201D;</article-title> in <source>2014 IEEE 27th International Symposium on Computer-Based Medical Systems</source> (<publisher-loc>New York, NY</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>477</fpage>&#x02013;<lpage>478</lpage>.</citation>
</ref>
<ref id="B193">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Stein</surname> <given-names>A.</given-names></name> <name><surname>Wu</surname> <given-names>C.</given-names></name> <name><surname>Carr</surname> <given-names>C.</given-names></name> <name><surname>Shih</surname> <given-names>G.</given-names></name> <name><surname>Dulkowski</surname> <given-names>J.</given-names></name> <name><surname>Kalpathy</surname></name> <etal/></person-group>. (<year>2018</year>). <source>RSNA Pneumonia Detection Challenge</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://kaggle.com/competitions/rsna-pneumonia-detection-challenge">https://kaggle.com/competitions/rsna-pneumonia-detection-challenge</ext-link></citation>
</ref>
<ref id="B194">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Subramanian</surname> <given-names>V.</given-names></name> <name><surname>Wang</surname> <given-names>H.</given-names></name> <name><surname>Wu</surname> <given-names>J. T.</given-names></name> <name><surname>Wong</surname> <given-names>K. C.</given-names></name> <name><surname>Sharma</surname> <given-names>A.</given-names></name> <name><surname>Syeda-Mahmood</surname> <given-names>T.</given-names></name></person-group> (<year>2019</year>). <article-title>&#x0201C;Automated detection and type classification of central venous catheters in chest x-rays,&#x0201D;</article-title> in <source>Medical Image Computing and Computer Assisted Intervention - MICCAI 2019&#x02013;22nd International Conference, Shenzhen, China, October 13&#x02013;17, 2019, Proceedings, Part VI</source> (<publisher-name>Springer</publisher-name>), <fpage>522</fpage>&#x02013;<lpage>530</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-32226-7_58</pub-id></citation>
</ref>
<ref id="B195">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Sundaram</surname> <given-names>S.</given-names></name> <name><surname>Hulkund</surname> <given-names>N.</given-names></name></person-group> (<year>2021</year>). <article-title>Gan-based data augmentation for chest x-ray classification</article-title>. <source>arXiv preprint</source> arXiv:2107.02970. <pub-id pub-id-type="doi">10.48550/arXiv.2107.02970</pub-id></citation>
</ref>
<ref id="B196">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Syeda-Mahmood</surname> <given-names>T.</given-names></name> <name><surname>Wong</surname> <given-names>K. C.</given-names></name> <name><surname>Gur</surname> <given-names>Y.</given-names></name> <name><surname>Wu</surname> <given-names>J. T.</given-names></name> <name><surname>Jadhav</surname> <given-names>A.</given-names></name> <name><surname>Kashyap</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>&#x0201C;Chest x-ray report generation through fine-grained label learning,&#x0201D;</article-title> in <source>Medical Image Computing and Computer Assisted Intervention - MICCAI 2020&#x02014;23rd International Conference</source> (<publisher-loc>Lima</publisher-loc>: <publisher-name>Springer</publisher-name>), <fpage>561</fpage>&#x02013;<lpage>571</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-59713-9_54</pub-id></citation>
</ref>
<ref id="B197">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tabik</surname> <given-names>S.</given-names></name> <name><surname>G&#x000F3;mez-R&#x000ED;os</surname> <given-names>A.</given-names></name> <name><surname>Mart&#x000ED;n-Rodr&#x000ED;guez</surname> <given-names>J. L.</given-names></name> <name><surname>Sevillano-Garc&#x000ED;a</surname> <given-names>I.</given-names></name> <name><surname>Rey-Area</surname> <given-names>M.</given-names></name> <name><surname>Charte</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Covidgr dataset and covid-sdnet methodology for predicting COVID-19 based on chest x-ray images</article-title>. <source>IEEE J. Biomed. Health Inform</source>. <volume>24</volume>, <fpage>3595</fpage>&#x02013;<lpage>3605</lpage>. <pub-id pub-id-type="doi">10.1109/JBHI.2020.3037127</pub-id><pub-id pub-id-type="pmid">33170789</pub-id></citation></ref>
<ref id="B198">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Takemiya</surname> <given-names>R.</given-names></name> <name><surname>Kido</surname> <given-names>S.</given-names></name> <name><surname>Hirano</surname> <given-names>Y.</given-names></name> <name><surname>Mabu</surname> <given-names>S.</given-names></name></person-group> (<year>2019</year>). <article-title>Detection of pulmonary nodules on chest x-ray images using R-CNN</article-title>. <source>Int. Forum Med. Imaging</source> <volume>11050</volume>, <fpage>147</fpage>&#x02013;<lpage>152</lpage>. <pub-id pub-id-type="doi">10.1117/12.2521652</pub-id></citation>
</ref>
<ref id="B199">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tan</surname> <given-names>M.</given-names></name> <name><surname>Le</surname> <given-names>Q. V.</given-names></name></person-group> (<year>2019</year>). <article-title>Efficientnet: rethinking model scaling for convolutional neural networks</article-title>. <source>arXiv preprint</source> arXiv:1905.11946. <pub-id pub-id-type="doi">10.48550/arXiv.1905.11946</pub-id></citation>
</ref>
<ref id="B200">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tang</surname> <given-names>Y.-B.</given-names></name> <name><surname>Tang</surname> <given-names>Y.-X.</given-names></name> <name><surname>Sandfort</surname> <given-names>V.</given-names></name> <name><surname>Xiao</surname> <given-names>J.</given-names></name> <name><surname>Summers</surname> <given-names>R. M.</given-names></name></person-group> (<year>2019a</year>). <article-title>&#x0201C;Tuna-net: task-oriented unsupervised adversarial network for disease recognition in cross-domain chest x-rays,&#x0201D;</article-title> in <source>International Conference on Medical Image Computing and Computer-Assisted Intervention</source> (<publisher-name>Springer</publisher-name>), <fpage>431</fpage>&#x02013;<lpage>440</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-32226-7_48</pub-id></citation>
</ref>
<ref id="B201">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Tang</surname> <given-names>Y.-B.</given-names></name> <name><surname>Tang</surname> <given-names>Y.-X.</given-names></name> <name><surname>Xiao</surname> <given-names>J.</given-names></name> <name><surname>Summers</surname> <given-names>R. M.</given-names></name></person-group> (<year>2019b</year>). <article-title>&#x0201C;Xlsor: a robust and accurate lung segmentor on chest x-rays using criss-cross attention and customized radiorealistic abnormalities generation,&#x0201D;</article-title> in <source>International Conference on Medical Imaging with Deep Learning</source> (<publisher-loc>London</publisher-loc>: <publisher-name>PMLR</publisher-name>), <fpage>457</fpage>&#x02013;<lpage>467</lpage>.</citation>
</ref>
<ref id="B202">
<citation citation-type="journal"><person-group person-group-type="author"><collab>National Lung Screening Trial Research Team</collab></person-group> (<year>2011</year>). <article-title>Reduced lung-cancer mortality with low-dose computed tomographic screening</article-title>. <source>N. Engl. J. Med</source>. <volume>365</volume>, <fpage>395</fpage>&#x02013;<lpage>409</lpage>. <pub-id pub-id-type="doi">10.1056/NEJMoa1102873</pub-id><pub-id pub-id-type="pmid">21714641</pub-id></citation></ref>
<ref id="B203">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Thian</surname> <given-names>Y. L.</given-names></name> <name><surname>Ng</surname> <given-names>D.</given-names></name> <name><surname>Hallinan</surname> <given-names>J. T. P. D.</given-names></name> <name><surname>Jagmohan</surname> <given-names>P.</given-names></name> <name><surname>Sia</surname> <given-names>S. Y.</given-names></name> <name><surname>Tan</surname> <given-names>C. H.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Deep learning systems for pneumothorax detection on chest radiographs: a multicenter external validation study</article-title>. <source>Radiol. Artif. Intell</source>. <volume>3</volume>, <fpage>190</fpage>. <pub-id pub-id-type="doi">10.1148/ryai.2021200190</pub-id><pub-id pub-id-type="pmid">34350409</pub-id></citation></ref>
<ref id="B204">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Majkowska</surname> <given-names>A.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Chest radiograph interpretation with deep learning models: assessment with radiologist-adjudicated reference standards and population-adjusted evaluation</article-title>. <source>Radiology</source> <volume>294</volume>, <fpage>421</fpage>&#x02013;<lpage>431</lpage>. <pub-id pub-id-type="doi">10.1148/radiol.2019191293</pub-id><pub-id pub-id-type="pmid">31793848</pub-id></citation></ref>
<ref id="B205">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ting</surname> <given-names>H.</given-names></name> <name><surname>Tieqiang</surname> <given-names>L.</given-names></name> <name><surname>Xia</surname> <given-names>L.</given-names></name></person-group> (<year>2021</year>). <source>Lung Inflammation Recognition and Diagnosis Method Based on Deep Learning Convolutional Neural Network</source>. Patent No. CN113192041.</citation>
</ref>
<ref id="B206">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ucar</surname> <given-names>F.</given-names></name> <name><surname>Korkmaz</surname> <given-names>D.</given-names></name></person-group> (<year>2020</year>). <article-title>Covidiagnosis-net: deep bayes-squeezenet based diagnosis of the coronavirus disease 2019 (COVID-19) from x-ray images</article-title>. <source>Med. Hypotheses</source> <volume>140</volume>, <fpage>109761</fpage>. <pub-id pub-id-type="doi">10.1016/j.mehy.2020.109761</pub-id><pub-id pub-id-type="pmid">32344309</pub-id></citation></ref>
<ref id="B207">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Ul Abideen</surname> <given-names>Z.</given-names></name> <name><surname>Ghafoor</surname> <given-names>M.</given-names></name> <name><surname>Munir</surname> <given-names>K.</given-names></name> <name><surname>Saqib</surname> <given-names>M.</given-names></name> <name><surname>Ullah</surname> <given-names>A.</given-names></name> <name><surname>Zia</surname> <given-names>T.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Uncertainty assisted robust tuberculosis identification with bayesian convolutional neural networks</article-title>. <source>IEEE Access</source> <volume>8</volume>, <fpage>22812</fpage>&#x02013;<lpage>22825</lpage>. <pub-id pub-id-type="doi">10.1109/ACCESS.2020.2970023</pub-id><pub-id pub-id-type="pmid">32391238</pub-id></citation></ref>
<ref id="B208">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Van Ginneken</surname> <given-names>B.</given-names></name> <name><surname>Stegmann</surname> <given-names>M. B.</given-names></name> <name><surname>Loog</surname> <given-names>M.</given-names></name></person-group> (<year>2006</year>). <article-title>Segmentation of anatomical structures in chest radiographs using supervised methods: a comparative study on a public database</article-title>. <source>Med. Image Anal</source>. <volume>10</volume>, <fpage>19</fpage>&#x02013;<lpage>40</lpage>. <pub-id pub-id-type="doi">10.1016/j.media.2005.02.002</pub-id><pub-id pub-id-type="pmid">15919232</pub-id></citation></ref>
<ref id="B209">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vay&#x000E1;</surname> <given-names>M. D. L. I.</given-names></name> <name><surname>Saborit</surname> <given-names>J. M.</given-names></name> <name><surname>Montell</surname> <given-names>J. A.</given-names></name> <name><surname>Pertusa</surname> <given-names>A.</given-names></name> <name><surname>Bustos</surname> <given-names>A.</given-names></name> <name><surname>Cazorla</surname> <given-names>M.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>BIMCV COVID-19&#x0002B;: a large annotated dataset of RX and CT images from COVID-19 patients</article-title>. <source>arXiv preprint</source> arXiv:2006.01174. <pub-id pub-id-type="doi">10.48550/arXiv.2006.01174</pub-id></citation>
</ref>
<ref id="B210">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Vedantam</surname> <given-names>R.</given-names></name> <name><surname>Lawrence Zitnick</surname> <given-names>C.</given-names></name> <name><surname>Parikh</surname> <given-names>D.</given-names></name></person-group> (<year>2015</year>). <article-title>&#x0201C;Cider: consensus-based image description evaluation,&#x0201D;</article-title> in <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Boston, MA</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>4566</fpage>&#x02013;<lpage>4575</lpage>.</citation>
</ref>
<ref id="B211">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Venkata Hari</surname> <given-names>G. P.</given-names></name></person-group> (<year>2022</year>). <source>Tuberculosis Detection Using Artificial Intelligence</source>. Patent No. N202241001179.</citation>
</ref>
<ref id="B212">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>C.</given-names></name> <name><surname>Elazab</surname> <given-names>A.</given-names></name> <name><surname>Jia</surname> <given-names>F.</given-names></name> <name><surname>Wu</surname> <given-names>J.</given-names></name> <name><surname>Hu</surname> <given-names>Q.</given-names></name></person-group> (<year>2018</year>). <article-title>Automated chest screening based on a hybrid model of transfer learning and convolutional sparse denoising autoencoder</article-title>. <source>Biomed. Eng. Online</source> <volume>17</volume>, <fpage>1</fpage>&#x02013;<lpage>19</lpage>. <pub-id pub-id-type="doi">10.1186/s12938-018-0496-2</pub-id><pub-id pub-id-type="pmid">29792208</pub-id></citation></ref>
<ref id="B213">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>D.</given-names></name> <name><surname>Arzhaeva</surname> <given-names>Y.</given-names></name> <name><surname>Devnath</surname> <given-names>L.</given-names></name> <name><surname>Qiao</surname> <given-names>M.</given-names></name> <name><surname>Amirgholipour</surname> <given-names>S.</given-names></name> <name><surname>Liao</surname> <given-names>Q.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>&#x0201C;Automated pneumoconiosis detection on chest x-rays using cascaded learning with real and synthetic radiographs,&#x0201D;</article-title> in <source>2020 Digital Image Computing: Techniques and Applications (DICTA)</source>, <fpage>1</fpage>&#x02013;<lpage>6</lpage>. <pub-id pub-id-type="doi">10.1109/DICTA51227.2020.9363416</pub-id></citation>
</ref>
<ref id="B214">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>H.</given-names></name> <name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Du</surname> <given-names>M.</given-names></name> <name><surname>Yang</surname> <given-names>F.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name> <name><surname>Ding</surname> <given-names>S.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>&#x0201C;Score-cam: score-weighted visual explanations for convolutional neural networks,&#x0201D;</article-title> in <source>Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops</source>, <fpage>24</fpage>&#x02013;<lpage>25</lpage>. <pub-id pub-id-type="doi">10.1109/CVPRW50498.2020.00020</pub-id></citation>
</ref>
<ref id="B215">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>L.</given-names></name> <name><surname>Wong</surname> <given-names>A.</given-names></name> <name><surname>Lin</surname> <given-names>Z. Q.</given-names></name> <name><surname>McInnis</surname> <given-names>P.</given-names></name> <name><surname>Chung</surname> <given-names>A.</given-names></name> <name><surname>Gunraj</surname> <given-names>H.</given-names></name></person-group> (<year>2020</year>). <source>Actualmed COVID-19 Chest X-ray Dataset Initiative</source>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://github.com/agchung/actualmed-covid-chestxray-dataset">https://github.com/agchung/actualmed-covid-chestxray-dataset</ext-link></citation></ref>
<ref id="B216">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>N.</given-names></name> <name><surname>Liu</surname> <given-names>H.</given-names></name> <name><surname>Xu</surname> <given-names>C.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;Deep learning for the detection of COVID-19 using transfer learning and model integration,&#x0201D;</article-title> in <source>2020 IEEE 10th International Conference on Electronics Information and Emergency Communication (ICEIEC)</source> (<publisher-loc>Beijing</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>281</fpage>&#x02013;<lpage>284</lpage>.</citation>
</ref>
<ref id="B217">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Peng</surname> <given-names>Y.</given-names></name> <name><surname>Lu</surname> <given-names>L.</given-names></name> <name><surname>Lu</surname> <given-names>Z.</given-names></name> <name><surname>Bagheri</surname> <given-names>M.</given-names></name> <name><surname>Summers</surname> <given-names>R. M.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;Chestx-ray8: hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases,&#x0201D;</article-title> in <source>Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</source> (<publisher-loc>Honolulu, HI</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>2097</fpage>&#x02013;<lpage>2106</lpage>.</citation>
</ref>
<ref id="B218">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>X.</given-names></name> <name><surname>Yu</surname> <given-names>J.</given-names></name> <name><surname>Zhu</surname> <given-names>Q.</given-names></name> <name><surname>Li</surname> <given-names>S.</given-names></name> <name><surname>Zhao</surname> <given-names>Z.</given-names></name> <name><surname>Yang</surname> <given-names>B.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Potential of deep learning in assessing pneumoconiosis depicted on digital chest radiography</article-title>. <source>Occup. Environ. Med</source>. <volume>77</volume>, <fpage>597</fpage>&#x02013;<lpage>602</lpage>. <pub-id pub-id-type="doi">10.1136/oemed-2019-106386</pub-id><pub-id pub-id-type="pmid">32471837</pub-id></citation></ref>
<ref id="B219">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wang</surname> <given-names>Z.</given-names></name> <name><surname>Qian</surname> <given-names>Q.</given-names></name> <name><surname>Zhang</surname> <given-names>J.</given-names></name> <name><surname>Duo</surname> <given-names>C.</given-names></name> <name><surname>He</surname> <given-names>W.</given-names></name> <name><surname>Zhao</surname> <given-names>L.</given-names></name></person-group> (<year>2021</year>). <article-title>Deep learning for computer-aided diagnosis of pneumoconiosis</article-title>. <source>Res. Sq.</source> <fpage>1</fpage>&#x02013;<lpage>14</lpage>. <pub-id pub-id-type="doi">10.21203/rs.3.rs-460896/v1</pub-id></citation></ref>
<ref id="B220">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wanli</surname> <given-names>J.</given-names></name> <name><surname>Xingwang</surname> <given-names>L.</given-names></name> <name><surname>Donglei</surname> <given-names>Y.</given-names></name></person-group> (<year>2021</year>). <source>Deep Learning-Based Pneumoconiosis Grading Method and Device, Medium and Equipment</source>. Patent No. CN112819819.</citation>
</ref>
<ref id="B221">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Wessel</surname> <given-names>J.</given-names></name> <name><surname>Heinrich</surname> <given-names>M. P.</given-names></name> <name><surname>von Berg</surname> <given-names>J.</given-names></name> <name><surname>Franz</surname> <given-names>A.</given-names></name> <name><surname>Saalbach</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <article-title>Sequential rib labeling and segmentation in chest x-ray using mask r-cnn</article-title>. <source>arXiv preprint</source> arXiv:1908.08329. <pub-id pub-id-type="doi">10.48550/arXiv.1908.08329</pub-id></citation>
</ref>
<ref id="B222">
<citation citation-type="journal"><person-group person-group-type="author"><collab>WHO</collab></person-group> (<year>2013</year>). <source>Global Tuberculosis Report 2013</source>. <publisher-loc>Geneva</publisher-loc>: <publisher-name>World Health Organization</publisher-name>.</citation>
</ref>
<ref id="B223">
<citation citation-type="journal"><person-group person-group-type="author"><collab>WHO</collab></person-group> (<year>2016</year>). <source>World Health Statistics 2016: Monitoring Health for the SDGs Sustainable Development Goals</source>. <publisher-loc>Geneva</publisher-loc>: <publisher-name>World Health Organization</publisher-name>.</citation>
</ref>
<ref id="B224">
<citation citation-type="journal"><person-group person-group-type="author"><collab>WHO</collab></person-group> (<year>2021</year>). <source>Meeting Report of the WHO Expert Consultation on the Definition of Extensively Drug-Resistant Tuberculosis</source>, <fpage>27</fpage>&#x02013;<lpage>29</lpage> October 2020. <publisher-loc>Geneva</publisher-loc>: <publisher-name>World Health Organization</publisher-name>.</citation>
</ref>
<ref id="B225">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xu</surname> <given-names>H.</given-names></name> <name><surname>Tao</surname> <given-names>X.</given-names></name> <name><surname>Sundararajan</surname> <given-names>R.</given-names></name> <name><surname>Yan</surname> <given-names>W.</given-names></name> <name><surname>Annangi</surname> <given-names>P.</given-names></name> <name><surname>Sun</surname> <given-names>X.</given-names></name> <etal/></person-group>. (<year>2010</year>). <article-title>&#x0201C;Computer aided detection for pneumoconiosis screening on digital chest radiographs,&#x0201D;</article-title> in <source>Proceedings of the Third International Workshop on Pulmonary Image Analysis</source>, <fpage>129</fpage>&#x02013;<lpage>138</lpage>.</citation></ref>
<ref id="B226">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Xue</surname> <given-names>Y.</given-names></name> <name><surname>Xu</surname> <given-names>T.</given-names></name> <name><surname>Rodney Long</surname> <given-names>L.</given-names></name> <name><surname>Xue</surname> <given-names>Z.</given-names></name> <name><surname>Antani</surname> <given-names>S.</given-names></name> <name><surname>Thoma</surname> <given-names>G. R.</given-names></name> <etal/></person-group>. (<year>2018</year>). <article-title>&#x0201C;Multimodal recurrent model with attention for automated radiology report generation,&#x0201D;</article-title> in <source>International Conference on Medical Image Computing and Computer-Assisted Intervention</source> (<publisher-name>Springer</publisher-name>), <fpage>457</fpage>&#x02013;<lpage>466</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-00928-1_52</pub-id></citation>
</ref>
<ref id="B227">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yang</surname> <given-names>F.</given-names></name> <name><surname>Tang</surname> <given-names>Z.-R.</given-names></name> <name><surname>Chen</surname> <given-names>J.</given-names></name> <name><surname>Tang</surname> <given-names>M.</given-names></name> <name><surname>Wang</surname> <given-names>S.</given-names></name> <name><surname>Qi</surname> <given-names>W.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Pneumoconiosis computer aided diagnosis system based on x-rays and deep learning</article-title>. <source>BMC Med. Imaging</source> <volume>21</volume>, <fpage>1</fpage>&#x02013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1186/s12880-021-00723-z</pub-id><pub-id pub-id-type="pmid">34879818</pub-id></citation></ref>
<ref id="B228">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yao</surname> <given-names>Z.</given-names></name> <name><surname>Lai</surname> <given-names>Z.</given-names></name> <name><surname>Wang</surname> <given-names>C.</given-names></name></person-group> (<year>2016</year>). <article-title>&#x0201C;Image enhancement based on equal area dualistic sub-image and non-parametric modified histogram equalization method,&#x0201D;</article-title> in <source>2016 9th International Symposium on Computational Intelligence and Design (ISCID), Vol. 1</source> (<publisher-loc>Hangzhou</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>447</fpage>&#x02013;<lpage>450</lpage>.</citation>
</ref>
<ref id="B229">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yu</surname> <given-names>D.</given-names></name> <name><surname>Zhang</surname> <given-names>K.</given-names></name> <name><surname>Huang</surname> <given-names>L.</given-names></name> <name><surname>Zhao</surname> <given-names>B.</given-names></name> <name><surname>Zhang</surname> <given-names>X.</given-names></name> <name><surname>Guo</surname> <given-names>X.</given-names></name> <etal/></person-group>. (<year>2020</year>). <article-title>Detection of peripherally inserted central catheter (picc) in chest x-ray images: a multi-task deep learning model</article-title>. <source>Comput. Methods Programs Biomed</source>. <volume>197</volume>, <fpage>105674</fpage>. <pub-id pub-id-type="doi">10.1016/j.cmpb.2020.105674</pub-id><pub-id pub-id-type="pmid">32738678</pub-id></citation></ref>
<ref id="B230">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yu</surname> <given-names>P.</given-names></name> <name><surname>Xu</surname> <given-names>H.</given-names></name> <name><surname>Zhu</surname> <given-names>Y.</given-names></name> <name><surname>Yang</surname> <given-names>C.</given-names></name> <name><surname>Sun</surname> <given-names>X.</given-names></name> <name><surname>Zhao</surname> <given-names>J.</given-names></name></person-group> (<year>2011</year>). <article-title>An automatic computer-aided detection scheme for pneumoconiosis on digital chest radiographs</article-title>. <source>J. Digit. Imaging</source> <volume>24</volume>, <fpage>382</fpage>&#x02013;<lpage>393</lpage>. <pub-id pub-id-type="doi">10.1007/s10278-010-9276-7</pub-id><pub-id pub-id-type="pmid">20174852</pub-id></citation></ref>
<ref id="B231">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Yue</surname> <given-names>Z.</given-names></name> <name><surname>Ma</surname> <given-names>L.</given-names></name> <name><surname>Zhang</surname> <given-names>R.</given-names></name></person-group> (<year>2020</year>). <article-title>Comparison and validation of deep learning models for the diagnosis of pneumonia</article-title>. <source>Comput. Intell. Neurosci</source>. <volume>2020</volume>, <fpage>8876798</fpage>. <pub-id pub-id-type="doi">10.1155/2020/8876798</pub-id><pub-id pub-id-type="pmid">33014032</pub-id></citation></ref>
<ref id="B232">
<citation citation-type="web"><person-group person-group-type="author"><name><surname>Zadbuke</surname> <given-names>A. S.</given-names></name></person-group> (<year>2012</year>). <article-title>Brightness preserving image enhancement using modified dualistic sub image histogram equalization</article-title>. <source>Int. J. Sci. Eng. Res</source>. <volume>3</volume>, <fpage>1</fpage>&#x02013;<lpage>6</lpage>. Available online at: <ext-link ext-link-type="uri" xlink:href="https://www.ijser.org/researchpaper/Brightness-Preserving-Image-Enhancementusing-Modifieddualistic-Sub-Image-Histogram-Equalization.pdf">https://www.ijser.org/researchpaper/Brightness-Preserving-Image-Enhancementusing-Modifieddualistic-Sub-Image-Histogram-Equalization.pdf</ext-link></citation>
</ref>
<ref id="B233">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zafar</surname> <given-names>M. B.</given-names></name> <name><surname>Valera</surname> <given-names>I.</given-names></name> <name><surname>Rodriguez</surname> <given-names>M. G.</given-names></name> <name><surname>Gummadi</surname> <given-names>K. P.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;Fairness constraints: mechanisms for fair classification,&#x0201D;</article-title> in <source>Proceedings of the 20th International Conference on Artificial Intelligence and Statistics</source>, eds A. Singh and X. J. Zhu (Fort Lauderdale, FL: PMLR), <fpage>962</fpage>&#x02013;<lpage>970</lpage>.</citation>
</ref>
<ref id="B234">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zech</surname> <given-names>J. R.</given-names></name> <name><surname>Badgeley</surname> <given-names>M. A.</given-names></name> <name><surname>Liu</surname> <given-names>M.</given-names></name> <name><surname>Costa</surname> <given-names>A. B.</given-names></name> <name><surname>Titano</surname> <given-names>J. J.</given-names></name> <name><surname>Oermann</surname> <given-names>E. K.</given-names></name></person-group> (<year>2018</year>). <article-title>Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study</article-title>. <source>PLoS Med</source>. <volume>15</volume>, <fpage>e1002683</fpage>. <pub-id pub-id-type="doi">10.1371/journal.pmed.1002683</pub-id><pub-id pub-id-type="pmid">30399157</pub-id></citation></ref>
<ref id="B235">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>D.</given-names></name> <name><surname>Ren</surname> <given-names>F.</given-names></name> <name><surname>Li</surname> <given-names>Y.</given-names></name> <name><surname>Na</surname> <given-names>L.</given-names></name> <name><surname>Ma</surname> <given-names>Y.</given-names></name></person-group> (<year>2021</year>). <article-title>Pneumonia detection from chest x-ray images based on convolutional neural network</article-title>. <source>Electronics</source> <volume>10</volume>, <fpage>1512</fpage>. <pub-id pub-id-type="doi">10.3390/electronics10131512</pub-id><pub-id pub-id-type="pmid">33777251</pub-id></citation></ref>
<ref id="B236">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>J.</given-names></name> <name><surname>Xie</surname> <given-names>Y.</given-names></name> <name><surname>Pang</surname> <given-names>G.</given-names></name> <name><surname>Liao</surname> <given-names>Z.</given-names></name> <name><surname>Verjans</surname> <given-names>J.</given-names></name> <name><surname>Li</surname> <given-names>W.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>Viral pneumonia screening on chest x-rays using confidence-aware anomaly detection</article-title>. <source>IEEE Trans. Med. Imaging</source> <volume>40</volume>, <fpage>879</fpage>&#x02013;<lpage>890</lpage>. <pub-id pub-id-type="doi">10.1109/TMI.2020.3040950</pub-id><pub-id pub-id-type="pmid">33245693</pub-id></citation></ref>
<ref id="B237">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>L.</given-names></name> <name><surname>Rong</surname> <given-names>R.</given-names></name> <name><surname>Li</surname> <given-names>Q.</given-names></name> <name><surname>Yang</surname> <given-names>D. M.</given-names></name> <name><surname>Yao</surname> <given-names>B.</given-names></name> <name><surname>Luo</surname> <given-names>D.</given-names></name> <etal/></person-group>. (<year>2021</year>). <article-title>A deep learning-based model for screening and staging pneumoconiosis</article-title>. <source>Sci. Rep</source>. <volume>11</volume>, <fpage>1</fpage>&#x02013;<lpage>7</lpage>. <pub-id pub-id-type="doi">10.1038/s41598-020-77924-z</pub-id><pub-id pub-id-type="pmid">33500426</pub-id></citation></ref>
<ref id="B238">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>R.</given-names></name> <name><surname>Duan</surname> <given-names>H.</given-names></name> <name><surname>Cheng</surname> <given-names>J.</given-names></name> <name><surname>Zheng</surname> <given-names>Y.</given-names></name></person-group> (<year>2020</year>). <article-title>&#x0201C;A study on tuberculosis classification in chest x-ray using deep residual attention networks,&#x0201D;</article-title> in <source>2020 42nd Annual International Conference of the IEEE Engineering in Medicine &#x00026; Biology Society (EMBC)</source> (<publisher-loc>Montreal, QC</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>1552</fpage>&#x02013;<lpage>1555</lpage>.<pub-id pub-id-type="pmid">33018288</pub-id></citation></ref>
<ref id="B239">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhang</surname> <given-names>W.</given-names></name> <name><surname>Li</surname> <given-names>G.</given-names></name> <name><surname>Wang</surname> <given-names>F.</given-names></name> <name><surname>Yu</surname> <given-names>Y.</given-names></name> <name><surname>Lin</surname> <given-names>L.</given-names></name> <name><surname>Liang</surname> <given-names>H.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>&#x0201C;Simultaneous lung field detection and segmentation for pediatric chest radiographs,&#x0201D;</article-title> in <source>International Conference on Medical Image Computing and Computer-Assisted Intervention</source> (<publisher-name>Springer</publisher-name>), <fpage>594</fpage>&#x02013;<lpage>602</lpage>. <pub-id pub-id-type="doi">10.1007/978-3-030-32226-7_6</pub-id></citation>
</ref>
<ref id="B240">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>B.</given-names></name> <name><surname>Guo</surname> <given-names>Y.</given-names></name> <name><surname>Zheng</surname> <given-names>C.</given-names></name> <name><surname>Zhang</surname> <given-names>M.</given-names></name> <name><surname>Lin</surname> <given-names>J.</given-names></name> <name><surname>Luo</surname> <given-names>Y.</given-names></name> <etal/></person-group>. (<year>2019</year>). <article-title>Using deep-learning techniques for pulmonary-thoracic segmentations and improvement of pneumonia diagnosis in pediatric chest radiographs</article-title>. <source>Pediatr. Pulmonol</source>. <volume>54</volume>, <fpage>1617</fpage>&#x02013;<lpage>1626</lpage>. <pub-id pub-id-type="doi">10.1002/ppul.24431</pub-id><pub-id pub-id-type="pmid">31270968</pub-id></citation></ref>
<ref id="B241">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhao</surname> <given-names>W.</given-names></name> <name><surname>Wang</surname> <given-names>L.</given-names></name> <name><surname>Zhang</surname> <given-names>Z.</given-names></name></person-group> (<year>2020</year>). <article-title>Artificial ecosystem-based optimization: a novel nature-inspired meta-heuristic algorithm</article-title>. <source>Neural Comput. Appl</source>. <volume>32</volume>, <fpage>9383</fpage>&#x02013;<lpage>9425</lpage>. <pub-id pub-id-type="doi">10.1007/s00521-019-04452-x</pub-id></citation>
</ref>
<ref id="B242">
<citation citation-type="journal"><person-group person-group-type="author"><name><surname>Zhu</surname> <given-names>J.-Y.</given-names></name> <name><surname>Park</surname> <given-names>T.</given-names></name> <name><surname>Isola</surname> <given-names>P.</given-names></name> <name><surname>Efros</surname> <given-names>A. A.</given-names></name></person-group> (<year>2017</year>). <article-title>&#x0201C;Unpaired image-to-image translation using cycle-consistent adversarial networks,&#x0201D;</article-title> in <source>Proceedings of the IEEE International Conference on Computer Vision</source> (<publisher-loc>Venice</publisher-loc>: <publisher-name>IEEE</publisher-name>), <fpage>2223</fpage>&#x02013;<lpage>2232</lpage>.</citation>
</ref>
</ref-list>
</back>
</article>