<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" article-type="editorial">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Neurosci.</journal-id>
<journal-title>Frontiers in Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Neurosci.</abbrev-journal-title>
<issn pub-type="epub">1662-453X</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnins.2023.1264015</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Neuroscience</subject>
<subj-group>
<subject>Editorial</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Editorial: Deep learning for multimodal brain data processing and analysis</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Wang</surname> <given-names>Liansheng</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x0002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/661268/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Yu</surname> <given-names>Lequan</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1403777/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author">
<name><surname>Magnier</surname> <given-names>Baptiste</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/1824266/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
</contrib-group>
<aff id="aff1"><sup>1</sup><institution>Department of Computer Science, Xiamen University</institution>, <addr-line>Xiamen</addr-line>, <country>China</country></aff>
<aff id="aff2"><sup>2</sup><institution>Department of Statistics and Actuarial Science, The University of Hong Kong, Pokfulam</institution>, <addr-line>Hong Kong SAR</addr-line>, <country>China</country></aff>
<aff id="aff3"><sup>3</sup><institution>Euromov Digital Health in Motion, Mines-Telecom Institute Al&#x000E8;s</institution>, <addr-line>Al&#x000E8;s</addr-line>, <country>France</country></aff>
<author-notes>
<fn fn-type="edited-by"><p>Edited and reviewed by: Vince D. Calhoun, Georgia State University, United States</p></fn>
<corresp id="c001">&#x0002A;Correspondence: Liansheng Wang <email>lswang&#x00040;xmu.edu.cn</email></corresp>
</author-notes>
<pub-date pub-type="epub">
<day>18</day>
<month>08</month>
<year>2023</year>
</pub-date>
<pub-date pub-type="collection">
<year>2023</year>
</pub-date>
<volume>17</volume>
<elocation-id>1264015</elocation-id>
<history>
<date date-type="received">
<day>20</day>
<month>07</month>
<year>2023</year>
</date>
<date date-type="accepted">
<day>10</day>
<month>08</month>
<year>2023</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x000A9; 2023 Wang, Yu and Magnier.</copyright-statement>
<copyright-year>2023</copyright-year>
<copyright-holder>Wang, Yu and Magnier</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/"><p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p></license>
</permissions>
<related-article id="RA1" related-article-type="commentary-article" xlink:href="https://www.frontiersin.org/research-topics/39766/deep-learning-for-multimodal-brain-data-processing-and-analysis" ext-link-type="uri">Editorial on the Research Topic <article-title>Deep learning for multimodal brain data processing and analysis</article-title></related-article>
<kwd-group>
<kwd>deep learning</kwd>
<kwd>multimodal</kwd>
<kwd>brain data</kwd>
<kwd>medical image analysis</kwd>
<kwd>brain image analysis</kwd>
</kwd-group>
<counts>
<fig-count count="0"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="0"/>
<page-count count="2"/>
<word-count count="925"/>
</counts>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Brain Imaging Methods</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<p>The processing and analysis of brain data are of great importance for understanding the workings of the human brain and for diagnosing neurological disorders, brain tumors, and other conditions. Deep learning can model complex relationships in brain data automatically and effectively, and it demonstrates strong generalization capabilities. In recent years, it has emerged as a common method for addressing complex medical problems related to the brain.</p>
<p>Deep learning-based methods offer a comprehensive, accurate, and fast approach to processing and analyzing brain data. Notably, <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fnins.2023.1118340">Gong et al.</ext-link> proposed a high-performance multi-task framework with both regression and classification capabilities, addressing the slow, labor-intensive evaluation of intracerebral hematoma (ICH) volume on non-contrast computed tomography (NCCT) of the head. The method achieved volume estimates comparable to those of clinicians. Deep learning-based brain segmentation can also improve the accuracy of regional brain volume prediction. Using such predicted regional volumes, <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fnins.2023.1146552">Pan et al.</ext-link> explored the correlation between Five-minute Cognitive Test (FCT) scores, used for cognitive impairment screening, and brain structure. To improve the segmentation of East Asian brains, <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fnins.2023.1157738">Moon et al.</ext-link> reported a high-performance brain segmentation algorithm based on a 3D convolutional neural network.</p>
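The multi-task design described above pairs a regression output with a classification output on top of shared features. The following is a minimal NumPy sketch of that pattern only, not the authors' implementation; every name, weight, and dimension here is hypothetical.

```python
import numpy as np

def multitask_heads(shared, w_reg, w_cls):
    """One shared feature vector feeds two heads in parallel:
    a regression head (e.g. a volume estimate) and a classification
    head (e.g. a diagnostic category). Weights are hypothetical."""
    volume = float(w_reg @ shared)        # regression head: scalar output
    logits = w_cls @ shared               # classification head: one score per class
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                  # softmax probabilities
    return volume, probs

# Toy usage: 16 shared features, 3 classes, random untrained weights.
rng = np.random.default_rng(42)
shared = rng.standard_normal(16)          # features from a shared trunk
w_reg = rng.standard_normal(16)
w_cls = rng.standard_normal((3, 16))
volume, probs = multitask_heads(shared, w_reg, w_cls)
print(round(probs.sum(), 6))  # 1.0
```

In practice both heads would be trained jointly, so the shared trunk learns features useful for estimating volume and for classification at the same time.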
<p>By applying deep learning to multimodal brain data processing and analysis, we can fully harness the potential of multimodal data, combining different types of brain data for a more comprehensive and accurate analysis. To help prevent stroke in patients with carotid atherosclerosis, <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fnins.2023.1118376">Lv et al.</ext-link> trained a multimodal model with a channel attention mechanism on multimodal brain MRI data to predict stroke incidence. The channel attention mechanism learns salient regions in each imaging modality, effectively improving the model's classification performance.</p>
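The channel attention idea above can be sketched as a squeeze-and-excitation-style gate: pool each feature channel to a scalar, score the channels with a small bottleneck network, and rescale the maps. This NumPy sketch illustrates the mechanism under those assumptions; it is not the authors' architecture, and all weights are hypothetical.

```python
import numpy as np

def channel_attention(features, w1, w2):
    """Squeeze-and-excitation-style channel attention (illustrative).

    features: (C, H, W) feature maps, e.g. from one MRI modality.
    w1, w2:   hypothetical weights of the two bottleneck dense layers.
    Returns reweighted feature maps of the same shape.
    """
    # Squeeze: global average pooling gives one descriptor per channel.
    z = features.mean(axis=(1, 2))            # (C,)
    # Excitation: a small bottleneck MLP scores each channel.
    h = np.maximum(w1 @ z, 0.0)               # ReLU, (C // r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))       # sigmoid gates in (0, 1), (C,)
    # Rescale: informative channels are amplified, others suppressed.
    return features * s[:, None, None]

# Toy usage: 4 channels of 8x8 maps, reduction ratio r = 2.
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8, 8))
w1 = rng.standard_normal((2, 4))
w2 = rng.standard_normal((4, 2))
out = channel_attention(feats, w1, w2)
print(out.shape)  # (4, 8, 8)
```

Because each gate lies in (0, 1), the block can only attenuate or preserve a channel, which is what lets the network emphasize the modalities and regions most relevant to classification.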
<p>Three issues in this Research Topic merit particular attention.</p>
<p>Firstly, the interpretability of medical deep learning models is of paramount importance. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fnins.2023.1118340">Gong et al.</ext-link> addressed this by introducing Gradient-weighted Class Activation Mapping (Grad-CAM), which generates interpretable heat maps that localize brain hemorrhage sites.</p>
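The core Grad-CAM computation is compact: average the gradients of the target score over each feature map to get channel weights, form the weighted sum of the maps, and keep only the positive part. The sketch below shows that step in NumPy, assuming the activations and their gradients have already been obtained by backpropagation; it is an illustration of the published technique, not code from the cited paper.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heat map from the last conv layer (illustrative).

    activations: (C, H, W) feature maps for one image.
    gradients:   (C, H, W) gradients of the target class score with
                 respect to those maps (assumed precomputed).
    """
    # Channel importance: global-average-pool the gradients.
    alpha = gradients.mean(axis=(1, 2))                           # (C,)
    # Weighted sum of maps; ReLU keeps only positive evidence.
    cam = np.maximum((alpha[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] for display as a heat map.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy usage with random maps standing in for real CNN activations.
rng = np.random.default_rng(1)
acts = rng.standard_normal((8, 16, 16))
grads = rng.standard_normal((8, 16, 16))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (16, 16)
```

The resulting low-resolution map is typically upsampled to the input size and overlaid on the scan, which is how the highlighted regions come to indicate candidate hemorrhage sites.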
<p>Secondly, integrating brain data processing results with other disciplines can lead to new discoveries in brain science. For instance, <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fnins.2023.1146552">Pan et al.</ext-link> identified a pronounced correlation between Five-minute Cognitive Test (FCT) scores and the volumes of hippocampus-related regions and the amygdala in populations with cognitive impairment.</p>
<p>Lastly, model bias is a significant concern when using pre-trained deep learning models. Although deep learning techniques have been applied successfully to brain data processing and analysis, many pre-trained models are trained predominantly on Caucasian brain data, which can introduce bias. To address this, <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/fnins.2023.1157738">Moon et al.</ext-link> trained a deep learning model on an extensive dataset of East Asian brain MRI scans, producing a model calibrated specifically for this population.</p>
<p>Advanced deep learning methods hold the potential to extract meaningful information from multimodal brain data. We expect the research collected in this Research Topic, &#x0201C;<italic>Deep learning for multimodal brain data processing and analysis</italic>&#x0201D;, to contribute significantly to medical research on the brain.</p>
<sec sec-type="author-contributions" id="s1">
<title>Author contributions</title>
<p>LW: Writing&#x02014;original draft, Writing&#x02014;review and editing. LY: Writing&#x02014;original draft, Writing&#x02014;review and editing. BM: Writing&#x02014;original draft, Writing&#x02014;review and editing.</p>
</sec>
</body>
<back>
<sec sec-type="funding-information" id="s2">
<title>Funding</title>
<p>The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.</p>
</sec>
<sec sec-type="COI-statement" id="conf1">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="disclaimer" id="s3">
<title>Publisher&#x00027;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec> 
</back>
</article>