<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD Journal Publishing DTD v2.3 20070202//EN" "journalpublishing.dtd">
<article article-type="editorial" dtd-version="2.3" xml:lang="EN" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Robot. AI</journal-id>
<journal-title>Frontiers in Robotics and AI</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Robot. AI</abbrev-journal-title>
<issn pub-type="epub">2296-9144</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="publisher-id">1662674</article-id>
<article-id pub-id-type="doi">10.3389/frobt.2025.1662674</article-id>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Robotics and AI</subject>
<subj-group>
<subject>Editorial</subject>
</subj-group>
</subj-group>
</article-categories>
<title-group>
<article-title>Editorial: Merging symbolic and data-driven AI for robot autonomy</article-title>
<alt-title alt-title-type="left-running-head">Meli et al.</alt-title>
<alt-title alt-title-type="right-running-head">
<ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/frobt.2025.1662674">10.3389/frobt.2025.1662674</ext-link>
</alt-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name>
<surname>Meli</surname>
<given-names>Daniele</given-names>
</name>
<xref ref-type="aff" rid="aff1">
<sup>1</sup>
</xref>
<xref ref-type="corresp" rid="c001">&#x2a;</xref>
<uri xlink:href="https://loop.frontiersin.org/people/1577338/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-original-draft/"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Sridharan</surname>
<given-names>Mohan</given-names>
</name>
<xref ref-type="aff" rid="aff2">
<sup>2</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/476186/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Perri</surname>
<given-names>Simona</given-names>
</name>
<xref ref-type="aff" rid="aff3">
<sup>3</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/2088974/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
<contrib contrib-type="author">
<name>
<surname>Katzouris</surname>
<given-names>Nikos</given-names>
</name>
<xref ref-type="aff" rid="aff4">
<sup>4</sup>
</xref>
<uri xlink:href="https://loop.frontiersin.org/people/835163/overview"/>
<role content-type="https://credit.niso.org/contributor-roles/writing-review-editing/"/>
</contrib>
</contrib-group>
<aff id="aff1">
<sup>1</sup>Department of Computer Science, <institution>University of Verona</institution>, <addr-line>Verona</addr-line>, <country>Italy</country>
</aff>
<aff id="aff2">
<sup>2</sup>School of Informatics, <institution>University of Edinburgh</institution>, <addr-line>Edinburgh</addr-line>, <country>United Kingdom</country>
</aff>
<aff id="aff3">
<sup>3</sup>Department of Mathematics and Computer Science (DeMaCS), <institution>University of Calabria</institution>, <addr-line>Arcavacata di Rende</addr-line>, <country>Italy</country>
</aff>
<aff id="aff4">
<sup>4</sup>
<institution>National Centre of Scientific Research &#x201c;Demokritos&#x201d;</institution>, <addr-line>Agia Paraskevi</addr-line>, <country>Greece</country>
</aff>
<author-notes>
<fn fn-type="edited-by">
<p>
<bold>Edited and reviewed by:</bold> <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/571027/overview">Chenguang Yang</ext-link>, University of Liverpool, United Kingdom</p>
</fn>
<corresp id="c001">&#x2a;Correspondence: Daniele Meli, <email>daniele.meli@univr.it</email>
</corresp>
</author-notes>
<pub-date pub-type="epub">
<day>23</day>
<month>07</month>
<year>2025</year>
</pub-date>
<pub-date pub-type="collection">
<year>2025</year>
</pub-date>
<volume>12</volume>
<elocation-id>1662674</elocation-id>
<history>
<date date-type="received">
<day>09</day>
<month>07</month>
<year>2025</year>
</date>
<date date-type="accepted">
<day>15</day>
<month>07</month>
<year>2025</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#xa9; 2025 Meli, Sridharan, Perri and Katzouris.</copyright-statement>
<copyright-year>2025</copyright-year>
<copyright-holder>Meli, Sridharan, Perri and Katzouris</copyright-holder>
<license xlink:href="http://creativecommons.org/licenses/by/4.0/">
<p>This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</p>
</license>
</permissions>
<related-article id="RA1" related-article-type="commentary-article" journal-id="Front. Robot. AI" xlink:href="https://www.frontiersin.org/research-topics/50600" ext-link-type="uri">Editorial on the Research Topic <article-title>Merging symbolic and data-driven AI for robot autonomy</article-title> </related-article>
<kwd-group>
<kwd>neurosymbolic AI</kwd>
<kwd>probabilistic reasoning</kwd>
<kwd>reasoning under uncertainty</kwd>
<kwd>hybrid AI</kwd>
<kwd>robotics</kwd>
</kwd-group>
<custom-meta-wrap>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Computational Intelligence in Robotics</meta-value>
</custom-meta>
</custom-meta-wrap>
</article-meta>
</front>
<body>
<p>Robots are increasingly being deployed to assist humans in many applications such as medicine, navigation, and industrial automation. To truly collaborate with humans in complex environments, robots require advanced cognitive capabilities, including the ability to reason with domain-specific commonsense knowledge and the noisy observations obtained in the presence of partial observability and non-deterministic action outcomes. Research in Artificial Intelligence (AI) has resulted in sophisticated symbolic formalisms based on logics to represent commonsense domain knowledge, as well as probabilistic and data-driven frameworks that quantitatively represent uncertainty in the decisions made by robots.</p>
<p>By themselves, symbolic or data-driven AI methods have limitations when applied to robots in complex scenarios. Symbolic AI methods reason with relational descriptions of the attributes of the domain and the robot to guide the robot&#x2019;s behavior. At the same time, they tend to require extensive prior knowledge about the domain and the robot. They also make it computationally expensive to operate at the level of granularity required for precise interaction with the physical world, or to reason about uncertainty quantitatively. Probabilistic and data-driven AI methods, on the other hand, elegantly represent uncertainty quantitatively, and provide mechanisms for reasoning and acting at the level of granularity required for interaction with the physical world. These methods, however, offer limited expressiveness for complex cognitive concepts, and it is not always meaningful to reason about uncertainty quantitatively. With the increasing use of AI and robots in different applications, there has been renewed interest in hybrid and neurosymbolic AI frameworks that combine symbolic and data-driven methods. The 10 contributions in this Research Topic highlight the promise and potential of such frameworks in the context of robotics.</p>
<p>Describing a vision for the future, <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/frobt.2024.1437496">Spasokukotskiy</ext-link> states that the next-generation AI systems should not only be endowed with autonomy but also <italic>&#x201c;morality&#x201d; that secures alignment in large systems</italic>, i.e., they should operate safely within the values of human society. Instead of being in full control of AI, humans would then cooperate and communicate with intelligent systems. Extending this idea, <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/frobt.2020.00076">Pal</ext-link> explains <italic>the relevance of transparency, explainability, learning from a few examples, and the trustworthiness of an AI system</italic>, exploring how insights into human reasoning can be a crucial ingredient for achieving reliable operation with embodied AI systems. In addition, <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/frobt.2024.1328934">T&#xf6;berg et al.</ext-link> provide a systematic review of robot systems that represent, reason with, learn, and/or use commonsense knowledge in a wide range of application domains. Symbolic AI methods can play a crucial role in the design of such AI/robotics systems, providing the expressivity for elegantly representing human-level concepts and effectively modeling logical reasoning capabilities. These methods can also support more efficient and transparent learning, and the use of human guidance to generate symbolic abstractions. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/frobt.2020.00122">Das et al.</ext-link> describe a framework that extends an inductive logic program learner to demonstrate this capability on multiple benchmark domains, one of which focuses on planning the assembly of mechanical structures, a core task in industrial automation.</p>
<p>In addition to reasoning with prior knowledge that includes cognitive theories, robots that interact with the physical world process large amounts of continuous multimodal input from different information sources, including humans and other agents. In this context, data-driven AI methods, particularly recent advancements in deep learning, have exhibited groundbreaking performance and established themselves as the state of the art for problems in computer vision, natural language processing, and complex decision-making. For example, <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/frobt.2020.00063">Mitrokhin et al.</ext-link> describe a hybrid framework for image-based context awareness, training a hash neural network on images to show that <italic>hyperdimensional vectors can be constructed such that vector-symbolic inference arises naturally out of their output.</italic> This enhances the robustness and explainability of the classification process, achieving state-of-the-art accuracy on real-world image datasets such as the popular CIFAR-10.</p>
<p>Symbol grounding, i.e., acquiring symbolic abstractions from raw continuous inputs, and decision-making become particularly challenging given the high-dimensional inputs received by robots. Despite the impressive results achieved by deep neural networks and foundation models, their direct use in robots is often inefficient, hinders transparency, and can produce arbitrary responses in novel situations. Hybrid frameworks can address these limitations by leveraging the complementary strengths of symbolic and data-driven AI systems. For example, the framework of <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/frobt.2020.00084">Nevens et al.</ext-link> uses symbolic AI to enable an agent to construct <italic>a conceptual system in which meaningful concepts are formed</italic> based on <italic>human-interpretable feature channels.</italic> They use a dataset of images for manipulating blocks to illustrate how concepts acquired from limited data points can be combined and generalized to unseen instances. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/frobt.2024.1362463">Sasaki et al.</ext-link> show that grounding robotic gestures <italic>with quantitative meaning calculated from word-distributed representations constructed from a large corpus of text</italic> enables robots to display behavior that humans perceive to be natural. <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/frobt.2019.00125">Riley et al.</ext-link> describe a framework that supports non-monotonic logical reasoning with abstractions of prior commonsense knowledge and information extracted by deep neural networks from relevant image regions; they show substantial performance improvements compared with the state of the art for visual question answering, and vision-based planning and diagnostics.
Furthermore, <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/frobt.2025.1566623">Grosvenor et al.</ext-link> and <ext-link ext-link-type="uri" xlink:href="https://doi.org/10.3389/frobt.2024.1469315">Ghiasvand et al.</ext-link> document examples of real-world integration of similar ideas in the context of knowledge-enhanced deep visual tracking of satellites, and a comprehensive architecture for space robotic mission planning and control, respectively.</p>
<p>In summary, the contributions to this Research Topic highlight the importance of merging symbolic and data-driven AI methods in the context of robotics (and AI). These papers demonstrate how such hybrid frameworks enable robots to reason with complex cognitive theories and noisy multimodal sensor observations to achieve reliable, efficient, and transparent scene understanding, planning, diagnostics, and human-robot collaboration in complex simulated and physical domains. The papers also draw attention to the fundamental open problems that need to be addressed to leverage the full potential of robots in practical applications. We hope that these papers will foster further collaboration between the related research communities toward achieving societal benefits.</p>
</body>
<back>
<sec sec-type="author-contributions" id="s1">
<title>Author contributions</title>
<p>DM: Writing &#x2013; original draft. MS: Writing &#x2013; review and editing. SP: Writing &#x2013; review and editing. NK: Writing &#x2013; review and editing.</p>
</sec>
<sec sec-type="funding-information" id="s2">
<title>Funding</title>
<p>The author(s) declare that no financial support was received for the research and/or publication of this article.</p>
</sec>
<sec sec-type="COI-statement" id="s3">
<title>Conflict of interest</title>
<p>The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="ai-statement" id="s4">
<title>Generative AI statement</title>
<p>The author(s) declare that no Generative AI was used in the creation of this manuscript.</p>
</sec>
<sec sec-type="disclaimer" id="s5">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
</back>
</article>