<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3-mathml3.dtd">
<article xmlns:ali="http://www.niso.org/schemas/ali/1.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xml:lang="EN" article-type="research-article">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Mol. Neurosci.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Molecular Neuroscience</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Mol. Neurosci.</abbrev-journal-title>
</journal-title-group>
<issn pub-type="epub">1662-5099</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fnmol.2026.1767365</article-id>
<article-version article-version-type="Version of Record" vocab="NISO-RP-8-2008"/>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Hypothesis and Theory</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Integrating neural organoids and AI: increasing the risk of artificial consciousness or medical malpractice?</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes">
<name><surname>Harris</surname> <given-names>Alexander R.</given-names></name>
<xref ref-type="aff" rid="aff1"><sup>1</sup></xref>
<xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/648547/overview"/>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Conceptualization" vocab-term-identifier="https://credit.niso.org/contributor-roles/conceptualization/">Conceptualization</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; original draft" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-original-draft/">Writing &#x2013; original draft</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; review &amp; editing" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-review-editing/">Writing &#x2013; review &amp; editing</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Investigation" vocab-term-identifier="https://credit.niso.org/contributor-roles/investigation/">Investigation</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Validation" vocab-term-identifier="https://credit.niso.org/contributor-roles/validation/">Validation</role>
</contrib>
<contrib contrib-type="author">
<name><surname>McGivern</surname> <given-names>Patrick</given-names></name>
<xref ref-type="aff" rid="aff2"><sup>2</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/541727/overview"/>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; review &amp; editing" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-review-editing/">Writing &#x2013; review &amp; editing</role>
</contrib>
<contrib contrib-type="author">
<name><surname>Wedgwood</surname> <given-names>Kyle C. A.</given-names></name>
<xref ref-type="aff" rid="aff3"><sup>3</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/13259/overview"/>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; review &amp; editing" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-review-editing/">Writing &#x2013; review &amp; editing</role>
</contrib>
<contrib contrib-type="author">
<name><surname>Gilbert</surname> <given-names>Frederic</given-names></name>
<xref ref-type="aff" rid="aff4"><sup>4</sup></xref>
<uri xlink:href="http://loop.frontiersin.org/people/391229/overview"/>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Conceptualization" vocab-term-identifier="https://credit.niso.org/contributor-roles/conceptualization/">Conceptualization</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Investigation" vocab-term-identifier="https://credit.niso.org/contributor-roles/investigation/">Investigation</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Validation" vocab-term-identifier="https://credit.niso.org/contributor-roles/validation/">Validation</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; original draft" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-original-draft/">Writing &#x2013; original draft</role>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; review &amp; editing" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-review-editing/">Writing &#x2013; review &amp; editing</role>
</contrib>
</contrib-group>
<aff id="aff1"><label>1</label><institution>Department of Biomedical Engineering, The University of Melbourne</institution>, <city>Parkville</city>, <state>VIC</state>, <country country="au">Australia</country></aff>
<aff id="aff2"><label>2</label><institution>School of Humanities and Social Inquiry, University of Wollongong</institution>, <city>Wollongong</city>, <state>NSW</state>, <country country="au">Australia</country></aff>
<aff id="aff3"><label>3</label><institution>Living Systems Institute, University of Exeter</institution>, <city>Exeter</city>, <country country="gb">United Kingdom</country></aff>
<aff id="aff4"><label>4</label><institution>School of Humanities and Social Sciences, EthicsLab, University of Tasmania</institution>, <city>Hobart</city>, <state>TAS</state>, <country country="au">Australia</country></aff>
<author-notes>
<corresp id="c001"><label>&#x002A;</label>Correspondence: Alexander R. Harris, <email xlink:href="mailto:alexrharris@gmail.com">alexrharris@gmail.com</email></corresp>
</author-notes>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2026-03-04">
<day>04</day>
<month>03</month>
<year>2026</year>
</pub-date>
<pub-date publication-format="electronic" date-type="collection">
<year>2026</year>
</pub-date>
<volume>19</volume>
<elocation-id>1767365</elocation-id>
<history>
<date date-type="received">
<day>14</day>
<month>12</month>
<year>2025</year>
</date>
<date date-type="rev-recd">
<day>31</day>
<month>01</month>
<year>2026</year>
</date>
<date date-type="accepted">
<day>05</day>
<month>02</month>
<year>2026</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2026 Harris, McGivern, Wedgwood and Gilbert.</copyright-statement>
<copyright-year>2026</copyright-year>
<copyright-holder>Harris, McGivern, Wedgwood and Gilbert</copyright-holder>
<license>
<ali:license_ref start_date="2026-03-04">https://creativecommons.org/licenses/by/4.0/</ali:license_ref>
<license-p>This is an open-access article distributed under the terms of the <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License (CC BY)</ext-link>. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>Neural organoids can be integrated with AI models in various formats, termed AI-NO systems. Neural organoids and AI are each high-impact research fields which may play significant roles in biological research and clinical decision making. However, their potential benefits, pitfalls and associated governance structures are often overshadowed by highly speculative discourse over the possibility of them developing some form of consciousness and subsequently acquiring a level of moral status. This article examines the causes of this speculative discourse, arguing for a focus on more immediate, empirically grounded ethical issues. It describes potential ethical issues when data obtained from AI models is used for research planning and drug discovery on neural organoids; when AI models are used to analyze data obtained from neural organoids; and when AI models are used to control interventions on neural organoids in open- or closed-loop configurations. It concludes by investigating how AI-NO systems may impact clinical decision making.</p>
</abstract>
<kwd-group>
<kwd>artificial intelligence</kwd>
<kwd>clinical trials</kwd>
<kwd>ethics</kwd>
<kwd>neural organoid</kwd>
<kwd>preclinical research</kwd>
<kwd>speculative ethics</kwd>
<kwd>stem cell</kwd>
</kwd-group>
<funding-group>
<funding-statement>The author(s) declared that financial support was received for this work and/or its publication. KW gratefully acknowledges funding from the UKRI via grant MR/X034240/1.</funding-statement>
</funding-group>
<counts>
<fig-count count="0"/>
<table-count count="2"/>
<equation-count count="0"/>
<ref-count count="66"/>
<page-count count="12"/>
<word-count count="10631"/>
</counts>
<custom-meta-group>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Methods and Model Organisms</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec id="S1">
<label>1</label>
<title>Background</title>
<p>Neural organoids can be derived from tissue obtained from healthy people or those with various medical conditions. When derived from human donors, they provide more relevant tissue models for research and clinical applications than can be obtained from animal models. They enable experimentation into the effects that different genetic and environmental factors have on tissue development and function, and into ways of mitigating any adverse behavior. As a result, they are being developed for a range of applications, including biological research, drug development, personalized drug screening, toxicity testing, phenotypic disease screening and regenerative medicine. They can be created in large quantities for preclinical research before any interventions are undertaken on a person. Neural organoids can also be integrated with AI models (which we term an AI-NO system), which may assist in analyzing the large datasets associated with the applications listed above and in predicting biological and clinical outcomes. AI-NO systems may enable faster development of more effective clinical interventions, reduce the cost of preclinical research, and reduce the risk to patients of administering harmful treatments.</p>
<p>Neural organoids are one form of stem cell-derived tissue construct (SCTC), which includes assembloids, induced and bioprinted tissues. The protocols for generating SCTCs are advancing rapidly, allowing creation of larger and more complex tissue structures that better replicate mature tissue (<xref ref-type="bibr" rid="B49">Maisumu et al., 2025</xref>). The distinct advantages of neural organoids over other types of SCTCs lie in the unique insights they offer into neural development and neurological disorders. These biological models enable investigations into processes that are otherwise extremely difficult or ethically impermissible to study directly in living fetuses or humans, as doing so would require highly invasive methods with unpredictable and potentially unsafe outcomes.</p>
<p>The development of neural organoids has also led to a related research program sometimes called Organoid Intelligence (OI). OI involves the use of microelectrode arrays to control and record the electrophysiological activity of neural organoids to investigate learning, memory-like phenomena and novel forms of biological computing (<xref ref-type="bibr" rid="B58">Smirnova, 2024</xref>; <xref ref-type="bibr" rid="B34">Hartung et al., 2023a</xref>,<xref ref-type="bibr" rid="B35">b</xref>; <xref ref-type="bibr" rid="B7">Bai et al., 2025</xref>; <xref ref-type="bibr" rid="B66">Zafeiriou et al., 2020</xref>). The concept of OI closely parallels the development of AI models, raising similar or related ethical and technical issues &#x2013; including questions about the benefits, limitations, and broader societal implications of advanced AI systems, especially concerns around bias, data ownership, and environmental impact. Neural organoids and AI provide compelling examples for understanding biological systems and models of information processing, and for examining the ethical issues these two technologies can raise, many of which we have investigated in previous work (<xref ref-type="bibr" rid="B62">Walker et al., 2019</xref>; <xref ref-type="bibr" rid="B29">Harris et al., 2020</xref>, <xref ref-type="bibr" rid="B31">2022a</xref>,<xref ref-type="bibr" rid="B32">b</xref>, <xref ref-type="bibr" rid="B28">2023</xref>, <xref ref-type="bibr" rid="B30">2024</xref>, <xref ref-type="bibr" rid="B33">2025</xref>; <xref ref-type="bibr" rid="B63">Walker et al., 2022</xref>). 
Together, these technologies are presented as opening plausible new avenues for investigating cognition and, potentially, certain degrees of consciousness &#x2013; although the possibility of such developments remains a subject of ongoing debate (<xref ref-type="bibr" rid="B45">Kataoka et al., 2025b</xref>; <xref ref-type="bibr" rid="B42">Kataoka and Sawai, 2023</xref>; <xref ref-type="bibr" rid="B39">Ishida and Sawai, 2024</xref>; <xref ref-type="bibr" rid="B48">Lavazza and Massimini, 2018</xref>; <xref ref-type="bibr" rid="B14">de Jongh et al., 2022</xref>; <xref ref-type="bibr" rid="B53">Niikawa et al., 2022</xref>; <xref ref-type="bibr" rid="B40">Kagan et al., 2022a</xref>; <xref ref-type="bibr" rid="B13">Croxford and Bayne, 2024</xref>; <xref ref-type="bibr" rid="B61">Van Gyseghem et al., 2025</xref>).</p>
<p>The integration of neural organoids and AI raises a variety of new opportunities and ethical issues. AI-NO systems may play a distinct role at each stage of research, development, clinical application and OI. Neural organoid formation involves lengthy, complicated experimental protocols, and the resulting organoids generate vast amounts of information, including genetic, epigenetic, phenotypic and treatment response data; this data must be integrated with patient information, including disease symptoms and biomarkers, to achieve useful clinical outcomes. AI models may be able to analyze these very large datasets and detect small or irregular features. An AI-NO system may speed up research and development and improve clinical decision making through a range of processes: it may help identify relevant biomarkers, identify correlations and distinctions among various SCTCs or population groups, identify new drugs, or reduce clinical errors from application of inappropriate treatments to particular people or groups. Non-clinical applications of AI-NO systems include assisting in the understanding of information processing and consciousness, and the development of new computational methods.</p>
<p>This article investigates the potential ways in which AI can be integrated with neural organoids for research and clinical application, and identifies novel technical and ethical issues which may arise from the different types of AI-NO systems. Neural organoids and AI are both hot topics, gaining publicity ranging from recent technical achievements to highly speculative future scenarios, with the more speculative discussions often receiving disproportionate citation rates and publicity (<xref ref-type="bibr" rid="B52">Nestor and Wilson, 2025</xref>; <xref ref-type="bibr" rid="B43">Kataoka et al., 2023</xref>). As such, there is a risk that research on AI-NO systems will also focus on issues such as the emergence of consciousness, higher-order mental properties, or moral status, while ignoring other issues that can affect SCTCs of all tissue types, not just neural tissue. Therefore, section (1) of this article examines speculative ethical claims in the context of neural organoid research, arguing that concerns about AI-NO consciousness can distort ethical discourse and governance priorities to the detriment of other important issues (<xref ref-type="bibr" rid="B52">Nestor and Wilson, 2025</xref>; <xref ref-type="bibr" rid="B37">Hyun et al., 2022</xref>; <xref ref-type="bibr" rid="B43">Kataoka et al., 2023</xref>). The following sections focus on novel technical and ethical issues that may arise with increasing forms of AI-NO integration which have already been demonstrated or are being openly discussed (<xref ref-type="bibr" rid="B46">Khalifa and Albadawy, 2024</xref>; <xref ref-type="bibr" rid="B1">World Health Organisation, 2024</xref>; <xref ref-type="bibr" rid="B28">Harris et al., 2023</xref>): (2) AI-generated data is applied to neural organoids in some way, (3) AI is used to analyze neural organoid-derived data, and (4) AI is used to control neural organoid function, including via closed-loop interactions. 
Broader issues around clinical application of AI-NO systems are then investigated. This article structure enables a systematic investigation of how different forms of AI-NO integration impact system behavior and ethical issues, at the risk of some repetition across sections. An alternative structure examining separate issues across all AI-NO systems would risk overlooking some issues or letting others dominate, and would be of less utility to those developing specific AI-NO systems. Although the article focuses on neural organoids, many of the arguments apply to other SCTC and tissue types.</p>
</sec>
<sec id="S2">
<label>2</label>
<title>Exaggerated issues associated with integrating neural organoids with AI</title>
<p>Do AI-NO systems constitute a morally urgent domain necessitating distinct analytical focus and precautionary oversight beyond that required for neural organoids in isolation (<xref ref-type="bibr" rid="B44">Kataoka et al., 2025a</xref>,<xref ref-type="bibr" rid="B45">b</xref>; <xref ref-type="bibr" rid="B57">Sawai et al., 2022</xref>)? Ethical and philosophical debates &#x2013; echoed widely in the media &#x2013; often center on whether increasing biological and computational complexity in neural organoids might eventually confer morally significant properties such as sentience, phenomenal consciousness, or the capacity to suffer. Some scholars argue that neural organoids raise unique moral concerns not seen in other forms of stem cell research, particularly due to the possibility that they might one day develop consciousness or higher cognitive functions (<xref ref-type="bibr" rid="B47">Koplin and Savulescu, 2019</xref>). On this view, these concerns are intensified when organoids are connected to machine learning or AI models that enable real-time interaction, potentially amplifying their cognitive capabilities. Others (<xref ref-type="bibr" rid="B26">Greely, 2021</xref>) advocate for a proactive moral framework that balances the immense potential benefits of organoid research with the need to prevent unethical experimentation, emphasizing the importance of defining moral limits now, before organoids reach levels of complexity that could raise serious ethical concerns. 
Yet it is important to recognize that the prominence of such ethical debates reflects a broader pattern: across the history of Ethical, Legal and Social Issues (ELSI) research, speculative ethics has often dominated public and academic discourse (<xref ref-type="bibr" rid="B54">Nordmann, 2007</xref>; <xref ref-type="bibr" rid="B27">Hansson, 2020</xref>; <xref ref-type="bibr" rid="B12">Caulfield, 2016</xref>; <xref ref-type="bibr" rid="B56">Racine et al., 2014</xref>; <xref ref-type="bibr" rid="B19">Gilbert and Goddard, 2014</xref>), with scenarios ranging from self-replicating nanorobots to optogenetic manipulation of human behavior (<xref ref-type="bibr" rid="B19">Gilbert and Goddard, 2014</xref>; <xref ref-type="bibr" rid="B60">Toumey, 2019</xref>). These discussions, frequently inspired more by dramatic scenarios than empirical reality, continue to shape how emerging technologies are understood and governed.</p>
<p>These concerns underscore both the promise and pitfalls of neural organoids integrated with AI. However, much of the current discourse surrounding artificial consciousness remains highly speculative. Speculative assertions that AI-NO systems &#x2013; such as those involving the so-called DishBrain &#x2013; might develop consciousness or morally relevant capacities are, at present, not substantiated by empirical data and risk conflating imaginative speculation with scientific plausibility (<xref ref-type="bibr" rid="B52">Nestor and Wilson, 2025</xref>; <xref ref-type="bibr" rid="B3">Adam, 2025a</xref>; <xref ref-type="bibr" rid="B37">Hyun et al., 2022</xref>). This is not to deny that future artificial consciousness scenarios merit academic discussion. Rather, it is to observe that their salience often outpaces the current evidentiary base and risks crowding out governance of ongoing, demonstrable risks (<xref ref-type="bibr" rid="B52">Nestor and Wilson, 2025</xref>). Proportionate, evidence-sensitive oversight is therefore warranted. We thus regard a degree of speculative ethics as conditionally legitimate and valuable as a complementary &#x2013; rather than dominant &#x2013; mode of inquiry: it should map plausible risk envelopes, articulate guardrails, and specify early warning indicators, while remaining explicitly constrained by available evidence and recalibrated to empirical progress (see <xref ref-type="table" rid="T1">Table 1</xref>). While current evidence does not support consciousness in AI-NO systems, we acknowledge that consciousness-related speculative concerns would become empirically actionable were convergent indicators of morally relevant capacities to emerge (<xref ref-type="bibr" rid="B13">Croxford and Bayne, 2024</xref>).</p>
<table-wrap position="float" id="T1">
<label>TABLE 1</label>
<caption><p>Empirical anchoring of risk (Observed &#x2192; Plausible &#x2192; Possible).</p></caption>
<table cellspacing="5" cellpadding="5" frame="box" rules="all">
<thead>
<tr>
<th valign="top" align="left">Biological risks</th>
<th valign="top" align="left">Evidence now</th>
<th valign="top" align="left">Anchor example</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">Cell line mix-ups/contamination</td>
<td valign="top" align="left">Observed</td>
<td valign="top" align="left">Longstanding cell-culture errors</td>
</tr>
<tr>
<td valign="top" align="left">Tumor/teratoma risk if transplanted</td>
<td valign="top" align="left">Observed</td>
<td valign="top" align="left">Pluripotent stem cell therapy safety literature</td>
</tr>
<tr>
<td valign="top" align="left">Unintended differentiation/plasticity</td>
<td valign="top" align="left">Observed</td>
<td valign="top" align="left">Off-target lineage in stem-cell work</td>
</tr>
<tr>
<td valign="top" align="left">Ectopic circuit integration in hosts</td>
<td valign="top" align="left">Plausible</td>
<td valign="top" align="left">Neural graft effects in animals (e.g., seizure-like patterns within organoids)</td>
</tr>
<tr>
<td valign="top" align="left">Immunogenicity/rejection (allogeneic)</td>
<td valign="top" align="left">Observed</td>
<td valign="top" align="left">Human leukocyte antigen mismatch responses</td>
</tr>
<tr>
<td valign="top" align="left">High batch-to-batch variability</td>
<td valign="top" align="left">Observed</td>
<td valign="top" align="left">Organoid variability across labs</td>
</tr>
<tr>
<td valign="top" align="left"><bold>Algorithmic risks</bold></td>
<td valign="top" align="left"><bold>Evidence now</bold></td>
<td valign="top" align="left"><bold>Anchor example</bold></td>
</tr>
<tr>
<td valign="top" align="left">Dataset/algorithmic bias</td>
<td valign="top" align="left">Observed</td>
<td valign="top" align="left">Medical AI performance gaps</td>
</tr>
<tr>
<td valign="top" align="left">Hallucinated outputs</td>
<td valign="top" align="left">Observed</td>
<td valign="top" align="left">LLMs fabricating steps/citations</td>
</tr>
<tr>
<td valign="top" align="left">Black-box opacity</td>
<td valign="top" align="left">Observed</td>
<td valign="top" align="left">Low clinician trust/contestability</td>
</tr>
<tr>
<td valign="top" align="left">Distribution shift (out-of-scope data)</td>
<td valign="top" align="left">Observed</td>
<td valign="top" align="left">Deployed machine learning drifts across sites</td>
</tr>
<tr>
<td valign="top" align="left">Reward-hacking in closed loops</td>
<td valign="top" align="left">Plausible</td>
<td valign="top" align="left">Reinforcement learning systems optimize proxy artifacts</td>
</tr>
<tr>
<td valign="top" align="left">Data rights/privacy/IP disputes</td>
<td valign="top" align="left">Observed</td>
<td valign="top" align="left">Secondary use without consent</td>
</tr>
<tr>
<td valign="top" align="left">Cybersecurity/adversarial manipulation</td>
<td valign="top" align="left">Observed</td>
<td valign="top" align="left">Medical devices/machine learning attacks</td>
</tr>
<tr>
<td valign="top" align="left"><bold>Clinical risks</bold></td>
<td valign="top" align="left"><bold>Evidence now</bold></td>
<td valign="top" align="left"><bold>Anchor example</bold></td>
</tr>
<tr>
<td valign="top" align="left">Automation bias/deskilling</td>
<td valign="top" align="left">Observed</td>
<td valign="top" align="left">Clinician over-reliance</td>
</tr>
<tr>
<td valign="top" align="left">Equity of access/benefit sharing</td>
<td valign="top" align="left">Observed</td>
<td valign="top" align="left">Health-tech rollouts</td>
</tr>
<tr>
<td valign="top" align="left"><bold>Other risks</bold></td>
<td valign="top" align="left"><bold>Evidence now</bold></td>
<td valign="top" align="left"><bold>Anchor example</bold></td>
</tr>
<tr>
<td valign="top" align="left">Artificial consciousness/morally relevant sentience in AI-NO systems</td>
<td valign="top" align="left">Possible</td>
<td valign="top" align="left">Public claims around &#x201C;neurons in a dish&#x201D; sentience have been criticized and walked back; many neuroscientists argue critical conditions are missing (e.g., embodiment, rich sensory coupling).</td>
</tr>
</tbody>
</table>
<table-wrap-foot>
<fn><p>Legend: Observed = already seen in adjacent AI/biomedical contexts. Plausible = credible but not yet observed in AI-NO systems. Possible = speculated but no evidence for or against.</p></fn>
</table-wrap-foot>
</table-wrap>
<p>In contrast to these speculative ethical concerns, a common neuroscientific viewpoint is that, given our current understanding, consciousness in disembodied neural organoids remains epistemically uncertain and, at present, highly improbable. Many neuroscience articles dismiss concerns regarding neural organoid consciousness due to the absence of critical features such as embodiment, sensory inputs and environmental coupling (<xref ref-type="bibr" rid="B13">Croxford and Bayne, 2024</xref>). While we believe these arguments against organoid consciousness are too simplistic and require deeper investigation, we argue here that ethical work should prioritize proportionate, evidence-based oversight, rather than overextend protection to entities with hypothetical properties (<xref ref-type="bibr" rid="B52">Nestor and Wilson, 2025</xref>)<sup><xref ref-type="fn" rid="footnote1">1</xref></sup>.</p>
<p>Current discourse also shows a narrow focus on a single neural model &#x2013; neural organoids, which are small, largely unstructured and undeveloped tissues &#x2013; while overlooking longstanding research with resected neural tissues from animals and humans for research and clinical applications (<xref ref-type="bibr" rid="B65">Wickham et al., 2020</xref>), tissues obtained from organisms that are generally regarded as having moral status. These resected tissues can be significantly larger than neural organoids, have mature, developed structure and cellular composition, and may have undergone significant memory formation. Yet resected tissues are generally considered to be non-conscious entities with no moral status. This suggests the concerns around current neural organoids developing consciousness and moral status are, at best, premature &#x2013; especially given arguments that consciousness in NO is a comparatively low-priority topic relative to more pressing scientific and moral issues (<xref ref-type="bibr" rid="B9">Barnhart and Dierickx, 2023</xref>; <xref ref-type="bibr" rid="B61">Van Gyseghem et al., 2025</xref>).</p>
<p>In contrast to these more speculative claims regarding the potential for morally significant forms of experience in AI-NO systems, other real and urgent ethical priorities &#x2013; such as extending legal and regulatory protections to cognitively complex and clearly sentient non-primate animals used in novel and experimental settings &#x2013; demand immediate attention. Ethical and policy frameworks should be proportionate to the available evidence: rather than pre-emptively legislating for hypothetical AI-NO consciousness, resources should focus on strengthening animal ethics and on developing tools to detect morally relevant capacities in entities we already have good reason to believe can suffer. Pre-emptive legislation of AI-NO systems based on unevidenced capacities may undermine trust in their use, result in unjustified moratoriums, prevent the development of valuable new technologies that benefit society, and waste time on unnecessary regulations.</p>
<p>Why, then, does so much ethical attention gravitate toward speculative scenarios, such as AI-NO artificial consciousness? Several interrelated factors explain the disproportionate attention to speculative scenarios in neural organoid ethics. First, neurohype and media amplification (<xref ref-type="bibr" rid="B64">Wexler, 2019</xref>), a well-documented pattern in neuroscience communication: institutional press releases and media outlets often amplify early-stage findings through exaggerated claims and superlative language (<xref ref-type="bibr" rid="B22">Gilbert et al., 2018a</xref>,<xref ref-type="bibr" rid="B23">b</xref>, <xref ref-type="bibr" rid="B24">2019</xref>), while individuals and organizations may seek financial rewards or a raised public profile (<xref ref-type="bibr" rid="B11">Bretzner et al., 2011</xref>). This rhetorical phenomenon, neurohype, privileges dramatic possibilities over incremental realities, shaping both research agendas and public appetite for sensationalism. For example, calls to abandon terms such as &#x201C;mini-brains&#x201D; or &#x201C;brain-in-a-dish&#x201D; when referring to neural organoids reflect concerns about misleading metaphors (<xref ref-type="bibr" rid="B43">Kataoka et al., 2023</xref>; <xref ref-type="bibr" rid="B10">Bassil, 2024</xref>). Likewise, recent debates over &#x201C;mind-reading&#x201D; devices and speculative &#x201C;neurorights&#x201D; legislation have tended to fixate on far-fetched scenarios rather than realistic technical capabilities (<xref ref-type="bibr" rid="B20">Gilbert and Russo, 2024a</xref>,<xref ref-type="bibr" rid="B21">b</xref>). For AI-NO systems, this would mean creating premature or overbroad regulation and protecting the rights of emerging AI-NO entities with capacities that are not currently evidenced and may never materialize (<xref ref-type="bibr" rid="B52">Nestor and Wilson, 2025</xref>; <xref ref-type="bibr" rid="B3">Adam, 2025a</xref>). 
Second, the cognitive salience of certain imaginable scenarios. Concepts like &#x201C;mini-brains that might become conscious&#x201D; are highly imaginable and narratively compelling, granting them a visibility advantage over diffuse but pressing issues like neural-data governance or procurement protocols (<xref ref-type="bibr" rid="B44">Kataoka et al., 2025a</xref>). For instance, the framing of OI as a biocomputing frontier further reinforces this allure, despite its current technical immaturity and uncertainty (in this article, we refrain from using the term OI, preferring more descriptive terms for the form of integration within an AI-NO system). Finally, there exists cross-domain spillover from AI ethics. Organoid ethics debates have increasingly absorbed concerns from broader AI ethics, particularly from discussions around artificial general intelligence and existential risk. This spillover effect risks redirecting ethical scrutiny from current harms (e.g., algorithmic bias, inequitable access, data privacy) to hypothetical future threats, replicating a well-documented problem in AI governance (<xref ref-type="bibr" rid="B51">Mueller, 2024</xref>). If these speculative narratives dominate the ethical landscape, we risk consciousness inflation, where scarce regulatory resources and academic attention are monopolized by scenarios with little or no empirical basis. As recent calls in AI ethics have emphasized, governance efforts should refocus on real-world, empirically documented risks. The same imperative applies to neural organoid research: speculative ethics should not dominate discourse.</p>
<p>In contrast to speculative concerns, the integration of AI with neural organoids introduces a set of immediate, empirically grounded ethical challenges that remain underexplored. These include the governance of AI models and of the vast, high-dimensional datasets generated by organoids and linked patient information (involving a range of data governance issues such as data accuracy, bias and security); the safety and accountability of AI-driven closed-loop systems capable of delivering real-time electrical or pharmacological interventions; and the transparency and auditability of algorithmic decision-making in experimental and clinical contexts. Unlike the hypothetical possibility of the emergence of consciousness, these issues are already material: they affect how AI-NO systems are designed, validated, and deployed today, and they carry direct implications for patient privacy, research integrity, and translational equity. We therefore advocate rebalancing ethical attention: prioritize proportionate, evidence-based oversight of concrete risks while maintaining a watchful, non-inflationary posture toward speculative consciousness claims.</p>
</sec>
<sec id="S3">
<label>3</label>
<title>Potential issues when using AI generated data on neural organoids: research and drug discovery</title>
<p>To consider these concrete risks in more detail, we begin by describing two examples where data generated by an AI model informs subsequent work with neural organoids, but there is no direct connection between the two components (subsequent sections discuss examples involving direct data transfer between the two components).</p>
<p>Our first example is where data generated by an AI model is used to assist in neural organoid research planning. This example is reflected across all areas of AI-assisted scientific research (<xref ref-type="bibr" rid="B46">Khalifa and Albadawy, 2024</xref>; <xref ref-type="bibr" rid="B1">World Health Organisation, 2024</xref>), but has not yet been articulated in the neural organoid field. In this example, the AI model would be trained on protocols and other relevant data detailing neural organoid formation, development, maintenance and analysis from a variety of sources including research articles, guidance documents, patents, and user input. Model training could involve a continual learning process, where new information is provided by the software supplier or via user input. The model could also incorporate relevant ethical, regulatory and scientific guidelines, such as the International Society for Stem Cell Research (ISSCR) standards (<xref ref-type="bibr" rid="B38">International Society for Stem Cell Research, 2023</xref>). This type of AI model could be embedded in electronic lab notebooks to help define and optimize research methods, experimental protocols, analysis and statistics for a researcher. The aim of such a system would be to ensure more neural organoid studies are of high quality, ethically compliant and reproducible. However, the use of AI in this role may raise a range of issues, including the appropriate selection of training methods, protocols and ethical guidelines. For example, the AI models may be trained on and recommend &#x201C;gold standard&#x201D; research protocols using the most expensive equipment and materials, or specific commercial products, limiting the ability of less well-funded labs to perform these experimental protocols. It may only utilize information from high-impact (or open access) journals, ignoring data from more junior or less well-known scientists. 
It may be dominated by large numbers of articles based on research fads or cultures (e.g., favoring particular analytical methods) rather than validated and foundational data. It may also incorporate erroneous or fabricated data and articles which have been retracted. Information entered into the AI model by researchers may also be used to update the model; this could improve the model&#x2019;s accuracy or drive convergence on certain methods. The inclusion of certain ethical standards may direct researchers to follow ethical or regulatory rules or concerns raised by specific groups or jurisdictions which are not applicable to the user, such as avoiding the use of embryonic stem cells, avoiding the formation of certain cell types or limiting the length of cell culture to minimize the risk of morally significant states arising.</p>
<p>Even with properly curated training data, the training of AI models may be difficult. Poorly trained models may fail to capture the highly technical language or complex experimental protocols used in stem cell research, and important small differences between protocols; they may fail to identify critical experimental details, instead focusing on commonly used phrases; and they may fail to solve complex mathematical problems (<xref ref-type="bibr" rid="B15">Fang et al., 2025</xref>), so there is a non-negligible chance that any statistical information they provide will be incorrect. Model training may never provide sufficient information for every different experimental permutation (e.g., differences in genetic and cellular diversity) and experimental technique (e.g., variations due to systematic errors or sample contamination). AI models that are poorly designed, trained or used (e.g., via in-context learning with users inputting insufficient or inappropriate data) can provide false or misleading information, termed AI hallucinations, resulting in poor research protocols. To limit the risk and impact of AI hallucinations, the models could be required to provide reasoning for a suggested research protocol, so that users can identify incorrect recommendations and adjust or reject the protocols as required. However, even if this provided an adequate method for verifying a model&#x2019;s reliability, the need to invest additional time and resources in the verification process might detract from the benefits of using AI models in the first place.</p>
<p>An AI model behaving as intended may still result in inappropriate use in research planning. Users may become complacent, failing to recognize errors in protocols and significant details in their experiments. This could result in poor research outcomes, with users having difficulty justifying their research in publications and others being unable to reproduce their work. Reliance on AI models trained on published research may bias neural organoid research away from novel problems and techniques, leading to further growth in derivative-type publications. This could be compounded by reviewers utilizing the same AI models, leading to the rejection of articles or grant applications which apply different methods (<xref ref-type="bibr" rid="B4">Adam, 2025b</xref>).</p>
<p>The use of AI models in research planning may also lead to complicated intellectual property disputes (<xref ref-type="bibr" rid="B2">Aboy et al., 2025</xref>). For example, data provided by a user to train the model may be used by other people or the software provider for their own research planning. Identifying the inventor of a new discovery in these cases may be very difficult or impossible, and the legal status of the AI model itself has not been settled. The law governing AI in research is a complex topic which is only beginning to be investigated (<xref ref-type="bibr" rid="B18">Gaidartzi and Stamatoudi, 2025</xref>). New guidelines will need to be developed for consent processes covering the use of user data to update an AI model, and intellectual property ownership will need to be clearly articulated.</p>
<p>Given the abilities and limitations of AI-NO systems in research planning described above, there is a potential productivity benefit for researchers using AI models to plan or optimize routine neural organoid research, but likely less benefit for more innovative research, and a substantial risk to the quality of research if the models are used incorrectly. Ultimately, the appropriate use of AI models in neural organoid research may require such high levels of user oversight that they fail to provide significant advantages.</p>
<p>Our second example is where an AI model is used for drug discovery before screening on neural organoids. In this situation, an AI model may be trained on known drugs and drug targets to predict potential new drug entities. These drugs can then be screened on a neural organoid to assess impact on disease biomarkers and toxicity prior to animal and human trials. However, the quality of the drug predictions is limited by the training of the AI model and the validity of any biomarkers (<xref ref-type="bibr" rid="B30">Harris et al., 2024</xref>). To improve the quality of AI-led drug discovery, newer AI models are beginning to include other relevant training data, including Absorption, Distribution, Metabolism, Excretion, and Toxicity (ADMET) information (<xref ref-type="bibr" rid="B17">Fu and Chen, 2025</xref>). The difficulty with this approach is in knowing what training data is relevant and ensuring that the data are robust. This is of particular concern for drugs targeting the brain: the drugs must have the correct physicochemical properties to cross the blood-brain barrier; the brain displays many emergent features whose origins we do not understand, so we do not know what training data is relevant (<xref ref-type="bibr" rid="B29">Harris et al., 2020</xref>); and there are a limited number of drugs and known drug targets controlling neural disorders that can be used to train an AI model. Given the enormous complexity of biological systems, it is possible that AI models may never be trained on all of the relevant data, limiting the accuracy of any predictions. Consequently, neural organoids will play a critical role in validating data obtained from AI models, but there will remain a risk of serious harm or lack of effectiveness when AI-predicted drugs are assessed in a clinical trial.</p>
<p>Of further concern is that AI-led drug discovery would be trained on known interactions and clinical outcomes, so that new drug predictions will most likely identify &#x201C;me-too&#x201D; drugs which act in a similar manner to existing drugs. Consequently, any predicted drugs would not improve outcomes for intractable disorders requiring completely novel interventions, ultimately limiting the utility of AI-led drug discovery. A recent attempt to overcome this issue used an AI model to combine chemical fragments to identify novel antibiotics, but it proposed millions of potential compounds, many of which were unrealistic or could not be easily synthesized (<xref ref-type="bibr" rid="B50">Marchal, 2025</xref>). Moreover, any promising antibiotics identified by this process still need to be fully investigated to understand their mechanism of action, safety, efficacy, pharmacodynamics and pharmacokinetics. It remains to be seen if drug screening on AI-NO systems (as opposed to neural organoids without AI) will assist researchers significantly in identifying new therapies for treatment-refractory patients.</p>
</sec>
<sec id="S4">
<label>4</label>
<title>Potential issues when analyzing neural organoid data with AI models: research and disease screening</title>
<p>Neural organoids generate vast amounts of information which can be analyzed by an AI model, including electrophysiology, &#x2018;omics and structural data, with further integration with genetic, patient-derived and environmental data also possible. AI models can analyze these large datasets and detect small or irregular features. This type of AI-NO system may be used to predict biological and clinical outcomes, providing new insights into biological processes and identifying potential biomarkers for medical interventions or early disease diagnosis. However, this comes with a risk of incorrect detection when using a poorly trained or poorly used AI model.</p>
<p>AI models may be trained on one data type or combine multiple types with different formats, including text, numerical, image and audio formats, and proprietary information. Once again, there are potential issues raised by the training of the AI models which can result in AI hallucinations or misleading information. The training dataset must come from a properly labeled and representative population, as poorly trained models may incorrectly label test data (e.g., healthy biomarkers as unhealthy and vice versa). If the test data contains information not in the training dataset, the AI model may generate false information. And while it is hoped that AI models will only detect relevant biomarkers, poor training can result in the detection of irrelevant features, such as user-induced effects arising from a testing protocol, or reading user labels rather than measuring the actual sample.</p>
<p>AI hallucinations can also arise from users not providing the AI model with correct or sufficient information in the correct format to make a valid prediction. This may be due to the use of equipment or data formats different from those in the AI training data, errors in data curation [e.g., the &#x201C;vegetative electron microscopy&#x201D; optical character recognition error (<xref ref-type="bibr" rid="B16">Ford, 2025</xref>)], or data obtained from the neural organoids being too small or too variable for the AI model to analyze. In fact, it is not clear how an AI-NO system would handle variability and integrate different information. There are multiple sources of variability associated with neural organoids. Neural organoids within a single experiment can display variability due to different genetic clones, cellular composition and structure, resulting in variable behavior and biomarker expression. Information collected by different researchers and facilities/equipment can also vary due to different calibration, bias, systematic errors, analytical methods or reporting methods. Researchers deal with these issues through multiple sample repeats and statistical analyses, including rejection of outliers. It is not clear if an AI model would apply the same analytical processes, incorporate all of the data, or reject some data; and it may not be easy to determine what the model has done or why. Regardless of the cause, AI hallucinations causing incorrect or misleading neural organoid analysis can have varying impacts, ranging from wasted research efforts, to publication of false data, and misdiagnosis of patients.</p>
<p>A properly trained AI model can be used to measure biomarkers in test data which have been defined by a user, or to identify new biomarkers (<xref ref-type="bibr" rid="B5">Aerts et al., 2014</xref>). However, the measurement of a neural organoid biomarker by an AI model does not imply these features are relevant to the user or connected to humans in any way. Neural organoids are entities distinct from the patients they are derived from, and their biomarkers must be validated against patient biomarkers and clinical outcome assessments (<xref ref-type="bibr" rid="B30">Harris et al., 2024</xref>). The biomarkers measured by an AI model in a neural organoid may be considered validated biomarkers of human disease by professional bodies or regulatory agencies when measured in patients, but they must still be validated in neural organoids and in AI models. Many neural organoid biomarkers measured by an AI model will not be validated biomarkers; rather, they will be measurements made by individual researchers, equipment and software suppliers. The validity of these biomarkers may be further limited by the equipment and software available for analyzing neural organoids in comparison to those used to analyze patients (e.g., MRI and CT imaging may be used on patients but may be incompatible with AI-NO systems).</p>
<p>AI-NO systems used for data analysis will raise significant data governance issues relating to data rights. AI developers must gain permission to access all of the training data. Even if people are willing to provide tissue samples for personal diagnostics and treatment screening, they may not be willing to provide information for training an AI model, especially if the model is proprietary.</p>
<p>Finally, the reasoning behind why an AI model makes a certain prediction may not be understood by a user. This may confuse researchers when trying to understand what information is useful, exacerbating the big data problem of neural organoids, reducing trust in the results provided by an AI-NO system and ultimately limiting the utility of the information provided. Conversely, leaving all neural organoid analysis to an AI model trained on previous datasets may enhance user complacency, so that novel, interesting data is not identified, reducing the power of AI-NO systems in research. Consequently, AI-NO systems may provide a significant benefit to the productivity and accuracy of data analysis for research and disease screening platforms, but this will depend on the quality of data provided to them and the validity of the biomarkers they measure.</p>
</sec>
<sec id="S5">
<label>5</label>
<title>Potential issues when controlling neural organoids with an AI model: neural development and OI</title>
<p>Our final example involves an AI model being used to control direct interactions with neural organoids (e.g., controlling a device delivering electrical stimulation or drugs). In this system, the AI model would control the location, timing and type of intervention made to a neural organoid. This type of AI-NO system can be run in an open-loop setup, with the AI function controlled by a user or via information in a database; or in a closed-loop setup, involving real-time measurement and data analysis of neural organoid behavior. The open- or closed-loop structure of the AI-NO system and its intended purpose will likely determine the necessary training process of the AI model. When neural organoids are used in research and clinical applications, the precise delivery of chemical and electrical input may be critical to their development and behavior. Therefore, using an AI-NO system, rather than individual researchers, to control interventions on neural organoids may better direct tissue development, structure and function, ensuring the desired neural organoid is created and improving reproducibility. In the case of AI-NO systems used for OI-type applications, the incorporation of AI into neural organoids may be critical in forming the desired neural structure and information processing.</p>
<p>Many biological factors governing the electrophysiological behavior of neural systems have already been determined, such as neural connectivity, input-output functions, long-term potentiation and long-term depression. However, the rules governing an AI model stimulating a neural system may not be known or may even contradict typical neural behavior (e.g., stimulation timings or locations may occur in unexpected patterns). While the AI model supplies some form of intervention, it may not be clear why it delivers a particular type, timing or location for that intervention. Consequently, any AI-driven intervention may be atypical, resulting in unwanted organoid behavior or development. This could be exacerbated in a closed-loop system, where the AI model is analyzing organoid data which may also be atypical. For example, electrical stimulation induces an artifact which resembles a large action potential (spike). A poorly trained, closed-loop AI-NO system could maximize &#x201C;spike rate&#x201D; by repeatedly stimulating the organoid to induce electrical artifacts. If the AI-NO system induces the formation of atypical neural structures, it could alter the organoid&#x2019;s behavior in a way which undermines its intended role (e.g., inducing spontaneous seizures); and the organoid may develop novel behaviors which cannot be recognized by a closed-loop AI model.</p>
<p>Another issue is that people using AI-NO systems may fail to understand the system&#x2019;s learning processes, leading to incorrect attribution of system capabilities and properties. An AI-NO system involves two forms of learning: the AI model can update its analysis and control of neural organoid behavior; and the neural organoid can change network structure, synapse connectivity and strength. AI learning (training) is relatively fast, while the different forms of neural organoid learning occur over longer timescales. Differences in learning speed may be a benefit in studying learning processes in AI-NO systems, but could result in a failure of integration between AI and organoid. AI learning may be associated with better detection of specific electrophysiological patterns on a single neural organoid, which can be achieved without any biological learning. However, continual learning of an AI model may result in constant changes to the model in response to an organoid&#x2019;s spontaneous activity or background noise. Furthermore, the structure and behavior of organoids are notoriously variable, so an AI model which has been trained on the behavior of one neural organoid may not be correctly trained to analyze another organoid. Conversely, neural organoid learning can result in long-term changes in behavior, which may be detected by AI models created from a range of training datasets. The risk is that users may misinterpret learning in an AI-NO system as neural organoid learning and memory instead of the less impressive AI learning and memory. Neural organoid learning must be confirmed by applying the same AI model to different neural organoids, and vice versa; and by immunohistochemical and microscopic analysis of neural organoid structure. 
To our knowledge, only short-term learning processes (a few minutes in duration) associated with short-term potentiation/depression and long-term potentiation/depression have been demonstrated in neural organoids (<xref ref-type="bibr" rid="B66">Zafeiriou et al., 2020</xref>); no long-term learning and memory processes associated with changes in neural structure have been shown to date. While previous studies provide important baseline data on the capabilities of an AI-NO system with AI learning, only after long-term neural organoid learning has been demonstrated will the full capabilities of AI-NO systems be ascertained. Researchers claiming current AI-NO systems are learning may not be distinguishing between AI and neural organoid learning and memory, or, more worryingly, they may be leaning into neurohype.</p>
<p>There is great potential and benefit in using AI to control neural organoids. More detailed analysis of the capabilities and ethical issues of this application will depend on the research outcomes published over the coming years.</p>
</sec>
<sec id="S6">
<label>6</label>
<title>Potential issues arising from clinical application of neural organoids integrated with AI models: personalized drug and disease screening</title>
<p>It is expected that neural organoids will play an increasing role in clinical decision making, particularly for personalized drug screening and phenotypic disease screening. In these applications, cells obtained from individual patients are converted into neural organoids whose phenotypic features are compared to those of control organoids for disease diagnosis, while changes in phenotypic features after exposure to different drugs would be used for high-throughput drug screening. We have previously investigated some of the ethical and regulatory issues related to these clinical applications (<xref ref-type="bibr" rid="B62">Walker et al., 2019</xref>, <xref ref-type="bibr" rid="B63">2022</xref>; <xref ref-type="bibr" rid="B29">Harris et al., 2020</xref>, <xref ref-type="bibr" rid="B32">2022b</xref>,<xref ref-type="bibr" rid="B28">2023</xref>, <xref ref-type="bibr" rid="B33">2025</xref>). As discussed above, the integration of AI into the analysis of neural organoid data may have significant benefits in dealing with large datasets and incorporating other relevant data. For example, the AI model could integrate geographical, population and lifestyle information to provide more accurate and patient-specific diagnostics or treatment recommendations, or more rapidly identify rarer diseases that may otherwise be missed or diagnosed slowly by healthcare workers. However, poorly trained and used AI models may provide inaccurate or misleading information. This section examines some of the unique issues an AI-NO system could raise when used for clinical decision making<sup><xref ref-type="fn" rid="footnote2">2</xref></sup>.</p>
<p>In the clinical application of AI-NO systems, inappropriate AI training and hallucinations could result in significant health implications through misdiagnosis and incorrect drug recommendations. This would also reduce trust in the technology. Consequently, the training requirements of the AI model would be stricter than for non-clinical applications of AI-NO systems. Training should be based on validated biomarkers obtained from the most recent clinical knowledge and accepted diagnostic criteria from professional bodies. This would prevent AI-NO systems recommending proprietary treatments, under- or over-diagnosing conditions or providing information biased by religious or political views. Furthermore, the AI model should only be updated after appropriate validation processes have occurred; it should not undergo user-based continual learning, which may introduce biases or errors. The downside of using a fixed AI model is that it may prevent optimization of clinical outcomes for certain groups, although such optimization may come at the expense of other people (<xref ref-type="bibr" rid="B59">Sparrow et al., 2024</xref>). A fixed model would also lag behind clinical research, so that diagnoses and treatments based on the latest clinical knowledge would require other diagnostic methods. An alternative strategy could be to allow continual learning of the AI model while classifying its use as medical research and any patients as research subjects (<xref ref-type="bibr" rid="B59">Sparrow et al., 2024</xref>). However, research subjects, as opposed to patients, are not expected to receive benefit from a clinical intervention; and an AI model trained on one individual may not generate data considered generalizable knowledge applicable to other patients.</p>
<p>While the use of AI-NO systems may help detect rarer diseases, there may be less data on these diseases for training the AI model. Various co-morbidities and emerging conditions (e.g., novel infections, diseases, toxic contamination) may also not be included in training data and subsequently be incapable of being identified by the AI model. As a result, AI-NO systems may fail to deliver significant benefits in diagnosing rare, complex and emerging diseases. A lack of training data, or novel behavior outside of the training data, may also increase the risk of AI hallucinations. In such cases, it is critical that AI-NO systems convey an &#x201C;I do not know&#x201D; response, rather than providing an incorrect diagnosis or drug recommendation.</p>
<p>The regulation of AI-NO systems for clinical use will be complicated and impacted by jurisdiction (<xref ref-type="bibr" rid="B6">Angus et al., 2025</xref>). The decision to classify AI-NO systems as either a medical device or a research tool will have dramatic implications for how the technology is regulated, who maintains oversight, patient informed consent requirements and health insurance coverage (<xref ref-type="bibr" rid="B59">Sparrow et al., 2024</xref>). Diagnostic devices, including companion diagnostics, have varying levels of regulatory oversight which can depend on their accuracy and intended use. Meanwhile, regulations over AI are yet to be created in many jurisdictions. The European Union has only recently created the EU AI Act, a world-first legal framework governing all AI use within the EU. The Act classifies AI models according to risk level (from unacceptable to minimal risk), with medical devices utilizing AI considered high-risk. However, even in the EU, the oversight of AI-NO systems under current regulations may not be sufficient, as the classification of AI-NO systems as medical devices is unclear, traditional evaluation methods for medical devices are poorly matched to AI, and safeguards have not yet been implemented (<xref ref-type="bibr" rid="B55">Ong et al., 2025</xref>). Regulatory approval of the organoid component will require validation of its biomarker expression against patient biomarkers, symptoms and clinical outcome assessments (<xref ref-type="bibr" rid="B30">Harris et al., 2024</xref>). The AI component will likely require a different form of validation, which ensures the model is running as intended and accurately measures the organoid biomarkers. However, the validation process may be hindered for a range of reasons, including inconsistencies between patients and poor measurement of patient biomarkers, symptoms and clinical outcome assessments. 
Critically, any changes to the organoid or AI model would require full system revalidation. To ensure safe clinical application of AI-NO systems and to build trust in their use, there must be an international organization to promote uniform standards and governance of AI in clinical practice (<xref ref-type="bibr" rid="B55">Ong et al., 2025</xref>).</p>
<p>Once approved for use, the processes for integrating AI-NO systems into clinical decision making will need to be developed. AI-NO systems used for personalized drug screening may provide a large list of potentially suitable drugs with no other information to assist clinicians in choosing the most appropriate drug for a patient, requiring complex decision processes. A clinician could incorporate other patient information into the decision-making process after receiving the results from the AI-NO system; alternatively, they could provide information to the AI-NO system to shorten the drug list or identify an optimum drug candidate. This information could be provided to the AI model before or after the drug screening process has occurred. The information could include a range of patient data, such as lifestyle choices, co-morbidities and the impact of potential side effects. However, the choice of information provided by a clinician may be biased, leading to suboptimal clinical outcomes or entrenchment of biases already present in the healthcare system. The system may also have limited or no capacity to weigh moral judgments or apply the precautionary principle, only recommending drugs with a maximum impact on an organoid&#x2019;s phenotypic behavior. There will need to be very clear, standardized user input into the AI model to ensure it provides appropriate clinical information. It may be necessary to run clinical trials to determine the most appropriate method for integrating AI-NO systems into clinical decision making.</p>
<p>The use of AI-NO systems in clinical decision making may also affect clinician behavior. Clinicians may be reluctant to diagnose patients by phenotypic screening on SCTCs, as the biomarkers may have no visible relationship with patient biomarkers and symptoms. Patient diagnosis with an AI-NO system may be even more difficult, as the AI model may be detecting very obscure biomarkers. Furthermore, as current AI models do not provide explanation or justification, clinicians may not understand their drug recommendations or diagnoses, and may be hesitant to use this information (<xref ref-type="bibr" rid="B36">Hatherley et al., 2024</xref>). This has even led to arguments that it is more important for AI predictions to be interpretable than accurate, as clinicians can inspect information flagged by the AI model and integrate other types of information for clinical decision making (<xref ref-type="bibr" rid="B36">Hatherley et al., 2024</xref>). Conversely, clinicians may feel financial, social or time pressures to transfer decision making over to AI-NO systems (<xref ref-type="bibr" rid="B25">Goddard et al., 2012</xref>). If clinicians become over-reliant on the technology, it could lead to cognitive deskilling, where clinicians reduce their expertise or lose confidence in their abilities. This could result in an over-valuing of data provided by the AI model, even against their own judgments, and increase the risk of patient harms (<xref ref-type="bibr" rid="B25">Goddard et al., 2012</xref>). This will require an educational process for clinicians and pathology lab workers to understand the capabilities and limitations of AI-NO systems.</p>
<p>There should be debate around the appropriate response when a patient receives an incorrect diagnosis or treatment based on data obtained from an AI-NO system. This debate should determine who holds responsibility for each step of AI-NO use, examine existing malpractice frameworks to determine if they are suitable, and provide guidance on how pathology labs and clinicians detect errors and adjust a patient&#x2019;s healthcare. It should also create guidelines and procedures to determine who is liable in the case of clinical errors (e.g., clinicians, pathology lab technicians, biochemical, hardware and software suppliers). Other relevant organizations should also be consulted to determine if they understand the capabilities and limitations of AI-NO systems, and if the data the systems provide is sufficient for the organization to perform its role in the provision of patient healthcare (e.g., regulators, human research ethics committees, insurance companies).</p>
<p>Finally, the development of AI-NO systems requires significant expertise, lab facilities and data storage; this could restrict access to AI-NO systems to more affluent jurisdictions and to the conditions more common in those jurisdictions. Efforts will be needed to ensure AI-NO systems are made available to poorer communities and to the diseases that affect them. The complex validation processes may also result in jurisdictions with less developed healthcare systems using AI-NO systems for clinical applications without the necessary validation, AI training and healthcare worker education. An education campaign will be needed to ensure AI-NO systems are only used after appropriate validation has been undertaken.</p>
</sec>
<sec id="S7" sec-type="conclusion">
<label>7</label>
<title>Conclusion</title>
<p>AI-NO systems may assist in biological research, drug development, personalized drug screening, toxicity testing, phenotypic disease screening and regenerative medicine applications, and play a role in understanding learning, memory formation and biological computing. These benefits may be achieved by recommending optimal and ethical research protocols, identifying new drug entities, analyzing large datasets, identifying relevant biomarkers, and controlling interventions to direct organoid structure and function. However, the combination of two high-impact research fields, neural organoids and AI, risks fomenting highly speculative discourse around the formation of conscious entities with moral status, demanding immediate regulatory protections. This may lead to pre-emptive legislation of AI-NO systems based on unfounded capacities, undermining trust in the technology, an unjustified moratorium on their development and use, prevention of valuable new technologies that benefit society, and wasted time developing unnecessary regulations. We argue that ethical discussions around AI-NO systems should be based on more immediate, empirically grounded issues and recommend clear governance actions (for some examples see <xref ref-type="table" rid="T2">Table 2</xref>).</p>
<table-wrap position="float" id="T2">
<label>TABLE 2</label>
<caption><p>Where AI meets organoids &#x2192; what can go wrong &#x2192; what to do.</p></caption>
<table cellspacing="5" cellpadding="5" frame="box" rules="all">
<thead>
<tr>
<th valign="top" align="left">AI-NO integration</th>
<th valign="top" align="left">Main risk (one-liner)</th>
<th valign="top" align="left">Main governance action</th>
</tr>
</thead>
<tbody>
<tr>
<td valign="top" align="left">AI helps plan experiments (e.g., smart assistants)</td>
<td valign="top" align="left">Wrong or biased steps creep into protocols</td>
<td valign="top" align="left">Require sources + human sign-off for each suggested step</td>
</tr>
<tr>
<td valign="top" align="left">AI suggests new drugs &#x2192; then screen on organoids</td>
<td valign="top" align="left">&#x201C;Me-too&#x201D; drugs or false positives from weak biomarkers</td>
<td valign="top" align="left">Pre-specify validation; only move forward if human biomarkers align</td>
</tr>
<tr>
<td valign="top" align="left">AI analyses organoid data (e.g., &#x2018;omics/imaging)</td>
<td valign="top" align="left">Mislabels or chases artifacts; privacy conflicts</td>
<td valign="top" align="left">Use representative datasets; log decisions; allow &#x201C;I don&#x2019;t know&#x201D; outputs</td>
</tr>
<tr>
<td valign="top" align="left">AI controls stimulation/drug delivery (open-loop)</td>
<td valign="top" align="left">Unsafe timing/dose patterns</td>
<td valign="top" align="left">Bound parameters; safety interlocks; full activity logs</td>
</tr>
<tr>
<td valign="top" align="left">AI controls in real time (closed-loop)</td>
<td valign="top" align="left">&#x201C;Rewards-hacks&#x201D; artifacts (e.g., boosts spike artifacts)</td>
<td valign="top" align="left">Multi-metric objectives + artifact filters; safe algorithm development</td>
</tr>
<tr>
<td valign="top" align="left">Clinical decision support using AI-NO systems</td>
<td valign="top" align="left">Misdiagnosis; automation bias; unclear liability</td>
<td valign="top" align="left">Locked/validated models; clinician in the loop; incident reporting</td>
</tr>
</tbody>
</table></table-wrap>
<p>The development of useful AI-NO systems depends on their training protocol and use. To prevent AI hallucinations, all relevant data must be included when training an AI model, and users must input appropriate data.</p>
<p>AI-NO systems used in research planning may increase user productivity in routine neural organoid research, but be of less benefit for more innovative research protocols. They may also lead to complicated intellectual property disputes that are only in the early stages of being addressed by legal scholars. The use of AI models in neural organoid research may require such high levels of user oversight that they fail to provide significant advantages.</p>
<p>Artificial intelligence-led drug discovery has the potential to identify many new therapies, but given the enormous complexity of biological systems, it is possible that AI models may never be trained on all the relevant data, limiting the accuracy of any predictions. Consequently, neural organoids will be critical in validating data obtained from AI models. However, the training of AI models on known drugs and drug targets will likely identify me-too drugs that act in a similar manner to existing drugs. Consequently, AI-predicted drugs may not improve outcomes for intractable disorders requiring completely novel interventions, ultimately limiting the utility of AI-led drug discovery.</p>
<p>AI models may be able to analyze large datasets obtained from neural organoids. However, neural organoids are notoriously variable, and it is not yet clear how an AI model would handle this variability to provide useful data. Organoids are also entities distinct from the people they are derived from. Consequently, organoid biomarkers must be validated against patient biomarkers and clinical outcome assessments (<xref ref-type="bibr" rid="B30">Harris et al., 2024</xref>). Many neural organoid biomarkers measured by an AI model will not be validated biomarkers. This may impact users&#x2019; understanding and appropriate use of the data provided by AI-NO systems.</p>
<p>AI models may be important tools in controlling interventions on neural organoids to ensure appropriate development and function. This can be implemented in open- or closed-loop configurations. These systems will involve two forms of learning, one associated with the AI model and one with the neural organoid. There is a risk that users will fail to understand the system&#x2019;s learning processes, leading to incorrect attribution of system capabilities and properties.</p>
<p>Finally, AI-NO systems may play various roles in clinical decision making. Poorly trained AI models may entrench bias in medicine or lead to patient harms. AI-NO systems may also modify clinician behaviors in ways that undermine the potential benefits of the technology. It is critical that AI-NO systems used in clinical decision making are properly validated and that the users, be they pathology labs or clinicians, understand the system&#x2019;s capabilities and limitations. However, the classification of AI-NO systems and the relevant regulations have not yet been developed. Decisions on how to classify AI-NO systems will have dramatic implications for how the technology is regulated, who maintains oversight, patient informed consent requirements and health insurance coverage.</p>
</sec>
</body>
<back>
<sec id="S8" sec-type="data-availability">
<title>Data availability statement</title>
<p>The original contributions presented in this study are included in this article/supplementary material, further inquiries can be directed to the corresponding author.</p>
</sec>
<sec id="S9" sec-type="author-contributions">
<title>Author contributions</title>
<p>AH: Conceptualization, Writing &#x2013; original draft, Writing &#x2013; review &#x0026; editing, Investigation, Validation. PM: Writing &#x2013; review &#x0026; editing. KW: Writing &#x2013; review &#x0026; editing. FG: Conceptualization, Investigation, Validation, Writing &#x2013; original draft, Writing &#x2013; review &#x0026; editing.</p>
</sec>
<ack>
<title>Acknowledgments</title>
<p>FG thanks Professor Sawai&#x2019;s Lab and team for inspiring these reflections on speculation during his time at Hiroshima University. FG also warmly acknowledges the generous residency offered by the Lin family in Nakameguro, Tokyo, which provided the ideal environment for the inception of this work and the development of its premises.</p>
</ack>
<sec id="S11" sec-type="COI-statement">
<title>Conflict of interest</title>
<p>The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
<p>The author AH declared that they were an editorial board member of Frontiers at the time of submission. This had no impact on the peer review process and the final decision.</p>
</sec>
<sec id="S12" sec-type="ai-statement">
<title>Generative AI statement</title>
<p>The author(s) declared that generative AI was not used in the creation of this manuscript.</p>
<p>Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.</p>
</sec>
<sec id="S13" sec-type="disclaimer">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="B1"><mixed-citation publication-type="book"><collab>World Health Organisation</collab> (<year>2024</year>). <source><italic>Ethics and Governance of Artificial Intelligence for Health: Guidance on Large Multi-Modal Models</italic></source>. <publisher-loc>Geneva</publisher-loc>: <publisher-name>World Health Organisation</publisher-name>.</mixed-citation></ref>
<ref id="B2"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Aboy</surname> <given-names>M.</given-names></name> <name><surname>Liddell</surname> <given-names>K.</given-names></name> <name><surname>Lath</surname> <given-names>A.</given-names></name></person-group> (<year>2025</year>). <article-title>Inventorship in the age of AI: Examining the USPTO Guidance on AI-assisted inventions.</article-title> <source><italic>J. Intellect. Prop. Law Pract.</italic></source> <volume>20</volume> <fpage>495</fpage>&#x2013;<lpage>502</lpage>.</mixed-citation></ref>
<ref id="B3"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Adam</surname> <given-names>D.</given-names></name></person-group> (<year>2025a</year>). <article-title>The computers that run on human brain cells.</article-title> <source><italic>Nature</italic></source> <volume>647</volume> <fpage>306</fpage>&#x2013;<lpage>308</lpage>. <pub-id pub-id-type="doi">10.1038/d41586-025-03633-0</pub-id> <pub-id pub-id-type="pmid">41219421</pub-id></mixed-citation></ref>
<ref id="B4"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Adam</surname> <given-names>D.</given-names></name></person-group> (<year>2025b</year>). <article-title>When AI rejects your grant proposal: Algorithms are helping to make funding decisions.</article-title> <source><italic>Nature</italic></source> <volume>645</volume> <fpage>832</fpage>&#x2013;<lpage>833</lpage>. <pub-id pub-id-type="doi">10.1038/d41586-025-02852-9</pub-id> <pub-id pub-id-type="pmid">40913122</pub-id></mixed-citation></ref>
<ref id="B5"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Aerts</surname> <given-names>H. J. W. L.</given-names></name> <name><surname>Velazquez</surname> <given-names>E. R.</given-names></name> <name><surname>Leijenaar</surname> <given-names>R. T. H.</given-names></name> <name><surname>Parmar</surname> <given-names>C.</given-names></name> <name><surname>Grossmann</surname> <given-names>P.</given-names></name> <name><surname>Carvalho</surname> <given-names>S.</given-names></name><etal/></person-group> (<year>2014</year>). <article-title>Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach.</article-title> <source><italic>Nat. Commun.</italic></source> <volume>5</volume>:<fpage>4006</fpage>. <pub-id pub-id-type="doi">10.1038/ncomms5006</pub-id> <pub-id pub-id-type="pmid">24892406</pub-id></mixed-citation></ref>
<ref id="B6"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Angus</surname> <given-names>D. C.</given-names></name> <name><surname>Khera</surname> <given-names>R.</given-names></name> <name><surname>Lieu</surname> <given-names>T.</given-names></name> <name><surname>Liu</surname> <given-names>V.</given-names></name> <name><surname>Ahmad</surname> <given-names>F. S.</given-names></name> <name><surname>Anderson</surname> <given-names>B.</given-names></name><etal/></person-group> (<year>2025</year>). <article-title>AI, health, and health care today and tomorrow: The JAMA summit report on artificial intelligence.</article-title> <source><italic>JAMA</italic></source> <volume>334</volume> <fpage>1650</fpage>&#x2013;<lpage>1664</lpage>. <pub-id pub-id-type="doi">10.1001/jama.2025.18490</pub-id> <pub-id pub-id-type="pmid">41082366</pub-id></mixed-citation></ref>
<ref id="B7"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bai</surname> <given-names>L.</given-names></name> <name><surname>Wang</surname> <given-names>J.</given-names></name> <name><surname>Lai</surname> <given-names>Y.</given-names></name> <name><surname>Su</surname> <given-names>J.</given-names></name></person-group> (<year>2025</year>). <article-title>Living intelligence toward human-level models (HLMs) via Organoid-AI integration.</article-title> <source><italic>EngMedicine</italic></source> <volume>2</volume>:<fpage>100106</fpage>. <pub-id pub-id-type="doi">10.1016/j.engmed.2025.100106</pub-id></mixed-citation></ref>
<ref id="B8"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Balci</surname> <given-names>F.</given-names></name> <name><surname>Ben Hamed</surname> <given-names>S.</given-names></name> <name><surname>Boraud</surname> <given-names>T.</given-names></name> <name><surname>Bouret</surname> <given-names>S.</given-names></name> <name><surname>Brochier</surname> <given-names>T.</given-names></name> <name><surname>Brun</surname> <given-names>C.</given-names></name><etal/></person-group> (<year>2023</year>). <article-title>A response to claims of emergent intelligence and sentience in a dish.</article-title> <source><italic>Neuron</italic></source> <volume>111</volume> <fpage>604</fpage>&#x2013;<lpage>605</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2023.02.009</pub-id> <pub-id pub-id-type="pmid">36863319</pub-id></mixed-citation></ref>
<ref id="B9"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Barnhart</surname> <given-names>A. J.</given-names></name> <name><surname>Dierickx</surname> <given-names>K.</given-names></name></person-group> (<year>2023</year>). <article-title>Moving beyond the moral status of organoid-entities.</article-title> <source><italic>Bioethics</italic></source> <volume>37</volume> <fpage>103</fpage>&#x2013;<lpage>110</lpage>. <pub-id pub-id-type="doi">10.1111/bioe.13098</pub-id> <pub-id pub-id-type="pmid">36322903</pub-id></mixed-citation></ref>
<ref id="B10"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bassil</surname> <given-names>K.</given-names></name></person-group> (<year>2024</year>). <article-title>The end of &#x2018;mini-brains&#x2019;! Responsible communication of brain organoid research.</article-title> <source><italic>Mol. Psychol. Brain, Behav. Soc.</italic></source> <volume>2</volume>:<fpage>13</fpage>. <pub-id pub-id-type="doi">10.12688/molpsychol.17534.1</pub-id></mixed-citation></ref>
<ref id="B11"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Bretzner</surname> <given-names>F.</given-names></name> <name><surname>Gilbert</surname> <given-names>F.</given-names></name> <name><surname>Baylis</surname> <given-names>F.</given-names></name> <name><surname>Brownstone</surname> <given-names>R. M.</given-names></name></person-group> (<year>2011</year>). <article-title>Target populations for first-in-human embryonic stem cell research in spinal cord injury.</article-title> <source><italic>Cell Stem Cell</italic></source> <volume>8</volume> <fpage>468</fpage>&#x2013;<lpage>475</lpage>. <pub-id pub-id-type="doi">10.1016/j.stem.2011.04.012</pub-id> <pub-id pub-id-type="pmid">21549321</pub-id></mixed-citation></ref>
<ref id="B12"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Caulfield</surname> <given-names>T.</given-names></name></person-group> (<year>2016</year>). <article-title>Ethics hype?</article-title> <source><italic>Hastings Cent. Rep.</italic></source> <volume>46</volume> <fpage>13</fpage>&#x2013;<lpage>16</lpage>. <pub-id pub-id-type="doi">10.1002/hast.612</pub-id> <pub-id pub-id-type="pmid">27649824</pub-id></mixed-citation></ref>
<ref id="B13"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Croxford</surname> <given-names>J.</given-names></name> <name><surname>Bayne</surname> <given-names>T.</given-names></name></person-group> (<year>2024</year>). <article-title>The case against organoid consciousness.</article-title> <source><italic>Neuroethics</italic></source> <volume>17</volume>:<fpage>13</fpage>. <pub-id pub-id-type="doi">10.1007/s12152-024-09548-3</pub-id></mixed-citation></ref>
<ref id="B14"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>de Jongh</surname> <given-names>D.</given-names></name> <name><surname>Massey</surname> <given-names>E. K.</given-names></name> <name><surname>Berishvili</surname> <given-names>E.</given-names></name> <name><surname>Fonseca</surname> <given-names>L. M.</given-names></name> <name><surname>Lebreton</surname> <given-names>F.</given-names></name> <name><surname>Bellofatto</surname> <given-names>K.</given-names></name><etal/></person-group> (<year>2022</year>). <article-title>Organoids: A systematic review of ethical issues.</article-title> <source><italic>Stem Cell Res. Ther.</italic></source> <volume>13</volume>:<fpage>337</fpage>. <pub-id pub-id-type="doi">10.1186/s13287-022-02950-9</pub-id> <pub-id pub-id-type="pmid">35870991</pub-id></mixed-citation></ref>
<ref id="B15"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Fang</surname> <given-names>M.</given-names></name> <name><surname>Wan</surname> <given-names>X.</given-names></name> <name><surname>Lu</surname> <given-names>F.</given-names></name> <name><surname>Xing</surname> <given-names>F.</given-names></name> <name><surname>Zou</surname> <given-names>K.</given-names></name></person-group> (<year>2025</year>). <article-title>MathOdyssey: Benchmarking mathematical problem-solving skills in large language models using odyssey math data.</article-title> <source><italic>Sci. Data</italic></source> <volume>12</volume>:<fpage>1392</fpage>. <pub-id pub-id-type="doi">10.1038/s41597-025-05283-3</pub-id> <pub-id pub-id-type="pmid">40781231</pub-id></mixed-citation></ref>
<ref id="B16"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Ford</surname> <given-names>J.</given-names></name></person-group> (<year>2025</year>). <source><italic>Is There Such a Thing as a &#x201C;Vegetative Electron Microscope&#x201D;? Doubtful.</italic></source> <publisher-loc>London</publisher-loc>: <publisher-name>New Scientist</publisher-name>.</mixed-citation></ref>
<ref id="B17"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Fu</surname> <given-names>C.</given-names></name> <name><surname>Chen</surname> <given-names>Q.</given-names></name></person-group> (<year>2025</year>). <article-title>The future of pharmaceuticals: Artificial intelligence in drug discovery and development.</article-title> <source><italic>J. Pharm. Anal.</italic></source> <volume>15</volume>:<fpage>101248</fpage>. <pub-id pub-id-type="doi">10.1016/j.jpha.2025.101248</pub-id> <pub-id pub-id-type="pmid">40893437</pub-id></mixed-citation></ref>
<ref id="B18"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Gaidartzi</surname> <given-names>A.</given-names></name> <name><surname>Stamatoudi</surname> <given-names>I.</given-names></name></person-group> (<year>2025</year>). <article-title>Authorship and ownership issues raised by AI-Generated works: A comparative analysis.</article-title> <source><italic>Laws</italic></source> <volume>14</volume>:<fpage>57</fpage>. <pub-id pub-id-type="doi">10.3390/laws14040057</pub-id></mixed-citation></ref>
<ref id="B19"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Gilbert</surname> <given-names>F.</given-names></name> <name><surname>Goddard</surname> <given-names>E.</given-names></name></person-group> (<year>2014</year>). <article-title>Thinking ahead too much: Speculative ethics and implantable brain devices.</article-title> <source><italic>AJOB Neurosci.</italic></source> <volume>5</volume> <fpage>49</fpage>&#x2013;<lpage>51</lpage>. <pub-id pub-id-type="doi">10.1080/21507740.2013.863252</pub-id></mixed-citation></ref>
<ref id="B20"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Gilbert</surname> <given-names>F.</given-names></name> <name><surname>Russo</surname> <given-names>I.</given-names></name></person-group> (<year>2024a</year>). <article-title>Mind-reading in AI and neurotechnology: Evaluating claims, hype, and ethical implications for neurorights.</article-title> <source><italic>AI Ethics</italic></source> <volume>4</volume> <fpage>855</fpage>&#x2013;<lpage>872</lpage>. <pub-id pub-id-type="doi">10.1007/s43681-024-00514-6</pub-id></mixed-citation></ref>
<ref id="B21"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Gilbert</surname> <given-names>F.</given-names></name> <name><surname>Russo</surname> <given-names>I.</given-names></name></person-group> (<year>2024b</year>). <article-title>Neurorights: The land of speculative ethics and alarming claims?</article-title> <source><italic>AJOB Neurosci.</italic></source> <volume>15</volume> <fpage>113</fpage>&#x2013;<lpage>115</lpage>. <pub-id pub-id-type="doi">10.1080/21507740.2024.2328244</pub-id> <pub-id pub-id-type="pmid">38568703</pub-id></mixed-citation></ref>
<ref id="B22"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Gilbert</surname> <given-names>F.</given-names></name> <name><surname>O&#x2019;Connell</surname> <given-names>C. D.</given-names></name> <name><surname>Mladenovska</surname> <given-names>T.</given-names></name> <name><surname>Dodds</surname> <given-names>S.</given-names></name></person-group> (<year>2018a</year>). <article-title>Print me an organ? ethical and regulatory issues emerging from 3D bioprinting in medicine.</article-title> <source><italic>Sci. Eng. Ethics</italic></source> <volume>24</volume> <fpage>73</fpage>&#x2013;<lpage>91</lpage>. <pub-id pub-id-type="doi">10.1007/s11948-017-9874-6</pub-id> <pub-id pub-id-type="pmid">28185142</pub-id></mixed-citation></ref>
<ref id="B23"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Gilbert</surname> <given-names>F.</given-names></name> <name><surname>Via&#x00F1;a</surname> <given-names>J. N. M.</given-names></name> <name><surname>O&#x2019;Connell</surname> <given-names>C. D.</given-names></name> <name><surname>Dodds</surname> <given-names>S.</given-names></name></person-group> (<year>2018b</year>). <article-title>Enthusiastic portrayal of 3D bioprinting in the media: Ethical side effects.</article-title> <source><italic>Bioethics</italic></source> <volume>32</volume> <fpage>94</fpage>&#x2013;<lpage>102</lpage>. <pub-id pub-id-type="doi">10.1111/bioe.12414</pub-id> <pub-id pub-id-type="pmid">29171867</pub-id></mixed-citation></ref>
<ref id="B24"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Gilbert</surname> <given-names>F.</given-names></name> <name><surname>Pham</surname> <given-names>C.</given-names></name> <name><surname>Via&#x00F1;a</surname> <given-names>J.</given-names></name> <name><surname>Gillam</surname> <given-names>W.</given-names></name></person-group> (<year>2019</year>). <article-title>Increasing brain-computer interface media depictions: Pressing ethical concerns.</article-title> <source><italic>Brain-Comp. Interfaces</italic></source> <volume>6</volume> <fpage>49</fpage>&#x2013;<lpage>70</lpage>. <pub-id pub-id-type="doi">10.1080/2326263X.2019.1655837</pub-id></mixed-citation></ref>
<ref id="B25"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Goddard</surname> <given-names>K.</given-names></name> <name><surname>Roudsari</surname> <given-names>A.</given-names></name> <name><surname>Wyatt</surname> <given-names>J. C.</given-names></name></person-group> (<year>2012</year>). <article-title>Automation bias: A systematic review of frequency, effect mediators, and mitigators.</article-title> <source><italic>J. Am. Med. Inform. Assoc.</italic></source> <volume>19</volume> <fpage>121</fpage>&#x2013;<lpage>127</lpage>. <pub-id pub-id-type="doi">10.1136/amiajnl-2011-000089</pub-id> <pub-id pub-id-type="pmid">21685142</pub-id></mixed-citation></ref>
<ref id="B26"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Greely</surname> <given-names>H. T.</given-names></name></person-group> (<year>2021</year>). <article-title>Human brain surrogates research: The onrushing ethical dilemma.</article-title> <source><italic>Am. J. Bioeth.</italic></source> <volume>21</volume> <fpage>34</fpage>&#x2013;<lpage>45</lpage>. <pub-id pub-id-type="doi">10.1080/15265161.2020.1845853</pub-id> <pub-id pub-id-type="pmid">33373556</pub-id></mixed-citation></ref>
<ref id="B27"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hansson</surname> <given-names>S. O.</given-names></name></person-group> (<year>2020</year>). <article-title>Neuroethics for fantasyland or for the clinic? The limitations of speculative ethics.</article-title> <source><italic>Cambridge Q. Healthc. Ethics</italic></source> <volume>29</volume> <fpage>630</fpage>&#x2013;<lpage>641</lpage>. <pub-id pub-id-type="doi">10.1017/S0963180120000377</pub-id> <pub-id pub-id-type="pmid">32892771</pub-id></mixed-citation></ref>
<ref id="B28"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Harris</surname> <given-names>A. R.</given-names></name> <name><surname>McGivern</surname> <given-names>P.</given-names></name> <name><surname>Gilbert</surname> <given-names>F.</given-names></name></person-group> (<year>2023</year>). <article-title>A review of ethical and regulatory issues in the clinical application of stem cell-derived tissue constructs.</article-title> <source><italic>Mol. Psychol. Brain Behav. Soc.</italic></source> <volume>2</volume>:<fpage>8</fpage>. <pub-id pub-id-type="doi">10.12688/molpsychol.17522.1</pub-id></mixed-citation></ref>
<ref id="B29"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Harris</surname> <given-names>A. R.</given-names></name> <name><surname>McGivern</surname> <given-names>P.</given-names></name> <name><surname>Ooi</surname> <given-names>L.</given-names></name></person-group> (<year>2020</year>). <article-title>Modeling emergent properties in the brain using tissue models to investigate neurodegenerative disease.</article-title> <source><italic>Neuroscientist</italic></source> <volume>26</volume> <fpage>224</fpage>&#x2013;<lpage>230</lpage>. <pub-id pub-id-type="doi">10.1177/1073858419870446</pub-id> <pub-id pub-id-type="pmid">31517587</pub-id></mixed-citation></ref>
<ref id="B30"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Harris</surname> <given-names>A. R.</given-names></name> <name><surname>McGivern</surname> <given-names>P.</given-names></name> <name><surname>Gilbert</surname> <given-names>F.</given-names></name> <name><surname>Van Bergen</surname> <given-names>N.</given-names></name></person-group> (<year>2024</year>). <article-title>Defining biomarkers in stem cell-derived tissue constructs for drug and disease screening.</article-title> <source><italic>Adv. Healthc. Mater.</italic></source> <volume>13</volume>:<fpage>2401433</fpage>. <pub-id pub-id-type="doi">10.1002/adhm.202401433</pub-id> <pub-id pub-id-type="pmid">38741544</pub-id></mixed-citation></ref>
<ref id="B31"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Harris</surname> <given-names>A. R.</given-names></name> <name><surname>Walker</surname> <given-names>M. J.</given-names></name> <name><surname>Gilbert</surname> <given-names>F.</given-names></name></person-group> (<year>2022a</year>). <article-title>Ethical and regulatory issues of stem cell-derived 3-dimensional organoid and tissue therapy for personalised regenerative medicine.</article-title> <source><italic>BMC Med.</italic></source> <volume>20</volume>:<fpage>499</fpage>. <pub-id pub-id-type="doi">10.1186/s12916-022-02710-9</pub-id> <pub-id pub-id-type="pmid">36575403</pub-id></mixed-citation></ref>
<ref id="B32"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Harris</surname> <given-names>A. R.</given-names></name> <name><surname>Walker</surname> <given-names>M. J.</given-names></name> <name><surname>Gilbert</surname> <given-names>F.</given-names></name> <name><surname>McGivern</surname> <given-names>P.</given-names></name></person-group> (<year>2022b</year>). <article-title>Investigating the feasibility and ethical implications of phenotypic screening using stem cell-derived tissue models to detect and manage disease.</article-title> <source><italic>Stem Cell Rep.</italic></source> <volume>17</volume> <fpage>1023</fpage>&#x2013;<lpage>1032</lpage>. <pub-id pub-id-type="doi">10.1016/j.stemcr.2022.04.002</pub-id> <pub-id pub-id-type="pmid">35487211</pub-id></mixed-citation></ref>
<ref id="B33"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Harris</surname> <given-names>A. R.</given-names></name> <name><surname>Walker</surname> <given-names>M. J.</given-names></name> <name><surname>Gilbert</surname> <given-names>F.</given-names></name> <name><surname>McGivern</surname> <given-names>P.</given-names></name></person-group> (<year>2025</year>). <article-title>Where is the ethical debate around phenotypic screening of prenatal tissue using stem cell-derived tissue constructs?</article-title> <source><italic>Stem Cell Rev. Rep.</italic></source> <volume>21</volume> <fpage>280</fpage>&#x2013;<lpage>282</lpage>. <pub-id pub-id-type="doi">10.1007/s12015-024-10795-3</pub-id> <pub-id pub-id-type="pmid">39356391</pub-id></mixed-citation></ref>
<ref id="B34"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hartung</surname> <given-names>T.</given-names></name> <name><surname>Morales Pantoja</surname> <given-names>I. E.</given-names></name> <name><surname>Smirnova</surname> <given-names>L.</given-names></name></person-group> (<year>2023a</year>). <article-title>Brain organoids and organoid intelligence from ethical, legal, and social points of view.</article-title> <source><italic>Front. Artif. Intell.</italic></source> <volume>6</volume>:<fpage>1307613</fpage>. <pub-id pub-id-type="doi">10.3389/frai.2023.1307613</pub-id> <pub-id pub-id-type="pmid">38249793</pub-id></mixed-citation></ref>
<ref id="B35"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hartung</surname> <given-names>T.</given-names></name> <name><surname>Smirnova</surname> <given-names>L.</given-names></name> <name><surname>Morales Pantoja</surname> <given-names>I. E.</given-names></name> <name><surname>Akwaboah</surname> <given-names>A.</given-names></name> <name><surname>Alam, El Din</surname> <given-names>D.-M.</given-names></name><etal/></person-group> (<year>2023b</year>). <article-title>The Baltimore declaration toward the exploration of organoid intelligence.</article-title> <source><italic>Front. Sci.</italic></source> <volume>1</volume>:<fpage>1068159</fpage>. <pub-id pub-id-type="doi">10.3389/fsci.2023.1068159</pub-id></mixed-citation></ref>
<ref id="B36"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hatherley</surname> <given-names>J.</given-names></name> <name><surname>Sparrow</surname> <given-names>R.</given-names></name> <name><surname>Howard</surname> <given-names>M.</given-names></name></person-group> (<year>2024</year>). <article-title>The virtues of interpretable medical AI.</article-title> <source><italic>Cambridge Q. Healthc. Ethics</italic></source> <volume>33</volume> <fpage>323</fpage>&#x2013;<lpage>332</lpage>. <pub-id pub-id-type="doi">10.1017/S0963180122000664</pub-id> <pub-id pub-id-type="pmid">36624634</pub-id></mixed-citation></ref>
<ref id="B37"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Hyun</surname> <given-names>I.</given-names></name> <name><surname>Scharf-Deering</surname> <given-names>J. C.</given-names></name> <name><surname>Sullivan</surname> <given-names>S.</given-names></name> <name><surname>Aach</surname> <given-names>J. D.</given-names></name> <name><surname>Arlotta</surname> <given-names>P.</given-names></name> <name><surname>Baum</surname> <given-names>M. L.</given-names></name><etal/></person-group> (<year>2022</year>). <article-title>How collaboration between bioethicists and neuroscientists can advance research.</article-title> <source><italic>Nat. Neurosci.</italic></source> <volume>25</volume> <fpage>1399</fpage>&#x2013;<lpage>1401</lpage>. <pub-id pub-id-type="doi">10.1038/s41593-022-01187-2</pub-id> <pub-id pub-id-type="pmid">36258039</pub-id></mixed-citation></ref>
<ref id="B38"><mixed-citation publication-type="book"><collab>International Society for Stem Cell Research</collab> (<year>2023</year>). <source><italic>Standards for Human Stem Cell Use in Research.</italic></source> <publisher-loc>Skokie, Ill</publisher-loc>: <publisher-name>International Society for Stem Cell Research</publisher-name>.</mixed-citation></ref>
<ref id="B39"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ishida</surname> <given-names>S.</given-names></name> <name><surname>Sawai</surname> <given-names>T.</given-names></name></person-group> (<year>2024</year>). <article-title>Beyond the personhood: An in-depth analysis of moral considerations in human brain organoid research.</article-title> <source><italic>Am. J. Bioeth.</italic></source> <volume>24</volume> <fpage>54</fpage>&#x2013;<lpage>56</lpage>. <pub-id pub-id-type="doi">10.1080/15265161.2023.2278553</pub-id> <pub-id pub-id-type="pmid">38236870</pub-id></mixed-citation></ref>
<ref id="B40"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kagan</surname> <given-names>B. J.</given-names></name> <name><surname>Duc</surname> <given-names>D.</given-names></name> <name><surname>Stevens</surname> <given-names>I.</given-names></name> <name><surname>Gilbert</surname> <given-names>F.</given-names></name></person-group> (<year>2022a</year>). <article-title>Neurons embodied in a virtual world: Evidence for organoid ethics?</article-title> <source><italic>AJOB Neurosci.</italic></source> <volume>13</volume> <fpage>114</fpage>&#x2013;<lpage>117</lpage>. <pub-id pub-id-type="doi">10.1080/21507740.2022.2048731</pub-id> <pub-id pub-id-type="pmid">35324408</pub-id></mixed-citation></ref>
<ref id="B41"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kagan</surname> <given-names>B. J.</given-names></name> <name><surname>Kitchen</surname> <given-names>A. C.</given-names></name> <name><surname>Tran</surname> <given-names>N. T.</given-names></name> <name><surname>Habibollahi</surname> <given-names>F.</given-names></name> <name><surname>Khajehnejad</surname> <given-names>M.</given-names></name> <name><surname>Parker</surname> <given-names>B. J.</given-names></name><etal/></person-group> (<year>2022b</year>). <article-title>In vitro neurons learn and exhibit sentience when embodied in a simulated game-world.</article-title> <source><italic>Neuron</italic></source> <volume>110</volume> <fpage>3952</fpage>&#x2013;<lpage>3969.e8</lpage>. <pub-id pub-id-type="doi">10.1016/j.neuron.2022.09.001</pub-id> <pub-id pub-id-type="pmid">36228614</pub-id></mixed-citation></ref>
<ref id="B42"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kataoka</surname> <given-names>M.</given-names></name> <name><surname>Sawai</surname> <given-names>T.</given-names></name></person-group> (<year>2023</year>). <article-title>What implications do a consciousness-independent perspective on moral status entail for future brain organoid research?</article-title> <source><italic>AJOB Neurosci.</italic></source> <volume>14</volume> <fpage>163</fpage>&#x2013;<lpage>165</lpage>. <pub-id pub-id-type="doi">10.1080/21507740.2023.2188285</pub-id> <pub-id pub-id-type="pmid">37097860</pub-id></mixed-citation></ref>
<ref id="B43"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kataoka</surname> <given-names>M.</given-names></name> <name><surname>Gyngell</surname> <given-names>C.</given-names></name> <name><surname>Savulescu</surname> <given-names>J.</given-names></name> <name><surname>Sawai</surname> <given-names>T.</given-names></name></person-group> (<year>2023</year>). <article-title>The importance of accurate representation of human brain organoid research.</article-title> <source><italic>Trends Biotechnol.</italic></source> <volume>41</volume> <fpage>985</fpage>&#x2013;<lpage>987</lpage>. <pub-id pub-id-type="doi">10.1016/j.tibtech.2023.02.010</pub-id> <pub-id pub-id-type="pmid">36959082</pub-id></mixed-citation></ref>
<ref id="B44"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kataoka</surname> <given-names>M.</given-names></name> <name><surname>Ishida</surname> <given-names>S.</given-names></name> <name><surname>Kobayashi</surname> <given-names>C.</given-names></name> <name><surname>Lee</surname> <given-names>T.-L.</given-names></name> <name><surname>Sawai</surname> <given-names>T.</given-names></name></person-group> (<year>2025a</year>). <article-title>Evaluating neuroprivacy concerns in human brain organoid research.</article-title> <source><italic>Trends Biotechnol.</italic></source> <volume>43</volume> <fpage>491</fpage>&#x2013;<lpage>493</lpage>. <pub-id pub-id-type="doi">10.1016/j.tibtech.2024.09.001</pub-id> <pub-id pub-id-type="pmid">39306492</pub-id></mixed-citation></ref>
<ref id="B45"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Kataoka</surname> <given-names>M.</given-names></name> <name><surname>Niikawa</surname> <given-names>T.</given-names></name> <name><surname>Nagaishi</surname> <given-names>N.</given-names></name> <name><surname>Lee</surname> <given-names>T. L.</given-names></name> <name><surname>Erler</surname> <given-names>A.</given-names></name> <name><surname>Savulescu</surname> <given-names>J.</given-names></name><etal/></person-group> (<year>2025b</year>). <article-title>Beyond consciousness: Ethical, legal, and social issues in human brain organoid research and application.</article-title> <source><italic>Eur. J. Cell Biol.</italic></source> <volume>104</volume>:<fpage>151470</fpage>. <pub-id pub-id-type="doi">10.1016/j.ejcb.2024.151470</pub-id> <pub-id pub-id-type="pmid">39729735</pub-id></mixed-citation></ref>
<ref id="B46"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Khalifa</surname> <given-names>M.</given-names></name> <name><surname>Albadawy</surname> <given-names>M.</given-names></name></person-group> (<year>2024</year>). <article-title>Using artificial intelligence in academic writing and research: An essential productivity tool.</article-title> <source><italic>Comput. Methods Programs Biomed. Updat.</italic></source> <volume>5</volume>:<fpage>100145</fpage>. <pub-id pub-id-type="doi">10.1016/j.cmpbup.2024.100145</pub-id></mixed-citation></ref>
<ref id="B47"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Koplin</surname> <given-names>J. J.</given-names></name> <name><surname>Savulescu</surname> <given-names>J.</given-names></name></person-group> (<year>2019</year>). <article-title>Moral limits of brain organoid research.</article-title> <source><italic>J. Law, Med. Ethics</italic></source> <volume>47</volume> <fpage>760</fpage>&#x2013;<lpage>767</lpage>. <pub-id pub-id-type="doi">10.1177/1073110519897789</pub-id> <pub-id pub-id-type="pmid">31957593</pub-id></mixed-citation></ref>
<ref id="B48"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Lavazza</surname> <given-names>A.</given-names></name> <name><surname>Massimini</surname> <given-names>M.</given-names></name></person-group> (<year>2018</year>). <article-title>Cerebral organoids: Ethical issues and consciousness assessment.</article-title> <source><italic>J. Med. Ethics</italic></source> <volume>44</volume> <fpage>606</fpage>&#x2013;<lpage>610</lpage>. <pub-id pub-id-type="doi">10.1136/medethics-2017-104555</pub-id> <pub-id pub-id-type="pmid">29491041</pub-id></mixed-citation></ref>
<ref id="B49"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Maisumu</surname> <given-names>G.</given-names></name> <name><surname>Willerth</surname> <given-names>S.</given-names></name> <name><surname>Nestor</surname> <given-names>M. W.</given-names></name> <name><surname>Waldau</surname> <given-names>B.</given-names></name> <name><surname>Sch&#x00FC;lke</surname> <given-names>S.</given-names></name> <name><surname>Nardi</surname> <given-names>F. V.</given-names></name><etal/></person-group> (<year>2025</year>). <article-title>Brain organoids: Building higher-order complexity and neural circuitry models.</article-title> <source><italic>Trends Biotechnol.</italic></source> <volume>43</volume> <fpage>1583</fpage>&#x2013;<lpage>1598</lpage>. <pub-id pub-id-type="doi">10.1016/j.tibtech.2025.02.009</pub-id> <pub-id pub-id-type="pmid">40221251</pub-id></mixed-citation></ref>
<ref id="B50"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Marchal</surname> <given-names>I.</given-names></name></person-group> (<year>2025</year>). <article-title>AI designs de novo antibiotics.</article-title> <source><italic>Nat. Biotechnol.</italic></source> <volume>43</volume>:<fpage>1425</fpage>. <pub-id pub-id-type="doi">10.1038/s41587-025-02822-6</pub-id> <pub-id pub-id-type="pmid">40954271</pub-id></mixed-citation></ref>
<ref id="B51"><mixed-citation publication-type="book"><person-group person-group-type="author"><name><surname>Mueller</surname> <given-names>M.</given-names></name></person-group> (<year>2024</year>). <source><italic>The Myth of AGI.</italic></source> <publisher-loc>Atlanta, GA</publisher-loc>: <publisher-name>Internet Governance Project</publisher-name>.</mixed-citation></ref>
<ref id="B52"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Nestor</surname> <given-names>M. W.</given-names></name> <name><surname>Wilson</surname> <given-names>R. L.</given-names></name></person-group> (<year>2025</year>). <article-title>Assessing the utility of organoid intelligence: Scientific and ethical perspectives.</article-title> <source><italic>Organoids</italic></source> <volume>4</volume>:<fpage>9</fpage>. <pub-id pub-id-type="doi">10.3390/organoids4020009</pub-id></mixed-citation></ref>
<ref id="B53"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Niikawa</surname> <given-names>T.</given-names></name> <name><surname>Hayashi</surname> <given-names>Y.</given-names></name> <name><surname>Shepherd</surname> <given-names>J.</given-names></name> <name><surname>Sawai</surname> <given-names>T.</given-names></name></person-group> (<year>2022</year>). <article-title>Human brain organoids and consciousness.</article-title> <source><italic>Neuroethics</italic></source> <volume>15</volume>:<fpage>5</fpage>. <pub-id pub-id-type="doi">10.1007/s12152-022-09483-1</pub-id></mixed-citation></ref>
<ref id="B54"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Nordmann</surname> <given-names>A.</given-names></name></person-group> (<year>2007</year>). <article-title>If and then: A critique of speculative nanoethics.</article-title> <source><italic>Nanoethics</italic></source> <volume>1</volume> <fpage>31</fpage>&#x2013;<lpage>46</lpage>. <pub-id pub-id-type="doi">10.1007/s11569-007-0007-6</pub-id></mixed-citation></ref>
<ref id="B55"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ong</surname> <given-names>J. C. L.</given-names></name> <name><surname>Ning</surname> <given-names>Y.</given-names></name> <name><surname>Collins</surname> <given-names>G. S.</given-names></name> <name><surname>Bitterman</surname> <given-names>D. S.</given-names></name> <name><surname>Beecy</surname> <given-names>A. N.</given-names></name> <name><surname>Chang</surname> <given-names>R. T.</given-names></name><etal/></person-group> (<year>2025</year>). <article-title>International partnership for governing generative artificial intelligence models in medicine.</article-title> <source><italic>Nat. Med.</italic></source> <volume>31</volume> <fpage>2836</fpage>&#x2013;<lpage>2839</lpage>. <pub-id pub-id-type="doi">10.1038/s41591-025-03787-4</pub-id> <pub-id pub-id-type="pmid">40588674</pub-id></mixed-citation></ref>
<ref id="B56"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Racine</surname> <given-names>E.</given-names></name> <name><surname>Martin Rubio</surname> <given-names>T.</given-names></name> <name><surname>Chandler</surname> <given-names>J.</given-names></name> <name><surname>Forlini</surname> <given-names>C.</given-names></name> <name><surname>Lucke</surname> <given-names>J.</given-names></name></person-group> (<year>2014</year>). <article-title>The value and pitfalls of speculation about science and technology in bioethics: The case of cognitive enhancement.</article-title> <source><italic>Med. Heal. Care Philos.</italic></source> <volume>17</volume> <fpage>325</fpage>&#x2013;<lpage>337</lpage>. <pub-id pub-id-type="doi">10.1007/s11019-013-9539-4</pub-id> <pub-id pub-id-type="pmid">24402841</pub-id></mixed-citation></ref>
<ref id="B57"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sawai</surname> <given-names>T.</given-names></name> <name><surname>Hayashi</surname> <given-names>Y.</given-names></name> <name><surname>Niikawa</surname> <given-names>T.</given-names></name> <name><surname>Shepherd</surname> <given-names>J.</given-names></name> <name><surname>Thomas</surname> <given-names>E.</given-names></name> <name><surname>Lee</surname> <given-names>T.-L.</given-names></name><etal/></person-group> (<year>2022</year>). <article-title>Mapping the ethical issues of brain organoid research and application.</article-title> <source><italic>AJOB Neurosci.</italic></source> <volume>13</volume> <fpage>81</fpage>&#x2013;<lpage>94</lpage>. <pub-id pub-id-type="doi">10.1080/21507740.2021.1896603</pub-id> <pub-id pub-id-type="pmid">33769221</pub-id></mixed-citation></ref>
<ref id="B58"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Smirnova</surname> <given-names>L.</given-names></name></person-group> (<year>2024</year>). <article-title>Biocomputing with organoid intelligence.</article-title> <source><italic>Nat. Rev. Bioeng.</italic></source> <volume>2</volume> <fpage>633</fpage>&#x2013;<lpage>634</lpage>. <pub-id pub-id-type="doi">10.1038/s44222-024-00200-6</pub-id></mixed-citation></ref>
<ref id="B59"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sparrow</surname> <given-names>R.</given-names></name> <name><surname>Hatherley</surname> <given-names>J.</given-names></name> <name><surname>Oakley</surname> <given-names>J.</given-names></name> <name><surname>Bain</surname> <given-names>C.</given-names></name></person-group> (<year>2024</year>). <article-title>Should the use of adaptive machine learning systems in medicine be classified as research?</article-title> <source><italic>Am. J. Bioeth.</italic></source> <volume>24</volume> <fpage>58</fpage>&#x2013;<lpage>69</lpage>. <pub-id pub-id-type="doi">10.1080/15265161.2024.2337429</pub-id> <pub-id pub-id-type="pmid">38662360</pub-id></mixed-citation></ref>
<ref id="B60"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Toumey</surname> <given-names>C.</given-names></name></person-group> (<year>2019</year>). <article-title>Early voices for ethics in nanotechnology.</article-title> <source><italic>Nat. Nanotechnol.</italic></source> <volume>14</volume> <fpage>304</fpage>&#x2013;<lpage>305</lpage>. <pub-id pub-id-type="doi">10.1038/s41565-019-0422-1</pub-id> <pub-id pub-id-type="pmid">30944424</pub-id></mixed-citation></ref>
<ref id="B61"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Van Gyseghem</surname> <given-names>A.</given-names></name> <name><surname>Dierickx</surname> <given-names>K.</given-names></name> <name><surname>Barnhart</surname> <given-names>A. J.</given-names></name></person-group> (<year>2025</year>). <article-title>Consciousness and human brain organoids: A conceptual mapping of ethical and philosophical literature.</article-title> <source><italic>AJOB Neurosci.</italic></source> <pub-id pub-id-type="doi">10.1080/21507740.2025.2519459</pub-id> <pub-id pub-id-type="pmid">40632929</pub-id> <comment>[Epub ahead of print]</comment>.</mixed-citation></ref>
<ref id="B62"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Walker</surname> <given-names>M. J.</given-names></name> <name><surname>Bourke</surname> <given-names>J.</given-names></name> <name><surname>Hutchison</surname> <given-names>K.</given-names></name></person-group> (<year>2019</year>). <article-title>Evidence for personalised medicine: Mechanisms, correlation, and new kinds of black box.</article-title> <source><italic>Theor. Med. Bioeth.</italic></source> <volume>40</volume> <fpage>103</fpage>&#x2013;<lpage>121</lpage>. <pub-id pub-id-type="doi">10.1007/s11017-019-09482-z</pub-id> <pub-id pub-id-type="pmid">30771062</pub-id></mixed-citation></ref>
<ref id="B63"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Walker</surname> <given-names>M. J.</given-names></name> <name><surname>Nielsen</surname> <given-names>J.</given-names></name> <name><surname>Goddard</surname> <given-names>E.</given-names></name> <name><surname>Harris</surname> <given-names>A.</given-names></name> <name><surname>Hutchison</surname> <given-names>K.</given-names></name></person-group> (<year>2022</year>). <article-title>Induced pluripotent stem cell-based systems for personalising epilepsy treatment: Research ethics challenges and new insights for the ethics of personalised medicine.</article-title> <source><italic>AJOB Neurosci.</italic></source> <volume>13</volume> <fpage>120</fpage>&#x2013;<lpage>131</lpage>. <pub-id pub-id-type="doi">10.1080/21507740.2021.1949404</pub-id> <pub-id pub-id-type="pmid">34324412</pub-id></mixed-citation></ref>
<ref id="B64"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wexler</surname> <given-names>A.</given-names></name></person-group> (<year>2019</year>). <article-title>Separating neuroethics from neurohype.</article-title> <source><italic>Nat. Biotechnol.</italic></source> <volume>37</volume> <fpage>988</fpage>&#x2013;<lpage>990</lpage>. <pub-id pub-id-type="doi">10.1038/s41587-019-0230-z</pub-id> <pub-id pub-id-type="pmid">31399721</pub-id></mixed-citation></ref>
<ref id="B65"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wickham</surname> <given-names>J.</given-names></name> <name><surname>Corna</surname> <given-names>A.</given-names></name> <name><surname>Schwarz</surname> <given-names>N.</given-names></name> <name><surname>Uysal</surname> <given-names>B.</given-names></name> <name><surname>Layer</surname> <given-names>N.</given-names></name> <name><surname>Honegger</surname> <given-names>J. B.</given-names></name><etal/></person-group> (<year>2020</year>). <article-title>Human cerebrospinal fluid induces neuronal excitability changes in resected human neocortical and hippocampal brain slices.</article-title> <source><italic>Front. Neurosci.</italic></source> <volume>14</volume>:<fpage>283</fpage>. <pub-id pub-id-type="doi">10.3389/fnins.2020.00283</pub-id> <pub-id pub-id-type="pmid">32372899</pub-id></mixed-citation></ref>
<ref id="B66"><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Zafeiriou</surname> <given-names>M.-P.</given-names></name> <name><surname>Bao</surname> <given-names>G.</given-names></name> <name><surname>Hudson</surname> <given-names>J.</given-names></name> <name><surname>Halder</surname> <given-names>R.</given-names></name> <name><surname>Blenkle</surname> <given-names>A.</given-names></name> <name><surname>Schreiber</surname> <given-names>M.-K.</given-names></name><etal/></person-group> (<year>2020</year>). <article-title>Developmental GABA polarity switch and neuronal plasticity in bioengineered neuronal organoids.</article-title> <source><italic>Nat. Commun.</italic></source> <volume>11</volume>:<fpage>3791</fpage>. <pub-id pub-id-type="doi">10.1038/s41467-020-17521-w</pub-id> <pub-id pub-id-type="pmid">32728089</pub-id></mixed-citation></ref>
</ref-list>
<fn-group>
<fn id="n1" fn-type="custom" custom-type="edited-by"><p>Edited by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/180908/overview">Andrea Lavazza</ext-link>, Pegaso University, Italy</p></fn>
<fn id="n2" fn-type="custom" custom-type="reviewed-by"><p>Reviewed by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/217118/overview">Rodrigo Ramos-Z&#x00FA;&#x00F1;iga</ext-link>, University of Guadalajara, Mexico</p>
<p><ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/3332765/overview">Abel Garcia Abejas</ext-link>, Universidade da Beira Interior, Portugal</p></fn>
</fn-group>
<fn-group>
<fn id="footnote1"><label>1</label><p>Even less conceptually demanding notions such as <italic>sentience</italic> illustrate this point. The description of neural organoid&#x2013;AI behavior as &#x201C;sentient&#x201D; by Cortical Labs (<xref ref-type="bibr" rid="B41">Kagan et al., 2022b</xref>) was strongly criticized by the field as scientifically unwarranted (<xref ref-type="bibr" rid="B8">Balci et al., 2023</xref>). Tellingly, the lead author of <xref ref-type="bibr" rid="B41">Kagan et al. (2022b)</xref> later acknowledged this in <italic>Nature</italic>, stating: &#x201C;I wouldn&#x2019;t use the word sentient going forward&#x201D; (<xref ref-type="bibr" rid="B3">Adam, 2025a</xref>).</p></fn>
<fn id="footnote2"><label>2</label><p>We note that in these examples, a pathology lab would be running the AI-NO system, with information provided to and from a clinician. It is unlikely a clinician would be in direct contact with the AI-NO system.</p></fn>
</fn-group>
</back>
</article>