<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.3 20210610//EN" "JATS-journalpublishing1-3-mathml3.dtd">
<article xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:ali="http://www.niso.org/schemas/ali/1.0/" article-type="review-article" dtd-version="1.3" xml:lang="EN">
<front>
<journal-meta>
<journal-id journal-id-type="publisher-id">Front. Vet. Sci.</journal-id>
<journal-title-group>
<journal-title>Frontiers in Veterinary Science</journal-title>
<abbrev-journal-title abbrev-type="pubmed">Front. Vet. Sci.</abbrev-journal-title>
</journal-title-group>
<issn pub-type="epub">2297-1769</issn>
<publisher>
<publisher-name>Frontiers Media S.A.</publisher-name>
</publisher>
</journal-meta>
<article-meta>
<article-id pub-id-type="doi">10.3389/fvets.2026.1780868</article-id>
<article-version article-version-type="Version of Record" vocab="NISO-RP-8-2008"/>
<article-categories>
<subj-group subj-group-type="heading">
<subject>Mini Review</subject>
</subj-group>
</article-categories>
<title-group>
<article-title>Ethical considerations of artificial intelligence in veterinary medicine decision-making</article-title>
</title-group>
<contrib-group>
<contrib contrib-type="author" corresp="yes"><name><surname>Heinlein</surname> <given-names>Matt</given-names></name><xref ref-type="aff" rid="aff1"/><xref ref-type="corresp" rid="c001"><sup>&#x002A;</sup></xref>
<uri xlink:href="https://loop.frontiersin.org/people/3330304"/>
<role vocab="credit" vocab-identifier="https://credit.niso.org/" vocab-term="Writing &#x2013; original draft" vocab-term-identifier="https://credit.niso.org/contributor-roles/writing-original-draft/">Writing &#x2013; original draft</role>
</contrib>
</contrib-group>
<aff id="aff1"><institution>TLC Animal Hospital</institution>, <city>El Paso</city>, <state>TX</state>, <country country="US">United States</country></aff>
<author-notes>
<corresp id="c001"><label>&#x002A;</label>Correspondence: Matt Heinlein, <email xlink:href="mailto:mheinlein.research@gmail.com">mheinlein.research@gmail.com</email></corresp>
</author-notes>
<pub-date publication-format="electronic" date-type="pub" iso-8601-date="2026-02-13">
<day>13</day>
<month>02</month>
<year>2026</year>
</pub-date>
<pub-date publication-format="electronic" date-type="collection">
<year>2026</year>
</pub-date>
<volume>13</volume>
<elocation-id>1780868</elocation-id>
<history>
<date date-type="received">
<day>05</day>
<month>01</month>
<year>2026</year>
</date>
<date date-type="rev-recd">
<day>25</day>
<month>01</month>
<year>2026</year>
</date>
<date date-type="accepted">
<day>26</day>
<month>01</month>
<year>2026</year>
</date>
</history>
<permissions>
<copyright-statement>Copyright &#x00A9; 2026 Heinlein.</copyright-statement>
<copyright-year>2026</copyright-year>
<copyright-holder>Heinlein</copyright-holder>
<license>
<ali:license_ref start_date="2026-02-13">https://creativecommons.org/licenses/by/4.0/</ali:license_ref>
<license-p>This is an open-access article distributed under the terms of the <ext-link ext-link-type="uri" xlink:href="https://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution License (CC BY)</ext-link>. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.</license-p>
</license>
</permissions>
<abstract>
<p>The rapid growth of Artificial Intelligence (AI) is driving a paradigm shift across multiple disciplines of decision-making. Veterinary medicine is a field in which this proliferation offers profound potential for advancement, but it is also rife with potential ethical dilemmas arising from the assimilation of AI technology into the decision-making process. While AI can increase access to advanced veterinary care and improve the efficiency of clinical and administrative workflows, its successful implementation into veterinary decision-making requires assessment of key areas: the accuracy and reliability of AI diagnostic interpretations, the ethical implications of bias in AI algorithms, stewardship of privacy and personal data, and the balance of innovation with the legal and professional responsibilities of animal welfare. This review found that AI should aid, not replace, veterinary professional decision-making. To that end, continued research into accuracy and vigilance to mitigate bias are necessary, foundational standards for AI use and education must be enacted, and further research into the effect of AI on clinically ambiguous cases is imperative to safeguard the ethical standards of veterinary decision-making.</p>
</abstract>
<kwd-group>
<kwd>accuracy</kwd>
<kwd>artificial intelligence</kwd>
<kwd>bias</kwd>
<kwd>decision-making</kwd>
<kwd>ethics</kwd>
<kwd>medicine</kwd>
<kwd>veterinary</kwd>
</kwd-group>
<funding-group>
<funding-statement>The author(s) declared that financial support was not received for this work and/or its publication.</funding-statement>
</funding-group>
<counts>
<fig-count count="0"/>
<table-count count="0"/>
<equation-count count="0"/>
<ref-count count="14"/>
<page-count count="5"/>
<word-count count="3821"/>
</counts>
<custom-meta-group>
<custom-meta>
<meta-name>section-at-acceptance</meta-name>
<meta-value>Veterinary Humanities and Social Sciences</meta-value>
</custom-meta>
</custom-meta-group>
</article-meta>
</front>
<body>
<sec sec-type="intro" id="sec1">
<title>Introduction</title>
<p>Artificial Intelligence (AI) technology is ingraining itself into an increasing number of everyday activities. The promise of AI has become a force for paradigm change, as algorithmic models reduce human involvement in decision-making and societal reliance on AI for evaluating information grows (<xref ref-type="bibr" rid="ref1">1</xref>). Technology does not advance in a strictly linear manner. Roadblocks to technological advancement include accessibility, affordability, processing power, and speed, and progress in any of these areas exponentially increases the likelihood and speed of the next advancement. It bears asking whether the speed of AI advancement, and the rate of its adoption, has circumvented traditional ethical frameworks, presenting new challenges for decision-making.</p>
<p>Veterinary medicine is a field that offers numerous opportunities to leverage AI in decision-making, and the rapid pace of technological advancement is leading to its ubiquitous integration into veterinary practice (<xref ref-type="bibr" rid="ref2">2</xref>). AI applications are available for a variety of clinical and professional uses, such as client communication, scheduling, document generation, record-keeping, and diagnostic evaluation. The use of AI models is not limited to the clinical setting. Research and education are two other areas apt for the adoption of AI into their processes (<xref ref-type="bibr" rid="ref3">3</xref>). A 2024 survey of 3,968 veterinary professionals found that 39.2% used AI in medical practice and 69.5% used AI for professional/administrative tasks (<xref ref-type="bibr" rid="ref4">4</xref>).</p>
<p>While further integration of AI has the potential to drive positive change and advancement, the unique nature of veterinary medicine poses complex ethical considerations regarding the use of AI technologies for decision-making and support (<xref ref-type="bibr" rid="ref5">5</xref>). The aim of this review is to evaluate existing AI technology in veterinary decision-making, discuss its potential for positive advancement, and assess its impact on ethical considerations within the field. Ethical dilemmas arising directly from the use of AI will be identified and detailed. The goal is to identify gaps in the literature and highlight areas that warrant further research or regulatory consideration.</p>
</sec>
<sec id="sec2">
<title>Overview of AI applications in veterinary medicine: opportunities and considerations</title>
<p>An important distinction must be made to understand the current capabilities of AI technology. Narrow AI systems are designed to perform a single task or a limited set of tasks, with outputs based on problem-solving, pattern recognition, and reasoning guided by programmed parameters and preloaded datasets. General AI is a strictly theoretical concept in which advanced AI models would actively learn and apply knowledge to make independent decisions across potentially unlimited disciplines (<xref ref-type="bibr" rid="ref3">3</xref>). As of this writing, general AI does not exist. All available AI systems today are a form of narrow AI, and their capabilities should be understood as such.</p>
<p>Sobkowich (<xref ref-type="bibr" rid="ref3">3</xref>) details a wide range of commonly used AI applications in veterinary clinical workflows. Workflow automation encompasses opportunities such as client communication, medical scribes, appointment scheduling, and inventory management. Tools that remove the human time cost of scheduling appointments or transcribing phone calls for training promise improved administrative efficiency. AI-transcribed conversations with pet owners can be instantly uploaded to the patient record, streamlining medical record maintenance. Automating inventory management can reduce the cost of goods sold by reducing shrinkage, and it can also prevent running out of necessary stock.</p>
<p>AI diagnostic tools can support clinical evaluation of various imaging modalities or cytological interpretation. Digital AI radiology services offer algorithmic screening of images, which have the potential to improve efficiency, accuracy, and consistency in the diagnostic interpretation and treatment planning of veterinarians (<xref ref-type="bibr" rid="ref6">6</xref>). AI-powered point-of-care diagnostic tools provide cytological slide evaluation for areas such as hematology, fecal analysis, urine sediment analysis, and dermatology, thereby enhancing the capabilities of point-of-care instruments to more closely match those of a reference laboratory (<xref ref-type="bibr" rid="ref7">7</xref>).</p>
<p>AI can be used in educational settings and practical skills training through in-clinic simulations and augmented reality. An example of this would be instructional overlays that provide real-time instruction and feedback to veterinary students on surgical rotations, or AI-curated educational platforms that can personalize learning support for individual students. Predictive analytics can also assist with early-detection systems and lab sample prioritization, as well as forecasting pathological trends in epidemiology (<xref ref-type="bibr" rid="ref3">3</xref>).</p>
<p>The application of AI to these realms of veterinary decision-making stands to advance patient care, improve client service, and optimize clinical processes, while also offering new avenues to expand research, education, and animal welfare initiatives. However, the rapid advancement and general acceptance of AI create ethical concerns that must be addressed (<xref ref-type="bibr" rid="ref8">8</xref>). This review identified the following ethical questions:</p>
<list list-type="order">
<list-item>
<p>Can veterinarians rely on the accuracy and reliability of AI-generated information for high-stakes decision-making?</p>
</list-item>
<list-item>
<p>What are the ethical implications of bias in AI datasets and algorithmic outputs?</p>
</list-item>
<list-item>
<p>How can veterinary professionals ensure ethical use of data and maintain stewardship of privacy and security?</p>
</list-item>
<list-item>
<p>How can veterinary professionals balance innovation with their legal and professional responsibilities for patient care and animal welfare?</p>
</list-item>
</list>
</sec>
<sec id="sec3">
<title>Accuracy and reliability</title>
<p>There is no single way to build a diagnostic algorithm. However, a generally accepted sound practice is to train a convolutional neural network (CNN), with supporting learning models fine-tuning the process and, ideally, reducing errors. CNNs are advanced filter systems designed to recognize and identify increasingly precise aspects of images as they move down through finer layers of the filter system (<xref ref-type="bibr" rid="ref7">7</xref>). For example, the first filter may recognize general shapes and outlines; the next level may identify image orientation; the third level may recognize body cavities; and this continues down to optimize image recognition through finer details, e.g., species, specific organs, or cellular structure, if using an AI cytology product. The images are then compared against a standard set of &#x201C;normal&#x201D; cases and &#x201C;abnormal&#x201D; examples of the pathology that the developers are training the AI to detect. Ideally, board-certified veterinary radiologists should be directly involved in developing and training AI models on standard image recognition datasets and image interpretation (<xref ref-type="bibr" rid="ref6">6</xref>). The same should hold for cytology AI models, with board-certified pathologists directly involved in their development and training (<xref ref-type="bibr" rid="ref7">7</xref>).</p>
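<p>The hierarchical filtering described above can be illustrated with a single convolutional filter pass. The sketch below is illustrative only: the hand-coded vertical-edge kernel and the toy 5&#x00D7;5 image are hypothetical, and commercial diagnostic models stack many learned filters across many layers. It shows how one filter of the kind a first CNN layer might learn responds to a boundary (a &#x201C;general shape or outline&#x201D;) in a small grayscale image:</p>

```python
def convolve2d(image, kernel):
    """Valid-mode 2D filtering of a grayscale image with a square kernel
    (cross-correlation, as CNN layers conventionally compute it)."""
    k = len(kernel)
    out_h = len(image) - k + 1
    out_w = len(image[0]) - k + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(k) for b in range(k))
             for j in range(out_w)]
            for i in range(out_h)]

# Toy 5x5 "image": dark on the left half, bright on the right half.
image = [[0, 0, 0, 1, 1]] * 5

# Hand-coded kernel that responds to dark-to-bright vertical boundaries.
vertical_edge = [[-1, 0, 1],
                 [-1, 0, 1],
                 [-1, 0, 1]]

feature_map = convolve2d(image, vertical_edge)
print(feature_map[0])  # [0, 3, 3] - strong response where the boundary lies
```

<p>Each CNN layer applies many such filters to the previous layer&#x2019;s output, which is how progressively finer structures, down to organ outlines or cellular detail, become detectable in deeper layers.</p>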
<p>Many of the algorithms behind learning models in commercially available veterinary AI systems are proprietary information. As such, there is an inherent lack of transparency in the decision-making parameters of a given system, and veterinary practitioners may not know the internal factors influencing an AI program&#x2019;s determination or proposed diagnosis (<xref ref-type="bibr" rid="ref8">8</xref>). The opacity of proprietary algorithms creates questions of reliability and accuracy. AAHA and Digitail (<xref ref-type="bibr" rid="ref4">4</xref>) surveyed veterinary professionals&#x2019; concerns about AI adoption, and reliability and accuracy were cited as the top concern by a majority of respondents (70.3%).</p>
<p>Pomerantz et al. (<xref ref-type="bibr" rid="ref9">9</xref>) conducted an experiment to assess the ability of Vetology AI<sup>&#x00AE;</sup>, a commercially available AI radiology tool, to recognize pulmonary masses on thoracic radiographs. Accuracy, balanced accuracy, specificity, and sensitivity were tested by reading 56 sets of radiographic images using Vetology AI<sup>&#x00AE;</sup>. The 56 cases featured disease consistent with pulmonary nodules on radiograph, as confirmed by other diagnostic modalities, e.g., CT, cytology, or histopathology. A control group consisted of an additional 32 sets of normal radiographs.</p>
<p>Pomerantz et al. (<xref ref-type="bibr" rid="ref9">9</xref>) found that the AI model correctly indicated the presence of pulmonary masses in 31 of 56 confirmed positive cases and accurately read 30 of 32 control negative cases. The AI&#x2019;s clinical interpretation was accurate 69.3% of the time, with a balanced accuracy of 74.6%. Specificity was 93.75%, while sensitivity was 55.4%.</p>
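<p>These figures follow directly from the study&#x2019;s confusion matrix. A minimal sketch in Python, assuming only the counts reported above (31 of 56 confirmed positives detected; 30 of 32 normal controls cleared), reproduces the published metrics:</p>

```python
# Confusion-matrix counts reported by Pomerantz et al. for the AI reader.
tp, fn = 31, 56 - 31   # confirmed-positive cases: detected vs. missed
tn, fp = 30, 32 - 30   # normal controls: correctly cleared vs. false alarms

sensitivity = tp / (tp + fn)                  # ability to detect disease
specificity = tn / (tn + fp)                  # ability to rule out disease
accuracy = (tp + tn) / (tp + fn + tn + fp)    # fraction of all reads correct
balanced_accuracy = (sensitivity + specificity) / 2

print(f"sensitivity       {sensitivity:.1%}")        # 55.4%
print(f"specificity       {specificity:.2%}")        # 93.75%
print(f"accuracy          {accuracy:.1%}")           # 69.3%
print(f"balanced accuracy {balanced_accuracy:.1%}")  # 74.6%
```

<p>The specificity&#x2013;sensitivity gap is visible directly in the counts: nearly half of the confirmed masses were missed, while almost all normal studies were correctly cleared.</p>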
<p>A study by Ndiaye et al. (<xref ref-type="bibr" rid="ref10">10</xref>) compared the performance of another commercially available AI radiology software, SignalRAY<sup>&#x00AE;</sup>, with that of 11 board-certified veterinary radiologists. A sample group of 50 radiographic studies was randomly selected and anonymized from an institutional PACS system, consisting of 10 feline and 40 canine studies. Each study was both read by each radiologist and processed with the AI software. Results were coded as either normal or abnormal depending on the radiologists&#x2019; diagnostic report and the AI model&#x2019;s classification of findings.</p>
<p>The results of the experiment found that the AI software&#x2019;s overall accuracy was on par with that of the highest-performing radiologist and exceeded that of the median-performing radiologist. In both low- and high-ambiguity cases, the AI maintained a high level of accuracy. However, the AI was more specific but less sensitive compared to the interpretations of the images by the radiologists (<xref ref-type="bibr" rid="ref10">10</xref>).</p>
<p>The high specificity-to-low sensitivity comparison observed by both Pomerantz et al. (<xref ref-type="bibr" rid="ref9">9</xref>) and Ndiaye et al. (<xref ref-type="bibr" rid="ref10">10</xref>) indicated that the versions of the diagnostic AI models studied are better at recognizing negative results than positive results under the testing parameters. The systems performed better at recognizing normal images than at recognizing abnormal findings, indicating a greater propensity to rule out disease than to verify its presence (<xref ref-type="bibr" rid="ref9">9</xref>). AI diagnostic products have the potential to increase the efficiency, availability, and accuracy of veterinary diagnostic decision-making; however, they should be viewed not as a replacement for veterinary clinical judgment but as another tool in the veterinarian&#x2019;s toolbox for gaining a greater understanding of a patient&#x2019;s clinical presentation (<xref ref-type="bibr" rid="ref11">11</xref>). As AI continues to advance and existing models are trained on more data, continued research is required to assess the accuracy and reliability of their results. Furthermore, additional research is warranted to understand the propensity of AI results to influence veterinarians&#x2019; decisions in ambiguous cases or when the algorithm generates an erroneous recommendation.</p>
</sec>
<sec id="sec4">
<title>The effect of bias</title>
<p>A further consequence of the opaque nature of proprietary AI algorithms is the potential introduction of bias into their datasets. A substantial limitation of current studies on AI applications in veterinary medicine decision-making is bias arising from a limited data pool (<xref ref-type="bibr" rid="ref10">10</xref>). For example, AI algorithms in pathology and radiology can only be trained with images available to developers. If particular species, conditions, or age groups are over-represented in either software development or in continually building the dataset at the end-user level, then results can be heavily skewed (<xref ref-type="bibr" rid="ref12">12</xref>). An ethical question then arises: is it morally responsible for the veterinarian to make clinical recommendations based on those results (<xref ref-type="bibr" rid="ref8">8</xref>)? Could an AI system trained mostly on canine and feline patients be expected to reliably produce clinical evaluations for underrepresented species, e.g., exotic companion animals or livestock? If species popular amongst urban clinics are fed into the algorithm at a higher rate than those of rural communities, which may have less access to veterinary care, would the AI output then be biased towards greater accuracy for conditions more common in urban breeds? As evidenced in human healthcare, ingrained bias can lead to unequal patient care across underrepresented groups (<xref ref-type="bibr" rid="ref12">12</xref>). The same is true of veterinary medicine. Unseen bias can influence clinical outcomes, exacerbating disparities in access to care and, at a broader level, eroding public trust in veterinary clinical decisions (<xref ref-type="bibr" rid="ref1">1</xref>). Addressing bias is an ethical obligation to mitigate unequal access to care and disparities in animal welfare.</p>
</sec>
<sec id="sec5">
<title>Data privacy and security</title>
<p>A common assumption about the nature of privacy is that it confers a right on a person to prevent unauthorized use of their information (<xref ref-type="bibr" rid="ref13">13</xref>). While that is not all the concept of privacy entails, it is a suitable definition for the context of data collection in veterinary AI models. Data security and privacy were cited as the second most common concern regarding AI in the AAHA and Digitail survey (<xref ref-type="bibr" rid="ref4">4</xref>) at 53.9%. Veterinary considerations include not only patients but also clients&#x2019; sensitive data collected during routine clinical practice. Substantial amounts of owner-identifying data are present in electronic medical records, and veterinary facilities must ensure the security of personal sensitive data (<xref ref-type="bibr" rid="ref14">14</xref>).</p>
<p>The complex digital infrastructure required to maintain AI models and datasets relies on cloud storage, third-party vendors, and integrated software. This increases the risk of cyberattacks because the overall number of potential breach points increases (<xref ref-type="bibr" rid="ref13">13</xref>). Even anonymized data can be matched with other datasets to identify individuals in the event of a malicious breach (<xref ref-type="bibr" rid="ref5">5</xref>). Data breaches can jeopardize client trust in a veterinary practice and severely disrupt practice operations, incurring catastrophic financial and reputational costs. Veterinary hospitals should have sufficient protection protocols in place to safeguard private information, such as cyberattack insurance and malware protection.</p>
<p>Malicious intent is not the sole consideration of data protection. The intentional release of information to developers or third-party vendors can be part of the AI workflow (<xref ref-type="bibr" rid="ref14">14</xref>). Non-malicious use of the data could apply to algorithm training, marketing data aggregation, and additional software integration. This raises questions about informed consent, as clients may not fully understand how their data may be shared. Veterinary professionals should keep such questions in mind when reviewing terms of service for AI systems in their practice (<xref ref-type="bibr" rid="ref14">14</xref>). Ethical stewardship of confidential information requires veterinarians to understand how data might be shared so that they can adequately inform clients of the possibilities.</p>
</sec>
<sec id="sec6">
<title>Professional responsibility, animal welfare, and regulations</title>
<p>AI-assisted decision-making presents an exciting opportunity to leverage technology in improving veterinary standards of care. However, the rapid development and complexity of AI technology have outpaced the establishment of guidelines, regulations, and best practices (<xref ref-type="bibr" rid="ref6">6</xref>). Veterinary patients, by nature, lack legal agency and the ability to communicate to inform their own care. Instead, this agency is assigned to both veterinary professionals and pet owners through the Veterinary-Client-Patient Relationship (VCPR). This lack of self-agency potentially exposes veterinary patients to a greater risk of harm from AI than patients in the human medical field (<xref ref-type="bibr" rid="ref5">5</xref>). The VCPR is a construct designed to ensure that informed decisions are made in the best interests of the patient, as determined by the veterinarian and the client. In this role, veterinary professionals must advocate for their patients and allow clients an opportunity to advocate for their pets.</p>
<p>The ethical dilemma of informed decision-making is two-fold. First, veterinarians may not understand the complexity of AI algorithms and therefore may be unable to explain how an AI-generated report arrived at a specific diagnosis. Second, there is no regulatory framework standardizing when veterinarians must inform clients about the use of AI tools. Ethically, veterinarians should strive for the utmost transparency when disclosing the use of AI systems to clients (<xref ref-type="bibr" rid="ref6">6</xref>).</p>
<p>The use of AI poses challenges for assigning responsibility for erroneous diagnoses and clinical interpretations (<xref ref-type="bibr" rid="ref5">5</xref>). If an AI-generated interpretation misses pathology or overdiagnoses a case, who is ultimately responsible? Is it the algorithm, the developers, or the veterinarian who is liable? The veterinarian is responsible for maintaining standards of care and assessing the information provided by diagnostic tools. The scope of veterinary responsibility is defined by a region&#x2019;s veterinary practice act, which does not regulate specific tools but rather how they are used by a licensed veterinarian (<xref ref-type="bibr" rid="ref14">14</xref>). For the successful deployment of AI in veterinary medicine decision-making, it is imperative that AI models support, rather than replace, practitioners&#x2019; clinical judgment (<xref ref-type="bibr" rid="ref11">11</xref>). Regulatory bodies, such as state certifying boards, should be encouraged to establish rules on the scope of AI use in veterinary practice to protect veterinary professionals, clients, and animal welfare from potential hazards posed by AI misuse in veterinary decision-making (<xref ref-type="bibr" rid="ref6">6</xref>).</p>
<p>Competency, training, and AI literacy will be integral as the field moves toward further integration with AI decision-making support. Veterinarians implementing AI models into their decision-making processes should seek out continuing education on AI to understand the performance of veterinary AI systems (<xref ref-type="bibr" rid="ref6">6</xref>). A survey of professional veterinary students at the University of California-Davis found that 80% of respondents reported having slight or no knowledge of AI, and 59% expected to use AI tools in their practice. Moderate to extreme interest in the potential opportunities for AI in veterinary medicine was acknowledged by 79% of respondents, but only 37% reported learning about AI concepts in their curriculum (<xref ref-type="bibr" rid="ref2">2</xref>). Given the role AI is likely to play in the future of veterinary medicine, it would behoove veterinary colleges to establish standardized curricula on AI development and utilization in their programs (<xref ref-type="bibr" rid="ref6">6</xref>). This will facilitate future veterinarians&#x2019; competence in AI applications, enabling them to critically evaluate the integration of AI into their practice (<xref ref-type="bibr" rid="ref2">2</xref>). The potential benefits AI innovation offers to veterinary medicine must be continually scrutinized to balance technological advancement with monitoring of patient welfare.</p>
</sec>
<sec sec-type="discussion" id="sec7">
<title>Discussion</title>
<p>As in many other fields, AI holds the potential for positive transformative power in veterinary medicine (<xref ref-type="bibr" rid="ref6">6</xref>). However, AI should support veterinary professionals&#x2019; clinical skills, not replace them. Continued research is necessary to assess the clinical performance of AI diagnostics, to increase the size of the tested data pool, and to monitor for signs of bias and inaccuracy. Governing and licensing bodies should set standards for the ethical use of AI in veterinary medicine and for safeguards for the collection and storage of personal data. Veterinary educational institutions must adapt to the changes their students will experience and implement curricula that prepare veterinary professionals to understand and critically assess AI&#x2019;s performance in their decision-making. This review found a need for further research into the use of AI in areas of ambiguous clinical evaluation: what is the propensity of AI to alter the clinical decision course of a veterinarian when patient presentation or clinical judgment disagrees with the AI-generated interpretation? Veterinary professionals have a professional duty and moral responsibility to safeguard the welfare of their patients and clients against the potential ethical risks of AI adoption.</p>
</sec>
</body>
<back>
<sec sec-type="author-contributions" id="sec8">
<title>Author contributions</title>
<p>MH: Writing &#x2013; original draft.</p>
</sec>
<ack>
<title>Acknowledgments</title>
<p>Dr. Carly Speranza, Indiana Tech University, assisted in the proofreading stage of this work.</p>
</ack>
<sec sec-type="COI-statement" id="sec9">
<title>Conflict of interest</title>
<p>The author(s) declared that this work was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.</p>
</sec>
<sec sec-type="ai-statement" id="sec10">
<title>Generative AI statement</title>
<p>The author(s) declared that Generative AI was not used in the creation of this manuscript. Assistive AI technology (Grammarly) was used in the proofreading stage of this work.</p>
<p>Any alternative text (alt text) provided alongside figures in this article has been generated by Frontiers with the support of artificial intelligence and reasonable efforts have been made to ensure accuracy, including review by the authors wherever possible. If you identify any issues, please contact us.</p>
</sec>
<sec sec-type="disclaimer" id="sec11">
<title>Publisher&#x2019;s note</title>
<p>All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.</p>
</sec>
<ref-list>
<title>References</title>
<ref id="ref1"><label>1.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Wojciechowski</surname> <given-names>A</given-names></name> <name><surname>Korjonen-Kuusipuro</surname> <given-names>K</given-names></name></person-group>. <article-title>Social impact of data bias in artificial intelligence models</article-title>. <source>Hum Technol</source>. (<year>2025</year>) <volume>21</volume>:<fpage>246</fpage>&#x2013;<lpage>50</lpage>. doi: <pub-id pub-id-type="doi">10.14254/1795-6889.2025.21-2.0</pub-id></mixed-citation></ref>
<ref id="ref2"><label>2.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Reagan</surname> <given-names>KL</given-names></name> <name><surname>Boudreaux</surname> <given-names>K</given-names></name> <name><surname>Keller</surname> <given-names>SM</given-names></name></person-group>. <article-title>Veterinary students exhibit low artificial intelligence literacy but agree it will be deployed to improve veterinary medicine</article-title>. <source>Am J Vet Res</source>. (<year>2025</year>):<fpage>1</fpage>&#x2013;<lpage>6</lpage>. doi: <pub-id pub-id-type="doi">10.2460/ajvr.25.03.0082</pub-id></mixed-citation></ref>
<ref id="ref3"><label>3.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Sobkowich</surname> <given-names>KE</given-names></name></person-group>. <article-title>Demystifying artificial intelligence for veterinary professionals: practical applications and future potential</article-title>. <source>Am J Vet Res</source>. (<year>2025</year>) <volume>86</volume>:<fpage>S6</fpage>&#x2013;<lpage>S15</lpage>. doi: <pub-id pub-id-type="doi">10.2460/ajvr.24.09.0275</pub-id>, <pub-id pub-id-type="pmid">39842402</pub-id></mixed-citation></ref>
<ref id="ref4"><label>4.</label><mixed-citation publication-type="other"><person-group person-group-type="author"><collab id="coll1">AAHA and Digitail</collab></person-group>. (<year>2024</year>). AI in veterinary medicine: the next paradigm shift. Available online at: <ext-link xlink:href="https://4912130.fs1.hubspotusercontent-na1.net/hubfs/4912130/Whitepapers/DigitailAIinVeterinaryMedicineStudy.pdf?" ext-link-type="uri">https://4912130.fs1.hubspotusercontent-na1.net/hubfs/4912130/Whitepapers/DigitailAIinVeterinaryMedicineStudy.pdf?</ext-link> (Accessed February 2, 2024).</mixed-citation></ref>
<ref id="ref5"><label>5.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Coghlan</surname> <given-names>S</given-names></name> <name><surname>Quinn</surname> <given-names>T</given-names></name></person-group>. <article-title>Ethics of using artificial intelligence (AI) in veterinary medicine</article-title>. <source>AI Soc</source>. (<year>2023</year>) <volume>39</volume>:<fpage>2337</fpage>&#x2013;<lpage>48</lpage>. doi: <pub-id pub-id-type="doi">10.1007/s00146-023-01686-1</pub-id></mixed-citation></ref>
<ref id="ref6"><label>6.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Appleby</surname> <given-names>RB</given-names></name> <name><surname>Difazio</surname> <given-names>M</given-names></name> <name><surname>Cassel</surname> <given-names>N</given-names></name> <name><surname>Hennessey</surname> <given-names>R</given-names></name> <name><surname>Basran</surname> <given-names>PS</given-names></name></person-group>. <article-title>American College of Veterinary Radiology and European College of Veterinary Diagnostic Imaging position statement on artificial intelligence</article-title>. <source>J Am Vet Med Assoc</source>. (<year>2025</year>) <volume>263</volume>:<fpage>773</fpage>&#x2013;<lpage>6</lpage>. doi: <pub-id pub-id-type="doi">10.2460/javma.25.01.0027</pub-id>, <pub-id pub-id-type="pmid">40107235</pub-id></mixed-citation></ref>
<ref id="ref7"><label>7.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Neal</surname> <given-names>SV</given-names></name> <name><surname>Rudmann</surname> <given-names>DG</given-names></name> <name><surname>Corps</surname> <given-names>KN</given-names></name></person-group>. <article-title>Artificial intelligence in veterinary clinical pathology&#x2014;an introduction and review</article-title>. <source>Vet Clin Pathol</source>. (<year>2025</year>) <volume>54</volume>:<fpage>S13</fpage>&#x2013;<lpage>S29</lpage>. doi: <pub-id pub-id-type="doi">10.1111/vcp.70012</pub-id>, <pub-id pub-id-type="pmid">40462415</pub-id></mixed-citation></ref>
<ref id="ref8"><label>8.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Cohen</surname> <given-names>EB</given-names></name> <name><surname>Gordon</surname> <given-names>IK</given-names></name></person-group>. <article-title>First, do no harm. Ethical and legal issues of artificial intelligence and machine learning in veterinary radiology and radiation oncology</article-title>. <source>Vet Radiol Ultrasound</source>. (<year>2022</year>) <volume>63</volume>:<fpage>840</fpage>&#x2013;<lpage>50</lpage>. doi: <pub-id pub-id-type="doi">10.1111/vru.13171</pub-id>, <pub-id pub-id-type="pmid">36514231</pub-id></mixed-citation></ref>
<ref id="ref9"><label>9.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Pomerantz</surname> <given-names>LK</given-names></name> <name><surname>Solano</surname> <given-names>M</given-names></name> <name><surname>Kalosa-Kenyon</surname> <given-names>E</given-names></name></person-group>. <article-title>Performance of a commercially available artificial intelligence software for the detection of confirmed pulmonary nodules and masses in canine thoracic radiography</article-title>. <source>Vet Radiol Ultrasound</source>. (<year>2023</year>) <volume>64</volume>:<fpage>881</fpage>&#x2013;<lpage>9</lpage>. doi: <pub-id pub-id-type="doi">10.1111/vru.13287</pub-id>, <pub-id pub-id-type="pmid">37549965</pub-id></mixed-citation></ref>
<ref id="ref10"><label>10.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Ndiaye</surname> <given-names>YS</given-names></name> <name><surname>Cramton</surname> <given-names>P</given-names></name> <name><surname>Chernev</surname> <given-names>C</given-names></name> <name><surname>Ockenfels</surname> <given-names>A</given-names></name> <name><surname>Schwarz</surname> <given-names>T</given-names></name></person-group>. <article-title>Comparison of radiological interpretation made by veterinary radiologists and state-of-the-art commercial AI software for canine and feline radiographic studies</article-title>. <source>Front Vet Sci</source>. (<year>2025</year>) <volume>12</volume>:<fpage>1502790</fpage>. doi: <pub-id pub-id-type="doi">10.3389/fvets.2025.1502790</pub-id>, <pub-id pub-id-type="pmid">40061904</pub-id></mixed-citation></ref>
<ref id="ref11"><label>11.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Burti</surname> <given-names>S</given-names></name> <name><surname>Banzato</surname> <given-names>T</given-names></name> <name><surname>Coghlan</surname> <given-names>S</given-names></name> <name><surname>Wodzinski</surname> <given-names>M</given-names></name> <name><surname>Bendazzoli</surname> <given-names>M</given-names></name> <name><surname>Zotti</surname> <given-names>A</given-names></name></person-group>. <article-title>Artificial intelligence in veterinary diagnostic imaging: perspectives and limitations</article-title>. <source>Res Vet Sci</source>. (<year>2024</year>) <volume>175</volume>:<fpage>105317</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.rvsc.2024.105317</pub-id>, <pub-id pub-id-type="pmid">38843690</pub-id></mixed-citation></ref>
<ref id="ref12"><label>12.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Norori</surname> <given-names>N</given-names></name> <name><surname>Hu</surname> <given-names>Q</given-names></name> <name><surname>Aellen</surname> <given-names>FM</given-names></name> <name><surname>Faraci</surname> <given-names>FD</given-names></name> <name><surname>Tzovara</surname> <given-names>A</given-names></name></person-group>. <article-title>Addressing bias in big data and AI for health care: a call for open science</article-title>. <source>Patterns</source>. (<year>2021</year>) <volume>2</volume>:<fpage>100347</fpage>. doi: <pub-id pub-id-type="doi">10.1016/j.patter.2021.100347</pub-id>, <pub-id pub-id-type="pmid">34693373</pub-id></mixed-citation></ref>
<ref id="ref13"><label>13.</label><mixed-citation publication-type="journal"><person-group person-group-type="author"><name><surname>Elliott</surname> <given-names>D</given-names></name> <name><surname>Soifer</surname> <given-names>E</given-names></name></person-group>. <article-title>AI technologies, privacy, and security</article-title>. <source>Front Artif Intell</source>. (<year>2022</year>) <volume>5</volume>:<fpage>826737</fpage>. doi: <pub-id pub-id-type="doi">10.3389/frai.2022.826737</pub-id>, <pub-id pub-id-type="pmid">35493613</pub-id></mixed-citation></ref>
<ref id="ref14"><label>14.</label><mixed-citation publication-type="other"><person-group person-group-type="author"><collab id="coll2">AAVSB</collab></person-group>. (<year>2025</year>). Regulatory considerations of the use of artificial intelligence in veterinary medicine. American Association of Veterinary State Boards. Available online at: <ext-link xlink:href="https://www.aavsb.org/wp-content/uploads/2025/08/AAVSB-AI-Guidance-Whitepaper.pdf" ext-link-type="uri">https://www.aavsb.org/wp-content/uploads/2025/08/AAVSB-AI-Guidance-Whitepaper.pdf</ext-link></mixed-citation></ref>
</ref-list>
<fn-group>
<fn fn-type="custom" custom-type="edited-by" id="fn0001">
<p>Edited by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/2645352/overview">Andra-Sabina Neculai-Valeanu</ext-link>, Academy of Romanian Scientists, Romania</p>
</fn>
<fn fn-type="custom" custom-type="reviewed-by" id="fn0002">
<p>Reviewed by: <ext-link ext-link-type="uri" xlink:href="https://loop.frontiersin.org/people/3146593/overview">G&#x00FC;zin Yasemin Tun&#x00E7;ay</ext-link>, &#x00C7;ank&#x0131;r&#x0131; Karatekin University, T&#x00FC;rkiye</p>
</fn>
</fn-group>
</back>
</article>