AUTHOR=Tung Tina, Hasnaeen Shah Md Nehal, Zhao Xiaopeng
TITLE=Ethical and practical challenges of generative AI in healthcare and proposed solutions: a survey
JOURNAL=Frontiers in Digital Health
VOLUME=7
YEAR=2025
URL=https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2025.1692517
DOI=10.3389/fdgth.2025.1692517
ISSN=2673-253X
ABSTRACT=
Background: Generative artificial intelligence (AI) is rapidly transforming healthcare, but its adoption introduces significant ethical and practical challenges. Algorithmic bias, ambiguous liability, lack of transparency, and data privacy risks can undermine patient trust and create health disparities, making their resolution critical for responsible AI integration.
Objectives: This systematic review analyzes the generative AI landscape in healthcare. Our objectives were to: (1) identify AI applications and their associated ethical and practical challenges; (2) evaluate current data-centric, model-centric, and regulatory solutions; and (3) propose a framework for responsible AI deployment.
Methods: Following the PRISMA 2020 statement, we conducted a systematic review of PubMed and Google Scholar for articles published between January 2020 and May 2025. A multi-stage screening process yielded 54 articles, which were analyzed using a thematic narrative synthesis.
Results: Our review confirmed AI’s growing integration into medical training, research, and clinical practice. Key challenges identified include systemic bias from non-representative data, unresolved legal liability, the “black box” nature of complex models, and significant data privacy risks. Proposed solutions are multifaceted, spanning technical (e.g., explainable AI), procedural (e.g., stakeholder oversight), and regulatory strategies.
Discussion: Current solutions are fragmented and face significant implementation barriers. Technical fixes are insufficient without robust governance, clear legal guidelines, and comprehensive professional education. Gaps persist in global regulatory harmonization, and existing frameworks remain ill-suited to adaptive AI. A multi-layered, socio-technical approach is essential to build trust and ensure the safe, equitable, and ethical deployment of generative AI in healthcare.
Conclusions: The review confirmed generative AI’s growing integration into medical training, research, and clinical practice. The key challenges identified, namely systemic bias stemming from non-representative data, unresolved legal liability, the “black box” nature of complex models, and significant data privacy risks, can undermine patient trust and create health disparities. Addressing them requires multifaceted solutions spanning technical (such as explainable AI), procedural (such as stakeholder oversight), and regulatory strategies.