AUTHOR=Barabadi Elyas, Fotuhabadi Zahra, Arghavan Amanollah, Booth James R
TITLE=Comparing AI and human moral reasoning: context-sensitive patterns beyond utilitarian bias
JOURNAL=Frontiers in Artificial Intelligence
VOLUME=Volume 8 - 2025
YEAR=2025
URL=https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1710410
DOI=10.3389/frai.2025.1710410
ISSN=2624-8212
ABSTRACT=
Introduction: Decision-making supported by intelligent systems is increasingly deployed in ethically sensitive domains. It is therefore important to understand the patterns of moral judgment generated by large language models (LLMs).
Methods: The current research systematically investigates how two prominent LLMs (ChatGPT and Claude Sonnet) respond to 12 moral scenarios previously administered to human participants (first- and second-language users). The primary aim was to examine whether the responses generated by the LLMs align with deontological or utilitarian orientations; the secondary aim was to compare the response patterns of these two models with those of human respondents in previous studies.
Results: Contrary to prevailing assumptions about a utilitarian tendency in LLMs, the findings revealed nuanced, context-sensitive distributions of moral choices. Specifically, both models alternated between deontological and utilitarian judgments depending on scenario-specific features.
Discussion: These output patterns reflect complex moral trade-offs and may play a significant role in shaping societal trust in, and acceptance of, AI systems in morally sensitive domains.