AUTHOR=Basu Semanti, Kim Moon Hwan, Tatlidil Semir, Williams Tom, Sloman Steven, Bahar Ruth Iris
TITLE=Augmenting large language models with psychologically grounded models of causal reasoning for planning under uncertainty
JOURNAL=Frontiers in Artificial Intelligence
VOLUME=8
YEAR=2026
URL=https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1730614
DOI=10.3389/frai.2025.1730614
ISSN=2624-8212
ABSTRACT=Large Language Models (LLMs) have come a long way in their ability to solve a wide range of problems. Yet, LLM decision-making still relies primarily on pattern recognition, which may limit its ability to make sound decisions under uncertainty. In contrast, human reasoning often makes use of explicit causal models, allowing humans to explain, hypothesize, and extrapolate to different domains in uncertain scenarios. In this article, we explore whether human causal models can be strategically integrated with Large Language Models to improve planning outcomes under uncertainty for object assembly and troubleshooting tasks modeled as Partially Observable Markov Decision Processes (POMDPs). Our contributions consist of two parts: (1) an interactive LLM agent that plans an action at each time step by solving a POMDP targeted at an object assembly or troubleshooting task, and (2) a novel hybrid-reasoning framework that uses confidence scores in both the LLM agent's output and a human causal model to make a final decision on the most appropriate action for the current time step to achieve the task. We demonstrate the efficacy of our approach through detailed simulations and show a significant improvement in task planning reward across three different state-of-the-art LLMs when augmenting the baseline LLM planner with a human causal model.