AUTHOR=Allameh Mahdieh, Kostrzewa Mathew, Gilham Ame, Rahman Labib, Zaman Loutfouz
TITLE=Evaluating tutorial and conversational agent methods for Dark Souls—The Board Game with an LLM-enhanced agent extension
JOURNAL=Frontiers in Computer Science
VOLUME=Volume 7 - 2025
YEAR=2026
URL=https://www.frontiersin.org/journals/computer-science/articles/10.3389/fcomp.2025.1714046
DOI=10.3389/fcomp.2025.1714046
ISSN=2624-9898
ABSTRACT=This study evaluates the effectiveness of two tutoring approaches, a rule-based conversational agent (SoulsBot) and a traditional handholding tutorial, for teaching players the core mechanics of a mini-boss encounter in Dark Souls—The Board Game. Findings from a mixed-methods user study (n = 16) show that neither tool significantly improved learning or engagement: GEQ components showed no statistical differences, SUS scores were comparable (SoulsBot = 69.5, Tutorial = 76.1), quiz performance did not differ (69.38% vs. 63.75%), and gameplay metrics showed no significant effect of the tutoring method. Qualitative feedback indicated that SoulsBot was valued for its on-demand assistance but was hampered by limited ontology coverage and rigid intent matching, leading to many unanswered questions. Participants also noted that conversational agents like SoulsBot appear more suitable for strategic, RPG, and dungeon-crawl games, but less effective in fast-paced or exploration-focused genres. To address SoulsBot’s limitations, we introduce SoulsBot+, an enhanced version that integrates a pre-trained large language model with retrieval-augmented generation and a fallback mechanism. SoulsBot+ improves the system’s ability to handle unanticipated questions, generate context-aware responses informed by real-time game state, and provide more flexible rule explanations and strategic guidance. These enhancements aim to overcome the identified shortcomings and to support future development of adaptive tutoring in complex digital board games.
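
The abstract describes SoulsBot+ as combining a pre-trained LLM with retrieval-augmented generation and a fallback mechanism. The sketch below is not the authors' implementation; it is a minimal illustration of that general pattern, assuming keyword-overlap retrieval over a few invented rulebook passages and a placeholder `llm_generate` function standing in for any LLM API. All names (`RULEBOOK_PASSAGES`, `retrieve`, `llm_generate`, `answer`) are hypothetical.

```python
# Hypothetical sketch of an LLM + retrieval + fallback tutoring agent,
# loosely following the SoulsBot+ description in the abstract.
import re
from collections import Counter

# Invented example passages; the real system would index the game's rulebook.
RULEBOOK_PASSAGES = [
    "Stamina: each attack or dodge costs stamina; resting at the bonfire resets it.",
    "Boss behaviour cards: reveal one card per activation to determine the attack arc.",
    "Death: when a character dies, the party returns to the bonfire and the encounter resets.",
]

def _tokens(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, passages, min_overlap=2):
    """Return the passage sharing the most keywords with the question,
    or None if nothing overlaps enough (which triggers the fallback)."""
    q = _tokens(question)
    best, best_score = None, 0
    for p in passages:
        score = sum((q & _tokens(p)).values())
        if score > best_score:
            best, best_score = p, score
    return best if best_score >= min_overlap else None

def llm_generate(prompt):
    """Stand-in for a call to a pre-trained LLM; replace with a real API call."""
    return f"[LLM answer grounded in: {prompt[:60]}...]"

def answer(question, game_state):
    context = retrieve(question, RULEBOOK_PASSAGES)
    if context is None:
        # Fallback path: no rulebook grounding found, so answer from the
        # LLM alone using only the live game state, flagged as unverified.
        return llm_generate(f"Game state: {game_state}\nQuestion: {question}") + " (unverified)"
    # RAG path: retrieved rule text plus real-time game state go into the prompt.
    prompt = f"Rule: {context}\nGame state: {game_state}\nQuestion: {question}"
    return llm_generate(prompt)

if __name__ == "__main__":
    print(answer("How does stamina work when I dodge?", "player at bonfire, boss at 12 HP"))
```

The fallback branch is what distinguishes this pattern from the rule-based original described in the abstract: instead of returning an unmatched-intent error, the agent still produces a response, but marks it as ungrounded when no supporting rule text is retrieved.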