AUTHOR=Mahrouk Abdelaali
TITLE=Symbolic feedback for transparent fault anticipation in neuroergonomic brain-machine interfaces
JOURNAL=Frontiers in Robotics and AI
VOLUME=12
YEAR=2025
URL=https://www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2025.1656642
DOI=10.3389/frobt.2025.1656642
ISSN=2296-9144
ABSTRACT=
Background: Brain-Machine Interfaces (BMIs) increasingly mediate human interaction with assistive systems, yet remain sensitive to internal cognitive divergence. Subtle shifts in user intention, due to fatigue, overload, or schema conflict, may affect system reliability. While decoding accuracy has improved, most systems still lack mechanisms to communicate internal uncertainty or reasoning dynamics in real time.
Objective: We present NECAP-Interaction, a neuro-symbolic architecture that explores the potential of symbolic feedback to support real-time human-AI alignment. The framework aims to improve neuroergonomic transparency by integrating symbolic trace generation into the BMI control pipeline.
Methods: All evaluations were conducted using high-fidelity synthetic agents across three simulation tasks (motor control, visual attention, cognitive inhibition). NECAP-Interaction generates symbolic descriptors of epistemic shifts, supporting co-adaptive human-system communication. We report trace clarity, response latency, and symbolic coverage using structured replay analysis and interpretability metrics.
Results: NECAP-Interaction anticipated behavioral divergence up to 2.3 ± 0.4 s before error onset and maintained over 90% symbolic trace interpretability across uncertainty tiers. In simulated overlays, symbolic feedback improved user comprehension of system states and reduced latency to trust collapse compared to baseline architectures (CNN, RNN).
Conclusion: Cognitive interpretability is not merely a technical concern; it is a design priority. By embedding symbolic introspection into BMI workflows, NECAP-Interaction supports user transparency and co-regulated interaction in cognitively demanding contexts. These findings contribute to the development of human-centered neurotechnologies where explainability is experienced in real time.
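
Note: the abstract's "symbolic descriptors of epistemic shifts" can be pictured as small, human-readable records mapping decoder statistics to user-facing statements across uncertainty tiers. The sketch below is a minimal, hypothetical Python rendering of that idea; every name, threshold, and tier label (SymbolicTrace, emit_trace, UNCERTAINTY_TIERS) is an assumption for illustration, since the abstract reports no implementation details.

# Illustrative sketch only; names and thresholds are hypothetical,
# not drawn from the NECAP-Interaction implementation.
from dataclasses import dataclass
import time

UNCERTAINTY_TIERS = ("low", "moderate", "high")  # assumed three-tier scheme

@dataclass
class SymbolicTrace:
    """One human-readable descriptor of an epistemic shift in the decoder."""
    timestamp: float   # wall-clock time the trace was emitted
    tier: str          # one of UNCERTAINTY_TIERS
    descriptor: str    # symbolic statement shown to the user

def emit_trace(decoder_confidence: float, drift_rate: float) -> SymbolicTrace:
    """Map raw decoder statistics to a symbolic, user-facing descriptor.

    Thresholds are placeholders; the abstract states only that traces span
    uncertainty tiers and precede error onset by ~2.3 s on average.
    """
    if decoder_confidence < 0.5 or drift_rate > 0.3:
        tier, text = "high", "intention signal diverging; anticipate fault"
    elif decoder_confidence < 0.75:
        tier, text = "moderate", "rising uncertainty in motor intent decoding"
    else:
        tier, text = "low", "decoding stable"
    return SymbolicTrace(timestamp=time.time(), tier=tier, descriptor=text)

# Example: a trace that a feedback overlay might render before a predicted error.
print(emit_trace(decoder_confidence=0.42, drift_rate=0.35))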