AUTHOR=Alexander Anusha, Vangaveeti V. N. Suchir, Venkatesan Kalaichelvi, Mounsef Jinane, Ramanujam Karthikeyan TITLE=Adaptive emergency response and dynamic crowd navigation for mobile robot using deep reinforcement learning JOURNAL=Frontiers in Robotics and AI VOLUME=12 YEAR=2025 URL=https://www.frontiersin.org/journals/robotics-and-ai/articles/10.3389/frobt.2025.1612392 DOI=10.3389/frobt.2025.1612392 ISSN=2296-9144 ABSTRACT=Mobile robots have emerged as a reliable solution for dynamic navigation in real-world applications. Effective deployment in high-density crowds and emergency scenarios requires not only accurate path planning but also rapid adaptation to changing environments. However, autonomous navigation in such environments remains a significant challenge, particularly in time-sensitive applications such as emergency response. Existing path planning and reinforcement learning approaches often lack adaptability to uncertainties and time-varying obstacles, making them less suitable for unstructured real-world scenarios. To address these limitations, a Deep Reinforcement Learning (DRL) framework for dynamic crowd navigation is proposed using three algorithms: Deep Deterministic Policy Gradient (DDPG), Twin Delayed Deep Deterministic Policy Gradient (TD3), and Deep Q-Network (DQN). A context-aware state representation is developed that combines Light Detection and Ranging (LiDAR)-based obstacle perception, goal orientation, and robot kinematics to enhance situational awareness. The proposed framework is implemented in a ROS2 Gazebo simulation environment using the TurtleBot3 platform and tested in challenging scenarios to identify the most effective algorithm. Extensive simulation analysis demonstrates that TD3 outperforms the other approaches in terms of success rate, path efficiency, and collision avoidance. This study contributes a reproducible, constraint-aware DRL navigation architecture suitable for real-time, emergency-oriented mobile robot applications.
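The context-aware state representation described in the abstract (downsampled LiDAR obstacle distances, goal orientation relative to the robot, and robot kinematics) could be assembled roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the sector count, the 3.5 m maximum range (typical of the TurtleBot3 LDS sensor), and the feature ordering are all assumptions.

```python
import math

def build_state(lidar_ranges, robot_pose, goal, linear_vel, angular_vel,
                n_sectors=10, max_range=3.5):
    """Assemble a context-aware DRL state vector (hypothetical layout):
    n_sectors normalized LiDAR distances + goal distance/heading + velocities."""
    # Downsample the scan: take the minimum range per angular sector,
    # clipped to max_range, so the agent sees the nearest obstacle per sector.
    per_sector = len(lidar_ranges) // n_sectors
    scan = [min(min(lidar_ranges[i * per_sector:(i + 1) * per_sector]),
                max_range) / max_range
            for i in range(n_sectors)]
    x, y, yaw = robot_pose
    gx, gy = goal
    # Goal in polar form relative to the robot's current pose.
    dist = math.hypot(gx - x, gy - y)
    heading = math.atan2(gy - y, gx - x) - yaw
    heading = math.atan2(math.sin(heading), math.cos(heading))  # wrap to [-pi, pi]
    # Final state: obstacle perception + goal orientation + robot kinematics.
    return scan + [dist, heading, linear_vel, angular_vel]
```

With a 360-beam scan and 10 sectors this yields a 14-dimensional state, which a continuous-action algorithm such as DDPG or TD3 can consume directly, while DQN would additionally require a discretized action set.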