An Advisor-Based Architecture for a Sample-Efficient Training of Autonomous Navigation Agents with Reinforcement Learning

dc.contributor.author: Wijesinghe, R. D.
dc.contributor.author: Tissera, D.
dc.contributor.author: Vithanage, M. K.
dc.contributor.author: Xavier, A.
dc.contributor.author: Fernando, S.
dc.contributor.author: Samarawickrama, J.
dc.date.accessioned: 2023-12-01T05:37:46Z
dc.date.available: 2023-12-01T05:37:46Z
dc.date.issued: 2023
dc.description.abstract: Recent advancements in artificial intelligence have enabled reinforcement learning (RL) agents to exceed human-level performance in various gaming tasks. However, despite the state-of-the-art performance demonstrated by model-free RL algorithms, they suffer from high sample complexity. Hence, they are rarely applied in robotics, autonomous navigation, and self-driving, as gathering many samples is impractical on real-world hardware. Therefore, developing sample-efficient learning algorithms is crucial for deploying RL agents in real-world tasks without sacrificing performance. This paper presents an advisor-based learning algorithm that incorporates prior knowledge into training by modifying the deep deterministic policy gradient (DDPG) algorithm to reduce sample complexity. We also propose an effective method of employing an advisor during data collection to train autonomous navigation agents to maneuver physical platforms while minimizing the risk of collision. We analyze the performance of our methods in both simulation and physical experimental setups. Experiments reveal that incorporating an advisor into the training phase significantly reduces sample complexity without compromising the agent's performance relative to various benchmark approaches. They also show that the advisor's constant involvement in the data collection process diminishes the agent's performance, whereas limited involvement makes training more effective.
dc.identifier.citation: Wijesinghe, R. D., Tissera, D., Vithanage, M. K., Xavier, A., Fernando, S., & Samarawickrama, J. (2023). An Advisor-Based Architecture for a Sample-Efficient Training of Autonomous Navigation Agents with Reinforcement Learning. Robotics, 12(5), Article 133. https://doi.org/10.3390/robotics12050133
dc.identifier.database: MDPI
dc.identifier.doi: https://doi.org/10.3390/robotics12050133
dc.identifier.issn: 2218-6581
dc.identifier.issue: 5
dc.identifier.journal: Robotics
dc.identifier.pgnos: 1-27
dc.identifier.uri: http://dl.lib.uom.lk/handle/123/21862
dc.identifier.volume: 12
dc.identifier.year: 2023
dc.language.iso: en
dc.subject: advisor-based architecture
dc.subject: autonomous agents
dc.subject: reinforcement learning
dc.title: An Advisor-Based Architecture for a Sample-Efficient Training of Autonomous Navigation Agents with Reinforcement Learning
dc.type: Article-Full-text

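The abstract reports that an advisor's constant involvement in data collection hurts the agent, while limited involvement helps. A minimal sketch of that idea follows; it is not the paper's implementation, and the names `choose_action`, `advisor_schedule`, and the decay values are illustrative assumptions:

```python
import random

def choose_action(agent_action, advisor_action, advisor_prob):
    """Mix advisor and agent actions during data collection.

    With probability `advisor_prob`, the advisor's action is executed
    (and its transition stored for training); otherwise the agent acts
    with its own policy output.
    """
    if random.random() < advisor_prob:
        return advisor_action
    return agent_action

def advisor_schedule(episode, start=0.9, decay=0.97):
    """Decay the advisor's involvement over episodes, so guidance is
    heavy early (when collisions are most likely) and limited later."""
    return start * decay ** episode
```

Under such a schedule the advisor dominates early episodes, reducing collision risk while the policy is still poor, and then fades out so the agent refines its own behavior.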