Unmanned Aerial Vehicles (UAVs) have advanced rapidly in recent years, with autonomous navigation a key area of focus. This paper investigates the application of Deep Reinforcement Learning (DRL) to achieving autonomous navigation in UAVs. DRL combines deep neural networks with reinforcement learning algorithms, enabling an agent to learn an optimal control policy through trial-and-error interaction with its environment. The paper outlines the challenges associated with autonomous UAV navigation and presents a novel approach that leverages DRL to address them. The proposed method is evaluated through a combination of simulation and real-world experiments, demonstrating improved navigation performance and robustness in dynamic environments. The results indicate that DRL can significantly enhance the autonomy and reliability of UAV navigation systems, paving the way for wider adoption in applications such as surveillance, delivery, and environmental monitoring.
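The core loop the abstract describes, an agent learning a navigation policy by interacting with its environment, can be illustrated with a minimal sketch. The paper's actual method is not given here, so this example uses tabular Q-learning (the classical precursor to deep RL) on a toy 5×5 grid standing in for a UAV flight area; the grid size, obstacle positions, and reward values are all invented for illustration:

```python
import random

# Illustrative sketch only: tabular Q-learning on a toy 5x5 grid, standing in
# for the DRL navigation policy the abstract describes. Grid, obstacles, and
# rewards are hypothetical, not taken from the paper.
GRID = 5
OBSTACLES = {(2, 2), (3, 1)}                  # cells the "UAV" must avoid
GOAL = (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # east, west, south, north

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = (state[0] + action[0], state[1] + action[1])
    if not (0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID) or nxt in OBSTACLES:
        return state, -1.0, False   # blocked: stay put, small penalty
    if nxt == GOAL:
        return nxt, 10.0, True      # reached the goal
    return nxt, -0.1, False         # step cost encourages short paths

def train(episodes=2000, alpha=0.5, gamma=0.95, eps=0.1, seed=0):
    """Learn Q-values by epsilon-greedy interaction with the environment."""
    rng = random.Random(seed)
    q = {}  # state -> list of action values
    for _ in range(episodes):
        s, done = (0, 0), False
        while not done:
            vals = q.setdefault(s, [0.0] * len(ACTIONS))
            a = (rng.randrange(len(ACTIONS)) if rng.random() < eps
                 else max(range(len(ACTIONS)), key=vals.__getitem__))
            s2, r, done = step(s, ACTIONS[a])
            nxt_vals = q.setdefault(s2, [0.0] * len(ACTIONS))
            # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
            vals[a] += alpha * (r + gamma * max(nxt_vals) - vals[a])
            s = s2
    return q

def greedy_path(q, start=(0, 0), limit=50):
    """Follow the learned policy greedily from start toward the goal."""
    s, path = start, [start]
    for _ in range(limit):
        if s == GOAL:
            break
        a = max(range(len(ACTIONS)), key=q[s].__getitem__)
        s, _, _ = step(s, ACTIONS[a])
        path.append(s)
    return path
```

A deep RL method such as DQN replaces the Q-table with a neural network so that the same update generalizes to continuous, high-dimensional state spaces (e.g. onboard sensor readings), which is what makes the approach viable for real UAV navigation.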
Martin, M. Autonomous Navigation of UAVs Using Deep Reinforcement Learning. Transactions on Applied Soft Computing, 2020, 2, 8. https://doi.org/10.69610/j.tasc.20200414