Open Access Journal Article

Autonomous Navigation of UAVs Using Deep Reinforcement Learning

by Michael Martin *
* Author to whom correspondence should be addressed.
Received: 21 February 2020 / Accepted: 12 March 2020 / Published Online: 14 April 2020

Abstract

The field of Unmanned Aerial Vehicles (UAVs) has witnessed significant advancements in recent years, with autonomous navigation being one of the key areas of focus. This paper investigates the application of Deep Reinforcement Learning (DRL) techniques for achieving autonomous navigation capabilities in UAVs. Deep Reinforcement Learning integrates deep neural networks with reinforcement learning algorithms, enabling agents to learn optimal policies through interaction with an environment. The paper outlines the challenges associated with autonomous navigation in UAVs and presents a novel approach that leverages DRL to address these challenges. Through a combination of simulation and real-world experiments, the effectiveness of the proposed method is evaluated, demonstrating improved navigation performance and robustness in dynamic environments. The results indicate that DRL can significantly enhance the autonomy and reliability of UAV navigation systems, paving the way for wider adoption in various applications such as surveillance, delivery, and environmental monitoring.
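The core learning loop the abstract describes, an agent improving its policy through repeated interaction with an environment, can be illustrated with a minimal sketch. The toy corridor task, the reward values, and all parameter names below are hypothetical illustrations, not the paper's setup; the tabular Q-learning update shown here is the classical form that DRL methods such as DQN extend by replacing the Q-table with a deep neural network so that continuous UAV sensor states can be handled.

```python
import random

# Toy 1-D corridor "navigation" task: the agent starts at cell 0 and
# must reach the goal at cell N-1. Actions: 0 = move left, 1 = move right.
N = 8            # corridor length (hypothetical toy size)
ALPHA = 0.5      # learning rate
GAMMA = 0.9      # discount factor
EPS = 0.2        # epsilon-greedy exploration rate

def step(state, action):
    """Environment transition: returns (next_state, reward, done)."""
    nxt = max(0, min(N - 1, state + (1 if action == 1 else -1)))
    done = nxt == N - 1
    # Small per-step cost encourages short paths; +1 on reaching the goal.
    return nxt, (1.0 if done else -0.01), done

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N)]       # Q[state][action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < EPS \
                else max((0, 1), key=lambda x: q[s][x])
            s2, r, done = step(s, a)
            # Temporal-difference update toward the Bellman target
            target = r + (0.0 if done else GAMMA * max(q[s2]))
            q[s][a] += ALPHA * (target - q[s][a])
            s = s2
    return q

q = train()

# Greedy rollout with the learned policy (capped to guard against loops)
s, path = 0, [0]
while s != N - 1 and len(path) < 50:
    s, _, _ = step(s, max((0, 1), key=lambda a: q[s][a]))
    path.append(s)
print(path)
```

After training, the greedy policy moves straight toward the goal cell. A deep variant would keep the same update rule but estimate Q-values with a network trained on replayed transitions, which is what enables the generalization to dynamic environments discussed in the paper.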


Copyright: © 2020 by Martin. This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Share and Cite

ACS Style
Martin, M. Autonomous Navigation of UAVs Using Deep Reinforcement Learning. Transactions on Applied Soft Computing, 2020, 2, 8. https://doi.org/10.69610/j.tasc.20200414
AMA Style
Martin M. Autonomous Navigation of UAVs Using Deep Reinforcement Learning. Transactions on Applied Soft Computing. 2020;2(1):8. https://doi.org/10.69610/j.tasc.20200414
Chicago/Turabian Style
Martin, Michael. 2020. "Autonomous Navigation of UAVs Using Deep Reinforcement Learning." Transactions on Applied Soft Computing 2, no. 1: 8. https://doi.org/10.69610/j.tasc.20200414
APA Style
Martin, M. (2020). Autonomous Navigation of UAVs Using Deep Reinforcement Learning. Transactions on Applied Soft Computing, 2(1), 8. https://doi.org/10.69610/j.tasc.20200414
