
Deep Reinforcement Learning for Real-Time Strategy Games

by Sophia Anderson *
* Author to whom correspondence should be addressed.
Received: 19 February 2021 / Accepted: 26 March 2021 / Published Online: 17 April 2021

Abstract

This paper explores the application of deep reinforcement learning (DRL) to real-time strategy (RTS) games: complex, dynamic environments that require players to make rapid decisions under uncertainty. DRL has emerged as a powerful tool for training intelligent agents that learn strategies through self-play. This study investigates how effectively DRL algorithms can approximate human-like decision-making and produce competitive RTS agents. We detail the design and implementation of a novel DRL framework tailored to RTS games, combining reinforcement learning objectives with neural network architectures. The framework is evaluated on a range of well-known RTS games and yields significant improvements in agent performance and adaptability. We also analyze the computational efficiency and stability of the DRL algorithms, illustrating their potential for real-time deployment in competitive gaming scenarios. This research contributes to the advancement of AI in RTS games, showcasing the potential of DRL to enhance the gaming experience and to inspire further work at the intersection of AI and interactive media.
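The abstract's core idea, agents that learn strategies from reward signals rather than hand-coded rules, can be illustrated with a minimal sketch. The toy 4-state environment and all names below are hypothetical stand-ins: tabular Q-learning replaces the deep networks the paper describes, and a one-dimensional chain replaces an RTS game, but the value-update loop is the same technique in miniature.

```python
import random

# Hypothetical toy environment standing in for an RTS game:
# states 0..3 on a chain; action 1 moves right, action 0 moves left
# (floored at 0). Reaching state 3 yields reward 1 and ends the episode.
def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(3, state + 1)
    reward = 1.0 if nxt == 3 else 0.0
    return nxt, reward, nxt == 3

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Tabular Q-values play the role of the neural network's value estimates.
    q = {(s, a): 0.0 for s in range(4) for a in (0, 1)}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            if rng.random() < epsilon:                      # explore
                action = rng.choice((0, 1))
            else:                                           # exploit current estimate
                action = max((0, 1), key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            # Standard Q-learning target: reward plus discounted best next value.
            target = reward + (0.0 if done else gamma * max(q[(nxt, 0)], q[(nxt, 1)]))
            q[(state, action)] += alpha * (target - q[(state, action)])
            state = nxt
    return q

q = train()
# Greedy policy for the non-terminal states: it should move right toward the goal.
policy = {s: max((0, 1), key=lambda a: q[(s, a)]) for s in range(3)}
print(policy)
```

Deep RL replaces the table with a function approximator so the same update generalizes across the enormous state spaces of real RTS games; the exploration/exploitation trade-off and the bootstrapped value target carry over unchanged.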


Copyright: © 2021 by Anderson. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) (Creative Commons Attribution 4.0 International License). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

Share and Cite

ACS Style
Anderson, S. Deep Reinforcement Learning for Real-Time Strategy Games. Transactions on Applied Soft Computing 2021, 3, 18. https://doi.org/10.69610/j.tasc.20210417
AMA Style
Anderson S. Deep Reinforcement Learning for Real-Time Strategy Games. Transactions on Applied Soft Computing. 2021; 3(1):18. https://doi.org/10.69610/j.tasc.20210417
Chicago/Turabian Style
Anderson, Sophia. 2021. "Deep Reinforcement Learning for Real-Time Strategy Games." Transactions on Applied Soft Computing 3, no. 1: 18. https://doi.org/10.69610/j.tasc.20210417
APA Style
Anderson, S. (2021). Deep Reinforcement Learning for Real-Time Strategy Games. Transactions on Applied Soft Computing, 3(1), 18. https://doi.org/10.69610/j.tasc.20210417
