This paper explores the application of deep reinforcement learning (DRL) to real-time strategy (RTS) games, which are complex, dynamic environments that require players to make rapid decisions under uncertainty. DRL has emerged as a powerful tool for training intelligent agents that learn effective strategies through self-play. The focus of this study is to investigate how well DRL algorithms can approximate human-like decision-making and produce competitive RTS agents. We detail the design and implementation of a novel DRL framework tailored to RTS games, combining reinforcement learning objectives with neural network function approximation. The framework is evaluated on a range of well-known RTS games and yields significant improvements in agent performance and adaptability. We further analyze the computational efficiency and training stability of the DRL algorithms, illustrating their potential for real-time deployment in competitive gaming scenarios. This research contributes to the advancement of AI in RTS games, showcasing the potential of DRL to enhance the gaming experience and to inspire further work at the intersection of AI and interactive media.
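The abstract describes a self-play DRL framework without implementation detail. As a rough, non-authoritative illustration of the kind of policy-gradient training loop such a framework might build on, the sketch below uses PyTorch and assumes a gym-style RTS environment with a flat observation vector and a discrete action set; the names (PolicyNet, train_episode, the layer sizes) are illustrative assumptions and are not taken from the paper.

    # Minimal sketch of a self-play policy-gradient loop for an RTS-style agent.
    # The environment interface (reset/step) and network sizes are assumptions.
    import torch
    import torch.nn as nn
    import torch.optim as optim

    class PolicyNet(nn.Module):
        """Small feed-forward policy over a flattened observation (illustrative)."""
        def __init__(self, obs_dim: int, n_actions: int):
            super().__init__()
            self.body = nn.Sequential(
                nn.Linear(obs_dim, 128), nn.ReLU(),
                nn.Linear(128, n_actions),
            )

        def forward(self, obs: torch.Tensor) -> torch.Tensor:
            return self.body(obs)  # action logits

    def train_episode(env, policy, optimizer, gamma=0.99):
        """One REINFORCE-style update from a single episode of play."""
        obs = env.reset()
        log_probs, rewards, done = [], [], False
        while not done:
            logits = policy(torch.as_tensor(obs, dtype=torch.float32))
            dist = torch.distributions.Categorical(logits=logits)
            action = dist.sample()
            log_probs.append(dist.log_prob(action))
            obs, reward, done, _ = env.step(action.item())
            rewards.append(reward)

        # Discounted returns, computed backwards over the episode, then normalized.
        returns, g = [], 0.0
        for r in reversed(rewards):
            g = r + gamma * g
            returns.append(g)
        returns = torch.tensor(list(reversed(returns)))
        returns = (returns - returns.mean()) / (returns.std() + 1e-8)

        loss = -(torch.stack(log_probs) * returns).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return sum(rewards)

In practice, RTS agents typically replace the flat observation with spatial feature maps and the single discrete action with structured action heads, but the return computation and gradient update shown here carry over.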
Anderson, S. Deep Reinforcement Learning for Real-Time Strategy Games. Transactions on Applied Soft Computing, 2021, 3, 18. https://doi.org/10.69610/j.tasc.20210417
AMA Style
Anderson S. Deep Reinforcement Learning for Real-Time Strategy Games. Transactions on Applied Soft Computing. 2021; 3(1):18. https://doi.org/10.69610/j.tasc.20210417
Chicago/Turabian Style
Anderson, Sophia. 2021. "Deep Reinforcement Learning for Real-Time Strategy Games." Transactions on Applied Soft Computing 3, no. 1: 18. https://doi.org/10.69610/j.tasc.20210417
APA Style
Anderson, S. (2021). Deep Reinforcement Learning for Real-Time Strategy Games. Transactions on Applied Soft Computing, 3(1), 18. https://doi.org/10.69610/j.tasc.20210417