Self-learned Intelligent Agent in StarCraft II using Deep Reinforcement Learning with Imitation Learning

emmas96/DATX02-19-83

Bachelor's Thesis in Computer Science and Engineering at Chalmers University of Technology.

Authors: Niclas Johansson, Daniel Willim, Emma Svensson, Markus Veintie, Franz Wang, and Karl-Rehan Chiu Falck.

Abstract

Knowledge of machine learning is becoming essential in many fields. This thesis explores and outlines the basics of machine learning through the complex game StarCraft II, with limited prior knowledge and resources. In particular, deep Q-learning in combination with imitation learning was explored in order to reduce the time required for an agent to become capable of playing the game. A few simpler environments were used as initial challenges before StarCraft II was explored. For all environments, the thesis compares the performance of agents that used imitation learning against those that did not. In the simpler environments, agents using deep Q-learning combined with imitation learning showed significantly reduced training time. Due to problems with the reward structure in the complex game StarCraft II, no conclusion could be drawn about the implications of imitation learning in complex environments.
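The core idea described above — seeding a Q-learning agent with expert demonstrations before letting it learn on its own — can be sketched in a few lines. This is an illustrative example only, not code from the thesis: the thesis uses deep Q-networks on StarCraft II, while the sketch below uses a tabular Q-function on an invented corridor environment so that the imitation phase is easy to see.

```python
import numpy as np

# Toy corridor environment: states 0..9, actions 0 (left) and 1 (right),
# reward 1 only on reaching the goal state. Everything here (environment,
# hyperparameters, expert policy) is illustrative, not from the thesis.
N_STATES, N_ACTIONS = 10, 2
GOAL = N_STATES - 1

def step(state, action):
    """One environment transition; returns (next_state, reward, done)."""
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def q_update(q, transitions, alpha=0.5, gamma=0.9):
    """Standard Q-learning update applied to a batch of stored transitions."""
    for s, a, r, s2, done in transitions:
        target = r if done else r + gamma * q[s2].max()
        q[s, a] += alpha * (target - q[s, a])

def demo_transitions(n_episodes=20):
    """Expert demonstrations: the expert always moves right, toward the goal."""
    out = []
    for _ in range(n_episodes):
        s, done = 0, False
        while not done:
            s2, r, done = step(s, 1)
            out.append((s, 1, r, s2, done))
            s = s2
    return out

# Imitation phase: train on expert transitions before any self-play.
q = np.zeros((N_STATES, N_ACTIONS))
q_update(q, demo_transitions())
greedy = [int(q[s].argmax()) for s in range(GOAL)]  # policy after imitation
```

After the imitation phase alone, the greedy policy already moves right in every state; in the deep-learning setting of the thesis, the same demonstration data would instead be used to pretrain or seed the replay buffer of a Q-network before self-play begins.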
