
Using a deep Q-learning network and searching for optimal hyperparameters to solve the Lunar Lander problem provided by OpenAI Gym.


shilpakancharla/deep-rl-lunar-lander


Applications of Reinforcement Learning: Lunar Lander Simulation

Abstract

The purpose of the following reinforcement learning experiment is to investigate optimal parameter values for a deep Q-learning network (DQN) on the Lunar Lander problem provided by OpenAI Gym. LunarLander-v2 is an environment with uncertainty, and this investigation explores parameter settings that maximize the mean reward in 400 episodes or fewer. A deep neural network is designed for the agent, and various reinforcement learning parameters are used to carry out the simulation. Using a network with two hidden layers, the agent converged to a mean reward of 200 with epsilon = 0.9, epsilon decay = 0.995, alpha (learning rate) = 0.001, and gamma (discount factor) = 0.99 in a little over 250 episodes. A comparative analysis of the different parameter settings is also performed, and the results and model architecture from this experiment are compared to other similar experiments that employ the DQN method on the Lunar Lander problem.
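For reference, below is a minimal sketch of how such an agent might be set up in Python. The two hidden layers and the hyperparameter values (epsilon, epsilon decay, alpha, gamma) come from the abstract; the choice of PyTorch, the 64-unit layer widths, the Adam optimizer, and the `select_action` helper are illustrative assumptions not stated here.

```python
import random

import gym
import torch
import torch.nn as nn

# Hyperparameters reported in the abstract; layer widths (64) are an
# assumption, since the abstract only states that two hidden layers are used.
EPSILON = 0.9          # initial exploration rate
EPSILON_DECAY = 0.995  # multiplicative decay applied after each episode
ALPHA = 0.001          # learning rate
GAMMA = 0.99           # discount factor

env = gym.make("LunarLander-v2")
n_states = env.observation_space.shape[0]   # 8-dimensional state
n_actions = env.action_space.n              # 4 discrete actions

# Q-network with two hidden layers, mapping a state to one Q-value per action.
q_network = nn.Sequential(
    nn.Linear(n_states, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, n_actions),
)
optimizer = torch.optim.Adam(q_network.parameters(), lr=ALPHA)

def select_action(state, epsilon):
    """Epsilon-greedy action selection over the Q-network's outputs."""
    if random.random() < epsilon:
        return env.action_space.sample()
    with torch.no_grad():
        q_values = q_network(torch.as_tensor(state, dtype=torch.float32))
    return int(q_values.argmax().item())

# Training would regress Q(s, a) toward the TD target
#   r + GAMMA * max_a' Q(s', a')   (just r if the episode terminated),
# decaying epsilon by EPSILON_DECAY after every episode.
```

A complete DQN agent would also include an experience replay buffer and a target network, which are standard components of the method; they are omitted here to keep the sketch short.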
