Udacity-Deep-Reinforcement-Learning-p1-Navigation

DQN implementation for an agent to navigate and collect Bananas

The problem description

Environment: Train an agent to navigate (and collect bananas!) in a large, square world. The environment is based on Unity ML-Agents.

Note: The Unity ML-Agents team frequently releases updated versions of their environment. We are using the v0.4 interface. The project environment provided by Udacity is similar to, but not identical to, the Banana Collector environment on the Unity ML-Agents GitHub page.

The observation space: The state space has 37 dimensions and contains the agent's velocity, along with ray-based perception of objects in the agent's forward direction.

The action space: The agent must learn how to best select actions. Four discrete actions are available, corresponding to:

  • 0 - move forward.
  • 1 - move backward.
  • 2 - turn left.
  • 3 - turn right.
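To make the state/action shapes concrete, here is a minimal interaction loop. The `DummyBananaEnv` class below is a hypothetical stand-in for the Unity wrapper (which is not importable outside the project), used only to illustrate the 37-dimensional state and the four discrete actions:

```python
import random

STATE_SIZE = 37   # velocity + ray-based perception features
ACTION_SIZE = 4   # 0: forward, 1: backward, 2: turn left, 3: turn right

class DummyBananaEnv:
    """Hypothetical stand-in with the same state/action shapes as the real env."""
    def reset(self):
        return [0.0] * STATE_SIZE

    def step(self, action):
        assert action in range(ACTION_SIZE)
        next_state = [random.uniform(-1.0, 1.0) for _ in range(STATE_SIZE)]
        reward = random.choice([-1.0, 0.0, 1.0])  # blue banana / nothing / yellow banana
        done = random.random() < 0.01
        return next_state, reward, done

def run_episode(env, policy, max_steps=300):
    """Roll out one episode and return the undiscounted score."""
    state = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        action = policy(state)
        state, reward, done = env.step(action)
        total_reward += reward
        if done:
            break
    return total_reward

# A random policy as a baseline; a trained DQN agent replaces this.
random_policy = lambda state: random.randrange(ACTION_SIZE)
score = run_episode(DummyBananaEnv(), random_policy)
```

A trained agent simply swaps `random_policy` for an epsilon-greedy policy over its learned Q-values.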

Reward: A reward of +1 is provided for collecting a yellow banana, and a reward of -1 is provided for collecting a blue banana. Thus, the goal of your agent is to collect as many yellow bananas as possible while avoiding blue bananas.

Task (Episodic/Continuous): The task is episodic.

Solution: In order to solve the environment, the agent must get an average score of +13 over 100 consecutive episodes. There are two ways to solve this environment:

  1. Learn from the 37-dimensional state vector, i.e. the velocity along with the ray-based perception of objects around the forward direction.
  2. Learn directly from raw pixels as the state space.

The Navigation.ipynb notebook solves the first version.
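DQN learns from stored transitions rather than from consecutive, correlated experience. The sketch below shows a minimal experience-replay buffer in plain Python; it is illustrative and not necessarily the implementation used in the notebook:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size buffer of (state, action, reward, next_state, done) tuples."""
    def __init__(self, capacity=100_000):
        # deque with maxlen discards the oldest transition once full.
        self.memory = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def sample(self, batch_size=64):
        # Uniform random minibatch, as in vanilla DQN.
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)

buf = ReplayBuffer(capacity=1000)
for i in range(200):
    buf.add([0.0] * 37, i % 4, 0.0, [0.0] * 37, False)
batch = buf.sample(batch_size=64)
```

Sampling uniformly at random breaks the temporal correlation between consecutive transitions, which stabilizes Q-learning with a neural network function approximator.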

Getting started

Installation requirements

  • To begin with, you need to configure a Python 3.6 / PyTorch 0.4.0 environment with the requirements described in the Udacity repository.

  • Then you need to clone this project and make it accessible in your Python environment.

  • For this project, you do not need to install Unity; you only need to download the pre-built environment that matches your operating system.

  • Finally, unzip the environment archive into the project's environment directory and set the path to the UnityEnvironment in the code.
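One way to wire up the last step is to pick the binary path by operating system. The file names below are assumptions based on the usual Udacity archive layout; check them against the archive you actually downloaded:

```python
import platform

# Assumed binary names per OS; verify against your downloaded archive.
ENV_BINARIES = {
    "Darwin": "Banana.app",
    "Linux": "Banana_Linux/Banana.x86_64",
    "Windows": "Banana_Windows_x86_64/Banana.exe",
}

def env_path(system=None):
    """Return the environment binary path for the given (or current) OS."""
    system = system or platform.system()
    return ENV_BINARIES[system]

# In the notebook you would then do something like (requires the v0.4
# unityagents package, so it is left commented out here):
# from unityagents import UnityEnvironment
# env = UnityEnvironment(file_name=env_path())
```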

Instructions

The configuration for the environment, the agent, and the DQN hyperparameters is all in the Navigation.ipynb file.
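For orientation, a typical DQN hyperparameter configuration for this environment might look like the following. These are illustrative values commonly used for the Banana environment, not necessarily the ones set in the notebook:

```python
# Illustrative DQN hyperparameters (assumed values, not copied from the notebook).
config = {
    "buffer_size": 100_000,   # replay buffer capacity
    "batch_size": 64,         # minibatch size sampled for each learning step
    "gamma": 0.99,            # discount factor
    "tau": 1e-3,              # soft-update rate for the target network
    "lr": 5e-4,               # optimizer learning rate
    "update_every": 4,        # learn every N environment steps
    "eps_start": 1.0,         # initial epsilon for epsilon-greedy exploration
    "eps_end": 0.01,          # floor on epsilon
    "eps_decay": 0.995,       # multiplicative epsilon decay per episode
}

def epsilon(n, c=config):
    """Epsilon-greedy exploration rate after n episodes of decay."""
    return max(c["eps_end"], c["eps_start"] * c["eps_decay"] ** n)
```

With these values epsilon starts at 1.0 (fully random actions) and decays toward the 0.01 floor, so the agent keeps a small amount of exploration even late in training.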
