Deep Deterministic Policy Gradient (DDPG) based agent for solving the Reacher environment (Unity), as part of Udacity's Deep Reinforcement Learning Nanodegree, by Sayon Palit
- For this project we work with Unity's Reacher environment (single-agent version).
- In this environment, a double-jointed arm can move to target locations. A reward of +0.1 is provided for each step that the agent's hand is in the goal location. Thus, the goal of the agent is to maintain its position at the target location for as many time steps as possible.
- The observation space consists of 33 variables corresponding to the position, rotation, velocity, and angular velocities of the arm. Each action is a vector of four numbers, corresponding to the torque applied to the two joints. Every entry in the action vector must be a number between -1 and 1 (see the interaction sketch below).
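For orientation, a minimal sketch of interacting with the environment through the `unityagents` Python API (the `file_name` path is an assumption and depends on your OS and download location):

```python
import numpy as np
from unityagents import UnityEnvironment

# file_name is an assumption; point it at the build you downloaded for your OS.
env = UnityEnvironment(file_name='Reacher_Linux/Reacher.x86_64')
brain_name = env.brain_names[0]                # the "brain" controlling the arm

env_info = env.reset(train_mode=True)[brain_name]
state = env_info.vector_observations[0]        # 33-dimensional observation

action = np.clip(np.random.randn(4), -1, 1)    # 4 torques, each clipped to [-1, 1]
env_info = env.step(action)[brain_name]
reward = env_info.rewards[0]                   # +0.1 per step in the goal location
done = env_info.local_done[0]                  # whether the episode has ended

env.close()
```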
- The task is episodic, and in order to solve the environment, the agent must achieve an average score of +30 over 100 consecutive episodes (checked in code as sketched below).
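In code, that solve criterion is usually checked with a rolling window over the last 100 episode scores; a minimal sketch:

```python
from collections import deque
import numpy as np

scores_window = deque(maxlen=100)   # returns of the 100 most recent episodes

def record_episode(score):
    """Append one episode's return; report True once the environment is solved."""
    scores_window.append(score)
    return len(scores_window) == 100 and np.mean(scores_window) >= 30.0
```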
- Watch this YouTube video to see how some researchers were able to train a similar task on a real robot! The accompanying research paper can be found here.
To run the code, follow these steps:
- Create a new conda environment:
  - Linux or Mac:
    ```bash
    conda create --name drlnd python=3.6
    source activate drlnd
    ```
  - Windows:
    ```bash
    conda create --name drlnd python=3.6
    activate drlnd
    ```
- Perform a minimal install of OpenAI Gym:
  - If using Windows:
    - download swig for Windows and add it to the Windows `PATH`
    - install the Microsoft Visual C++ Build Tools
  - Then run:
    ```bash
    pip install gym
    ```
- Download or clone this repository:
  ```bash
  git clone https://github.com/sayonpalit2599/p2_continous_control/
  ```
- Install the dependencies under the `python/` folder:
  ```bash
  cd python
  pip install .
  ```
- Create an IPython kernel for the `drlnd` environment:
  ```bash
  python -m ipykernel install --user --name drlnd --display-name "drlnd"
  ```
- Download the Unity Environment build specific to your operating system.
- Start Jupyter Notebook from the root of this repository:
  ```bash
  jupyter notebook
  ```
- Once started, change the kernel through the menu: `Kernel > Change kernel > drlnd`.
- If necessary, inside the `.ipynb` files, change the path to the Unity environment appropriately, as in the sketch below.
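For example, the constructor call near the top of each notebook would change to one of the following (the exact paths are assumptions and depend on where you unzipped the build):

```python
from unityagents import UnityEnvironment

# Uncomment the line matching your OS and adjust the path as needed.
# env = UnityEnvironment(file_name='Reacher_Windows_x86_64/Reacher.exe')  # Windows
# env = UnityEnvironment(file_name='Reacher.app')                         # macOS
env = UnityEnvironment(file_name='Reacher_Linux/Reacher.x86_64')          # Linux
```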
- Follow the instructions in `Continuous_Control.ipynb` to train the agent, and use `Run_Agent.ipynb` to see the performance of the trained agent. A sketch of a typical training loop follows.
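A typical DDPG training loop over this environment looks roughly like the following (a hedged sketch, not the notebook's exact code; `agent` with `act`/`step` methods is an assumed interface for the DDPG agent):

```python
import numpy as np
from collections import deque

def ddpg(env, agent, brain_name, n_episodes=1000, max_t=1000):
    """Hypothetical training loop; returns the list of episode scores."""
    scores, scores_window = [], deque(maxlen=100)
    for i_episode in range(1, n_episodes + 1):
        env_info = env.reset(train_mode=True)[brain_name]
        state, score = env_info.vector_observations[0], 0.0
        for _ in range(max_t):
            action = agent.act(state)                    # actor policy + exploration noise
            env_info = env.step(action)[brain_name]
            next_state = env_info.vector_observations[0]
            reward, done = env_info.rewards[0], env_info.local_done[0]
            agent.step(state, action, reward, next_state, done)  # store experience and learn
            state, score = next_state, score + reward
            if done:
                break
        scores.append(score)
        scores_window.append(score)
        if len(scores_window) == 100 and np.mean(scores_window) >= 30.0:
            print(f'Solved in {i_episode} episodes; average score {np.mean(scores_window):.2f}')
            break
    return scores
```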
- It is recommended to use a GPU to run this code; PyTorch can fall back to the CPU as shown below.
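In PyTorch, the standard way to use a GPU when present and fall back to the CPU otherwise:

```python
import torch

# Use CUDA when a GPU is available; otherwise run on the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
```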
- The current model solves the environment in approximately 460 episodes on average on an NVIDIA GTX 1050 Ti and an Intel Core i7-8750H.
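For reference, the core DDPG learning step applied to each sampled minibatch looks roughly like this (a hedged sketch under assumed network sizes and hyperparameter names `GAMMA`/`TAU`, not the repository's exact code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

GAMMA, TAU = 0.99, 1e-3          # assumed discount factor and soft-update rate

class Actor(nn.Module):
    """Maps a 33-dim state to a 4-dim action in [-1, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(33, 128), nn.ReLU(),
                                 nn.Linear(128, 4), nn.Tanh())
    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Maps a (state, action) pair to a scalar Q-value."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(33 + 4, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=1))

actor, actor_target = Actor(), Actor()
critic, critic_target = Critic(), Critic()
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def learn(states, actions, rewards, next_states, dones):
    """One DDPG update on a minibatch of experience tensors."""
    # Critic: regress Q(s, a) toward r + gamma * Q'(s', mu'(s')) for non-terminal steps.
    with torch.no_grad():
        q_next = critic_target(next_states, actor_target(next_states))
        q_targets = rewards + GAMMA * q_next * (1 - dones)
    critic_loss = F.mse_loss(critic(states, actions), q_targets)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: ascend the critic's estimate of the actor's own actions.
    actor_loss = -critic(states, actor(states)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Slowly track the local networks with the targets (soft update).
    for target, local in ((actor_target, actor), (critic_target, critic)):
        for t, p in zip(target.parameters(), local.parameters()):
            t.data.copy_(TAU * p.data + (1.0 - TAU) * t.data)
```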