
Commit

update readme
wisnunugroho21 committed Sep 13, 2020
1 parent 2902ade commit 564e162
Showing 1 changed file with 9 additions and 3 deletions.
12 changes: 9 additions & 3 deletions README.md
@@ -1,6 +1,3 @@
## Version 2
Version 2 is not ready yet. You can use the PyTorch version of PPO & PPO_continous, but the rest is still unfinished. Currently working on PPO_RND.

# PPO-RND

Simple code to demonstrate Deep Reinforcement Learning using Proximal Policy Optimization and Random Network Distillation in TensorFlow 2 and PyTorch
@@ -65,6 +62,15 @@ RND incentivizes visiting unfamiliar states by measuring how hard it is to predict the output of a fixed random neural network on visited states

You can read the full details of RND [here](https://openai.com/blog/reinforcement-learning-with-prediction-based-rewards/)
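
Below is a minimal PyTorch sketch of the RND idea described above: a predictor network is trained to match the output of a fixed, randomly initialized target network, and the prediction error serves as the intrinsic reward. The network sizes, `STATE_DIM`, and the function names are illustrative assumptions, not code taken from this repository.

```python
import torch
import torch.nn as nn

STATE_DIM = 8  # illustrative observation size (e.g. LunarLander), not from this repo

def make_net() -> nn.Module:
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, 64))

target_net = make_net()      # fixed, randomly initialized network
predictor_net = make_net()   # trained to predict the target's output

for p in target_net.parameters():
    p.requires_grad_(False)  # the target network is never updated

def intrinsic_reward(state: torch.Tensor) -> torch.Tensor:
    # The harder it is to predict the random target's output on a state,
    # the more novel that state is, and the larger the intrinsic reward.
    with torch.no_grad():
        target_feat = target_net(state)
    pred_feat = predictor_net(state)
    return (pred_feat - target_feat).pow(2).mean(dim=-1)

# The same squared error is also used as the loss for training predictor_net,
# so frequently visited states gradually stop generating intrinsic reward.
```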

## Truly Proximal Policy Optimization

Proximal policy optimization (PPO) is one of the most successful deep reinforcement learning methods, achieving state-of-the-art performance across a wide range of challenging tasks. However, its optimization behavior is still far from fully understood. The paper shows that PPO can neither strictly restrict the likelihood ratio, as it attempts to do, nor enforce a well-defined trust-region constraint, which means it may still suffer from performance instability. To address this issue, the authors present an enhanced PPO method named Truly PPO. It makes two critical improvements: 1) it adopts a new clipping function that supports a rollback behavior to restrict the difference between the new policy and the old one; 2) the triggering condition for clipping is replaced with a trust-region-based one, so that optimizing the resulting surrogate objective provides guaranteed monotonic improvement of the ultimate policy performance. By adhering more closely to keeping the algorithm proximal - confining the policy within the trust region - the new algorithm improves on the original PPO in both sample efficiency and performance.

You can read the full details of Truly PPO [here](https://arxiv.org/abs/1903.07940)
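
Below is a minimal PyTorch sketch of the Truly PPO surrogate with the trust-region trigger and rollback described above, following the linked paper; the function name and the `kl_range` / `rollback_coef` values are illustrative assumptions, not code taken from this repository.

```python
import torch

def truly_ppo_surrogate(logprob, old_logprob, advantage, kl,
                        kl_range=0.03, rollback_coef=5.0):
    # ratio = pi_new(a|s) / pi_old(a|s)
    ratio = torch.exp(logprob - old_logprob)

    # Plain policy-gradient surrogate, as in PPO before any clipping.
    surrogate = ratio * advantage

    # Trust-region trigger: the rollback term is used only when the KL divergence
    # between the old and new policy exceeds kl_range AND the surrogate is still
    # above its value at ratio = 1, i.e. the update keeps pushing the policy
    # further away from the old one.
    rollback = ratio * advantage - rollback_coef * kl
    use_rollback = (kl >= kl_range) & (ratio * advantage >= advantage)

    # Unlike PPO's clipping, which only flattens the gradient outside the clip
    # range, the rollback term actively penalizes further divergence.
    objective = torch.where(use_rollback, rollback, surrogate)
    return objective.mean()  # maximize this (or minimize its negative)
```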

## Version 2
Version 2 is not ready yet. You can use the PyTorch version of PPO & PPO_continous, but the rest is still unfinished. Currently working on PPO_RND.

## Result

### LunarLander using PPO (Non RND)

