[RLlib] Some Docs fixes (2). (#26265)
christy committed Jul 5, 2022
1 parent 7ea9d91 commit 5b44afe
Showing 3 changed files with 4 additions and 2 deletions.
2 changes: 1 addition & 1 deletion doc/source/rllib/core-concepts.rst
@@ -32,7 +32,7 @@ An environment in RL is the agent's world, it is a simulation of the problem to
 An RLlib environment consists of:

 1. all possible actions (**action space**)
-2. a complete omniscient description of the environment, nothing hidden (**state space**)
+2. a complete description of the environment, nothing hidden (**state space**)
 3. an observation by the agent of certain parts of the state (**observation space**)
 4. **reward**, which is the only feedback the agent receives per action.
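The four components listed above can be sketched as a toy environment in plain Python. This is a hedged illustration only: the `CoinFlipEnv` class below is invented for this sketch and is not part of RLlib (real RLlib environments typically subclass `gym.Env`).

```python
import random

class CoinFlipEnv:
    """Toy environment illustrating the four RL components.

    Hypothetical example, not RLlib's API; real environments
    usually subclass gym.Env instead.
    """

    def __init__(self):
        self.action_space = [0, 1]  # 1. all possible actions (action space)
        self.state = None           # 2. full description, nothing hidden (state space)

    def reset(self):
        self.state = random.choice([0, 1])
        return self._observe()

    def _observe(self):
        # 3. what the agent sees (observation space); here it sees the whole state
        return self.state

    def step(self, action):
        # 4. reward is the only feedback per action: +1 for matching the state
        reward = 1.0 if action == self.state else 0.0
        self.state = random.choice([0, 1])
        return self._observe(), reward, False, {}

env = CoinFlipEnv()
obs = env.reset()
obs, reward, done, info = env.step(0)
```

In this toy case the observation equals the full state; in partially observable problems the observation exposes only part of it.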

2 changes: 1 addition & 1 deletion doc/source/rllib/index.rst
@@ -66,7 +66,7 @@ To be able to run our Atari examples, you should also install:
 After these quick pip installs, you can start coding against RLlib.

-Here is an example of running a PPO Trainer on the "`Taxi domain <https://www.gymlibrary.ml/environments/toy_text/taxi/>`_"
+Here is an example of running a PPO Trainer on the `Taxi domain <https://www.gymlibrary.ml/environments/toy_text/taxi/>`_
 for a few training iterations, then perform a single evaluation loop
 (with rendering enabled):
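The train-then-evaluate pattern that example refers to can be sketched schematically. The `ToyTrainer` class below is a stand-in invented here for illustration; it is not RLlib's PPO Trainer, only the control flow around it.

```python
class ToyTrainer:
    """Stand-in for an RLlib Trainer; invented for illustration only."""

    def __init__(self):
        self.iterations = 0

    def train(self):
        # One training iteration; real trainers return a rich result dict.
        self.iterations += 1
        return {"training_iteration": self.iterations}

    def evaluate(self):
        # A single evaluation loop; rendering would happen inside here.
        return {"evaluation": {"episodes_this_iter": 1}}

trainer = ToyTrainer()
for _ in range(3):                  # a few training iterations
    result = trainer.train()
eval_result = trainer.evaluate()    # then one evaluation pass
```

The real example in the docs follows the same shape: build the trainer, call `train()` a few times, then run one evaluation.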

2 changes: 2 additions & 0 deletions doc/source/rllib/rllib-env.rst
@@ -13,6 +13,8 @@ RLlib works with several different types of environments, including `OpenAI Gym

 .. image:: images/rllib-envs.svg

+.. _configuring-environments:

 Configuring Environments
 ------------------------
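
The label this hunk adds lets other pages in the docs cross-reference the section via Sphinx's ``:ref:`` role. A hypothetical usage (not part of the commit) would look like:

```rst
See :ref:`configuring-environments` for details on registering
and configuring environments in RLlib.
```

Unlike a hard-coded link, a ``:ref:`` target survives if the section is later moved or renamed.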

