Releases: RDLLab/posggym
v0.6.0
Major release, but mostly just minor changes and improvements.
- Fixed some awkward behavior in the `Driving-v1` and `DrivingGen-v1` shortest path policies
- Made `info` dictionary typing less strict: it can now contain no entries, or entries for keys other than agent IDs
- Fixed a bug in the `sample_initial_agent_state` function of the `PursuitEvasion-v1` environment
- Fixed docstring formatting across the entire package
- Updated documentation
v0.5.1
Minor release fixing a bug with `setup.py` and a bug with the newer agent models being saved as CUDA tensors.
v0.5.0
This release adds some more environments and agents, fixes some bugs, and tweaks some of the existing environments.
Major changes include:
- Added a release notes section to the docs
- Added the `AgentEnvWrapper` class, which can be used to incorporate a `posggym.agents` policy as part of an environment
- Added the `StackEnv` wrapper for converting a `posggym.Env` to accept and output stacked arrays instead of dictionaries
- Added the `CooperativeReaching-v0` grid-world environment along with heuristic policies
- Updated the `posggym.agents.Policy` API to make it less confusing
- Updated the `Driving` and `DrivingGen` environments to `v1`, which includes a few bug fixes and a number of improvements (`Driving-v0` is no longer supported, including the `Driving-v0` agent policies)
- Added shortest-path-based policies for all `Driving-v1` and `DrivingGen-v1` environments
- Updated RL policies for `Driving-v1`
- Updated the `LevelBasedForaging` environment to `v3`, which includes a number of small improvements, mainly around removing unused parameters (`LevelBasedForaging-v2` is no longer supported, including the `LevelBasedForaging-v2` agent policies)
- Updated heuristic policies for the `LevelBasedForaging-v3` grid-world environment and added some RL policies for two scenarios
- Updated the `PursuitEvasion` environment to `v1`, which removes unused parameters (`PursuitEvasion-v0` is no longer supported, including the `PursuitEvasion-v0` agent policies)
- Updated agents for `PursuitEvasion-v1`
- Updated agents for `PredatorPrey-v0`, including adding some new RL policies and heuristic policies
- Tested agent diversity for most of the grid-world environments
- Cleaned up docstrings of a number of classes
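To illustrate the dict-to-array conversion that a wrapper like `StackEnv` performs, here is a minimal standalone sketch. The `stack_agent_dict` helper and the agent IDs below are hypothetical illustrations, not part of the posggym API:

```python
import numpy as np

def stack_agent_dict(per_agent, agent_ids):
    """Stack per-agent values (a dict keyed by agent ID) into a single array.

    Illustrative only: posggym environments normally return observations,
    rewards, etc. as dicts keyed by agent ID; a stacking wrapper converts
    these to arrays with one row per agent, in a fixed agent order.
    """
    return np.stack([per_agent[i] for i in agent_ids])

agent_ids = ["0", "1"]
obs = {"0": np.zeros(3), "1": np.ones(3)}
stacked = stack_agent_dict(obs, agent_ids)
print(stacked.shape)  # (2, 3)
```

Stacked arrays like this are often more convenient than dictionaries when feeding observations into batched neural-network policies.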
v0.4.0
This is the first release of the full POSGGym (environments + agents).
Major changes include:
- Integration of the `posggym.agents` library (migrated from a separate repo/library), with agents provided for all continuous environments and 4/7 grid-world environments
- Addition of four continuous environments
- Major updates to documentation
- Improved support for installation via pip
- Many other improvements
v0.3.2
- Lowered the gymnasium dependency version to `>=0.26` so it is compatible with the latest rllib version
v0.3.1
Skipping v0.3.0 due to a typo.
- Significant additions to the documentation
- Changed registered environments to have only a single default variation each; users can pass arguments to `posggym.make` to change the parameters of the environment
- Updated wrappers, including the record video, rllib, and pettingzoo wrappers
- Updated all grid-world environments to use pygame rendering
- Updated the keyboard agent to support 1D continuous actions
- Migrated to ruff for linting
- Made all environments observation-first (removed the `observation_first` attribute from envs and models)
- Added pre-commit support
- Various other improvements, bug fixes, and tests
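The single-default-plus-arguments pattern can be sketched as follows. This is NOT posggym's actual registry code; the class, registry, and parameter names (`num_agents`, `grid_size`) are hypothetical stand-ins used only to show how `make`-style kwargs forwarding works:

```python
# Hypothetical environment class; parameter names are illustrative only.
class DrivingEnv:
    def __init__(self, num_agents=2, grid_size=7):
        self.num_agents = num_agents
        self.grid_size = grid_size

# A registry maps each environment ID to a single default environment class.
REGISTRY = {"Driving-v1": DrivingEnv}

def make(env_id, **kwargs):
    # Look up the registered environment and forward any user-supplied
    # keyword arguments to its constructor, overriding the defaults.
    return REGISTRY[env_id](**kwargs)

env = make("Driving-v1", num_agents=4)
print(env.num_agents, env.grid_size)  # 4 7
```

This keeps the registry small (one entry per environment) while still letting users configure each environment at creation time.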
v0.2.1
Patch update
- Updated install instructions to reflect the availability of posggym on PyPI
v0.2.0
Major update
- Updated environment and model APIs to be more in line with Gymnasium and PettingZoo
- Added extensive testing
- Updated all environments
- Added documentation
v0.1.0
First release, with classic, grid-world, and level-based foraging (LBF) environments.
This is the release used for the BA-POSGMCP paper.
POSGGym has since been updated, and this version is deprecated.