Welcome to CompressedBeliefMDPs.jl! This package is part of the POMDPs.jl ecosystem and takes inspiration from the paper "Exponential Family PCA for Belief Compression in POMDPs". It provides a general framework for applying belief compression in large POMDPs, with generic compression, sampling, and planning algorithms.
You can install CompressedBeliefMDPs.jl with Julia's package manager. Open the Julia REPL, press `]` to enter package-manager mode, and run:

```julia-repl
pkg> add CompressedBeliefMDPs
```
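Equivalently, for non-interactive setups (scripts, CI), you can install the package through the Pkg API:

```julia
# Install CompressedBeliefMDPs without entering package-manager mode.
import Pkg
Pkg.add("CompressedBeliefMDPs")
```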
Using belief compression is easy. Simply pick a `Sampler`, a `Compressor`, and a base `Policy`, and then use the standard POMDPs.jl interface.
```julia
using POMDPs, POMDPTools, POMDPModels
using CompressedBeliefMDPs

pomdp = BabyPOMDP()
compressor = PCACompressor(1)
updater = DiscreteUpdater(pomdp)
sampler = BeliefExpansionSampler(pomdp)
solver = CompressedBeliefSolver(
    pomdp;
    compressor=compressor,
    sampler=sampler,
    updater=updater,
    verbose=true,
    max_iterations=100,
    n_generative_samples=50,
    k=2
)
policy = solve(solver, pomdp)
```
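Once solved, the policy can be evaluated with the standard POMDPs.jl simulation tools, just as the later examples do. A minimal sketch, continuing from the `pomdp` and `policy` defined above:

```julia
using POMDPTools

# Estimate the discounted return of the compressed-belief policy
# by rolling out one simulated episode.
rs = RolloutSimulator(max_steps=50)
r = simulate(rs, pomdp, policy)
```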
This example demonstrates CompressedBeliefMDPs.jl in a continuous setting with the `LightDark1D` POMDP. It combines a particle filter for belief updating with Monte Carlo tree search (MCTS) as the base solver. While compressing a 1D state space is a trivial toy problem, the same architecture scales readily to larger POMDPs with continuous state and action spaces.
```julia
using POMDPs, POMDPModels, POMDPTools
using ParticleFilters
using MCTS
using CompressedBeliefMDPs

pomdp = LightDark1D()
pomdp.movement_cost = 1
base_solver = MCTSSolver(n_iterations=10, depth=50, exploration_constant=5.0)
updater = BootstrapFilter(pomdp, 100)
solver = CompressedBeliefSolver(
    pomdp,
    base_solver;
    updater=updater,
    sampler=PolicySampler(pomdp; updater=updater)
)
policy = solve(solver, pomdp)
rs = RolloutSimulator(max_steps=50)
r = simulate(rs, pomdp, policy)
```
In this example, we tackle a more realistic scenario with the `TMaze` POMDP, which has 123 states. To handle the larger state space efficiently, we employ a variational autoencoder (VAE) to compress the belief simplex. Because the VAE learns a compact representation of the belief state, each Bellman update can focus computation on the relevant (compressed) belief states.
```julia
using POMDPs, POMDPModels, POMDPTools
using CompressedBeliefMDPs

pomdp = TMaze(60, 0.9)
solver = CompressedBeliefSolver(
    pomdp;
    compressor=VAECompressor(123, 6; hidden_dim=10, verbose=true, epochs=2),
    sampler=PolicySampler(pomdp, n=500),
    verbose=true,
    max_iterations=1000,
    n_generative_samples=30,
    k=2
)
policy = solve(solver, pomdp)
rs = RolloutSimulator(max_steps=50)
r = simulate(rs, pomdp, policy)
```