
Some questions or bugs? #8

Open
densechen opened this issue Jun 10, 2020 · 1 comment

Comments

@densechen

Hi,
Thanks for providing such nice code.
But when I try to play with "lets-do-irl/mujoco/vail", this code in train_model.py confuses me:
```python
vdb_loss = criterion(learner, torch.ones((states.shape[0], 1))) + \
           criterion(expert, torch.zeros((demonstrations.shape[0], 1))) + \
           beta * bottleneck_loss
```
I think this should be:
```python
vdb_loss = criterion(learner, torch.zeros((states.shape[0], 1))) + \
           criterion(expert, torch.ones((demonstrations.shape[0], 1))) + \
           beta * bottleneck_loss
```
In other words, the learner should be pushed toward zeros and the expert toward ones, shouldn't it? Or are both fine?
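
For reference, here is a minimal, self-contained sketch of the two labelings side by side. It only assumes, as the snippets above suggest, that `criterion` is a binary cross-entropy loss and that `learner`/`expert` are the discriminator's sigmoid outputs; the tensors below are random stand-ins, not the repo's actual data:

```python
import torch
import torch.nn as nn

criterion = nn.BCELoss()

# Random stand-ins for the discriminator's sigmoid outputs (values in (0, 1)).
learner = torch.rand(5, 1)   # D(states sampled by the policy)
expert = torch.rand(3, 1)    # D(expert demonstrations)

# Labeling used in train_model.py: learner pushed toward 1, expert toward 0.
loss_repo = criterion(learner, torch.ones(5, 1)) + criterion(expert, torch.zeros(3, 1))

# Labeling suggested in this issue: learner pushed toward 0, expert toward 1.
loss_swapped = criterion(learner, torch.zeros(5, 1)) + criterion(expert, torch.ones(3, 1))

# Both objectives train the discriminator to separate the two distributions;
# the targets only decide which side ends up near 1.
print(loss_repo.item(), loss_swapped.item())
```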

By the way, the code in the same file:
```python
beta = max(0, beta + args.alpha_beta * bottleneck_loss)
```
If beta is set to beta + args.alpha_beta * bottleneck_loss, then at the next backward pass an error is reported about beta being modified by an in-place operation.
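
In case it helps, one common way around that error is to keep beta out of the autograd graph by updating it from the scalar value of bottleneck_loss rather than the tensor itself. A minimal sketch with hypothetical stand-in values (alpha_beta, i_c, and the encoder output below are not the repo's actual numbers):

```python
import torch

beta, alpha_beta, i_c = 0.0, 1e-4, 0.5       # hypothetical values
mu = torch.randn(8, 4, requires_grad=True)   # stand-in for the encoder output
bottleneck_loss = (mu ** 2).mean() - i_c     # stand-in for the real KL-based term

# .item() extracts a plain Python float, so beta never becomes part of the
# graph and the next backward() cannot complain about it.
beta = max(0.0, beta + alpha_beta * bottleneck_loss.item())
```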

@zlpiscoming

1 or 0 does not matter. The goal of the discriminator is to distinguish the expert's trajectories from the learner's, so which side is labeled 1 does not matter.
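
One caveat worth adding (this is just the standard GAIL/VAIL setup, not something specific to this repo): the reward the policy receives from the discriminator has to use the same convention as the labels. A tiny sketch, with d_out as a hypothetical sigmoid output of the discriminator on a policy-visited state:

```python
import torch

d_out = torch.tensor(0.9)  # hypothetical discriminator output for a policy state

# With the repo's labeling (learner -> 1, expert -> 0), a matching reward is:
reward_learner_is_one = -torch.log(d_out)

# With the swapped labeling (learner -> 0, expert -> 1), the matching reward is:
reward_expert_is_one = -torch.log(1.0 - d_out)
```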
