[question] LSTM for A2C with discrete action space #814
Comments
I have worked on this and have a good idea of how to solve it, but because of my full-time job I don't have the time or energy to finish it. It would be great if we could get a Google Summer of Code project on this; I'd be glad to be the mentor.
Thank you very much for the reply and the proposal; I sincerely appreciate it. I am not sure about my summer schedule yet, but it would be a great opportunity if we could do a project on this. I have also sent you a LinkedIn request, if you don't mind connecting. Again, thank you very much!
Hi, I am relatively new to Tianshou and RL, and I have been trying to apply an LSTM to the A2C algorithm with a discrete action space. The documentation says that to use a recurrent policy we need RecurrentActorProb, but that seems to be for continuous action spaces only.
Is there a way to get it working on a discrete action space? I tried using Recurrent + Actor, but that does not seem to work.
Also, I saw several open issues regarding potential bugs in the RNN functionality in Tianshou. Have those been fixed already?
Thank you very much!
This is the error I got when I tried to use the Recurrent feature-extraction net with Actor (for a discrete action space):
I apologize if the question seems rudimentary, but I am not quite sure how to get this to work.
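For readers hitting the same question: the idea being asked about can be sketched in plain PyTorch, independent of Tianshou's actual classes. This is a minimal, hypothetical recurrent actor for a discrete action space (an LSTM feature extractor feeding a linear head that emits action logits); the class and parameter names here are illustrative, not Tianshou's API.

```python
import torch
import torch.nn as nn


class RecurrentDiscreteActor(nn.Module):
    """Hypothetical sketch: LSTM feature extractor + categorical policy head."""

    def __init__(self, obs_dim: int, act_dim: int, hidden_size: int = 128):
        super().__init__()
        # LSTM consumes observation sequences of shape (batch, seq_len, obs_dim)
        self.lstm = nn.LSTM(obs_dim, hidden_size, batch_first=True)
        # Linear head maps the last hidden feature to one logit per discrete action
        self.head = nn.Linear(hidden_size, act_dim)

    def forward(self, obs, state=None):
        # state is the optional (h, c) pair carried across rollout steps
        features, state = self.lstm(obs, state)
        logits = self.head(features[:, -1])       # use the last timestep's features
        probs = torch.softmax(logits, dim=-1)     # categorical action distribution
        return probs, state


# Usage: a batch of 4 sequences, 8 steps each, 10-dim observations, 3 actions
actor = RecurrentDiscreteActor(obs_dim=10, act_dim=3)
obs = torch.randn(4, 8, 10)
probs, hidden = actor(obs)
```

The key design point for A2C-style training is returning the recurrent state alongside the action distribution, so the hidden state can be threaded through consecutive environment steps during rollout collection.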