Add functorch improvements to inner-loop algorithms #331
Comments
I'm not sure batching tasks will be very helpful, because we're already running OOM when looping through individual tasks with MAML. I'll leave this open for now, and we can see if there's more interest in the future.
With PyTorch's latest update, the new functorch library introduces composable function transforms for "efficiently batching together tasks in the inner-loop of MAML".
There seems to be a new way to vectorise gradient computation over a batch of tasks during the adaptation phase. I will try to look into this when I have some time, and maybe submit a PR. Unless someone already knows how to do this? :)
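For reference, here is a minimal sketch of the idea. It is written against the `torch.func` API (into which functorch was later folded in PyTorch 2.0); the toy model, the `adapt` helper, and the hyperparameters are illustrative assumptions, not part of this project's code. The key point is that `grad` computes per-task inner-loop gradients functionally, and `vmap` batches the adaptation step over the task dimension instead of a Python loop:

```python
import torch
from torch.func import functional_call, grad, vmap

# Toy regression model (illustrative; any nn.Module works the same way).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 1),
)
params = dict(net.named_parameters())

def inner_loss(params, x, y):
    # Functional forward pass: the module is used as a stateless function
    # of its parameters, which is what grad/vmap require.
    pred = functional_call(net, params, (x,))
    return torch.nn.functional.mse_loss(pred, y)

def adapt(params, x, y, lr=0.1):
    # One inner-loop (MAML adaptation) step for a single task.
    grads = grad(inner_loss)(params, x, y)
    return {k: p - lr * grads[k] for k, p in params.items()}

# Support data for a batch of tasks: [num_tasks, shots, feature_dim].
num_tasks, shots = 8, 5
x_support = torch.randn(num_tasks, shots, 1)
y_support = torch.randn(num_tasks, shots, 1)

# Vectorise adaptation over tasks: parameters are shared (in_dims=None),
# data is batched along the task dimension (in_dims=0).
adapted = vmap(adapt, in_dims=(None, 0, 0))(params, x_support, y_support)

# Each adapted parameter now carries a leading task dimension, and the
# query losses for all tasks can be computed in one vmapped call too.
x_query = torch.randn(num_tasks, shots, 1)
y_query = torch.randn(num_tasks, shots, 1)
query_losses = vmap(inner_loss)(adapted, x_query, y_query)
```

Whether this actually reduces memory pressure compared with the per-task loop is an open question (see the comment above about OOM); it mainly trades Python-loop overhead for a batched computation.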