Keras 3: Streamlined Backend #159
base: dev
Conversation
@LarsKue Thank you so much! This already looks amazing! Could you perhaps add a simple, fully runnable example here for people to get started playing around with it? It is kind of there above, but I think it would make things easier to have one chunk of example code to copy and edit from. Everyone, please try out the new interface and tell us what you think!
@paul-buerkner Yes, I am working on it. I hope to have one ready today.
Great work!! 👏 I'll write down some thoughts on the installation process. Those don't need any changes in the streamlined codebase but are just reminders for our future selves shortly before the release.
…r outside parallelization this is technically faster, since environment setup can be parallelized as well, but requires passing the `--parallel auto` flag to tox
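For reference, `--parallel auto` (short form `tox -p auto`) is tox's built-in parallel mode, which runs the listed environments concurrently, including their setup. A minimal `tox.ini` sketch to illustrate; the environment names and dependencies here are assumptions, not the project's actual configuration:

```ini
; Hypothetical tox.ini sketch -- env names and deps are illustrative only.
[tox]
env_list = py310, py311, lint

[testenv]
deps = pytest
commands = pytest

; Run all environments concurrently (setup included):
;   tox --parallel auto
```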
Fixed bug that attempted to reshape over None dimension when training offline. Updated build() for both modules. Condensed comments in lstnet.
Added new kwargs retrieval method in lstnet.py and skip_gru.py
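The commit above mentions a kwargs retrieval method but does not show it. One common pattern this could refer to is a layer remembering its constructor kwargs so they can be retrieved later, e.g. for `get_config()`-style serialization. The sketch below is a guess at that pattern; `LSTNetBlock` and `get_init_kwargs` are illustrative names, not the actual code in `lstnet.py` or `skip_gru.py`:

```python
# Hypothetical sketch of a kwargs-retrieval pattern: the block stores every
# constructor argument so it can report them back later (useful for
# serialization or for forwarding kwargs to sub-layers).
# Names here are assumptions, not BayesFlow's API.

class LSTNetBlock:
    def __init__(self, units=64, skip_steps=4, activation="relu", **kwargs):
        # remember all constructor arguments, including extras
        self._init_kwargs = {"units": units, "skip_steps": skip_steps,
                             "activation": activation, **kwargs}
        self.units = units
        self.skip_steps = skip_steps
        self.activation = activation

    def get_init_kwargs(self):
        """Return a copy of the kwargs this block was built with."""
        return dict(self._init_kwargs)


block = LSTNetBlock(units=128, dropout=0.1)
print(block.get_init_kwargs())
# {'units': 128, 'skip_steps': 4, 'activation': 'relu', 'dropout': 0.1}
```

Returning a copy keeps callers from mutating the stored defaults by accident.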
…3/BayesFlow into streamlined-backend
Tuned default arguments for both LSTNet and SkipGRU. Condensed arithmetic in SkipGRU.
…3/BayesFlow into streamlined-backend
Also rename FunctionalSimulator to LambdaSimulator
This makes the implementation more explicit, which is easier to debug, but also means a little more code clutter. All in all, I think this is better.
…mi-dynamic type checkers
Work In Progress
This PR is still in progress. Please discuss open issues and raise possible concerns below.
Summary
- `Prior` and other distributions
- `Amortizer`s are now Keras 3 Models, which allows backend-agnostic training
- `Configurator` merged into `Amortizer`
- `Amortizer`s now consume a `Dataset` object that takes care of data loading in multiple worker processes
- `Trainer` merged into `Dataset`
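The restructuring described in the summary can be illustrated with a small, backend-free sketch: the amortizer owns its configuration step (the old `Configurator`), and a `Dataset` object yields ready-to-train batches. Every name and signature below is a hypothetical illustration of that shape, not the actual BayesFlow or Keras 3 API:

```python
# Illustrative sketch only -- not the real BayesFlow interface.
import random

class Dataset:
    """Yields batches from a simulator; stands in for the new Dataset object."""
    def __init__(self, simulator, batch_size=32):
        self.simulator = simulator
        self.batch_size = batch_size

    def __iter__(self):
        while True:
            yield [self.simulator() for _ in range(self.batch_size)]

class Amortizer:
    """The configuration step (old Configurator) now lives on the amortizer."""
    def configure(self, raw_batch):
        # reshape raw simulator output into named model inputs
        return {"params": [p for p, _ in raw_batch],
                "obs": [x for _, x in raw_batch]}

    def train_step(self, raw_batch):
        batch = self.configure(raw_batch)
        # ...a real implementation would compute a loss and apply gradients
        # through the active Keras 3 backend here...
        return {"batch_size": len(batch["params"])}

simulator = lambda: (random.gauss(0, 1), random.gauss(0, 1))
dataset = Dataset(simulator, batch_size=8)
amortizer = Amortizer()
metrics = amortizer.train_step(next(iter(dataset)))
print(metrics)  # {'batch_size': 8}
```

The point of the design is that the user no longer wires a separate configurator and trainer together: the amortizer configures its own inputs, and the dataset owns data loading.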
In Progress
Postponed
We should probably do these things after merging with `dev` and before merging with `main`:
- Update `README.md` with a workflow

In Discussion
- `WorkFlow` (name WIP) object that encapsulates both amortizer and dataset for easier post-processing and model sharing

Dropped
Reason: We should enforce a single way to do things. Also, poor support with pure JAX.
Reason: Too restrictive for data structure.
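For context on the `WorkFlow` idea still under discussion above, here is a hypothetical sketch of an object bundling an amortizer and a dataset so that training, post-processing, and sharing all go through one handle. All names and methods are assumptions, not part of the streamlined codebase:

```python
# Purely illustrative sketch of the proposed WorkFlow object (name WIP).
from dataclasses import dataclass, field

@dataclass
class WorkFlow:
    amortizer: object
    dataset: object
    history: list = field(default_factory=list)

    def fit(self, steps=1):
        """Run a few training steps, recording metrics for post-processing."""
        batches = iter(self.dataset)
        for _ in range(steps):
            self.history.append(self.amortizer.train_step(next(batches)))
        return self

# Toy stand-ins so the sketch runs on its own:
class ToyAmortizer:
    def train_step(self, batch):
        return {"loss": float(len(batch))}

wf = WorkFlow(ToyAmortizer(), dataset=[[1, 2, 3]])
wf.fit(steps=1)
print(wf.history)  # [{'loss': 3.0}]
```

Bundling both pieces would make it natural to pickle or share a single object after training, which seems to be the motivation stated in the bullet above.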