FEA Implement generator unordered parameter #1463
Conversation
Co-authored-by: Thomas Moreau <[email protected]>
Codecov Report — Coverage Diff:

##           master    #1463      +/-   ##
==========================================
- Coverage   94.93%   94.85%   -0.08%
==========================================
  Files          45       45
  Lines        7515     7546      +31
==========================================
+ Hits         7134     7158      +24
- Misses        381      388       +7
Force-pushed from 7bb322f to 479e10f
Force-pushed from 479e10f to 0d1d076
…tor_unordered advantages
I don't think the failure on pypy is related to this PR, and it's benign.
There are two warnings making the HTML rendering fail on the CI: could you please fix those, to make it possible to review the rendered HTML for the examples expanded in this PR?
This looks great but I will finalize the review once the example is running on the CI.
…aways with reasonably fast tasks
The example is running now. The pypy test for backend abortion fails consistently on this PR though; something new in this PR has probably been delaying backend abortion in pypy to the point that this test now fails 🤔 edit: what seems to happen is that the
NB: the current pipeline issues are not related to the latest commit; the pipeline setup seems to be down because of dependency issues.
First pass of comments on the example and the tests. I will do a second pass on the change itself after I finish some local experiments.
Once the above comments are addressed, LGTM.
Thank you for the review @ogrisel! Regarding fce3475, I thought about that too, but in this case, after a new task is fired, the main process doesn't keep any reference to it: it could just be garbage collected, and then strange behavior might happen, and said behavior could depend on the backend? I'd prefer to explicitly keep the set of references in
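The garbage-collection concern can be illustrated with a small standalone sketch (not joblib code; `Task`, `registry`, and `fire` are hypothetical names): without an explicit strong reference held somewhere, a task object can be collected as soon as the dispatching function returns.

```python
import gc
import weakref

class Task:
    """Stand-in for a dispatched task object (hypothetical, not joblib's)."""

registry = set()  # explicit strong references keep tasks alive

def fire(keep_reference):
    task = Task()
    if keep_reference:
        registry.add(task)  # mimic keeping the task in an internal set
    return weakref.ref(task)

kept = fire(True)
dropped = fire(False)
gc.collect()
print(kept() is not None)   # True: the registry keeps it alive
print(dropped() is None)    # True: no strong reference left, so it was collected
```

This is the behavior an explicit reference set guards against, independently of what any particular backend does internally.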
It's not possible for a backend to call a callback successfully without keeping a reference to the callback object. If a backend does not call the callback, then the backend would be broken for any value of
But couldn't the opposite be true: because some given backend does not keep a reference to any task object internally, the callback might not be called if the user-level reference is deleted? Basically, a backend could use this for early interruption of tasks on deletion of the reference.
Since we call `job = self._backend.apply_async(batch, callback=batch_tracker)` and the backend has to call the

The naming is weird, because the
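The point about the backend necessarily holding a reference to the callback can be checked with a minimal standard-library sketch (using `multiprocessing.pool.ThreadPool`, not joblib's backend code): we deliberately discard the `AsyncResult` returned by `apply_async`, yet every callback still fires, because the pool keeps its own internal reference to the callback in order to invoke it.

```python
from multiprocessing.pool import ThreadPool

def square(x):
    return x * x

results = []
pool = ThreadPool(2)
for i in range(4):
    # The AsyncResult returned here is discarded on purpose: the pool
    # must keep an internal reference to the callback to be able to
    # fire it when the task completes.
    pool.apply_async(square, (i,), callback=results.append)
pool.close()
pool.join()  # waits until all tasks (and their callbacks) have run
print(sorted(results))  # [0, 1, 4, 9]
```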
Please also document the new feature in the changelog, targeting the next release. |
LGTM!
LGTM! thx @fcharras for this nice implem :)
Closes #1449
The PR builds upon #1458
It looks straightforward: the diff in `parallel.py` outside of documentation is about 10 lines added/removed. More testing to be added, and maybe an extension to the tutorial that highlights how RAM usage can be further decreased when tasks have significantly different processing times?