[Improvement] Exclude data loading time when benchmarking speed #900
base: master
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff             @@
##           master     #900      +/-   ##
==========================================
+ Coverage   89.05%   89.67%   +0.61%
==========================================
  Files         112      114       +2
  Lines        6060     6431     +371
  Branches      970     1007      +37
==========================================
+ Hits         5397     5767     +370
+ Misses        468      462       -6
- Partials      195      202       +7

Flags with carried forward coverage won't be shown.
Please merge the master branch into your branch. Thank you.
This version also excludes the time needed to place the data on the GPU. Are you sure you want that? From a practical standpoint, an API receiving an image also handles CPU-to-GPU placement along with inference. I've combined this change with the current master, but I've kept the CPU-to-GPU timing as in the master branch.
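To illustrate the distinction being discussed, here is a minimal sketch of a benchmark loop that excludes data loading but keeps the host-to-device copy inside the timed region, as on master. The names (`benchmark_with_transfer`, `to_device`) are illustrative, not the repo's actual API; `to_device` stands in for a real placement call such as `tensor.cuda()`.

```python
import time

def benchmark_with_transfer(model, data_loader, to_device, num_iters=10):
    """Time inference plus host-to-device transfer; exclude data loading.

    Hypothetical sketch: `model`, `data_loader`, and `to_device` are
    stand-ins for the real components. `to_device` represents the
    CPU-to-GPU placement step (e.g. tensor.cuda()).
    """
    data_loader_iter = iter(data_loader)
    total = 0.0
    for _ in range(num_iters):
        data = next(data_loader_iter)        # data loading: excluded from timing
        start = time.perf_counter()
        batch = to_device(data)              # CPU-to-GPU copy: included, as on master
        model(batch)                         # inference: included
        total += time.perf_counter() - start
    return num_iters / total                 # throughput in iterations per second
```

Keeping the transfer inside the timed region better matches deployment, where a served model pays for the copy on every request.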
Motivation
Exclude data loading time when benchmarking speed
Modification
Call
data = next(data_loader_iter)
explicitly.

BC-breaking (Optional)
All inference speed benchmarks may need to be re-run.
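The modification above can be sketched as follows. This is a minimal illustration, not the repo's actual benchmark script; `model` and `data_loader` are assumed to be a callable and an iterable.

```python
import time

def benchmark(model, data_loader, num_iters=10):
    """Time only the forward pass, excluding data loading.

    Hypothetical sketch: each batch is fetched via next(data_loader_iter)
    before the timer starts, so data-loading time no longer inflates the
    measured inference speed.
    """
    data_loader_iter = iter(data_loader)
    total = 0.0
    for _ in range(num_iters):
        data = next(data_loader_iter)   # data loading happens here, untimed
        start = time.perf_counter()
        model(data)                     # only inference is timed
        total += time.perf_counter() - start
    return num_iters / total            # throughput in iterations per second
```

Because previously reported numbers included data-loading time, results measured this way are not comparable to the old ones, hence the need to re-run the benchmarks.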