chore(ci): onboard bencher for tracking benchmarks #1174
Conversation
Uuh, I'm 99% sure I'm using the correct token but something is wrong 🤔 |
For the general concepts of working with feature branches:
For an example of how to benchmark PRs from forks in GitHub Actions:
Whoever you invite can sign up for a Bencher account with the invite. They don't need to already be a user. |
Have you set the
|
Yup, secret is there as |
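(For readers following along: the Bencher CLI reads its API token from a repository secret. A minimal sketch of how that is usually wired up, assuming the secret is named `BENCHER_API_TOKEN`; the actual name is cut off in the comments above:)

```yaml
# Hypothetical step: the secret name BENCHER_API_TOKEN is an assumption,
# since the actual name used in this repo is cut off above.
- name: Run benchmarks with Bencher
  run: bencher run "cargo bench"
  env:
    BENCHER_API_TOKEN: ${{ secrets.BENCHER_API_TOKEN }}
```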
I can't add another token or delete the existing one. I'm using the same one that I used in my tests in #1092. |
I believe the issue is that you are trying to run this with a pull request from a fork: GitHub Actions does not expose repository secrets to `pull_request` runs triggered from forks, so the token is not available there.
|
Ah I see. Then I would like to wait for other maintainers' input to decide whether we want to run Bencher for pull requests as well (does it have any major benefits?) or just for pushes to the main branch. Also, running the benchmarks takes a while. |
Yes, one of the major benefits of Continuous Benchmarking is being able to catch performance regressions in CI.
This can definitely be the case if your benchmarks take a while; I just pulled the repo to check. One solution is to only run the benchmarks on PRs if there is a specific label set. So something like this for the Bencher PR job: https://github.com/diesel-rs/diesel/pull/3849/files#diff-e0ba2f20a84be928e389320224a40fc1d8a83da5fdbcbd535d074438715f3bb6R9 |
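(A sketch of what such a label gate could look like; the label name `benchmarks` is a placeholder, as Diesel's linked workflow defines its own:)

```yaml
# Only run the (slow) benchmark job when the PR carries a specific label.
# The label name "benchmarks" is an assumption, not confirmed above.
benchmark_pr:
  if: contains(github.event.pull_request.labels.*.name, 'benchmarks')
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Run benchmarks
      run: cargo bench
```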
Changes should be detected before they are merged. Otherwise, we could just bisect a regression.
Personally, I think PRs should be reviewed by multiple people anyway, so they take at least ~30h to be merged. That's quite enough time for benches to run. |
Hey @epompeii - I just had some time to come back to this again. I see two possible ways of moving forward:
Any ideas/opinions? |
@orhun based on the feedback from @EdJoPaTo, my suggestion would be to implement statistical continuous benchmarking. This differs from the approach that Diesel uses, relative continuous benchmarking. In order to implement statistical continuous benchmarking in GitHub Actions, you need to:

- track your base branch (main)
- benchmark pull requests, including PRs from forks

This will require three workflow files: one for the first bullet and two for the second bullet.
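(To make the structure concrete, here is one possible layout; the file names are hypothetical:)

```
.github/workflows/
├── bench.yml           # tracks pushes to the base branch (main)
├── bench_pr_run.yml    # runs benchmarks on PRs, no secrets (fork-safe)
└── bench_pr_track.yml  # uploads the saved PR results to Bencher (has secrets)
```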
Bencher can parse the output from Criterion using its Rust Criterion adapter. When tracking your base branch, you can just have Bencher do this directly from stdout:
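(The original code block did not survive extraction; below is a minimal sketch of such a step. The project slug `ratatui-org` comes from the console URL in this PR; the testbed name, adapter flag, and secret name are assumptions based on Bencher's CLI conventions:)

```yaml
# Sketch: track `main` by piping `cargo bench` output straight to Bencher.
- name: Track base branch benchmarks
  run: |
    bencher run \
      --project ratatui-org \
      --branch main \
      --testbed ubuntu-latest \
      --adapter rust_criterion \
      --err \
      "cargo bench"
  env:
    # Assumed secret name; see the token discussion above.
    BENCHER_API_TOKEN: ${{ secrets.BENCHER_API_TOKEN }}
```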
For benchmarking PRs, you will need to save the Criterion results to a file in the first of the two PR workflows:
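(The referenced snippet is missing; here is a minimal sketch of the fork-safe first workflow, assuming the `cargo bench` output is redirected to a file and stored with `actions/upload-artifact`:)

```yaml
# First PR workflow: triggered by `pull_request`, so it never sees secrets.
# It only produces a results file and stores it as an artifact.
on: pull_request

jobs:
  benchmark_fork_pr_run:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run benchmarks and save results
        run: cargo bench > benchmark_results.txt
      - name: Upload results for the tracking workflow
        uses: actions/upload-artifact@v4
        with:
          name: benchmark_results.txt
          path: ./benchmark_results.txt
```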
Then, in the second workflow, you can upload those results to Bencher from that file:
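(Again the original snippet is missing; a sketch of the second workflow, assuming the `workflow_run` trigger, which runs in the base repository and therefore has access to secrets, and Bencher's `--file` option:)

```yaml
# Second PR workflow: runs after the first one completes, with secrets available.
on:
  workflow_run:
    # Must match the `name:` of the first workflow; this value is a placeholder.
    workflows: [benchmark_fork_pr_run]
    types: [completed]

jobs:
  track_fork_pr:
    if: github.event.workflow_run.conclusion == 'success'
    runs-on: ubuntu-latest
    steps:
      - name: Download saved benchmark results
        uses: actions/download-artifact@v4
        with:
          name: benchmark_results.txt
          run-id: ${{ github.event.workflow_run.id }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
      - name: Upload results to Bencher
        run: |
          bencher run \
            --project ratatui-org \
            --branch "${{ github.event.workflow_run.head_branch }}" \
            --testbed ubuntu-latest \
            --adapter rust_criterion \
            --file benchmark_results.txt
        env:
          BENCHER_API_TOKEN: ${{ secrets.BENCHER_API_TOKEN }}
```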
If you go with this approach, you don't have to worry about using GitHub environments; I do not recommend the environments-based approach anyway. |
Thanks for the tip! That sounds good; the only part that daunts me is having 3 workflows at the end of the day 💀 It sounds like it will make things a bit cumbersome. Are there any other projects that use statistical continuous benchmarking whose workflows we can adopt? |
There are several. I think it would be easiest to just look at the
|
Thanks for the tips @epompeii! I wish organizing workflows in a subdirectory was possible, but I guess it is not yet. In the meantime I adopted the example for Ratatui which seems like it is gonna work! |
Can I get a review on this, @ratatui-org/maintainers? I think the benchmark results will be uploaded after merge. |
I have been delaying reviewing this because I know it requires a deeper understanding than a superficial one. Will get to it soon. |
closes #1092
https://bencher.dev/console/projects/ratatui-org
I'm not sure how this works with PRs. Should we run it only on pushes to `main`? (cc @epompeii)

@ratatui-org/maintainers I can send an invite through email for the Bencher organization; I'm guessing no registration is needed.