chore: Replace benchmark with cronometro for benchmarking. #751
Conversation
@ronag If you run the new benchmark on the reference machine I'll update the README as well.
Excellent work!
Codecov Report

@@           Coverage Diff           @@
##              main     #751   +/-  ##
=========================================
  Coverage   100.00%  100.00%
=========================================
  Files           24       24
  Lines         1904     1904
=========================================
  Hits          1904     1904

Continue to review the full report at Codecov.
It's a bit weird that …
To be honest I have no idea. I've checked the code and everything seems to be fine. I wonder if the socket buffers are filling up and thus performance decreases.
What results do you get if you run it locally? Also, what order are they run in? Could it be some kind of warm-up thing? Should we run each test in a separate Node instance?
Also, the multiple-socket results seem weird. Maybe they always have. Do we need those? @mcollina I think in this case pipelining seems to be at a disadvantage against core http, which doesn't use pipelining.
Cronometro runs in "warmup" mode by default: each test is executed twice, and the first run's results are discarded. Here are my results:
Maybe …
That option uses 1 socket if I am not mistaken, so I suspect that requests are run sequentially in that case. |
Yea, I think it's the …
@ShogunPanda can we split the benchmarks into multi-socket and single-socket ones? That way the relative values make more sense.
I think so too because it's using the default Agent behind the scenes. |
Awesome work
Is it possible to skip the grouping headers? The fastest test is always at the bottom, so it's just noise, no?
Unfortunately not; that ordering is fixed in cronometro. I'll update the benchmarks soon.
Benchmarks updated. We now have two different suites. Let me know if I set everything up correctly.
Not related to this PR per se, but it's interesting that the multi-socket version shows a smaller diff between core and undici. Maybe the server should run in cluster mode?
* chore: Replace benchmark with cronometro for benchmarking.
* chore: Spelling.
* chore: Separated tests in single and multiple sockets.
This PR closes #715 and #736 by replacing `benchmark` with `cronometro`. I also refactored the benchmark NPM scripts to use `wait-on` instead of a flat timeout for better reliability.
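A sketch of what the `wait-on` refactor might look like in `package.json`. The script names, port, and file paths below are illustrative assumptions, not the PR's actual scripts; the point is that `wait-on` blocks until the benchmark server is actually accepting connections instead of sleeping for a fixed time:

```json
{
  "scripts": {
    "bench:server": "node benchmarks/server.js",
    "bench:run": "wait-on tcp:3000 && node benchmarks/index.js"
  }
}
```

With a flat timeout, a slow machine could start the client before the server was listening; `wait-on tcp:3000` removes that race.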