printf scientific notation performance regression #8972
Hmmmm... I'll try to look into it. A big difference here is that the double-conversion library rolled its own minimal BigInt, while grisu.jl uses GMP. I believe some of their code reuses allocated BigInts, which we probably aren't doing, and that would explain all the extra allocation.
It looks like the slow path is used for most (all?) integer-valued inputs, e.g.:

```julia
julia> function f(x)
           io = IOBuffer()
           for i = 1:100000
               @printf io "%e" x
           end
       end;

julia> @time f(1.)
elapsed time: 2.313745457 seconds (264195572 bytes allocated, 52.41% gc time)

julia> @time f(1. + eps())
elapsed time: 0.044512516 seconds (17819740 bytes allocated)
```

I haven't looked into the code to see how
Bump. This is pretty serious.
Status update: I've gotten a 5x improvement so far after a first pass. I'm still seeing some gc action in profiling, so I'm going to keep digging.
Awesome, thanks.
@quinnj would it make sense to commit your 5x improvement? As it stands, this regression is probably bad enough to block 0.4.
I want to run the full grisu testing suite on my changes first; I'll make a PR tomorrow after it runs.
PR coming later tonight! (once I fix a few tests)
Add custom Bignum type for internal grisu performance. Fixes #8972
On Julia 0.3:
On Julia 0.4:
`@profile` suggests the time is spent in grisu's GMP codepath, so I'm guessing #7291 is the culprit. cc @quinnj