
inference failure for Float64 constructor? #36783

Closed · stevengj opened this issue Jul 23, 2020 · 5 comments · Fixed by #36795

Labels: performance (Must go faster)
@stevengj (Member)

This discrepancy doesn't seem right:

julia> x = rand(Float16, 10_000);

julia> @btime sum($Float64, $x);
  776.911 μs (29999 allocations: 468.73 KiB)

julia> @btime sum($(y -> Float64(y)), $x);
  42.417 μs (0 allocations: 0 bytes)
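For context, a minimal sketch of what is likely going on (the `naive_sum` and `typed_sum` functions below are hypothetical illustrations, not `Base.sum`): Julia's compiler heuristically avoids specializing a method on arguments of type `Type`, `Function`, or `Vararg`, so passing `Float64` directly can take an unspecialized path with dynamic dispatch, while the anonymous function `y -> Float64(y)` has its own concrete closure type and gets specialized. Declaring a static type parameter with `::Type{T} where T` sidesteps the heuristic entirely:

```julia
# `f` is an ordinary argument; when `f` is a Type such as Float64,
# the specialization heuristic discussed in this issue may leave
# `f(x)` as a dynamic call.
function naive_sum(f, xs)
    s = 0.0
    for x in xs
        s += f(x)
    end
    return s
end

# `T` is a static type parameter, so the method is always fully
# specialized on the requested element type.
function typed_sum(::Type{T}, xs) where {T}
    s = zero(T)
    for x in xs
        s += T(x)
    end
    return s
end

xs = Float16[1, 2, 3]
naive_sum(Float64, xs)          # may hit the slow, unspecialized path
naive_sum(y -> Float64(y), xs)  # closure type forces specialization
typed_sum(Float64, xs)          # specialized via the type parameter
```

All three calls compute the same value; only the generated code differs.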
stevengj added the "performance (Must go faster)" and "compiler:inference (Type inference)" labels on Jul 23, 2020
@yuyichao (Contributor)

JuliaCI/BenchmarkTools.jl#71

Most likely a BenchmarkTools bug.

@yuyichao (Contributor)

Actually, it's a specialization issue in sum this time, not in BenchmarkTools: @benchmark sum(Float64, $x) gives the same slow result.

@JeffBezanson (Member)

Yes, it's not specializing on the function argument when it's a Type. Maybe we should disable the heuristic for Types when the argument is called in the body?

JeffBezanson removed the "compiler:inference (Type inference)" label on Jul 23, 2020
@vtjnash (Member) commented Jul 23, 2020

Agreed, seems like called should apply to anything we heuristically limit.
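One way to observe the heuristic from the outside, on an affected Julia version (before the fix in #36795), is to compare allocations between the two call forms; this is a version-dependent sketch, and on a fixed version both calls should allocate essentially nothing:

```julia
x = rand(Float16, 10_000)

# Named helper so the closure type is created once (a fresh
# anonymous function per call would add compilation noise).
conv(y) = Float64(y)

# Warm up so compilation is excluded from the measurements.
sum(Float64, x)
sum(conv, x)

a_type    = @allocated sum(Float64, x)  # large on affected versions (boxed Float64s)
a_closure = @allocated sum(conv, x)     # ~0 bytes once compiled

# Either way, both forms compute the same value.
sum(Float64, x) == sum(conv, x)
```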

@Jutho (Contributor) commented Aug 12, 2020

Thanks for this. I never really understood the motivation for not specializing on Type variables, as they are usually important to infer concrete types further down the road. This fix just reduced inference time of some example code in our group from 200 seconds down to the normal inference time when you hard code the specific type.
