
Improve support for Float16 #74

Merged
merged 5 commits into from
Aug 22, 2023
Conversation

devmotion
Member

@devmotion devmotion commented Aug 17, 2023

I noticed that I had a local branch with some improvements for Float16 (IIRC these issues popped up in StatsFuns).

Ref: JuliaLang/julia#42223

@tpapp tpapp self-requested a review August 18, 2023 06:25
@tpapp
Collaborator

tpapp commented Aug 18, 2023

It is somewhat surprising that the cexpexp tests fail because of this. Maybe the precision in Float16 is not enough for that function?

@devmotion
Member Author

Yes, I assume that's likely the problem. Maybe we should add a specialization that performs calculations with Float32 and converts the result to Float16.
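A minimal sketch of such a specialization (the generic `cexpexp` definition below is illustrative only, not the package's actual code):

```julia
# Illustrative generic definition: cexpexp(x) = 1 - exp(-exp(x)).
# The package's actual implementation may differ.
cexpexp(x::Real) = -expm1(-exp(x))

# Proposed specialization: perform the arithmetic in Float32,
# then round the result back to Float16.
cexpexp(x::Float16) = Float16(cexpexp(Float32(x)))
```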

@devmotion
Member Author

Tests with Float32 seem to be affected by the accuracy problems as well. I strongly suspect that the original culprit is JuliaLang/julia#37440, as the tests pass on Julia 1.6 but fail on Julia >= 1.7.0-beta1 (cc @oscardssmith).

@oscardssmith
Contributor

oscardssmith commented Aug 20, 2023

I'm not fully sure what the accuracy problem would be. expm1(Float16) and exp(Float16) return the correctly rounded result for all inputs.

@devmotion
Member Author

The accuracy problem seems to be caused by expm1 (as this one is affected by the linked PR). I assume it is only apparent in the tests of cexpexp since the first exp seems to create inputs in a problematic range. These tests (comparisons with highly accurate BigFloat computations) pass on Julia < 1.7.0-beta1, so to me there seems to be a clear regression.

@oscardssmith
Contributor

Iterating over all Float16s, expm1(x) returns results identical to Float16(expm1(big(x))).
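The exhaustive check described here can be sketched as follows (a reconstruction, not the exact script used):

```julia
# Iterate every Float16 bit pattern and compare expm1 against the
# correctly rounded reference computed via BigFloat.
function count_expm1_mismatches()
    mismatches = 0
    for u in 0x0000:0xffff
        x = reinterpret(Float16, u)
        isfinite(x) || continue  # skip NaN and ±Inf bit patterns
        if expm1(x) !== Float16(expm1(big(x)))
            mismatches += 1
        end
    end
    return mismatches
end
```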

@devmotion
Member Author

The tests clearly show that cexpexp tests for Float32 and Float16 pass with old Julia versions but fail with recent ones. Doesn't this clearly indicate that a regression/bug was introduced at some point?

@devmotion
Member Author

Simple example:

On Julia >= 1.7.0-beta1:

julia> Float32(expm1(-exp(big(Float32(0.1)))))
-0.6688457f0

julia> expm1(-exp(Float32(0.1)))
-0.3376915f0

On Julia < 1.7.0-beta1:

julia> Float32(expm1(-exp(big(Float32(0.1)))))
-0.6688457f0

julia> expm1(-exp(Float32(0.1)))
-0.6688458f0

@oscardssmith
Contributor

oscardssmith commented Aug 20, 2023

So...

  1. That doesn't use Float16.
  2. I can't reproduce it (at least on 1.9.2).

@devmotion
Member Author

As mentioned above, Float32 is failing as well, so it's not a problem exclusive to Float16. That's why the problem can't be fixed by computing the Float16 result with Float32 (as tried above), and why I pointed to the Float32 expm1 PR. The same problem can be observed with Float16 as well, though:

On Julia >= 1.7.0-beta1:

julia> Float16(expm1(-exp(big(Float16(0.1)))))
Float16(-0.669)

julia> expm1(-exp(Float16(0.1)))
Float16(-0.338)

On Julia < 1.7.0-beta1:

julia> Float16(expm1(-exp(big(Float16(0.1)))))
Float16(-0.669)

julia> Base.expm1(x::Float16) = Float16(expm1(Float32(x))) # not defined on Julia <= 1.6.7

julia> expm1(-exp(Float16(0.1)))
Float16(-0.669)

> I can't reproduce that. (at least on 1.9.2)

I can, both the Float32 and the Float16 issue:

julia> Float16(expm1(-exp(big(Float16(0.1)))))
Float16(-0.669)

julia> expm1(-exp(Float16(0.1)))
Float16(-0.338)

julia> Float32(expm1(-exp(big(Float32(0.1)))))
-0.6688457f0

julia> expm1(-exp(Float32(0.1)))
-0.3376915f0

julia> versioninfo()
Julia Version 1.9.2
Commit e4ee485e909 (2023-07-05 09:39 UTC)
Platform Info:
  OS: Linux (x86_64-linux-gnu)
  CPU: 8 × 11th Gen Intel(R) Core(TM) i7-1165G7 @ 2.80GHz
  WORD_SIZE: 64
  LIBM: libopenlibm
  LLVM: libLLVM-14.0.6 (ORCJIT, tigerlake)
  Threads: 8 on 8 virtual cores
Environment:
  JULIA_PKG_USE_CLI_GIT = true
  JULIA_NUM_THREADS = auto

I assume that maybe you use a different OS and/or CPU?

@oscardssmith
Contributor

Something is very seriously wrong. Luckily, I have exactly the same CPU on my laptop and I can reproduce this. To make things even weirder, it doesn't show up in the debugger.

@oscardssmith
Contributor

Thank you very much for finding this. This appears to be a very bad bug in LLVM.

@oscardssmith
Contributor

Never mind, it was a bug in expm1: JuliaLang/julia#50989. I can't believe no one noticed this for 2 years...

@devmotion
Member Author

devmotion commented Aug 21, 2023

I disabled the cexpexp tests with Float32 and Float16 on the affected Julia versions (tests still run on Julia 1.0 and Julia nightly; they are only skipped on e.g. Julia 1.9.2).
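The version gating could look roughly like this (the guard and its bounds are illustrative only; the upstream fix is tracked in JuliaLang/julia#50989):

```julia
# Hypothetical guard: the regression is present from Julia 1.7.0-beta1
# onward until the upstream fix lands; the bound here is illustrative.
expm1_regression = v"1.7.0-beta1" <= VERSION

if !expm1_regression
    # run the cexpexp tests for Float16 and Float32 here
end
```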

@devmotion
Member Author

Fine with the PR and the disabled tests @tpapp? 🙂

Collaborator

@tpapp tpapp left a comment


lgtm

@devmotion devmotion merged commit a1c4fda into master Aug 22, 2023
4 checks passed
@devmotion devmotion deleted the dw/float16 branch August 22, 2023 07:31