A Julia implementation of Google's Brain Float16 (BFloat16) format. A BFloat16, as the name suggests, uses 16 bits to represent a floating-point number. It keeps the 8-bit exponent of a single-precision Float32 but only 7 bits of mantissa, so it is effectively a truncated Float32, which makes conversion back and forth between the two formats cheap. The trade-off is reduced precision: only about 2-3 significant decimal digits.
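A minimal sketch of the round-trip with Float32, assuming the package is loaded as `BFloat16s` and exports a `BFloat16` type (as in BFloat16s.jl); exact names may differ:

```julia
using BFloat16s

x = BFloat16(3.140625)   # construct from a literal (rounds to nearest BFloat16)
y = Float32(x)           # widening back to Float32 is exact

# A BFloat16 keeps the sign bit, the full 8-bit exponent, and the top 7
# mantissa bits of a Float32, so a BFloat16 round-trips through Float32
# unchanged, while the reverse direction loses the low mantissa bits.
@assert BFloat16(y) == x

# The widened value has its low 16 bits zeroed, showing the truncation:
bitstring(1.2f0)                      # full 32-bit Float32 pattern
bitstring(Float32(BFloat16(1.2f0)))   # same sign/exponent, mantissa cut to 7 bits
```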