BFloat


A Julia implementation of Google's Brain Float16 (BFloat16). A BFloat16, as the name suggests, uses 16 bits to represent a floating-point number. It uses 8 bits for the exponent, like a single-precision number, but only 7 bits for the mantissa. It is therefore effectively a truncated Float32, which makes conversion back and forth between the two cheap. The tradeoff is reduced precision (only about 2-3 decimal digits).
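To see why the conversion is cheap, here is a minimal sketch of the bit-level relationship between Float32 and BFloat16. The helper names (`to_bf16_bits`, `from_bf16_bits`) are made up for illustration and are not BFloat.jl's API; this also truncates rather than rounds, which is the simplest possible conversion.

```julia
# A Float32 has 1 sign bit, 8 exponent bits, and 23 mantissa bits.
# BFloat16 keeps the sign and exponent but only the top 7 mantissa bits,
# so a (truncating) conversion just drops the low 16 bits of the pattern.

# Truncating conversion: keep the high 16 bits of the Float32 bit pattern.
to_bf16_bits(x::Float32) = UInt16(reinterpret(UInt32, x) >> 16)

# Converting back widens the 16-bit pattern with zeros in the low half.
from_bf16_bits(b::UInt16) = reinterpret(Float32, UInt32(b) << 16)

x = 3.1415927f0
y = from_bf16_bits(to_bf16_bits(x))

println(y)      # 3.140625 -- only about 2-3 decimal digits survive
println(x - y)  # the truncation error
```

A real implementation would typically round to nearest (even) instead of truncating, but the storage format and the "drop/restore the low 16 bits" relationship to Float32 are the same.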
