
Which precision should I use? float32 or float64 #71

Closed
luweizheng opened this issue Apr 7, 2022 · 2 comments

Comments

@luweizheng

Hi tff team,

Thank you so much for developing this project. My question may be vague: which precision should I use when doing derivative pricing? Are there any industry standards? Is float32 enough for most cases, or does it depend case by case?

Other quant libraries on CPU, for example QuantLib and QuantLib.jl, use float64 for almost all scenarios (analytic methods, Monte Carlo, or PDE). CPUs usually have native float64 support.

I find that TFF usually uses tf.float64 when choosing a dtype. TFF can run on accelerators: GPUs or TPUs. NVIDIA GPUs have float64 CUDA cores and are capable of handling float64. However, TPUs may not have float64 support, yet I found a paper from Google about running Monte Carlo on TPUs, which says that TPUs can handle Monte Carlo in most cases. On the hardware side, lower precision usually means faster computation. Other academic papers also experiment with mixed precision.
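
For concreteness, here is the kind of dtype choice I mean. This is only a rough sketch based on the README's option_price example; I am assuming option_price accepts a dtype argument and behaves the same way with tf.float32:

```python
import numpy as np
import tensorflow as tf
import tf_quant_finance as tff

# A batch of three vanilla calls, priced once in float32 and once in float64.
volatilities = np.array([0.1, 0.2, 0.4])
forwards = np.array([100.0, 100.0, 100.0])
strikes = np.array([105.0, 105.0, 105.0])
expiries = 1.0

for dtype in (tf.float32, tf.float64):
  prices = tff.black_scholes.option_price(
      volatilities=volatilities,
      strikes=strikes,
      expiries=expiries,
      forwards=forwards,
      dtype=dtype)
  print(dtype.name, prices.numpy())
```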

Is there a simple or quick guide on precision? Something like:

  • float32 is enough for most Monte Carlo cases and the analytic Black-Scholes method?
  • float64 is required for PDE methods?

If the required precision depends on the case, how do I measure whether a given precision is enough for my case?

Thanks!

@cyrilchim
Contributor

Hi @luweizheng

AFAIK, double precision is the standard. I think there is research on utilizing mixed precision to speed up calculations (see, e.g., here). At the time the paper you mention was released, TPUs did not support double precision, but as of today they do (the last time I checked it was a double-precision emulator). The paper demonstrates the advantage of a TPU in terms of speed and cost, but you would still have to be very careful managing the error. I am not aware of a rule of thumb for single-precision calculations (check out the reference above). I can imagine there are cases where single precision can be used, for example model calibration, where you can verify the calibrated model by testing against the market data.
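
If it helps, one way to check whether single precision is adequate for a particular payoff is to price it in both float32 and float64 and compare against an analytic reference. Below is a minimal sketch in plain TensorFlow (not TFF's Monte Carlo API), assuming a simple GBM terminal-value simulation and a European call; the helper names are mine:

```python
import tensorflow as tf


def bs_call_price(spot, strike, vol, rate, expiry, dtype):
  """Closed-form Black-Scholes call price, used as the reference."""
  spot, strike, vol, rate, expiry = [
      tf.constant(x, dtype=dtype) for x in (spot, strike, vol, rate, expiry)]
  sqrt_t = tf.sqrt(expiry)
  d1 = (tf.math.log(spot / strike)
        + (rate + 0.5 * vol**2) * expiry) / (vol * sqrt_t)
  d2 = d1 - vol * sqrt_t
  norm_cdf = lambda x: 0.5 * (1.0 + tf.math.erf(
      x / tf.sqrt(tf.constant(2.0, dtype=dtype))))
  return spot * norm_cdf(d1) - strike * tf.exp(-rate * expiry) * norm_cdf(d2)


def mc_call_price(spot, strike, vol, rate, expiry, num_paths, dtype):
  """Monte Carlo price of the same call, sampling the GBM terminal value."""
  spot, strike, vol, rate, expiry = [
      tf.constant(x, dtype=dtype) for x in (spot, strike, vol, rate, expiry)]
  z = tf.random.stateless_normal([num_paths], seed=[4, 2], dtype=dtype)
  terminal = spot * tf.exp(
      (rate - 0.5 * vol**2) * expiry + vol * tf.sqrt(expiry) * z)
  payoff = tf.maximum(terminal - strike, tf.constant(0.0, dtype=dtype))
  return tf.exp(-rate * expiry) * tf.reduce_mean(payoff)


params = dict(spot=100.0, strike=105.0, vol=0.2, rate=0.01, expiry=1.0)
reference = bs_call_price(dtype=tf.float64, **params)
for dtype in (tf.float32, tf.float64):
  estimate = mc_call_price(num_paths=10_000_000, dtype=dtype, **params)
  print(dtype.name, float(estimate),
        'abs error vs analytic:', abs(float(estimate) - float(reference)))
```

If the float32 and float64 estimates agree to well within the Monte Carlo noise for your number of paths, single precision is unlikely to be the bottleneck for that instrument; if not, the rounding error (including the float32 summation inside reduce_mean) is already visible and you probably want float64.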

@luweizheng
Author

@cyrilchim Thanks a lot!

I'm happy to hear further discussion on this.
