@m4rs-mt and team, excellent work on the library! Is it currently possible to support higher-precision floats (128-bit) with the current library?

Is this something we can implement: https://developer.nvidia.com/blog/implementing-high-precision-decimal-arithmetic-with-cuda-int128/

I would like to help, but I have no idea where to start. I could certainly work on a `DMath` helper library, but as for compiling to PTX and implementing the `__int128` data type, I lack sufficient knowledge.

Thanks.
Chris
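For context, the approach in the linked NVIDIA post boils down to representing wide integers as pairs of 64-bit limbs and propagating carries by hand; multiplication decomposes the same way into 64×64→128 partial products. A minimal C# sketch of that representation (the `UInt128Limbs` name is a placeholder, not an existing ILGPU or .NET type) could look like this:

```csharp
// Hypothetical sketch only: "UInt128Limbs" is a placeholder name, not an existing
// ILGPU or .NET type. It shows the limb-with-carry representation the linked
// NVIDIA post builds its high-precision decimal arithmetic on.
public readonly struct UInt128Limbs
{
    public readonly ulong Hi;   // most-significant 64 bits
    public readonly ulong Lo;   // least-significant 64 bits

    public UInt128Limbs(ulong hi, ulong lo)
    {
        Hi = hi;
        Lo = lo;
    }

    // 128-bit addition: add the low limbs, detect unsigned wrap-around as the carry,
    // then fold the carry into the sum of the high limbs.
    public static UInt128Limbs Add(UInt128Limbs a, UInt128Limbs b)
    {
        ulong lo = a.Lo + b.Lo;
        ulong carry = lo < a.Lo ? 1UL : 0UL;
        ulong hi = a.Hi + b.Hi + carry;
        return new UInt128Limbs(hi, lo);
    }
}
```

Whether arithmetic like this can be lowered to efficient PTX is exactly the part of the issue that needs compiler-side knowledge.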
chrisaliotta changed the title from "Higher precision floats (decimal) support?" to "Higher precision float (decimal) support?" on Jan 2, 2024
Hi @chrisaliotta, welcome to the ILGPU community and a happy new year! This would be really amazing to have, and we would love to assist you with integrating these types. As for the PTX-lowering part, this is something @MoFtZ and I can focus on.
Hi @chrisaliotta!! I'm happy to let you know that my software implementation of IEEE 754 binary128 for .NET Core now fully supports ILGPU! I've even successfully used it with my own Mandelbrot fractal rendering library, also written in C#.
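For anyone wondering what driving such a software-level type from ILGPU can look like: ILGPU kernels can operate on user-defined value types, so a 128-bit value carried as two 64-bit fields flows through a kernel like any other struct. The sketch below is purely illustrative (the `Quad` struct, `NegateKernel`, and `QuadKernelDemo` are hypothetical names, not part of ILGPU or of the binary128 library mentioned above); it only flips the sign bit so the kernel stays trivial, with real add/multiply logic left to a DMath-style helper.

```csharp
using System;
using ILGPU;
using ILGPU.Runtime;

// Hypothetical container for raw IEEE 754 binary128 bits, split into two 64-bit halves.
// This is NOT the library referenced above; it exists only to show a custom struct
// travelling through an ILGPU kernel.
public readonly struct Quad
{
    public readonly ulong Hi;   // sign bit, exponent, and upper mantissa bits
    public readonly ulong Lo;   // lower mantissa bits

    public Quad(ulong hi, ulong lo)
    {
        Hi = hi;
        Lo = lo;
    }

    // Negating a binary128 value only requires flipping the sign bit (bit 127).
    public static Quad Negate(Quad q) =>
        new Quad(q.Hi ^ 0x8000_0000_0000_0000UL, q.Lo);
}

public static class QuadKernelDemo
{
    // Kernel: negate every element in place. Plain value types like Quad are
    // supported in ILGPU kernels; full arithmetic would live in helper methods.
    static void NegateKernel(Index1D i, ArrayView<Quad> data) =>
        data[i] = Quad.Negate(data[i]);

    public static void Main()
    {
        using var context = Context.CreateDefault();
        using var accelerator = context
            .GetPreferredDevice(preferCPU: false)
            .CreateAccelerator(context);

        var kernel = accelerator
            .LoadAutoGroupedStreamKernel<Index1D, ArrayView<Quad>>(NegateKernel);

        using var buffer = accelerator.Allocate1D<Quad>(16);
        kernel((int)buffer.Length, buffer.View);
        accelerator.Synchronize();

        Quad[] results = buffer.GetAsArray1D();
        Console.WriteLine($"Processed {results.Length} Quad values on {accelerator.Name}.");
    }
}
```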