
Develop #125

Merged
merged 26 commits into master from develop on Aug 1, 2023
Conversation

DTolm (Owner) commented on Aug 1, 2023

Version 1.3 update of VkFFT
-Major library design change: from a single-header to a multiple-header approach, which improves structure and maintainability. Instead of copying a single file, the user now copies the contents of the vkFFT folder.
-VkFFT has been rewritten to follow the multiple-level platform structure described in the VkFFT whitepaper. All algorithms have been split into their respective files, which should make the library design easier to understand. Several places with duplicated code have been restructured and unified (mainly the read/write part of the kernels and the pre-/post-processing).
-All math operations and most variables have been abstracted into a union container that can hold either numbers or variable names. This is not a full compiler, but the generated code is close to machine-like, and there are no math sprintf calls left in the code generator (an illustrative sketch follows this list). More details can be found here: https://youtu.be/lHlFPqlOezo
-VkFFT now supports an arbitrary number of dimensions. By defining VKFFT_MAX_FFT_DIMENSIONS, it is possible to mimic the FFTW guru interface; the default is 4. The innermost stride is always fixed to 1, but there can be an arbitrary number of outer strides. To achieve innermost batching, initialize an (N+1)-dimensional FFT and omit the innermost dimension using omitDimension[0] = 1 (see the configuration sketch after this list).
-Enabled fp16 for all backends (also shown in the configuration sketch after this list).
-Accuracy verification of the new version can be found here: vincefn/pyvkfft#25
-The new code structure will facilitate the implementation of many new features and performance improvements, so stay tuned.
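For readers curious what the union-container idea looks like in practice, here is a minimal, hypothetical sketch in C: a value that holds either a literal number or a generated variable name, so the generator can emit math expressions without formatting them through sprintf. The type and function names are illustrative only and are not VkFFT's actual internal API.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical container: either a literal constant or a symbolic name. */
typedef struct PfValueSketch {
    int isConstant;          /* 1: use the numeric value, 0: use the name */
    union {
        long long constant;  /* literal integer operand */
        char name[64];       /* generated variable name, e.g. "threadId" */
    } data;
} PfValueSketch;

/* Emit the operand into the generated kernel source. */
static void emitOperand(FILE* out, const PfValueSketch* v) {
    if (v->isConstant)
        fprintf(out, "%lld", v->data.constant);
    else
        fprintf(out, "%s", v->data.name);
}

/* Emit "res = a + b;" regardless of whether a and b are numbers or names. */
static void emitAdd(FILE* out, const char* res,
                    const PfValueSketch* a, const PfValueSketch* b) {
    fprintf(out, "%s = ", res);
    emitOperand(out, a);
    fprintf(out, " + ");
    emitOperand(out, b);
    fprintf(out, ";\n");
}

int main(void) {
    PfValueSketch a = { 1, { .constant = 8 } };
    PfValueSketch b = { 0 };
    strcpy(b.data.name, "threadId");
    emitAdd(stdout, "index", &a, &b); /* prints: index = 8 + threadId; */
    return 0;
}
```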
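And here is a hedged sketch of how the arbitrary-dimension and fp16 options might be used together. It assumes the familiar VkFFT configuration fields (FFTdim, size, omitDimension, halfPrecision) and entry point (initializeVkFFT) keep their names in 1.3; backend-specific fields (device, queue, buffers) are omitted and must still be filled in as before.

```c
/* Define before including vkFFT.h to raise the dimension limit (default is 4). */
#define VKFFT_MAX_FFT_DIMENSIONS 5
#include "vkFFT.h"

/* Sketch: plan a 3D fp16 FFT with innermost batching, i.e. an (N+1)-dimensional
   plan whose innermost dimension holds the batch and is omitted from the transform. */
static VkFFTResult planBatchedHalfPrecisionFFT(VkFFTApplication* app,
                                               VkFFTConfiguration* config) {
    config->FFTdim = 4;
    config->size[0] = 64;            /* innermost stride-1 batch, not transformed */
    config->size[1] = 256;
    config->size[2] = 256;
    config->size[3] = 32;
    config->omitDimension[0] = 1;    /* skip the FFT over the innermost dimension */
    config->halfPrecision = 1;       /* fp16 data, now available on all backends */
    /* ...backend-specific device/queue/buffer fields must be set here as usual... */
    return initializeVkFFT(app, *config);
}
```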

DTolm and others added 26 commits April 2, 2023 23:40
-Development version of VkFFT 1.3. Major library design change: from a single-header to a multiple-header approach. Instead of copying a single file, the user now copies the contents of the vkFFT folder. As the library is already used by some projects, the 1.3 version will stay as a development branch for two months to perform thorough checks and improve functionality. After that, it will be merged into the main branch.
-VkFFT has been rewritten to follow the multiple-level platform structure described in the VkFFT whitepaper. All algorithms have been split into their respective files, which should make the library design easier to understand. Several places with duplicated code have been restructured and unified (mainly the read/write part of the kernels and the pre-/post-processing).
-All math operations and most variables have been abstracted into a union container that can hold either numbers or variable names. This is not a full compiler, but the generated code is close to machine-like, and there are no math sprintf calls left in the code generator. More info will be added with comments later this month and presented in late April at IWOCL.
-The platform prefixes (which are vk for now) will be changed to avoid confusion with Vulkan, as this is a cross-API platform now.
-This version has been tested with the provided benchmark scripts on an RTX 2080 (Vulkan, CUDA, OpenCL), MI250 (HIP), A100 (CUDA), UHD610 (Level Zero) and M1 Pro (Metal). No performance regressions were noticed compared to the stable v1.2.33 version, and all precision tests pass. The two-month window will be used to verify it on more systems.
-The new code structure will facilitate the implementation of many new features and performance improvements, so stay tuned.
-Added groupedBatch parameter to control batching of sequences
Handle VKFFT_ERROR_MATH_FAILED in getVkFFTErrorString
-Fixed incorrect address calculation in some R2C/C2R cases - straightforward indexing does not work with the mergeSequencesR2C optimization
-Fixed Metal backend typo
-Should be fixed: Bluestein algorithm reading data out of bounds and producing errors
-Reorganized and fixed push constants
Wc++11-narrowing is a fatal error in AppleClang 14
-Fixed mistake in calculation of used registers in C2R Bluestein
-Fixed some int container type setup
Fix clang compilation errors (`develop` branch)
-Switched the vkFFT folder structure for easier installation (vincefn/pyvkfft#25)
-Fixed the 7700 and other errors on AMD GPUs with OpenCL when the number of threads is limited to 256 (vincefn/pyvkfft#25)
-Renamed ZERO_INIT to VKFFT_ZERO_INIT to avoid possible redefinitions
-Fixed another issue for systems with small maxThreadsNum (<=256). The Rader FFT no longer supports more than one radix per thread for Rader primes, which puts a lower bound on how many threads are required. We check for these cases and switch to two uploads, and if this doesn't help, we disable Rader's algorithm and switch to Bluestein's.
-Fixed R2C not performing the shared memory data exchange in some cases when the C2R read decomposition is performed with more threads than are used by the first radix.
-Fixed missing VKFFT_ZERO_INIT.
-By defining VKFFT_MAX_FFT_DIMENSIONS, it is now possible to mimic the FFTW guru interface; the default is 4.
-The innermost stride is always fixed to 1, but there can be an arbitrary number of outer strides. To achieve innermost batching, initialize an (N+1)-dimensional FFT and omit the innermost dimension using omitDimension[0] = 1.
-Fixed FFTs of length 1 in C2C/R2C/C2R and FFTs of length 2 for DCT-IV.
-Added (#109), though using new Nvidia hardware with old CUDA versions can result in undefined architectures.
-Enabled fp16 for all backends. For CUDA, CUDA_TOOLKIT_ROOT_DIR needs to be provided to access cuda_fp16.h.
-Fixed double check of omitDimension[0]
…es are found in a day)

-Renamed most Vk prefixes (those that are not the full VkFFT name) to Pf to avoid confusion. Renamed the test suite.
-Verified builds for all backends (there are some issues with half precision in HIP on Windows that will be fixed later)
-Updated documentation
-Fixed some compiler warnings
LevelZero: Fix the order of kernel/module destruction - the API requires that the application destroys all kernel handles created from a module before destroying the module itself (a minimal sketch of the required order follows below).
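The Level Zero fix above follows the specification's requirement that every kernel created from a module be destroyed before the module itself. Below is a minimal sketch of that teardown order, assuming only the standard zeKernelDestroy/zeModuleDestroy entry points; the helper name and handle-array layout are illustrative, not VkFFT's actual code (error handling omitted).

```c
#include <level_zero/ze_api.h>
#include <stddef.h>

/* Teardown order required by Level Zero: kernels first, then their module. */
static void destroyKernelsThenModule(ze_kernel_handle_t* kernels, size_t numKernels,
                                     ze_module_handle_t module) {
    for (size_t i = 0; i < numKernels; ++i) {
        if (kernels[i]) {
            zeKernelDestroy(kernels[i]);   /* destroy every kernel created from the module */
            kernels[i] = NULL;
        }
    }
    if (module)
        zeModuleDestroy(module);           /* only now is it safe to destroy the module */
}
```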
@DTolm DTolm merged commit a7a5397 into master Aug 1, 2023