
Add FFT LinOp #701

Merged: 18 commits merged from fft_linop into develop on Sep 17, 2021

Conversation

@upsj (Member) commented Feb 4, 2021

This PR adds an FFT matrix format which supports power-of-two 1D/2D/3D FFTs and inverse FFTs. The test matrices for Reference were generated using MATLAB.

Additionally, it contains an example solving the non-linear Schrödinger equation (with an additional potential term for nicer visuals, which actually makes it the Gross–Pitaevskii equation) using a Lie splitting.
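
For reference, the Lie splitting alternates the two exactly solvable sub-flows of the equation over each time step; the formulation below is the standard textbook one (signs and coefficients may differ from the conventions used in the example):

$$ i\,\partial_t u = -\tfrac{1}{2}\,\Delta u + \left(V + |u|^2\right) u $$

$$ u^{*} = \mathcal{F}^{-1}\!\left[ e^{-\frac{i}{2}\,|k|^{2}\,\Delta t}\, \mathcal{F}\, u^{n} \right], \qquad u^{n+1} = e^{-i\left(V + |u^{*}|^{2}\right)\Delta t}\, u^{*} $$

The linear step is evaluated with a forward and an inverse FFT, and the non-linear step is a point-wise multiplication, which is where this PR's FFT LinOp comes in.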

TODO:

  • Decide whether we actually want/need this
  • Decide whether we need FFTs of arbitrary sizes (Rader's, Bluestein's, or the Good–Thomas algorithm), since cuFFT and hipFFT support them as well.
  • Add hipFFT support to containers
  • Add hipFFT support to PR
  • Maybe: Use the common interface for simple kernels (#733) to apply the non-linear numerical flow transparently across executors. This should be handled by a later PR.
  • Add write to allow generation of FFT matrices.
[attached video: nls.mp4]
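
As a rough illustration of how the new LinOp is intended to be used, here is a minimal sketch that applies the forward and inverse transform to a complex dense vector; the exact factory signature (size and inverse-flag parameters) and the normalization of the inverse transform are assumptions on my part:

```cpp
#include <ginkgo/ginkgo.hpp>

#include <complex>

int main()
{
    auto exec = gko::ReferenceExecutor::create();
    const gko::size_type n = 8;  // power of two, as required by Reference/OMP
    using vec = gko::matrix::Dense<std::complex<double>>;
    auto in = vec::create(exec, gko::dim<2>{n, 1});
    auto out = vec::create(exec, gko::dim<2>{n, 1});
    for (gko::size_type i = 0; i < n; ++i) {
        in->at(i, 0) = std::complex<double>(static_cast<double>(i), 0.0);
    }
    // assumed factory signature: create(exec, size, inverse = false)
    auto fft = gko::matrix::Fft::create(exec, n);
    auto ifft = gko::matrix::Fft::create(exec, n, true);
    fft->apply(in.get(), out.get());   // out = DFT(in)
    ifft->apply(out.get(), in.get());  // round trip (normalization convention assumed)
}
```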

@upsj added the 1:ST:ready-for-review (This PR is ready for review) label on Feb 4, 2021
@upsj self-assigned this on Feb 4, 2021
@ginkgo-bot added the mod:all (This touches all Ginkgo modules.), reg:build (This is related to the build system.), reg:example (This is related to the examples.), reg:testing (This is related to testing.), and type:matrix-format (This is related to the Matrix formats) labels on Feb 4, 2021
codecov bot commented Feb 4, 2021

Codecov Report

Merging #701 (fb244f0) into develop (3bc2ce3) will decrease coverage by 0.04%.
The diff coverage is 91.84%.


@@             Coverage Diff             @@
##           develop     #701      +/-   ##
===========================================
- Coverage    94.29%   94.24%   -0.05%     
===========================================
  Files          423      429       +6     
  Lines        34659    35402     +743     
===========================================
+ Hits         32680    33364     +684     
- Misses        1979     2038      +59     
Impacted Files Coverage Δ
core/device_hooks/common_kernels.inc.cpp 0.00% <0.00%> (ø)
core/test/utils/assertions.hpp 68.42% <0.00%> (+0.27%) ⬆️
include/ginkgo/core/base/exception_helpers.hpp 90.90% <ø> (ø)
include/ginkgo/core/matrix/fft.hpp 55.00% <55.00%> (ø)
core/matrix/fft.cpp 74.50% <74.50%> (ø)
reference/test/matrix/fft_kernels.cpp 97.26% <97.26%> (ø)
core/base/allocator.hpp 100.00% <100.00%> (ø)
core/device_hooks/cuda_hooks.cpp 58.82% <100.00%> (+2.57%) ⬆️
core/device_hooks/hip_hooks.cpp 58.82% <100.00%> (+2.57%) ⬆️
core/test/base/allocator.cpp 100.00% <100.00%> (ø)
... and 13 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 3bc2ce3...fb244f0.

Inline review threads (resolved): cuda/matrix/fft_kernels.cu, core/matrix/fft.cpp
@Slaedr (Contributor) commented Feb 5, 2021

I just took a quick look for now. I think it might be useful to have an interface to FFT in Ginkgo, like your current wrapper for cuFFT on NVIDIA GPUs. Your reference/omp implementation also seems pretty neat; great job on that and on the example. However, while it's great that you wrote it, I'm not yet convinced we should add it to Ginkgo. Assuming it is nice to have an FFT interface in Ginkgo, should we instead consider adding an optional interface to FFTW3 or something similar for CPUs? My initial thought is that we should include our own FFT implementations for reference and omp only if we intend FFT to become a proper research topic.
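
For concreteness, an optional FFTW3-backed CPU path for a 1D complex-to-complex transform could look roughly like the sketch below; this is purely illustrative and not something this PR or Ginkgo ships:

```cpp
#include <fftw3.h>

#include <complex>
#include <vector>

// Illustrative only: in-place, un-normalized forward C2C FFT via FFTW3.
void fftw_forward(std::vector<std::complex<double>>& data)
{
    auto* ptr = reinterpret_cast<fftw_complex*>(data.data());
    fftw_plan plan = fftw_plan_dft_1d(static_cast<int>(data.size()), ptr, ptr,
                                      FFTW_FORWARD, FFTW_ESTIMATE);
    fftw_execute(plan);
    fftw_destroy_plan(plan);
}
```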

@upsj (Member, Author) commented Feb 5, 2021

I agree on the OpenMP part (we could probably replace that with FFTW or MKL's FFT if that comes at some point), though for reference, I would prefer that we have our own implementation without external dependencies. Also, it was a nice exercise for me :)

@thoasm (Member) left a comment

Nice integration and very neat example!

I have some minor comments (a missing header, a missing const, and maybe adding a template parameter for the dimensionality).
Currently, I don't see a lot of disadvantages to integrating this into Ginkgo, so I would be fine with it.

Inline review threads: core/matrix/fft_kernels.hpp (2), core/base/allocator.hpp, core/test/base/allocator.cpp, core/test/utils/assertions.hpp, reference/matrix/fft_kernels.cpp (3), reference/test/matrix/fft_kernels.cpp, include/ginkgo/core/matrix/fft.hpp
@upsj force-pushed the fft_linop branch 2 times, most recently from e3787e0 to 0addf0f on February 17, 2021
@fritzgoebel (Collaborator) left a comment

I agree that it would be nice to have FFT in Ginkgo.
Also, the Schrödinger example is really cool!

I didn't check the reference and omp implementations yet; for that I, too, have to take some time to refresh my FFT memory :)

Inline review threads: include/ginkgo/core/matrix/fft.hpp, cuda/matrix/fft_kernels.cu, reference/matrix/fft_kernels.cpp
@upsj (Member, Author) commented Feb 23, 2021

format!

@upsj force-pushed the fft_linop branch 2 times, most recently from b6f2f39 to e390e5a on February 26, 2021
@yhmtsai (Member) left a comment

Could you also give some references for this implementation?

Inline review threads: cmake/create_test.cmake, core/matrix/fft.cpp (2), core/test/utils/assertions.hpp
Inline review thread on the example's output loop:

```cpp
if (t - last_t > 1.0 / fps) {
    last_t = t;
    std::cout << t << std::endl;
    output_timestep(output, n, amplitude->get_const_values());
    // ...
}
```
Member:

Could you also do the same thing as the heat-equation example so that it can work on a GPU executor?

Member Author:

Due to the point-wise operations for the non-linear part of the equation, we cannot do this without implementing a custom kernel. I would like to avoid this for now, but my simplified kernel launch example from a while ago would make this pretty simple.

Member:

Can template_clone not handle it?
I see. Do you mean that the following modifications of frequency and amplitude are what require the whole program to run on the CPU, not this output_timestep?

Member Author:

Yes, exactly. We'd need to move the common kernel interface into the public interface and add some CMake magic to compile a separate file for all available backends, but then this would be possible as well.
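
For context, the point-wise non-linear flow being discussed amounts to an element-wise update of the form u <- exp(-i (V + |u|^2) dt) u. A host-side sketch (variable names and data layout are illustrative, not the example's actual code) might look like:

```cpp
#include <ginkgo/ginkgo.hpp>

#include <complex>

// Host-only sketch of the point-wise non-linear step of the splitting.
// 'amplitude' and 'potential' are assumed to live on a host executor.
void nonlinear_step(gko::matrix::Dense<std::complex<double>>* amplitude,
                    const gko::matrix::Dense<double>* potential, double dt)
{
    const auto n = amplitude->get_size()[0];
    for (gko::size_type i = 0; i < n; ++i) {
        const auto u = amplitude->at(i, 0);
        const auto phase = (potential->at(i, 0) + std::norm(u)) * dt;
        amplitude->at(i, 0) = std::polar(1.0, -phase) * u;
    }
}
```

Running this on a GPU executor would indeed require a custom kernel or the unified kernel interface from #733.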

Inline review threads: hip/CMakeLists.txt, reference/test/matrix/fft_kernels.cpp
sonarcloud bot commented Mar 18, 2021

upsj and others added 10 commits September 11, 2021 13:38
Found with MSVC's debug assertions
* Const-correctness
* Remove hand-generated FFT test inputs
* Limit power-of-two requirement to OMP/Reference

Co-authored-by: Thomas Grützmacher <[email protected]>
Co-authored-by: Yuhsiang Tsai <[email protected]>
Co-authored-by: Fritz Goebel <[email protected]>
@MarcelKoch (Member) left a comment

This is the first part of my review, I will finish it tomorrow.

So far it looks good to me; it's an interesting choice of a linear operator and helps to show the breadth of the LinOp concept.

Smaller notes on the code are below, but I also have some more general remarks.

The Fft classes export a value_type which is fixed to complex<double>. Since the FFT itself is agnostic to the value type it's applied to, fixing the value type does not seem useful. Is there a requirement on this class to export the type? If not, it should be removed; otherwise, we could perhaps use a different type to signal that the LinOp accepts any complex type.

I think the documentation of the FFT linop should make it clear that this is more of a case study to show how non-matrix operators could work, and not a target of fine-tuning and optimizations from our side.

Some parts of the code support real-to-complex applies (or vice versa). Either we allow using the FFT for these kinds of inputs in general, or we remove the additional cases. I would opt for removing them at the moment.

There does not seem to be a test for writing out an FFT.

Inline review threads: core/base/allocator.hpp (3), core/matrix/fft.cpp (2), examples/schroedinger-splitting/doc/intro.dox, hip/matrix/fft_kernels.hip.cpp (2), hip/test/matrix/fft_kernels.hip.cpp
@MarcelKoch (Member) left a comment

In the second part of the review, I've added only some smaller remarks. I'm not familiar with the FFT algorithm itself, so I've not deeply analyzed the reference/omp implementation.

Inline review threads: include/ginkgo/core/base/math.hpp, include/ginkgo/core/matrix/fft.hpp (2), omp/matrix/fft_kernels.cpp (2), reference/matrix/fft_kernels.cpp (3), reference/test/matrix/fft_kernels.cpp (2)
@upsj (Member, Author) commented Sep 15, 2021

Thanks for the review! On your questions:

  • There are some parts of the code that assume we have a value_type for each LinOp type, e.g. GKO_ASSERT_MTX_NEAR
  • Except for the OpenMP implementation (for which Aditya already suggested using FFTW at some point), the vendor backends should provide decent to excellent performance, so I would be hesitant to add such a comment to the class.
  • Yes, I can remove the R2C and C2R function wrappers.
  • The FFT write functions are being tested in the reference tests.

@MarcelKoch (Member) commented:

Regarding the value_type: since you can't apply GKO_ASSERT_MTX_NEAR to the Fft operator anyway, is there a more relevant example where the value type of the Fft is required?
One place I've found that requires it is the write to an output stream, but I don't think that should be the only reason to keep it.

upsj and others added 3 commits September 16, 2021 11:35
* documentation improvements
* remove r2c and c2r hip/cuFFT wrappers
* simplify allocator constructors

Co-authored-by: Marcel Koch <[email protected]>
@upsj (Member, Author) commented Sep 16, 2021

Yes, thank you, write was also the place I thought of originally. I think it might be a bit surprising to have a write member function that doesn't work together with gko::write.
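
For illustration, generating the dense DFT matrix through the write path being discussed might look like the following sketch (assuming the Fft class supports gko::write as described above; the factory signature is also an assumption):

```cpp
#include <ginkgo/ginkgo.hpp>

#include <iostream>

int main()
{
    auto exec = gko::ReferenceExecutor::create();
    // assumed factory signature: a 4-point forward DFT operator
    auto fft = gko::matrix::Fft::create(exec, 4);
    // writes the operator's entries as matrix data (MatrixMarket format)
    gko::write(std::cout, fft.get());
}
```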

@upsj (Member, Author) commented Sep 17, 2021

I made the hipFFT dependency optional, so the HIP kernels compile to GKO_NOT_IMPLEMENTED if it's missing. I tested that everything works by deleting /opt/rocm/hipfft in a container.
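
The fallback described above roughly follows the usual optional-dependency pattern; a sketch of it is shown below (the guard macro name GKO_HAVE_HIPFFT is my assumption, not necessarily what the PR uses):

```cpp
#include <ginkgo/ginkgo.hpp>

// Sketch: if hipFFT was not found at configure time, the HIP FFT kernel
// reduces to a stub that throws gko::NotImplemented via GKO_NOT_IMPLEMENTED.
void fft_apply_hip(/* device buffers, plan cache, ... */)
{
#if GKO_HAVE_HIPFFT
    // create/reuse a hipFFT plan and run hipfftExecZ2Z(plan, in, out, HIPFFT_FORWARD)
#else
    GKO_NOT_IMPLEMENTED;
#endif
}
```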

Ginkgo development automation moved this from Awaiting Review to Awaiting Merge Sep 17, 2021
sonarcloud bot commented Sep 17, 2021

SonarCloud Quality Gate: failed

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 68 (rating A)

Coverage: 86.8%
Duplication: 14.4%

@upsj upsj merged commit 2cff4a8 into develop Sep 17, 2021
@upsj upsj deleted the fft_linop branch September 17, 2021 19:39
tcojean added a commit that referenced this pull request Nov 12, 2022
Advertise release 1.5.0 and last changes

+ Add changelog,
+ Update third party libraries
+ A small fix to a CMake file

See PR: #1195

The Ginkgo team is proud to announce the new Ginkgo minor release 1.5.0. This release brings many important new features such as:
- MPI-based multi-node support for all matrix formats and most solvers,
- full DPC++/SYCL support,
- functionality and interface for GPU-resident sparse direct solvers,
- an interface for wrapping solvers with scaling and reordering applied,
- a new algebraic Multigrid solver/preconditioner,
- improved mixed-precision support,
- support for device matrix assembly,

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.0+
  + DPC++ module: Intel OneAPI 2021.3 with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: GCC 5.5+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


Algorithm and important feature additions:
+ Add MPI-based multi-node support for all matrix formats and solvers (except GMRES and IDR). ([#676](#676), [#908](#908), [#909](#909), [#932](#932), [#951](#951), [#961](#961), [#971](#971), [#976](#976), [#985](#985), [#1007](#1007), [#1030](#1030), [#1054](#1054), [#1100](#1100), [#1148](#1148))
+ Port the remaining algorithms (preconditioners like ISAI, Jacobi, Multigrid, ParILU(T) and ParIC(T)) to DPC++/SYCL, update to SYCL 2020, and improve support and performance ([#896](#896), [#924](#924), [#928](#928), [#929](#929), [#933](#933), [#943](#943), [#960](#960), [#1057](#1057), [#1110](#1110),  [#1142](#1142))
+ Add a Sparse Direct interface supporting GPU-resident numerical LU factorization, symbolic Cholesky factorization, improved triangular solvers, and more ([#957](#957), [#1058](#1058), [#1072](#1072), [#1082](#1082))
+ Add a ScaleReordered interface that can wrap solvers and automatically apply reorderings and scalings ([#1059](#1059))
+ Add a Multigrid solver and improve the aggregation based PGM coarsening scheme ([#542](#542), [#913](#913), [#980](#980), [#982](#982),  [#986](#986))
+ Add infrastructure for unified, lambda-based, backend agnostic, kernels and utilize it for some simple kernels ([#833](#833), [#910](#910), [#926](#926))
+ Merge different CUDA, HIP, DPC++ and OpenMP tests under a common interface ([#904](#904), [#973](#973), [#1044](#1044), [#1117](#1117))
+ Add a device_matrix_data type for device-side matrix assembly ([#886](#886), [#963](#963), [#965](#965))
+ Add support for mixed real/complex BLAS operations ([#864](#864))
+ Add an FFT LinOp for all backends but DPC++/SYCL ([#701](#701))
+ Add FBCSR support for NVIDIA and AMD GPUs and CPUs with OpenMP ([#775](#775))
+ Add CSR scaling ([#848](#848))
+ Add array::const_view and equivalent to create constant matrices from non-const data ([#890](#890))
+ Add a RowGatherer LinOp supporting mixed precision to gather dense matrix rows ([#901](#901))
+ Add mixed precision SparsityCsr SpMV support ([#970](#970))
+ Allow creating CSR submatrix including from (possibly discontinuous) index sets ([#885](#885), [#964](#964))
+ Add a scaled identity addition (M <- aI + bM) feature interface and impls for Csr and Dense ([#942](#942))


Deprecations and important changes:
+ Deprecate AmgxPgm in favor of the new Pgm name. ([#1149](#1149)).
+ Deprecate specialized residual norm classes in favor of a common `ResidualNorm` class ([#1101](#1101))
+ Deprecate CamelCase non-polymorphic types in favor of snake_case versions (like array, machine_topology, uninitialized_array, index_set) ([#1031](#1031), [#1052](#1052))
+ Bug fix: restrict gko::share to rvalue references (*possible interface break*) ([#1020](#1020))
+ Bug fix: when using cuSPARSE's triangular solvers, specifying the factory parameter `num_rhs` is now required when solving for more than one right-hand side, otherwise an exception is thrown ([#1184](#1184)).
+ Drop official support for old CUDA < 9.2 ([#887](#887))


Improved performance additions:
+ Reuse tmp storage in reductions in solvers and add a mutable workspace to all solvers ([#1013](#1013), [#1028](#1028))
+ Add HIP unsafe atomic option for AMD ([#1091](#1091))
+ Prefer vendor implementations for Dense dot, conj_dot and norm2 when available ([#967](#967)).
+ Tuned OpenMP SellP, COO, and ELL SpMV kernels for a small number of RHS ([#809](#809))


Fixes:
+ Fix various compilation warnings ([#1076](#1076), [#1183](#1183), [#1189](#1189))
+ Fix issues with hwloc-related tests ([#1074](#1074))
+ Fix include headers for GCC 12 ([#1071](#1071))
+ Fix for simple-solver-logging example ([#1066](#1066))
+ Fix for potential memory leak in Logger ([#1056](#1056))
+ Fix logging of mixin classes ([#1037](#1037))
+ Improve value semantics for LinOp types, like moved-from state in cross-executor copy/clones ([#753](#753))
+ Fix some matrix SpMV and conversion corner cases ([#905](#905), [#978](#978))
+ Fix uninitialized data ([#958](#958))
+ Fix CUDA version requirement for cusparseSpSM ([#953](#953))
+ Fix several issues within bash-script ([#1016](#1016))
+ Fixes for `NVHPC` compiler support ([#1194](#1194))


Other additions:
+ Simplify and properly name GMRES kernels ([#861](#861))
+ Improve pkg-config support for non-CMake libraries ([#923](#923), [#1109](#1109))
+ Improve gdb pretty printer ([#987](#987), [#1114](#1114))
+ Add a logger highlighting inefficient allocation and copy patterns ([#1035](#1035))
+ Improved and optimized test random matrix generation ([#954](#954), [#1032](#1032))
+ Better CSR strategy defaults ([#969](#969))
+ Add `move_from` to `PolymorphicObject` ([#997](#997))
+ Remove unnecessary device_guard usage ([#956](#956))
+ Improvements to the generic accessor for mixed-precision ([#727](#727))
+ Add a naive lower triangular solver implementation for CUDA ([#764](#764))
+ Add support for int64 indices from CUDA 11 onward with SpMV and SpGEMM ([#897](#897))
+ Add a L1 norm implementation ([#900](#900))
+ Add reduce_add for arrays ([#831](#831))
+ Add utility to simplify Dense View creation from an existing Dense vector ([#1136](#1136)).
+ Add a custom transpose implementation for Fbcsr and Csr transpose for unsupported vendor types ([#1123](#1123))
+ Make IDR random initialization deterministic ([#1116](#1116))
+ Move the algorithm choice for triangular solvers from Csr::strategy_type to a factory parameter ([#1088](#1088))
+ Update CUDA archCoresPerSM ([#1175](#1175))
+ Add kernels for Csr sparsity pattern lookup ([#994](#994))
+ Differentiate between structural and numerical zeros in Ell/Sellp ([#1027](#1027))
+ Add a binary IO format for matrix data ([#984](#984))
+ Add a tuple zip_iterator implementation ([#966](#966))
+ Simplify kernel stubs and declarations ([#888](#888))
+ Simplify GKO_REGISTER_OPERATION with lambdas ([#859](#859))
+ Simplify copy to device in tests and examples ([#863](#863))
+ More verbose output to array assertions ([#858](#858))
+ Allow parallel compilation for Jacobi kernels ([#871](#871))
+ Change clang-format pointer alignment to left ([#872](#872))
+ Various improvements and fixes to the benchmarking framework ([#750](#750), [#759](#759), [#870](#870), [#911](#911), [#1033](#1033), [#1137](#1137))
+ Various documentation improvements ([#892](#892), [#921](#921), [#950](#950), [#977](#977), [#1021](#1021), [#1068](#1068), [#1069](#1069), [#1080](#1080), [#1081](#1081), [#1108](#1108), [#1153](#1153), [#1154](#1154))
+ Various CI improvements ([#868](#868), [#874](#874), [#884](#884), [#889](#889), [#899](#899), [#903](#903),  [#922](#922), [#925](#925), [#930](#930), [#936](#936), [#937](#937), [#958](#958), [#882](#882), [#1011](#1011), [#1015](#1015), [#989](#989), [#1039](#1039), [#1042](#1042), [#1067](#1067), [#1073](#1073), [#1075](#1075), [#1083](#1083), [#1084](#1084), [#1085](#1085), [#1139](#1139), [#1178](#1178), [#1187](#1187))
Labels
1:ST:ready-for-review (This PR is ready for review), 1:ST:run-full-test, mod:all (This touches all Ginkgo modules.), reg:build (This is related to the build system.), reg:example (This is related to the examples.), reg:testing (This is related to testing.), type:matrix-format (This is related to the Matrix formats)
Projects
Ginkgo development
Awaiting Merge
8 participants