Use vendor implementations for some Dense kernels in Ginkgo #967

Merged: 7 commits into develop from dense-strategy on Feb 14, 2022

Conversation

@pratikvn (Member) commented on Feb 8, 2022

This PR adds a strategy enum to allow runtime switching between vendor and Ginkgo-provided kernels.

+ The strategy is not kernel-specific, but Dense-object-specific.
+ create_with_config_of and create_with_type_of set the same strategy as their parent.
+ Strategies of objects created within Ginkgo cannot be controlled; they are set to the default, which currently uses Ginkgo kernels.

This is still a WIP. Only dot has been implemented as a prototype. You can see how it works in the cuda/hip dense kernel tests.

Update: the implementation switch is now inside the kernel. The single-vector case uses the vendor implementation (where available), while the multi-vector case always uses Ginkgo's implementation. Only the following routines are replaced (see the dispatch sketch after this list):

  1. Dot
  2. Conjugate dot
  3. Norm2
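
To make the kernel-side switch concrete, here is a minimal sketch of what such a dispatch can look like; `dense_dot_dispatch` and `generic_dot_reduction` are illustrative names, not Ginkgo's actual internals, and only the double-precision CUDA path is shown:

```cpp
#include <cublas_v2.h>

// Hypothetical stand-in for Ginkgo's generic reduction kernel, which
// computes one dot product per column of a multi-vector.
void generic_dot_reduction(int num_rows, int num_rhs, const double* x,
                           int x_stride, const double* y, int y_stride,
                           double* result);

// Kernel-side dispatch: a single right-hand side goes to the vendor BLAS,
// everything else stays on the in-house generic reduction.
void dense_dot_dispatch(cublasHandle_t handle, int num_rows, int num_rhs,
                        const double* x, int x_stride, const double* y,
                        int y_stride, double* result)
{
    if (num_rhs == 1) {
        // Vendor path: cuBLAS dot for one vector; the matrix stride acts as
        // the increment between consecutive elements of the column.
        cublasDdot(handle, num_rows, x, x_stride, y, y_stride, result);
    } else {
        // Ginkgo path: one reduction per column, batched in a single launch.
        generic_dot_reduction(num_rows, num_rhs, x, x_stride, y, y_stride,
                              result);
    }
}
```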

@pratikvn self-assigned this on Feb 8, 2022
@ginkgo-bot added labels mod:all, reg:testing, and type:matrix-format on Feb 8, 2022
@pratikvn added labels 1:ST:do-not-merge and 1:ST:need-feedback on Feb 8, 2022
@upsj (Member) commented on Feb 8, 2022

What is the purpose of this strategy approach? From the user side, I mainly care about performance and correctness. Unlike Csr, neither of those is really data-dependent here, since the only data with a performance impact are the matrix dimensions/stride.
If we want to use the vendor libraries for #rhs = 1, why don't we do it on the kernel side?
If this is about benchmarking, we can add hipBLAS/cuBLAS wrappers to the BLAS benchmark like we did for Csr.

@Slaedr (Contributor) requested changes on Feb 8, 2022


The general idea makes sense. However, I think we, and users, may want to use different strategies (I guess they are really BLAS backends or something) for different operations. See my comments below.

(Review comments on cuda/matrix/dense_kernels.cu, hip/matrix/dense_kernels.hip.cpp, include/ginkgo/core/matrix/dense.hpp, and reference/matrix/dense_kernels.cpp, all resolved.)
@Slaedr (Contributor) commented on Feb 8, 2022

@upsj You make a good point; we could go that way too. But if the idea is to give the user a choice of BLAS backend, I'm for that as well. From time to time, we or the vendor might introduce improvements that change the calculus for the user, and we may not be able, or may not want, to react to changes on the vendor side quickly enough.

@pratikvn changed the title from "Strategy to switch between Dense kernels in Ginkgo" to "Use vendor implementations for some Dense kernels in Ginkgo" on Feb 8, 2022
@pratikvn added label 1:ST:WIP and removed labels 1:ST:do-not-merge and 1:ST:need-feedback on Feb 8, 2022
@tcojean added this to the Ginkgo 1.5.0 milestone on Feb 10, 2022
@pratikvn requested review from Slaedr and a team on February 10, 2022
@pratikvn added label 1:ST:ready-for-review and removed label 1:ST:WIP on Feb 10, 2022
@upsj (Member) left a comment

LGTM! Can you add your tuning results for the generic reductions here, as well?

(Review comment on hip/matrix/dense_kernels.hip.cpp, resolved.)
@pratikvn (Member, Author) commented

I only ran the tuning with a single vector and the dot product in mind. I did not verify how that affects the other kernels using this generic reduction, so I am a bit wary of changing the oversubscription directly in the code as of now. I will do a more comprehensive run and update that in another PR.
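
For context, the oversubscription here is the factor by which a reduction launches more blocks than one wave per multiprocessor; a minimal sketch of how such a factor typically enters a launch configuration (names are illustrative, not Ginkgo's actual code):

```cpp
#include <algorithm>

// Illustrative only: how an oversubscription factor typically enters a
// reduction kernel's launch configuration.
int reduction_grid_size(long long num_elems, int block_size, int num_sms,
                        int oversubscription)
{
    // Enough blocks to cover the input once...
    long long covering = (num_elems + block_size - 1) / block_size;
    // ...capped so at most `oversubscription` blocks are launched per SM;
    // each block then loops over a grid-stride range and reduces locally
    // before the final cross-block reduction step.
    long long cap = static_cast<long long>(num_sms) * oversubscription;
    return static_cast<int>(std::min(covering, cap));
}
```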

(Review comment on cuda/matrix/dense_kernels.cu, resolved.)
@yhmtsai (Member) left a comment

LGTM. Could you also add a norm2 test for one vector? There are 1-vector and 20-vector tests for dot and conj_dot, but only a 20-vector test for norm2. A sketch of what such a test could look like follows.
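
A single-vector norm2 test mirroring the neighboring dot tests might look like this sketch; the fixture members (`x`, `dx`, `ref`, `exec`), the `set_up_vector_data` helper, and the `NormVector` alias for the real-valued Dense type are assumed from the surrounding test file:

```cpp
TEST_F(Dense, SingleVectorComputeNorm2IsEquivalentToRef)
{
    set_up_vector_data(1);  // one column, i.e. a single vector
    auto norm_size = gko::dim<2>{1, x->get_size()[1]};
    auto norm_expected = NormVector::create(ref, norm_size);
    auto dnorm = NormVector::create(exec, norm_size);

    // Reference result on the host, device result on the executor under test.
    x->compute_norm2(norm_expected.get());
    dx->compute_norm2(dnorm.get());

    GKO_ASSERT_MTX_NEAR(norm_expected, dnorm, 1e-14);
}
```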

(Review comment on dpcpp/matrix/dense_kernels.dp.cpp, resolved.)
@pratikvn added labels 1:ST:ready-to-merge and 1:ST:run-full-test and removed label 1:ST:ready-for-review on Feb 11, 2022
@sonarcloud (bot) commented on Feb 14, 2022

SonarCloud Quality Gate failed.
0 Bugs, 0 Vulnerabilities, 0 Security Hotspots, 10 Code Smells
Coverage: 42.1%, Duplication: 44.5%

@codecov (bot) commented on Feb 14, 2022

Codecov Report

Merging #967 (3ea6e0c) into develop (5efed26) will increase coverage by 0.00%.
The diff coverage is 92.50%.


@@           Coverage Diff            @@
##           develop     #967   +/-   ##
========================================
  Coverage    93.38%   93.39%           
========================================
  Files          480      480           
  Lines        39577    39598   +21     
========================================
+ Hits         36959    36981   +22     
+ Misses        2618     2617    -1     
Impacted Files Coverage Δ
core/device_hooks/common_kernels.inc.cpp 0.00% <0.00%> (ø)
core/matrix/dense.cpp 94.54% <100.00%> (ø)
omp/matrix/dense_kernels.cpp 79.01% <100.00%> (+1.67%) ⬆️
omp/test/matrix/dense_kernels.cpp 99.69% <100.00%> (+<0.01%) ⬆️
reference/matrix/dense_kernels.cpp 89.06% <100.00%> (+0.17%) ⬆️
reference/base/index_set_kernels.cpp 100.00% <0.00%> (ø)
devices/machine_topology.cpp 82.71% <0.00%> (+4.93%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 5efed26...3ea6e0c.

@pratikvn added this to Awaiting Merge in Ginkgo development on Feb 14, 2022
@pratikvn merged commit b34ae57 into develop on Feb 14, 2022
@pratikvn deleted the dense-strategy branch on February 14, 2022 07:08
tcojean added a commit that referenced this pull request Nov 12, 2022
Advertise release 1.5.0 and last changes

+ Add changelog
+ Update third-party libraries
+ A small fix to a CMake file

See PR: #1195

The Ginkgo team is proud to announce the new Ginkgo minor release 1.5.0. This release brings many important new features such as:
- MPI-based multi-node support for all matrix formats and most solvers,
- full DPC++/SYCL support,
- functionality and interface for GPU-resident sparse direct solvers,
- an interface for wrapping solvers with scaling and reordering applied,
- a new algebraic Multigrid solver/preconditioner,
- improved mixed-precision support,
- support for device matrix assembly,

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues), and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using [GitHub Discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.0+
  + DPC++ module: Intel OneAPI 2021.3 with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: GCC 5.5+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


Algorithm and important feature additions:
+ Add MPI-based multi-node support for all matrix formats and solvers (except GMRES and IDR) ([#676](#676), [#908](#908), [#909](#909), [#932](#932), [#951](#951), [#961](#961), [#971](#971), [#976](#976), [#985](#985), [#1007](#1007), [#1030](#1030), [#1054](#1054), [#1100](#1100), [#1148](#1148))
+ Port the remaining algorithms (preconditioners like ISAI, Jacobi, Multigrid, ParILU(T) and ParIC(T)) to DPC++/SYCL, update to SYCL 2020, and improve support and performance ([#896](#896), [#924](#924), [#928](#928), [#929](#929), [#933](#933), [#943](#943), [#960](#960), [#1057](#1057), [#1110](#1110), [#1142](#1142))
+ Add a Sparse Direct interface supporting GPU-resident numerical LU factorization, symbolic Cholesky factorization, improved triangular solvers, and more ([#957](#957), [#1058](#1058), [#1072](#1072), [#1082](#1082))
+ Add a ScaleReordered interface that can wrap solvers and automatically apply reorderings and scalings ([#1059](#1059))
+ Add a Multigrid solver and improve the aggregation-based PGM coarsening scheme ([#542](#542), [#913](#913), [#980](#980), [#982](#982), [#986](#986))
+ Add infrastructure for unified, lambda-based, backend-agnostic kernels and utilize it for some simple kernels ([#833](#833), [#910](#910), [#926](#926))
+ Merge different CUDA, HIP, DPC++ and OpenMP tests under a common interface ([#904](#904), [#973](#973), [#1044](#1044), [#1117](#1117))
+ Add a device_matrix_data type for device-side matrix assembly ([#886](#886), [#963](#963), [#965](#965))
+ Add support for mixed real/complex BLAS operations ([#864](#864))
+ Add a FFT LinOp for all but DPC++/SYCL ([#701](#701))
+ Add FBCSR support for NVIDIA and AMD GPUs and CPUs with OpenMP ([#775](#775))
+ Add CSR scaling ([#848](#848))
+ Add array::const_view and equivalent to create constant matrices from non-const data ([#890](#890))
+ Add a RowGatherer LinOp supporting mixed precision to gather dense matrix rows ([#901](#901))
+ Add mixed precision SparsityCsr SpMV support ([#970](#970))
+ Allow creating CSR submatrix including from (possibly discontinuous) index sets ([#885](#885), [#964](#964))
+ Add a scaled identity addition (M <- aI + bM) feature interface and impls for Csr and Dense ([#942](#942)); a usage sketch follows this list
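
For the scaled identity addition just listed, a short usage sketch, assuming the interface from [#942](#942) is a member `add_scaled_identity(alpha, beta)` computing M <- alpha*I + beta*M with 1x1 Dense scalars:

```cpp
#include <ginkgo/ginkgo.hpp>

int main()
{
    auto exec = gko::ReferenceExecutor::create();
    // A 2x2 dense matrix M and 1x1 Dense scalars alpha and beta.
    auto mtx = gko::initialize<gko::matrix::Dense<double>>(
        {{1.0, 2.0}, {3.0, 4.0}}, exec);
    auto alpha = gko::initialize<gko::matrix::Dense<double>>({2.0}, exec);
    auto beta = gko::initialize<gko::matrix::Dense<double>>({0.5}, exec);
    // M <- alpha * I + beta * M, i.e. {{2.5, 1.0}, {1.5, 4.0}}
    mtx->add_scaled_identity(alpha.get(), beta.get());
}
```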


Deprecations and important changes:
+ Deprecate AmgxPgm in favor of the new Pgm name. ([#1149](#1149)).
+ Deprecate specialized residual norm classes in favor of a common `ResidualNorm` class ([#1101](#1101))
+ Deprecate CamelCase non-polymorphic types in favor of snake_case versions (like array, machine_topology, uninitialized_array, index_set) ([#1031](#1031), [#1052](#1052))
+ Bug fix: restrict gko::share to rvalue references (*possible interface break*) ([#1020](#1020))
+ Bug fix: when using cuSPARSE's triangular solvers, specifying the factory parameter `num_rhs` is now required when solving for more than one right-hand side, otherwise an exception is thrown ([#1184](#1184)).
+ Drop official support for old CUDA < 9.2 ([#887](#887))


Improved performance additions:
+ Reuse tmp storage in reductions in solvers and add a mutable workspace to all solvers ([#1013](#1013), [#1028](#1028))
+ Add HIP unsafe atomic option for AMD ([#1091](#1091))
+ Prefer vendor implementations for Dense dot, conj_dot and norm2 when available ([#967](#967)).
+ Tuned OpenMP SellP, COO, and ELL SpMV kernels for a small number of RHS ([#809](#809))


Fixes:
+ Fix various compilation warnings ([#1076](#1076), [#1183](#1183), [#1189](#1189))
+ Fix issues with hwloc-related tests ([#1074](#1074))
+ Fix include headers for GCC 12 ([#1071](#1071))
+ Fix for simple-solver-logging example ([#1066](#1066))
+ Fix for potential memory leak in Logger ([#1056](#1056))
+ Fix logging of mixin classes ([#1037](#1037))
+ Improve value semantics for LinOp types, like moved-from state in cross-executor copy/clones ([#753](#753))
+ Fix some matrix SpMV and conversion corner cases ([#905](#905), [#978](#978))
+ Fix uninitialized data ([#958](#958))
+ Fix CUDA version requirement for cusparseSpSM ([#953](#953))
+ Fix several issues within bash-script ([#1016](#1016))
+ Fixes for `NVHPC` compiler support ([#1194](#1194))


Other additions:
+ Simplify and properly name GMRES kernels ([#861](#861))
+ Improve pkg-config support for non-CMake libraries ([#923](#923), [#1109](#1109))
+ Improve gdb pretty printer ([#987](#987), [#1114](#1114))
+ Add a logger highlighting inefficient allocation and copy patterns ([#1035](#1035))
+ Improved and optimized test random matrix generation ([#954](#954), [#1032](#1032))
+ Better CSR strategy defaults ([#969](#969))
+ Add `move_from` to `PolymorphicObject` ([#997](#997))
+ Remove unnecessary device_guard usage ([#956](#956))
+ Improvements to the generic accessor for mixed-precision ([#727](#727))
+ Add a naive lower triangular solver implementation for CUDA ([#764](#764))
+ Add support for int64 indices from CUDA 11 onward with SpMV and SpGEMM ([#897](#897))
+ Add a L1 norm implementation ([#900](#900))
+ Add reduce_add for arrays ([#831](#831))
+ Add utility to simplify Dense View creation from an existing Dense vector ([#1136](#1136)).
+ Add a custom transpose implementation for Fbcsr and Csr transpose for unsupported vendor types ([#1123](#1123))
+ Make IDR random initialization deterministic ([#1116](#1116))
+ Move the algorithm choice for triangular solvers from Csr::strategy_type to a factory parameter ([#1088](#1088))
+ Update CUDA archCoresPerSM ([#1175](#1175))
+ Add kernels for Csr sparsity pattern lookup ([#994](#994))
+ Differentiate between structural and numerical zeros in Ell/Sellp ([#1027](#1027))
+ Add a binary IO format for matrix data ([#984](#984))
+ Add a tuple zip_iterator implementation ([#966](#966))
+ Simplify kernel stubs and declarations ([#888](#888))
+ Simplify GKO_REGISTER_OPERATION with lambdas ([#859](#859))
+ Simplify copy to device in tests and examples ([#863](#863))
+ More verbose output to array assertions ([#858](#858))
+ Allow parallel compilation for Jacobi kernels ([#871](#871))
+ Change clang-format pointer alignment to left ([#872](#872))
+ Various improvements and fixes to the benchmarking framework ([#750](#750), [#759](#759), [#870](#870), [#911](#911), [#1033](#1033), [#1137](#1137))
+ Various documentation improvements ([#892](#892), [#921](#921), [#950](#950), [#977](#977), [#1021](#1021), [#1068](#1068), [#1069](#1069), [#1080](#1080), [#1081](#1081), [#1108](#1108), [#1153](#1153), [#1154](#1154))
+ Various CI improvements ([#868](#868), [#874](#874), [#884](#884), [#889](#889), [#899](#899), [#903](#903),  [#922](#922), [#925](#925), [#930](#930), [#936](#936), [#937](#937), [#958](#958), [#882](#882), [#1011](#1011), [#1015](#1015), [#989](#989), [#1039](#1039), [#1042](#1042), [#1067](#1067), [#1073](#1073), [#1075](#1075), [#1083](#1083), [#1084](#1084), [#1085](#1085), [#1139](#1139), [#1178](#1178), [#1187](#1187))
Labels
1:ST:ready-to-merge, 1:ST:run-full-test, is:affects-performance, mod:all, reg:testing, type:matrix-format
Projects
Ginkgo development (Awaiting Merge)

Development
Successfully merging this pull request may close these issues: none yet.

6 participants