
Fbcsr kernels for Cuda and OpenMP #775

Merged

48 commits merged into develop from fbcsr-cuda-omp on Oct 26, 2021

Conversation

@Slaedr (Contributor) commented May 25, 2021

Implementations of several kernels for the Fbcsr matrix type for the Cuda and OpenMP backends.

Synthesizer lists (compile-time lists) are used for specifying the list of block sizes to compile for, where applicable. Also, the precision dispatch capability is now used in core.

For now, several kernels for the Cuda backend just call a Cusparse wrapper.
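The compile-time block-size lists mentioned above can be sketched roughly as follows. This is a minimal, hypothetical C++17 sketch using a fold expression; `block_size_list`, `dispatch`, `kernel_for_block_size`, and the concrete sizes are illustrative names, not Ginkgo's actual synthesizer API:

```cpp
#include <stdexcept>

// Hypothetical compile-time list of supported block sizes.
template <int... Sizes>
struct block_size_list {};

using supported_block_sizes = block_size_list<2, 3, 4, 7>;

// Kernel templated on the block size, so the compiler can fully
// unroll loops over the dense blocks.
template <int BlockSize>
int kernel_for_block_size()
{
    return BlockSize * BlockSize;  // stand-in for the real kernel work
}

// Walk the list at compile time and call the matching instantiation,
// throwing if the run-time block size was not precompiled.
template <int... Sizes>
int dispatch(int bs, block_size_list<Sizes...>)
{
    int result = -1;
    const bool found =
        ((bs == Sizes ? (result = kernel_for_block_size<Sizes>(), true)
                      : false) ||
         ...);
    if (!found) {
        throw std::runtime_error("unsupported block size");
    }
    return result;
}
```

A generic fallback kernel, as suggested in the review below, would replace the `throw` branch with a call to a non-templated implementation.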

@ginkgo-bot added the labels mod:core, mod:cuda, mod:hip, mod:openmp, mod:reference, reg:build, reg:testing, and type:matrix-format on May 25, 2021
@Slaedr added the 1:ST:WIP (work in progress) label on May 25, 2021
codecov bot commented Jun 23, 2021

Codecov Report

Merging #775 (3fa1b4e) into develop (2901950) will increase coverage by 0.07%.
The diff coverage is 96.37%.


@@             Coverage Diff             @@
##           develop     #775      +/-   ##
===========================================
+ Coverage    94.73%   94.80%   +0.07%     
===========================================
  Files          434      436       +2     
  Lines        35708    36008     +300     
===========================================
+ Hits         33827    34137     +310     
+ Misses        1881     1871      -10     
Impacted Files Coverage Δ
core/base/utils.hpp 100.00% <ø> (ø)
omp/base/kernel_launch.hpp 84.61% <ø> (ø)
omp/base/kernel_launch_reduction.hpp 96.62% <ø> (ø)
core/test/utils/fb_matrix_generator.hpp 90.32% <90.32%> (ø)
reference/matrix/fbcsr_kernels.cpp 98.42% <90.90%> (+1.52%) ⬆️
core/test/utils/fb_matrix_generator_test.cpp 92.85% <92.85%> (ø)
omp/matrix/fbcsr_kernels.cpp 93.06% <96.55%> (+93.06%) ⬆️
core/matrix/fbcsr.cpp 97.25% <100.00%> (+0.01%) ⬆️
omp/test/matrix/fbcsr_kernels.cpp 100.00% <100.00%> (ø)
reference/test/matrix/fbcsr_kernels.cpp 100.00% <100.00%> (ø)
... and 2 more

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 2901950...3fa1b4e. Read the comment docs.

@Slaedr added the 1:ST:ready-for-review label and removed the 1:ST:WIP label on Jun 26, 2021
sonarcloud bot commented Jun 26, 2021

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (A)
Vulnerabilities: 0 (A)
Security Hotspots: 0 (A)
Code Smells: 18 (A)
Coverage: 9.0%
Duplication: 7.1%

@Slaedr requested a review from a team on July 14, 2021
sonarcloud bot commented Jul 27, 2021

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (A)
Vulnerabilities: 0 (A)
Security Hotspots: 0 (A)
Code Smells: 18 (A)
Coverage: 9.0%
Duplication: 7.1%

@MarcelKoch (Member) left a comment

LGTM in general, I have some minor suggestions below.

Also, I have some questions/remarks regarding the precompiled kernels for special block sizes:

  1. Why exactly these sizes? I guess because they were used before, but I think now would be a good chance to see if they still make sense or if others/more should be used.
  2. Perhaps a generic kernel could be added as a fallback if the current block size is not part of the precompiled ones.

(I have also added these remarks in a comment somewhere.)

Review threads (resolved) on: omp/matrix/fbcsr_kernels.cpp, omp/test/matrix/fbcsr_kernels.cpp, reference/test/matrix/fbcsr_kernels.cpp, core/matrix/fbcsr.cpp, cuda/matrix/fbcsr_kernels.cu, common/matrix/fbcsr_kernels.hpp.inc, cuda/base/cusparse_block_bindings.hpp
@Slaedr self-assigned this on Jul 30, 2021
@yhmtsai (Member) left a comment

I think core/test/utils/fb_matrix... should be under core/test/matrix/fbcsr_matrix...? Should cusparse_block_bindings.hpp be a separate file from cusparse_bindings.hpp?

Review threads (resolved) on: common/matrix/fbcsr_kernels.hpp.inc
origblocks[sw_id_in_threadblock * mat_blk_sz_2 + i] =
values[ibz * mat_blk_sz_2 + i];
}
subwarp_grp.sync();
Member:

Should it be thread_block.sync()?
If the communication is only within the subwarp, maybe use warp shuffles to get the data?
(It also depends on how many elements there are per subwarp.)

Contributor Author:

The communication is only in the warp. Maybe I could use shuffles, but it's an in-place transpose so it would need some work, I guess. This is probably not that performance-critical, so I'll come back to this later if need be.
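For reference, the index math of the per-block in-place transpose under discussion can be sketched sequentially on the CPU as follows. This is a simplified sketch, not the actual kernel; `transpose_block_inplace` is a hypothetical name. On the GPU, each subwarp thread would own a strided subset of the entries, with a subwarp sync between the read and write phases:

```cpp
#include <utility>
#include <vector>

// Transpose one bs x bs block, stored row-major starting at
// block_start, in place by swapping entry (r, c) with entry (c, r).
void transpose_block_inplace(std::vector<double>& vals, int block_start,
                             int bs)
{
    for (int r = 0; r < bs; ++r) {
        // Only visit the strict upper triangle so each pair is
        // swapped exactly once; the diagonal is left untouched.
        for (int c = r + 1; c < bs; ++c) {
            std::swap(vals[block_start + r * bs + c],
                      vals[block_start + c * bs + r]);
        }
    }
}
```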

Review threads (resolved) on: core/test/utils/fb_matrix_generator.hpp, cuda/test/matrix/fbcsr_kernels.cpp, reference/matrix/fbcsr_kernels.cpp
@Slaedr (Contributor Author) commented Oct 4, 2021

format!

@Slaedr (Contributor Author) commented Oct 5, 2021

@yhmtsai

  • core/test/utils/fb_matrix_generator.hpp will be used as a header by all backends, very similar to matrix_generator.hpp.
  • I thought cusparse_bindings was already quite long, and cusparse_block_bindings has the potential to be pretty long too. It might be better to just keep them separate, maybe? Do you think it's better to merge them?

@Slaedr removed their assignment on Oct 6, 2021
@Slaedr requested a review from yhmtsai on October 7, 2021
@yhmtsai (Member) left a comment

One potential GPU race condition, plus some nits on formatting and testing.

Comment on lines 67 to 69
values[ibz * mat_blk_sz_2 + i] =
origblocks[sw_id_in_threadblock * mat_blk_sz_2 + in_pos];
}
Member:

It needs an additional sync after the for loop; otherwise, a write may happen before a read in the next loop iteration.

Contributor Author:

Since threads within a subwarp will not diverge for this kernel, it should not be necessary. But I'll add it anyway, it's probably better that way.

Member:

Yes, but you put the sync after the read and before the write, so there should be the same sync after the write and before the read.
I think the transpose index is mirrored in both directions.
Is there a bank conflict here?

Member:

Threads can still diverge in execution, even if they all execute the same instructions.
Also, for a kernel that reads data once and then writes it once in a permuted fashion, do we really need all of this additional code? Do we expect shared memory to help?



namespace gko {
namespace fixedblock {
Member:

Should it be under cuda/hip matrix/fbcsr there, like the Jacobi generate code?

Contributor Author:

It will be used by all backends, so it cannot be cuda or hip. Further, any algorithm that uses static fixed-size blocks, like the ParBILU that I was working on, will also use this. So I decided to have a common fixedblock namespace for such common things.

Member:

I see, that makes sense.
Is ParBILU for block CSR or a different format?

Contributor Author (@Slaedr) commented Oct 20, 2021:

At least initially, ParBILU will only be for Fbcsr.

Comment on lines 159 to 162
if (auto b_fbcsr = dynamic_cast<const Fbcsr<ValueType, IndexType>*>(b)) {
// if b is a FBCSR matrix, we need an SpGeMM
GKO_NOT_SUPPORTED(b_fbcsr);
} else {
Member:

precision_dispatch_real_complex also throws an error when the input is not Dense, so this part is unnecessary.

Contributor Author:

But I don't want the spmv kernel to be called when b is an Fbcsr. Fbcsr is convertible to Dense, so I guess I need the check?

Member:

I think temporary_conversion is implemented via dynamic_cast; maybe @upsj can correct me.
When all dynamic_cast<Dense*>(fbcsr) attempts fail, it throws an error.

Member:

Yes, we check against the exact type, not ConvertibleTo.
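A minimal sketch of the behaviour described here, using hypothetical stand-in classes rather than Ginkgo's real hierarchy: the dispatch only succeeds when the dynamic type is exactly Dense, so a Fbcsr argument fails the cast even though it is *convertible* to Dense, and no extra guard is needed:

```cpp
// Hypothetical stand-ins for the LinOp hierarchy discussed above.
struct LinOp {
    virtual ~LinOp() = default;
};
struct Dense : LinOp {};
struct Fbcsr : LinOp {};  // ConvertibleTo<Dense> in the real library

// Returns true only when b's dynamic type is (derived from) Dense;
// convertibility to Dense plays no role in dynamic_cast.
bool dispatch_as_dense(const LinOp* b)
{
    return dynamic_cast<const Dense*>(b) != nullptr;
}
```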

Review threads (resolved) on: core/test/utils/fb_matrix_generator.hpp
gko::test::detail::get_rand_value<ValueType>(
off_diag_dist, rand_engine);
}
if (col_idxs[ibz] == ibrow) {
Member:

When it is not row-diagonal-dominant, you use norm_dist. Could it be off_diag_dist, too?
If so, you can merge these two parts together with if (row_diag_dominant && ...).

Contributor Author:

Yes, you're right. I removed the norm_dist.
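The merged version suggested above can be sketched like this. This is a hypothetical, simplified generator for a single block row, not the actual fb_matrix_generator code: every entry is drawn from one off-diagonal distribution, and only when row-diagonal dominance is requested is the diagonal bumped above the accumulated off-diagonal row sum:

```cpp
#include <cmath>
#include <random>
#include <vector>

// Generate one row of length len; diag_idx marks the diagonal entry.
// All entries come from a single distribution; diagonal dominance is
// enforced afterwards by overwriting the diagonal.
std::vector<double> generate_block_row(int len, int diag_idx,
                                       bool row_diag_dominant,
                                       std::mt19937& engine)
{
    std::uniform_real_distribution<double> off_diag_dist(-1.0, 1.0);
    std::vector<double> row(len);
    double abs_sum = 0.0;
    for (int i = 0; i < len; ++i) {
        row[i] = off_diag_dist(engine);
        if (i != diag_idx) {
            abs_sum += std::abs(row[i]);
        }
    }
    if (row_diag_dominant) {
        row[diag_idx] = abs_sum + 1.0;  // strictly dominant diagonal
    }
    return row;
}
```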

const auto nnzb = static_cast<IndexType>(a->get_num_stored_blocks());
const auto nrhs = static_cast<IndexType>(b->get_size()[1]);
assert(nrhs == c->get_size()[1]);
if (nrhs == 1) {
Member:

This could also be folded into the condition on line 98.

Contributor Author (@Slaedr) commented Oct 18, 2021:

There is a bsrmm but it does not work properly. The one on line 98 is for supported value types, I'd rather keep them separate.

Member:

does bsrmm with transpose not work?

Review threads (resolved) on: cuda/test/matrix/fbcsr_kernels.cpp, reference/test/matrix/fbcsr_kernels.cpp, omp/test/matrix/fbcsr_kernels.cpp
Slaedr and others added 20 commits October 24, 2021 17:32
Co-authored-by: Aditya <[email protected]>
Co-authored-by: Yu-hsiang Tsai <[email protected]>
- cusparse block trsm and ilu0 struct create functions now return
unique_ptrs.

Co-authored-by: Yu-Hsiang Tsai <[email protected]>
Co-authored-by: Thomas Grützmacher <[email protected]>
- cusparse_block_bindings.hpp now includes cusparse_bindings.hpp for
things like "not_implemented"
@Slaedr added the 1:ST:ready-to-merge label and removed the 1:ST:ready-for-review label on Oct 25, 2021
@yhmtsai (Member) commented Oct 25, 2021

@Slaedr about the bank conflicts: they cannot be avoided entirely, but their number can perhaps be reduced.
However, you use registers instead of shared memory now, so we do not need to consider it.

@Slaedr self-assigned this on Oct 26, 2021
@Slaedr merged commit 8123284 into develop on Oct 26, 2021
@Slaedr deleted the fbcsr-cuda-omp branch on October 26, 2021 08:39
sonarcloud bot commented Oct 28, 2021

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (A)
Vulnerabilities: 0 (A)
Security Hotspots: 0 (A)
Code Smells: 0 (A)
No coverage information
No duplication information

tcojean added a commit that referenced this pull request Nov 12, 2022
Advertise release 1.5.0 and last changes

+ Add changelog
+ Update third-party libraries
+ A small fix to a CMake file

See PR: #1195

The Ginkgo team is proud to announce the new Ginkgo minor release 1.5.0. This release brings many important new features such as:
- MPI-based multi-node support for all matrix formats and most solvers,
- full DPC++/SYCL support,
- functionality and interface for GPU-resident sparse direct solvers,
- an interface for wrapping solvers with scaling and reordering applied,
- a new algebraic Multigrid solver/preconditioner,
- improved mixed-precision support,
- support for device matrix assembly,

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.0+
  + DPC++ module: Intel OneAPI 2021.3 with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: GCC 5.5+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


Algorithm and important feature additions:
+ Add MPI-based multi-node for all matrix formats and solvers (except GMRES and IDR). ([#676](#676), [#908](#908), [#909](#909), [#932](#932), [#951](#951), [#961](#961), [#971](#971), [#976](#976), [#985](#985), [#1007](#1007), [#1030](#1030), [#1054](#1054), [#1100](#1100), [#1148](#1148))
+ Porting the remaining algorithms (preconditioners like ISAI, Jacobi, Multigrid, ParILU(T) and ParIC(T)) to DPC++/SYCL, update to SYCL 2020, and improve support and performance ([#896](#896), [#924](#924), [#928](#928), [#929](#929), [#933](#933), [#943](#943), [#960](#960), [#1057](#1057), [#1110](#1110),  [#1142](#1142))
+ Add a Sparse Direct interface supporting GPU-resident numerical LU factorization, symbolic Cholesky factorization, improved triangular solvers, and more ([#957](#957), [#1058](#1058), [#1072](#1072), [#1082](#1082))
+ Add a ScaleReordered interface that can wrap solvers and automatically apply reorderings and scalings ([#1059](#1059))
+ Add a Multigrid solver and improve the aggregation based PGM coarsening scheme ([#542](#542), [#913](#913), [#980](#980), [#982](#982),  [#986](#986))
+ Add infrastructure for unified, lambda-based, backend agnostic, kernels and utilize it for some simple kernels ([#833](#833), [#910](#910), [#926](#926))
+ Merge different CUDA, HIP, DPC++ and OpenMP tests under a common interface ([#904](#904), [#973](#973), [#1044](#1044), [#1117](#1117))
+ Add a device_matrix_data type for device-side matrix assembly ([#886](#886), [#963](#963), [#965](#965))
+ Add support for mixed real/complex BLAS operations ([#864](#864))
+ Add a FFT LinOp for all but DPC++/SYCL ([#701](#701))
+ Add FBCSR support for NVIDIA and AMD GPUs and CPUs with OpenMP ([#775](#775))
+ Add CSR scaling ([#848](#848))
+ Add array::const_view and equivalent to create constant matrices from non-const data ([#890](#890))
+ Add a RowGatherer LinOp supporting mixed precision to gather dense matrix rows ([#901](#901))
+ Add mixed precision SparsityCsr SpMV support ([#970](#970))
+ Allow creating CSR submatrix including from (possibly discontinuous) index sets ([#885](#885), [#964](#964))
+ Add a scaled identity addition (M <- aI + bM) feature interface and impls for Csr and Dense ([#942](#942))
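As a plain-loop illustration of the scaled identity addition M <- aI + bM listed above, for a dense row-major matrix (a sketch of the operation's semantics, not Ginkgo's actual interface):

```cpp
#include <vector>

// Compute M <- a*I + b*M in place for an n x n row-major matrix:
// every entry is scaled by b, then a is added on the diagonal.
void add_scaled_identity(std::vector<double>& m, int n, double a, double b)
{
    for (int r = 0; r < n; ++r) {
        for (int c = 0; c < n; ++c) {
            m[r * n + c] *= b;
        }
        m[r * n + r] += a;
    }
}
```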


Deprecations and important changes:
+ Deprecate AmgxPgm in favor of the new Pgm name. ([#1149](#1149)).
+ Deprecate specialized residual norm classes in favor of a common `ResidualNorm` class ([#1101](#1101))
+ Deprecate CamelCase non-polymorphic types in favor of snake_case versions (like array, machine_topology, uninitialized_array, index_set) ([#1031](#1031), [#1052](#1052))
+ Bug fix: restrict gko::share to rvalue references (*possible interface break*) ([#1020](#1020))
+ Bug fix: when using cuSPARSE's triangular solvers, specifying the factory parameter `num_rhs` is now required when solving for more than one right-hand side, otherwise an exception is thrown ([#1184](#1184)).
+ Drop official support for old CUDA < 9.2 ([#887](#887))


Improved performance additions:
+ Reuse tmp storage in reductions in solvers and add a mutable workspace to all solvers ([#1013](#1013), [#1028](#1028))
+ Add HIP unsafe atomic option for AMD ([#1091](#1091))
+ Prefer vendor implementations for Dense dot, conj_dot and norm2 when available ([#967](#967)).
+ Tuned OpenMP SellP, COO, and ELL SpMV kernels for a small number of RHS ([#809](#809))


Fixes:
+ Fix various compilation warnings ([#1076](#1076), [#1183](#1183), [#1189](#1189))
+ Fix issues with hwloc-related tests ([#1074](#1074))
+ Fix include headers for GCC 12 ([#1071](#1071))
+ Fix for simple-solver-logging example ([#1066](#1066))
+ Fix for potential memory leak in Logger ([#1056](#1056))
+ Fix logging of mixin classes ([#1037](#1037))
+ Improve value semantics for LinOp types, like moved-from state in cross-executor copy/clones ([#753](#753))
+ Fix some matrix SpMV and conversion corner cases ([#905](#905), [#978](#978))
+ Fix uninitialized data ([#958](#958))
+ Fix CUDA version requirement for cusparseSpSM ([#953](#953))
+ Fix several issues within bash-script ([#1016](#1016))
+ Fixes for `NVHPC` compiler support ([#1194](#1194))


Other additions:
+ Simplify and properly name GMRES kernels ([#861](#861))
+ Improve pkg-config support for non-CMake libraries ([#923](#923), [#1109](#1109))
+ Improve gdb pretty printer ([#987](#987), [#1114](#1114))
+ Add a logger highlighting inefficient allocation and copy patterns ([#1035](#1035))
+ Improved and optimized test random matrix generation ([#954](#954), [#1032](#1032))
+ Better CSR strategy defaults ([#969](#969))
+ Add `move_from` to `PolymorphicObject` ([#997](#997))
+ Remove unnecessary device_guard usage ([#956](#956))
+ Improvements to the generic accessor for mixed-precision ([#727](#727))
+ Add a naive lower triangular solver implementation for CUDA ([#764](#764))
+ Add support for int64 indices from CUDA 11 onward with SpMV and SpGEMM ([#897](#897))
+ Add a L1 norm implementation ([#900](#900))
+ Add reduce_add for arrays ([#831](#831))
+ Add utility to simplify Dense View creation from an existing Dense vector ([#1136](#1136)).
+ Add a custom transpose implementation for Fbcsr and Csr transpose for unsupported vendor types ([#1123](#1123))
+ Make IDR random initialization deterministic ([#1116](#1116))
+ Move the algorithm choice for triangular solvers from Csr::strategy_type to a factory parameter ([#1088](#1088))
+ Update CUDA archCoresPerSM ([#1175](#1175))
+ Add kernels for Csr sparsity pattern lookup ([#994](#994))
+ Differentiate between structural and numerical zeros in Ell/Sellp ([#1027](#1027))
+ Add a binary IO format for matrix data ([#984](#984))
+ Add a tuple zip_iterator implementation ([#966](#966))
+ Simplify kernel stubs and declarations ([#888](#888))
+ Simplify GKO_REGISTER_OPERATION with lambdas ([#859](#859))
+ Simplify copy to device in tests and examples ([#863](#863))
+ More verbose output to array assertions ([#858](#858))
+ Allow parallel compilation for Jacobi kernels ([#871](#871))
+ Change clang-format pointer alignment to left ([#872](#872))
+ Various improvements and fixes to the benchmarking framework ([#750](#750), [#759](#759), [#870](#870), [#911](#911), [#1033](#1033), [#1137](#1137))
+ Various documentation improvements ([#892](#892), [#921](#921), [#950](#950), [#977](#977), [#1021](#1021), [#1068](#1068), [#1069](#1069), [#1080](#1080), [#1081](#1081), [#1108](#1108), [#1153](#1153), [#1154](#1154))
+ Various CI improvements ([#868](#868), [#874](#874), [#884](#884), [#889](#889), [#899](#899), [#903](#903),  [#922](#922), [#925](#925), [#930](#930), [#936](#936), [#937](#937), [#958](#958), [#882](#882), [#1011](#1011), [#1015](#1015), [#989](#989), [#1039](#1039), [#1042](#1042), [#1067](#1067), [#1073](#1073), [#1075](#1075), [#1083](#1083), [#1084](#1084), [#1085](#1085), [#1139](#1139), [#1178](#1178), [#1187](#1187))
6 participants