
Tune OpenMP SellP, COO and ELL SpMV kernels for small number of rhs #809

Merged: 4 commits from omp_spmv_rhs into develop on Oct 2, 2021

Conversation

@upsj (Member) commented Jun 30, 2021

This PR adds special cases for small numbers of rhs and uses blocked operations for larger numbers of rhs (a sketch of the dispatch idea follows below).

As a side effect, this also makes the ELL mixed-precision SpMV more accurate, since it uses the highest available precision internally.

  • Benchmark SpMV
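
A minimal sketch of the dispatch idea, assuming hypothetical kernel names (`spmv_small_rhs`, `spmv_blocked`) and an illustrative cutoff of 4 rhs; the actual kernels in this PR are more involved:

```cpp
#include <cstddef>
#include <iostream>

// Fixed-size kernels for few right-hand sides: the compile-time trip
// count lets the compiler unroll the rhs loop and keep per-row partial
// sums in registers.
template <int num_rhs>
void spmv_small_rhs()
{
    std::cout << "small-rhs kernel, num_rhs = " << num_rhs << "\n";
}

// Blocked kernel for larger rhs counts: processes the rhs columns in
// chunks of block_size.
template <int block_size>
void spmv_blocked(std::size_t num_rhs)
{
    std::cout << "blocked kernel, " << num_rhs << " rhs in blocks of "
              << block_size << "\n";
}

void spmv_dispatch(std::size_t num_rhs)
{
    switch (num_rhs) {
    case 1: spmv_small_rhs<1>(); break;
    case 2: spmv_small_rhs<2>(); break;
    case 3: spmv_small_rhs<3>(); break;
    case 4: spmv_small_rhs<4>(); break;
    default: spmv_blocked<4>(num_rhs); break;
    }
}

int main() { spmv_dispatch(6); }  // prints: blocked kernel, 6 rhs in blocks of 4
```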

@upsj added labels 1:ST:WIP and is:affects-performance on Jun 30, 2021
@upsj added this to the Ginkgo 1.4.0 milestone on Jun 30, 2021
@upsj self-assigned this on Jun 30, 2021
@ginkgo-bot added labels mod:openmp, reg:testing and type:matrix-format on Jun 30, 2021
@upsj changed the title from "Tune OpenMP SpMV kernels for small number of rhs in OpenMP" to "Tune OpenMP SellP, COO and ELL SpMV kernels for small number of rhs" on Jul 1, 2021
@upsj removed this from the Ginkgo 1.4.0 milestone on Jul 1, 2021
@upsj added label 1:ST:ready-for-review and removed 1:ST:WIP on Sep 14, 2021
@upsj requested a review from a team on September 14, 2021 at 08:20
@upsj added this to the Ginkgo 1.5.0 milestone on Sep 14, 2021
@upsj force-pushed the omp_spmv_rhs branch 3 times, most recently from 16cb1f3 to ab37062, on September 18, 2021 at 07:32
codecov bot commented Sep 18, 2021

Codecov Report

Merging #809 (eea1466) into develop (d2542f0) will decrease coverage by 0.05%.
The diff coverage is 83.33%.


```diff
@@             Coverage Diff             @@
##           develop     #809      +/-   ##
===========================================
- Coverage    94.78%   94.73%   -0.06%
===========================================
  Files          429      429
  Lines        35201    35298      +97
===========================================
+ Hits         33365    33438      +73
- Misses        1836     1860      +24
```
| Impacted Files | Coverage Δ |
|---|---|
| include/ginkgo/core/base/math.hpp | 100.00% <ø> (ø) |
| omp/matrix/sellp_kernels.cpp | 80.26% <73.80%> (-9.99%) ⬇️ |
| omp/matrix/coo_kernels.cpp | 91.80% <80.76%> (-8.20%) ⬇️ |
| omp/matrix/ell_kernels.cpp | 85.00% <81.39%> (-5.25%) ⬇️ |
| omp/test/matrix/coo_kernels.cpp | 100.00% <100.00%> (ø) |
| omp/test/matrix/ell_kernels.cpp | 100.00% <100.00%> (ø) |
| omp/test/matrix/sellp_kernels.cpp | 100.00% <100.00%> (ø) |
| reference/matrix/ell_kernels.cpp | 87.77% <100.00%> (+0.13%) ⬆️ |

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d2542f0...eea1466.

@tcojean (Member) left a comment

LGTM. Some small comments.

Comment on lines +465 to +484
```cpp
template <typename T1, typename T2>
struct highest_precision_impl {
    using type = decltype(T1{} + T2{});
};

template <typename T1, typename T2>
struct highest_precision_impl<std::complex<T1>, std::complex<T2>> {
    using type = std::complex<typename highest_precision_impl<T1, T2>::type>;
};

template <typename Head, typename... Tail>
struct highest_precision_variadic {
    using type = typename highest_precision_impl<
        Head, typename highest_precision_variadic<Tail...>::type>::type;
};

template <typename Head>
struct highest_precision_variadic<Head> {
    using type = Head;
};
```
Member:

I guess this also applies to the other precision-related types and functions in this file, but wouldn't this fit better into types.hpp? Also, technically there should be some more formal documentation, since this is public.

Member Author (@upsj):

I added some documentation to the public-facing implementation (`highest_precision`). With all the other related parts (`reduce_precision`, `next_precision`, ...) living here, I think this file might be more appropriate.
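
For illustration, a self-contained sketch of how such a trait resolves types; the two-argument `highest_precision` alias here is an assumption for brevity (the real helper is variadic, as in the diff above):

```cpp
#include <complex>
#include <type_traits>

template <typename T1, typename T2>
struct highest_precision_impl {
    // the type of T1 + T2 under the usual arithmetic conversions
    using type = decltype(T1{} + T2{});
};

template <typename T1, typename T2>
struct highest_precision_impl<std::complex<T1>, std::complex<T2>> {
    // unwrap, promote the real types, re-wrap
    using type = std::complex<typename highest_precision_impl<T1, T2>::type>;
};

template <typename T1, typename T2>
using highest_precision = typename highest_precision_impl<T1, T2>::type;

// float + double promotes to double
static_assert(std::is_same<highest_precision<float, double>, double>::value, "");
// complex arguments promote component-wise
static_assert(
    std::is_same<highest_precision<std::complex<float>, std::complex<double>>,
                 std::complex<double>>::value,
    "");

int main() {}
```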

```cpp
for (size_type j = 0; j < c->get_size()[1]; j++) {
    c->at(row, j) = zero<OutputValueType>();
}
std::array<arithmetic_type, num_rhs> partial_sum;
```
Member:

I don't think there is much overhead associated with this since it doesn't use dynamic memory, but you could also write the code so that the array is only declared once per thread, like:

```cpp
#pragma omp parallel
{
    std::array<arithmetic_type, num_rhs> partial_sum;
#pragma omp for nowait
    for (size_type j = 0; /* .... */)
    // ....
}
```

Member Author (@upsj):

In the worst case, this would be stored on the stack (I would expect the stack pointer manipulation to happen only once per loop), but the compilers eagerly move it to registers anyway, so I think this is not necessary.

```cpp
    }
}
#pragma unroll
for (size_type j = 0; j < num_rhs; j++) {
    [&] { c->at(row, j) = out(row, j, partial_sum[j]); }();
```
Member:

Could you add a comment somewhere in the files, or maybe in the developer guidelines/known issues, so that we remember this is due to icpc + OpenMP? It seems like something we should periodically check for a fix, and keep in mind for similar code elsewhere.

Member Author (@upsj):

I think Intel basically confirmed that they will not be fixing this bug (since they no longer maintain icpc), so we will have to keep this around until we drop support.
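
For context, a minimal sketch of the workaround pattern, with a hypothetical generic callback `out` standing in for the one in the diff; the immediately-invoked lambda wrapper is the point here:

```cpp
#include <array>
#include <cstddef>

int main()
{
    constexpr std::size_t num_rhs = 4;
    std::array<double, num_rhs> partial_sum{};
    double c[num_rhs] = {};
    // generic lambda (auto parameters), like the scale/output functors
    auto out = [](auto /* col */, auto value) { return value; };

#pragma omp parallel for
    for (std::ptrdiff_t j = 0; j < static_cast<std::ptrdiff_t>(num_rhs); j++) {
        // calling the generic lambda directly inside the OpenMP loop
        // miscompiles under icpc; wrapping the call in an
        // immediately-invoked non-generic lambda avoids the bug:
        [&] { c[j] = out(j, partial_sum[j]); }();
    }
}
```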

```cpp
}
// handle row overlap with following thread: block partial sums
partial_sum.fill(zero<ValueType>());
for (; nz < end; nz++) {
```
Member:

I think the full section for the last row is the same as the one for the first row, except that the first one needs an extra `coo_row[local_nz] == first` check in two of the for loops. Would it make sense to factor these special cases out into a small helper and call it with the extra boolean condition when needed?

Member Author (@upsj):

That's a good observation! I'm not sure whether this is worth the effort, since it is pretty specific to the COO SpMV?
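
For readers without the full diff, a simplified, hypothetical sketch of the overlap handling being discussed: each thread owns a contiguous chunk of nonzeros, and only the first and last row of a chunk (which may be shared with neighboring threads) need atomic updates:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Process one thread's chunk [begin, end) of COO nonzeros.
// c must be zero-initialized before the parallel region.
void coo_spmv_chunk(const std::vector<std::int64_t>& row,
                    const std::vector<std::int64_t>& col,
                    const std::vector<double>& val,
                    const std::vector<double>& b, std::vector<double>& c,
                    std::size_t begin, std::size_t end)
{
    std::size_t nz = begin;
    while (nz < end) {
        const auto current = row[nz];
        double partial_sum = 0.0;
        // accumulate all nonzeros of this row inside the chunk
        for (; nz < end && row[nz] == current; nz++) {
            partial_sum += val[nz] * b[col[nz]];
        }
        if (current == row[begin] || nz == end) {
            // first/last row may overlap a neighboring thread's chunk
#pragma omp atomic
            c[current] += partial_sum;
        } else {
            // row fully owned by this thread: plain store suffices
            c[current] += partial_sum;
        }
    }
}
```

With multiple rhs, `partial_sum` becomes the fixed-size array shown earlier.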

@MarcelKoch (Member) left a comment

LGTM, did you get around to running some benchmarks for this?

```cpp
                   matrix::Dense<ValueType>* c, ValueType scale)
{
    const auto num_rhs = b->get_size()[1];
    if (num_rhs <= 0) {
```
Member:

How do you motivate this choice of dispatches? Did you test whether `spmv2_small_rhs` is slower than `spmv2_blocked` for more than 4 rhs? I'm not against this choice, just curious.

@upsj (Member Author) commented Sep 21, 2021

@MarcelKoch For some reason, OMP_NUM_THREADS is set to 1 in the batch jobs, so I'll have to rerun the benchmarks.
The block size was chosen based on hand-wavy reasoning about vector sizes for double with AVX2 (a 256-bit register holds four doubles), balanced against binary code size and compilation time. Block size 8 might be another sensible choice; I'll have to benchmark that.
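
To make the vector-width argument concrete, a hedged sketch of a blocked accumulator loop (illustrative names, not the actual kernel; with `block_size = 4`, the accumulator matches one 256-bit AVX2 register of doubles, which is what makes 4 a natural block size):

```cpp
#include <array>
#include <cstddef>

// Simplified inner loop over one sparse row, handling a block of
// block_size right-hand sides at once. The fixed trip count over j
// lets the compiler vectorize across the rhs dimension.
template <int block_size>
void spmv_row_block(const double* values, const std::size_t* col_idxs,
                    std::size_t row_nnz, const double* b,
                    std::size_t b_stride, double* c_row,
                    std::size_t rhs_offset)
{
    std::array<double, block_size> partial_sum{};
    for (std::size_t k = 0; k < row_nnz; k++) {
        const auto v = values[k];
        const auto col = col_idxs[k];
        for (int j = 0; j < block_size; j++) {
            partial_sum[j] += v * b[col * b_stride + rhs_offset + j];
        }
    }
    for (int j = 0; j < block_size; j++) {
        c_row[rhs_offset + j] = partial_sum[j];
    }
}
```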

@upsj (Member Author) commented Oct 1, 2021

[Image: omp-tuning-plot, benchmark results]

temporary (?) link

@upsj (Member Author) commented Oct 1, 2021

rebase!

@upsj added label 1:ST:ready-to-merge and removed 1:ST:ready-for-review on Oct 1, 2021
upsj and others added 3 commits on October 1, 2021 at 23:44:
* icpc has issues with generic lambdas being called directly inside OpenMP for loops.
* remove unused variables
* add documentation to highest_precision helper

Co-authored-by: Terry Cojean <[email protected]>
sonarcloud bot commented Oct 2, 2021

SonarCloud Quality Gate failed.

- Bugs: 0 (rating A)
- Vulnerabilities: 0 (rating A)
- Security Hotspots: 0 (rating A)
- Code Smells: 31 (rating A)
- Coverage: 7.8%
- Duplication: 21.3%

@upsj merged commit b89eb71 into develop on Oct 2, 2021
@upsj deleted the omp_spmv_rhs branch on October 2, 2021 at 20:25
tcojean added a commit that referenced this pull request Nov 12, 2022
Advertise release 1.5.0 and last changes

+ Add changelog
+ Update third-party libraries
+ A small fix to a CMake file

See PR: #1195

The Ginkgo team is proud to announce the new Ginkgo minor release 1.5.0. This release brings many important new features, such as:
- MPI-based multi-node support for all matrix formats and most solvers;
- full DPC++/SYCL support;
- functionality and an interface for GPU-resident sparse direct solvers;
- an interface for wrapping solvers with scaling and reordering applied;
- a new algebraic Multigrid solver/preconditioner;
- improved mixed-precision support;
- support for device matrix assembly;

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues), and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [GitHub discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.0+
  + DPC++ module: Intel OneAPI 2021.3 with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: GCC 5.5+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


Algorithm and important feature additions:
+ Add MPI-based multi-node support for all matrix formats and solvers (except GMRES and IDR). ([#676](#676), [#908](#908), [#909](#909), [#932](#932), [#951](#951), [#961](#961), [#971](#971), [#976](#976), [#985](#985), [#1007](#1007), [#1030](#1030), [#1054](#1054), [#1100](#1100), [#1148](#1148))
+ Port the remaining algorithms (preconditioners like ISAI, Jacobi, Multigrid, ParILU(T) and ParIC(T)) to DPC++/SYCL, update to SYCL 2020, and improve support and performance ([#896](#896), [#924](#924), [#928](#928), [#929](#929), [#933](#933), [#943](#943), [#960](#960), [#1057](#1057), [#1110](#1110), [#1142](#1142))
+ Add a Sparse Direct interface supporting GPU-resident numerical LU factorization, symbolic Cholesky factorization, improved triangular solvers, and more ([#957](#957), [#1058](#1058), [#1072](#1072), [#1082](#1082))
+ Add a ScaleReordered interface that can wrap solvers and automatically apply reorderings and scalings ([#1059](#1059))
+ Add a Multigrid solver and improve the aggregation-based PGM coarsening scheme ([#542](#542), [#913](#913), [#980](#980), [#982](#982), [#986](#986))
+ Add infrastructure for unified, lambda-based, backend-agnostic kernels and use it for some simple kernels ([#833](#833), [#910](#910), [#926](#926))
+ Merge different CUDA, HIP, DPC++ and OpenMP tests under a common interface ([#904](#904), [#973](#973), [#1044](#1044), [#1117](#1117))
+ Add a device_matrix_data type for device-side matrix assembly ([#886](#886), [#963](#963), [#965](#965))
+ Add support for mixed real/complex BLAS operations ([#864](#864))
+ Add a FFT LinOp for all but DPC++/SYCL ([#701](#701))
+ Add FBCSR support for NVIDIA and AMD GPUs and CPUs with OpenMP ([#775](#775))
+ Add CSR scaling ([#848](#848))
+ Add array::const_view and equivalent to create constant matrices from non-const data ([#890](#890))
+ Add a RowGatherer LinOp supporting mixed precision to gather dense matrix rows ([#901](#901))
+ Add mixed precision SparsityCsr SpMV support ([#970](#970))
+ Allow creating a CSR submatrix, including from (possibly discontinuous) index sets ([#885](#885), [#964](#964))
+ Add a scaled identity addition (M <- aI + bM) feature interface and implementations for Csr and Dense ([#942](#942))


Deprecations and important changes:
+ Deprecate AmgxPgm in favor of the new Pgm name. ([#1149](#1149)).
+ Deprecate specialized residual norm classes in favor of a common `ResidualNorm` class ([#1101](#1101))
+ Deprecate CamelCase non-polymorphic types in favor of snake_case versions (like array, machine_topology, uninitialized_array, index_set) ([#1031](#1031), [#1052](#1052))
+ Bug fix: restrict gko::share to rvalue references (*possible interface break*) ([#1020](#1020))
+ Bug fix: when using cuSPARSE's triangular solvers, specifying the factory parameter `num_rhs` is now required when solving for more than one right-hand side, otherwise an exception is thrown ([#1184](#1184)).
+ Drop official support for old CUDA < 9.2 ([#887](#887))


Improved performance additions:
+ Reuse tmp storage in reductions in solvers and add a mutable workspace to all solvers ([#1013](#1013), [#1028](#1028))
+ Add HIP unsafe atomic option for AMD ([#1091](#1091))
+ Prefer vendor implementations for Dense dot, conj_dot and norm2 when available ([#967](#967)).
+ Tuned OpenMP SellP, COO, and ELL SpMV kernels for a small number of RHS ([#809](#809))


Fixes:
+ Fix various compilation warnings ([#1076](#1076), [#1183](#1183), [#1189](#1189))
+ Fix issues with hwloc-related tests ([#1074](#1074))
+ Fix include headers for GCC 12 ([#1071](#1071))
+ Fix for simple-solver-logging example ([#1066](#1066))
+ Fix for potential memory leak in Logger ([#1056](#1056))
+ Fix logging of mixin classes ([#1037](#1037))
+ Improve value semantics for LinOp types, like moved-from state in cross-executor copy/clones ([#753](#753))
+ Fix some matrix SpMV and conversion corner cases ([#905](#905), [#978](#978))
+ Fix uninitialized data ([#958](#958))
+ Fix CUDA version requirement for cusparseSpSM ([#953](#953))
+ Fix several issues within the bash scripts ([#1016](#1016))
+ Fixes for `NVHPC` compiler support ([#1194](#1194))


Other additions:
+ Simplify and properly name GMRES kernels ([#861](#861))
+ Improve pkg-config support for non-CMake libraries ([#923](#923), [#1109](#1109))
+ Improve gdb pretty printer ([#987](#987), [#1114](#1114))
+ Add a logger highlighting inefficient allocation and copy patterns ([#1035](#1035))
+ Improved and optimized test random matrix generation ([#954](#954), [#1032](#1032))
+ Better CSR strategy defaults ([#969](#969))
+ Add `move_from` to `PolymorphicObject` ([#997](#997))
+ Remove unnecessary device_guard usage ([#956](#956))
+ Improvements to the generic accessor for mixed-precision ([#727](#727))
+ Add a naive lower triangular solver implementation for CUDA ([#764](#764))
+ Add support for int64 indices from CUDA 11 onward with SpMV and SpGEMM ([#897](#897))
+ Add a L1 norm implementation ([#900](#900))
+ Add reduce_add for arrays ([#831](#831))
+ Add utility to simplify Dense View creation from an existing Dense vector ([#1136](#1136)).
+ Add a custom transpose implementation for Fbcsr and Csr transpose for unsupported vendor types ([#1123](#1123))
+ Make IDR random initialization deterministic ([#1116](#1116))
+ Move the algorithm choice for triangular solvers from Csr::strategy_type to a factory parameter ([#1088](#1088))
+ Update CUDA archCoresPerSM ([#1175](#1175))
+ Add kernels for Csr sparsity pattern lookup ([#994](#994))
+ Differentiate between structural and numerical zeros in Ell/Sellp ([#1027](#1027))
+ Add a binary IO format for matrix data ([#984](#984))
+ Add a tuple zip_iterator implementation ([#966](#966))
+ Simplify kernel stubs and declarations ([#888](#888))
+ Simplify GKO_REGISTER_OPERATION with lambdas ([#859](#859))
+ Simplify copy to device in tests and examples ([#863](#863))
+ More verbose output to array assertions ([#858](#858))
+ Allow parallel compilation for Jacobi kernels ([#871](#871))
+ Change clang-format pointer alignment to left ([#872](#872))
+ Various improvements and fixes to the benchmarking framework ([#750](#750), [#759](#759), [#870](#870), [#911](#911), [#1033](#1033), [#1137](#1137))
+ Various documentation improvements ([#892](#892), [#921](#921), [#950](#950), [#977](#977), [#1021](#1021), [#1068](#1068), [#1069](#1069), [#1080](#1080), [#1081](#1081), [#1108](#1108), [#1153](#1153), [#1154](#1154))
+ Various CI improvements ([#868](#868), [#874](#874), [#884](#884), [#889](#889), [#899](#899), [#903](#903),  [#922](#922), [#925](#925), [#930](#930), [#936](#936), [#937](#937), [#958](#958), [#882](#882), [#1011](#1011), [#1015](#1015), [#989](#989), [#1039](#1039), [#1042](#1042), [#1067](#1067), [#1073](#1073), [#1075](#1075), [#1083](#1083), [#1084](#1084), [#1085](#1085), [#1139](#1139), [#1178](#1178), [#1187](#1187))