
Add a class for partitions of intervals #909

Merged: 16 commits into distributed-develop on Nov 29, 2021
Conversation

MarcelKoch (Member)

This PR adds a partition class, which handles an interval that is distributed across multiple ranks.

Partially closes #907.

Main contributions are from @upsj.

@MarcelKoch MarcelKoch self-assigned this Oct 22, 2021
@MarcelKoch MarcelKoch added this to In progress in Distributed Ginkgo via automation Oct 22, 2021
@ginkgo-bot ginkgo-bot added mod:all This touches all Ginkgo modules. reg:build This is related to the build system. reg:testing This is related to testing. labels Oct 22, 2021
@MarcelKoch MarcelKoch linked an issue Oct 22, 2021 that may be closed by this pull request
@MarcelKoch MarcelKoch changed the title Partition Add a class for partitions of intervals Oct 22, 2021
upsj (Member) previously requested changes Oct 22, 2021


I won't approve this myself, since it is a lot of my own code. The validation part needs to be consolidated with #770, the interface needs to be matched with #676, and I will add GPU kernels for this, but in general, I don't have much to comment here :)

Resolved review threads:
- core/device_hooks/common_kernels.inc.cpp
- core/distributed/partition.cpp
- include/ginkgo/core/distributed/partition.hpp (2 threads)
Distributed Ginkgo automation moved this from In progress to Review in progress Oct 22, 2021
upsj (Member) commented Oct 23, 2021

I put implementations for most of the kernels into tmp_partition_upsj

MarcelKoch (Member, Author)

> I put implementations for most of the kernels into tmp_partition_upsj

Thanks for that, I've added it to this branch.

@MarcelKoch MarcelKoch force-pushed the partition branch 2 times, most recently from 2fe4037 to 47cd80c Compare October 26, 2021 14:27
@MarcelKoch MarcelKoch force-pushed the partition branch 3 times, most recently from 635c7ba to ade66d4 Compare October 27, 2021 08:26
@MarcelKoch MarcelKoch added the 1:ST:ready-for-review This PR is ready for review label Oct 27, 2021
Resolved review threads:
- common/CMakeLists.txt
- common/unified/distributed/partition_kernels.cpp
- core/distributed/partition.cpp
- cuda/distributed/partition_kernels.cu
- dpcpp/distributed/partition_kernels.dp.cpp
A review thread on this excerpt from the kernel diff:

```cpp
                int num_parts, LocalIndexType* ranks,
                LocalIndexType* sizes)
{
    Array<LocalIndexType> range_sizes{exec, num_ranges};
```
A Member commented:

We might consider moving similar code to common/cuda_hip thanks to the identical Thrust interface.

A Contributor replied:

Instead, I would suggest moving the common kernels below to common/unified so that they can be used in DPC++ as well. The rest (Thrust calls etc.) could be left here.

Resolved review threads:
- include/ginkgo/core/base/types.hpp
- omp/test/distributed/partition_kernels.cpp
@upsj upsj dismissed their stale review October 27, 2021 08:54

has been addressed

yhmtsai (Member) left a comment:

I only reviewed the interface and parts of the reference implementation, to understand it better.

Review threads:
- include/ginkgo/core/distributed/partition.hpp (multiple threads, mostly resolved)
- core/distributed/partition.cpp (resolved)
MarcelKoch and others added 11 commits November 29, 2021 09:34
- renaming
- update kernels
- add number of empty parts

Co-authored-by: Tobias Ribizel <[email protected]>
Co-authored-by: Yu-Hsiang Tsai <[email protected]>
- clarify documentation
- renaming
- small kernel fix
- simplify tests

Co-authored-by: Pratik Nayak <[email protected]>
- renaming
- unifying kernels for computing local starting indices
- documentation

Co-authored-by: Aditya Kashi <[email protected]>
Co-authored-by: Yu-Hsiang Tsai <[email protected]>
yhmtsai (Member) left a comment:

LGTM. Is the CUDA 9.2 issue also caused by executing some kernel with zero grid size?

Distributed Ginkgo automation moved this from Review in progress to Reviewer approved Nov 29, 2021
upsj (Member) commented Nov 29, 2021

@yhmtsai Yes, as far as I can tell, Thrust internally checks whether a previous kernel launch failed, and even though this kind of failure never cascades, it means that any Thrust kernel call after an empty CUDA kernel call causes an exception to be thrown.
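As a minimal illustration of the failure mode described above (my own sketch, not code from this PR), a launch with zero grid size fails, the error stays pending, and it is the next component that checks for pending errors, such as a Thrust algorithm, that surfaces it:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

__global__ void noop() {}

int main()
{
    // Launching with a zero-sized grid is an invalid configuration.
    // The launch fails, but the host program keeps running.
    noop<<<0, 1>>>();

    // The launch error stays pending until it is read. A Thrust algorithm
    // invoked at this point would see the pending error and throw a
    // thrust::system_error, even though its own kernels are fine.
    cudaError_t err = cudaGetLastError();  // reads and clears the error
    std::printf("launch status: %s\n", cudaGetErrorString(err));
    return 0;
}
```

This is why guarding against zero-sized launches on the calling side (or clearing the error state) prevents the exception from appearing in an unrelated Thrust call later.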

yhmtsai (Member) commented Nov 29, 2021

I see. There were some kernels with zero grid size, but they did not produce an error in gtest, because no Thrust kernel ran afterwards to surface it.

- merge /common/unified/.../partition_kernels.hpp.inc into /common/cuda_hip/.../partition_kernels.hpp.inc
- documentation

Co-authored-by: Yu-Hsiang Tsai <[email protected]>
ginkgo-bot commented:

Note: This PR changes the Ginkgo ABI:

Functions changes summary: 0 Removed, 0 Changed, 340 Added functions
Variables changes summary: 0 Removed, 0 Changed, 0 Added variable

For details check the full ABI diff under Artifacts here

@upsj upsj added 1:ST:ready-to-merge This PR is ready to merge. and removed 1:ST:ready-for-review This PR is ready for review labels Nov 29, 2021
codecov bot commented Nov 29, 2021

Codecov Report

Merging #909 (5bb5436) into distributed-develop (a5f5e93) will increase coverage by 0.04%.
The diff coverage is 97.52%.

Impacted file tree graph

```
@@                   Coverage Diff                   @@
##           distributed-develop     #909      +/-   ##
=======================================================
+ Coverage                93.30%   93.35%   +0.04%
=======================================================
  Files                      459      467       +8
  Lines                    37578    37992     +414
=======================================================
+ Hits                     35063    35467     +404
- Misses                    2515     2525      +10
```
| Impacted Files | Coverage Δ |
|---|---|
| core/device_hooks/common_kernels.inc.cpp | 0.00% <0.00%> (ø) |
| core/test/utils.hpp | 100.00% <ø> (ø) |
| include/ginkgo/core/base/types.hpp | 92.59% <ø> (ø) |
| include/ginkgo/core/distributed/partition.hpp | 90.90% <90.90%> (ø) |
| core/distributed/partition.cpp | 97.82% <97.82%> (ø) |
| test/distributed/partition_kernels.cpp | 99.25% <99.25%> (ø) |
| common/unified/distributed/partition_kernels.cpp | 100.00% <100.00%> (ø) |
| include/ginkgo/core/base/array.hpp | 94.92% <100.00%> (+0.07%) ⬆️ |
| omp/distributed/partition_kernels.cpp | 100.00% <100.00%> (ø) |
| reference/distributed/partition_kernels.cpp | 100.00% <100.00%> (ø) |

... and 3 more

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update a5f5e93...5bb5436. Read the comment docs.

@MarcelKoch MarcelKoch merged commit ed77851 into distributed-develop Nov 29, 2021
Distributed Ginkgo automation moved this from Reviewer approved to Done Nov 29, 2021
@MarcelKoch MarcelKoch deleted the partition branch November 29, 2021 13:44
sonarcloud bot commented Nov 29, 2021

Kudos, SonarCloud Quality Gate passed!

0 Bugs, 0 Vulnerabilities, 0 Security Hotspots, 0 Code Smells.

No coverage or duplication information.

pratikvn added a commit that referenced this pull request Nov 29, 2021
This PR merges the basic distributed capability into Ginkgo, namely

1. MPI layer (#908)
2. Partition class and kernels (#909)

The discussion regarding the interfaces for the above functionalities can be found in the linked PRs.

Related PR: #932
tcojean added a commit that referenced this pull request Nov 12, 2022
Advertise release 1.5.0 and last changes

+ Add changelog
+ Update third party libraries
+ A small fix to a CMake file

See PR: #1195

The Ginkgo team is proud to announce the new Ginkgo minor release 1.5.0. This release brings many important new features such as:
- MPI-based multi-node support for all matrix formats and most solvers,
- full DPC++/SYCL support,
- functionality and interface for GPU-resident sparse direct solvers,
- an interface for wrapping solvers with scaling and reordering applied,
- a new algebraic Multigrid solver/preconditioner,
- improved mixed-precision support,
- support for device matrix assembly,

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.0+
  + DPC++ module: Intel OneAPI 2021.3 with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: GCC 5.5+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


Algorithm and important feature additions:
+ Add MPI-based multi-node support for all matrix formats and solvers (except GMRES and IDR). ([#676](#676), [#908](#908), [#909](#909), [#932](#932), [#951](#951), [#961](#961), [#971](#971), [#976](#976), [#985](#985), [#1007](#1007), [#1030](#1030), [#1054](#1054), [#1100](#1100), [#1148](#1148))
+ Port the remaining algorithms (preconditioners like ISAI, Jacobi, Multigrid, ParILU(T) and ParIC(T)) to DPC++/SYCL, update to SYCL 2020, and improve support and performance ([#896](#896), [#924](#924), [#928](#928), [#929](#929), [#933](#933), [#943](#943), [#960](#960), [#1057](#1057), [#1110](#1110), [#1142](#1142))
+ Add a Sparse Direct interface supporting GPU-resident numerical LU factorization, symbolic Cholesky factorization, improved triangular solvers, and more ([#957](#957), [#1058](#1058), [#1072](#1072), [#1082](#1082))
+ Add a ScaleReordered interface that can wrap solvers and automatically apply reorderings and scalings ([#1059](#1059))
+ Add a Multigrid solver and improve the aggregation based PGM coarsening scheme ([#542](#542), [#913](#913), [#980](#980), [#982](#982),  [#986](#986))
+ Add infrastructure for unified, lambda-based, backend agnostic, kernels and utilize it for some simple kernels ([#833](#833), [#910](#910), [#926](#926))
+ Merge different CUDA, HIP, DPC++ and OpenMP tests under a common interface ([#904](#904), [#973](#973), [#1044](#1044), [#1117](#1117))
+ Add a device_matrix_data type for device-side matrix assembly ([#886](#886), [#963](#963), [#965](#965))
+ Add support for mixed real/complex BLAS operations ([#864](#864))
+ Add a FFT LinOp for all but DPC++/SYCL ([#701](#701))
+ Add FBCSR support for NVIDIA and AMD GPUs and CPUs with OpenMP ([#775](#775))
+ Add CSR scaling ([#848](#848))
+ Add array::const_view and equivalent to create constant matrices from non-const data ([#890](#890))
+ Add a RowGatherer LinOp supporting mixed precision to gather dense matrix rows ([#901](#901))
+ Add mixed precision SparsityCsr SpMV support ([#970](#970))
+ Allow creating CSR submatrix including from (possibly discontinuous) index sets ([#885](#885), [#964](#964))
+ Add a scaled identity addition (M <- aI + bM) feature interface and impls for Csr and Dense ([#942](#942))


Deprecations and important changes:
+ Deprecate AmgxPgm in favor of the new Pgm name. ([#1149](#1149)).
+ Deprecate specialized residual norm classes in favor of a common `ResidualNorm` class ([#1101](#1101))
+ Deprecate CamelCase non-polymorphic types in favor of snake_case versions (like array, machine_topology, uninitialized_array, index_set) ([#1031](#1031), [#1052](#1052))
+ Bug fix: restrict gko::share to rvalue references (*possible interface break*) ([#1020](#1020))
+ Bug fix: when using cuSPARSE's triangular solvers, specifying the factory parameter `num_rhs` is now required when solving for more than one right-hand side, otherwise an exception is thrown ([#1184](#1184)).
+ Drop official support for old CUDA < 9.2 ([#887](#887))


Improved performance additions:
+ Reuse tmp storage in reductions in solvers and add a mutable workspace to all solvers ([#1013](#1013), [#1028](#1028))
+ Add HIP unsafe atomic option for AMD ([#1091](#1091))
+ Prefer vendor implementations for Dense dot, conj_dot and norm2 when available ([#967](#967)).
+ Tuned OpenMP SellP, COO, and ELL SpMV kernels for a small number of RHS ([#809](#809))


Fixes:
+ Fix various compilation warnings ([#1076](#1076), [#1183](#1183), [#1189](#1189))
+ Fix issues with hwloc-related tests ([#1074](#1074))
+ Fix include headers for GCC 12 ([#1071](#1071))
+ Fix for simple-solver-logging example ([#1066](#1066))
+ Fix for potential memory leak in Logger ([#1056](#1056))
+ Fix logging of mixin classes ([#1037](#1037))
+ Improve value semantics for LinOp types, like moved-from state in cross-executor copy/clones ([#753](#753))
+ Fix some matrix SpMV and conversion corner cases ([#905](#905), [#978](#978))
+ Fix uninitialized data ([#958](#958))
+ Fix CUDA version requirement for cusparseSpSM ([#953](#953))
+ Fix several issues within bash-script ([#1016](#1016))
+ Fixes for `NVHPC` compiler support ([#1194](#1194))


Other additions:
+ Simplify and properly name GMRES kernels ([#861](#861))
+ Improve pkg-config support for non-CMake libraries ([#923](#923), [#1109](#1109))
+ Improve gdb pretty printer ([#987](#987), [#1114](#1114))
+ Add a logger highlighting inefficient allocation and copy patterns ([#1035](#1035))
+ Improved and optimized test random matrix generation ([#954](#954), [#1032](#1032))
+ Better CSR strategy defaults ([#969](#969))
+ Add `move_from` to `PolymorphicObject` ([#997](#997))
+ Remove unnecessary device_guard usage ([#956](#956))
+ Improvements to the generic accessor for mixed-precision ([#727](#727))
+ Add a naive lower triangular solver implementation for CUDA ([#764](#764))
+ Add support for int64 indices from CUDA 11 onward with SpMV and SpGEMM ([#897](#897))
+ Add a L1 norm implementation ([#900](#900))
+ Add reduce_add for arrays ([#831](#831))
+ Add utility to simplify Dense View creation from an existing Dense vector ([#1136](#1136)).
+ Add a custom transpose implementation for Fbcsr and Csr transpose for unsupported vendor types ([#1123](#1123))
+ Make IDR random initialization deterministic ([#1116](#1116))
+ Move the algorithm choice for triangular solvers from Csr::strategy_type to a factory parameter ([#1088](#1088))
+ Update CUDA archCoresPerSM ([#1175](#1175))
+ Add kernels for Csr sparsity pattern lookup ([#994](#994))
+ Differentiate between structural and numerical zeros in Ell/Sellp ([#1027](#1027))
+ Add a binary IO format for matrix data ([#984](#984))
+ Add a tuple zip_iterator implementation ([#966](#966))
+ Simplify kernel stubs and declarations ([#888](#888))
+ Simplify GKO_REGISTER_OPERATION with lambdas ([#859](#859))
+ Simplify copy to device in tests and examples ([#863](#863))
+ More verbose output to array assertions ([#858](#858))
+ Allow parallel compilation for Jacobi kernels ([#871](#871))
+ Change clang-format pointer alignment to left ([#872](#872))
+ Various improvements and fixes to the benchmarking framework ([#750](#750), [#759](#759), [#870](#870), [#911](#911), [#1033](#1033), [#1137](#1137))
+ Various documentation improvements ([#892](#892), [#921](#921), [#950](#950), [#977](#977), [#1021](#1021), [#1068](#1068), [#1069](#1069), [#1080](#1080), [#1081](#1081), [#1108](#1108), [#1153](#1153), [#1154](#1154))
+ Various CI improvements ([#868](#868), [#874](#874), [#884](#884), [#889](#889), [#899](#899), [#903](#903),  [#922](#922), [#925](#925), [#930](#930), [#936](#936), [#937](#937), [#958](#958), [#882](#882), [#1011](#1011), [#1015](#1015), [#989](#989), [#1039](#1039), [#1042](#1042), [#1067](#1067), [#1073](#1073), [#1075](#1075), [#1083](#1083), [#1084](#1084), [#1085](#1085), [#1139](#1139), [#1178](#1178), [#1187](#1187))