
Create submatrix for sparse matrix formats #885

Merged (14 commits into develop, Nov 12, 2021)

Conversation

Member

@pratikvn commented Sep 13, 2021

This PR adds support for creating submatrices from spans for sparse matrix formats. Dense already has this functionality.

An additional interface is added to ensure the same function signatures for the different matrix formats.

Only CSR is supported for now, but other formats could be easily added.
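As a rough, library-agnostic sketch of what span-based submatrix extraction does (the `Csr` struct and `extract_submatrix` function here are illustrative stand-ins, not Ginkgo's actual types or kernels):

```cpp
#include <cassert>
#include <vector>

// Plain CSR storage, standing in for the real matrix class.
struct Csr {
    std::vector<int> row_ptrs;   // size num_rows + 1
    std::vector<int> col_idxs;   // size nnz
    std::vector<double> values;  // size nnz
};

// Extract the submatrix given by the half-open row span [row_begin, row_end)
// and column span [col_begin, col_end), using the usual two-pass scheme:
// count surviving nonzeros per row, prefix-sum, then copy the entries.
Csr extract_submatrix(const Csr& a, int row_begin, int row_end, int col_begin,
                      int col_end)
{
    Csr sub;
    sub.row_ptrs.assign(row_end - row_begin + 1, 0);
    // Pass 1: count the nonzeros falling into each submatrix row.
    for (int row = row_begin; row < row_end; ++row) {
        for (int k = a.row_ptrs[row]; k < a.row_ptrs[row + 1]; ++k) {
            if (a.col_idxs[k] >= col_begin && a.col_idxs[k] < col_end) {
                ++sub.row_ptrs[row - row_begin + 1];
            }
        }
    }
    // Prefix sum turns the counts into row pointers.
    for (std::size_t i = 1; i < sub.row_ptrs.size(); ++i) {
        sub.row_ptrs[i] += sub.row_ptrs[i - 1];
    }
    // Pass 2: copy surviving entries, shifting column indices into the span.
    sub.col_idxs.resize(sub.row_ptrs.back());
    sub.values.resize(sub.row_ptrs.back());
    int out = 0;
    for (int row = row_begin; row < row_end; ++row) {
        for (int k = a.row_ptrs[row]; k < a.row_ptrs[row + 1]; ++k) {
            if (a.col_idxs[k] >= col_begin && a.col_idxs[k] < col_end) {
                sub.col_idxs[out] = a.col_idxs[k] - col_begin;
                sub.values[out] = a.values[k];
                ++out;
            }
        }
    }
    return sub;
}
```

The two passes map naturally onto the count/fill kernel pair a GPU backend would use, with the prefix sum done by a parallel scan.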


@pratikvn pratikvn added is:new-feature A request or implementation of a feature that does not exist yet. type:matrix-format This is related to the Matrix formats 1:ST:ready-for-review This PR is ready for review mod:all This touches all Ginkgo modules. labels Sep 13, 2021
@pratikvn pratikvn requested a review from a team September 13, 2021 10:43
@pratikvn pratikvn self-assigned this Sep 13, 2021
@ginkgo-bot ginkgo-bot added reg:build This is related to the build system. reg:testing This is related to testing. labels Sep 13, 2021

codecov bot commented Sep 13, 2021

Codecov Report

Merging #885 (7494456) into develop (b4fc0c7) will increase coverage by 0.02%.
The diff coverage is 98.50%.


@@             Coverage Diff             @@
##           develop     #885      +/-   ##
===========================================
+ Coverage    94.80%   94.82%   +0.02%     
===========================================
  Files          443      443              
  Lines        36459    36592     +133     
===========================================
+ Hits         34565    34699     +134     
+ Misses        1894     1893       -1     
Impacted Files Coverage Δ
core/device_hooks/common_kernels.inc.cpp 0.00% <0.00%> (ø)
include/ginkgo/core/matrix/csr.hpp 45.36% <ø> (ø)
core/matrix/csr.cpp 98.88% <100.00%> (+0.07%) ⬆️
omp/matrix/csr_kernels.cpp 94.81% <100.00%> (+0.28%) ⬆️
omp/test/matrix/csr_kernels.cpp 100.00% <100.00%> (ø)
reference/matrix/csr_kernels.cpp 99.57% <100.00%> (+0.02%) ⬆️
reference/test/matrix/csr_kernels.cpp 99.79% <100.00%> (+0.01%) ⬆️
omp/reorder/rcm_kernels.cpp 97.53% <0.00%> (-0.61%) ⬇️
omp/components/format_conversion.hpp 100.00% <0.00%> (ø)
core/base/extended_float.hpp 92.23% <0.00%> (+0.97%) ⬆️
... and 2 more

Continue to review full report at Codecov.

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update b4fc0c7...7494456. Read the comment docs.

Member

@upsj left a comment

Looks pretty useful! I'm guessing this is related to RAS (Restricted Additive Schwarz) overlaps? Does this degree of genericity suffice, i.e. are ranges enough to describe overlapping regions, or can there be gaps?

The code itself also looks good, only minor comments and questions (I will review the array reduce part in the other PR). Nonzero filtering is also part of the ParILUT implementation, not sure though whether it makes sense to try and merge those two.

Member Author

pratikvn commented Oct 4, 2021

Yes, ideally, for Schwarz-type algorithms you would need a superset of this functionality, which could probably take a set of ranges, for example an IndexSet, and build the submatrix from that. But I think that is slightly more complicated to implement, so I will look into it once we have the IndexSet merged.

Nevertheless, I think this simple version with just a contiguous range is also useful.

@pratikvn force-pushed the sparse-submatrix branch 6 times, most recently from dc4a97e to 38f99ff on October 11, 2021
@pratikvn force-pushed the sparse-submatrix branch 3 times, most recently from 7df3fb4 to 535d6cc on October 19, 2021
@pratikvn requested review from upsj and a team on October 19, 2021
@pratikvn force-pushed the sparse-submatrix branch 2 times, most recently from b5d7e81 to d7e5663 on October 20, 2021
Contributor

@Slaedr left a comment

Nice work. This seems like it could be a useful feature. Could you describe the use case you have in mind for it?

Member

@upsj previously requested changes Nov 8, 2021

LGTM in both implementation and tests, really nice job there. The warp-parallel implementation would be nice to have, but is not necessary in this PR from my side.
I just don't think creating a separate class requiring a CRTP parameter gives any advantage over adding the function directly to Csr. Also, I would like to avoid adding another constructor to Csr, since we already have a way to do this with little more code.

Contributor

Slaedr commented Nov 8, 2021

I would still like to know what the use case is for this functionality. I think that context is important in deciding whether this is a useful feature or not. For example, I think this might be useful if the create_submatrix could work on more general IndexSets rather than spans.

Member Author

pratikvn commented Nov 8, 2021

Extraction of submatrices could be used in many algorithms:

  1. A quick test for Schur complement type algorithms.
  2. In domain decomposition algorithms, where you need to extract blocks, possibly overlapping, of a global matrix.
  3. Maybe in other blocking algorithms (at least for the generation stage), as this allows you to extract any sub-block of a matrix given a row and a column span.

While an IndexSet would be more general, the kernels added here would also be used for creating submatrices from an IndexSet. Additionally, IndexSet-based submatrix creation may not have that many use cases, because that much generality is often unnecessary, while a simple span-based submatrix extraction will be more broadly useful.

Member

upsj commented Nov 8, 2021

On the domain decomposition algorithms: is it normally possible to represent blocks and overlaps as contiguous ranges? In a straightforward 5-point stencil implementation, you can only get slices that way, and if the matrix were stored in a blocked fashion, the overlaps would probably need to consist of many separate ranges rather than a single contiguous one?

Contributor

Slaedr commented Nov 8, 2021

@pratikvn I guess you have fine-grained domain decomposition in mind, where you would have many subdomains in a single process? In that case, I suppose you plan to reorder the DOFs such that those within a subdomain are always contiguous?

Member Author

pratikvn commented Nov 8, 2021

In general, overlaps are just extensions of the diagonal block by a few elements along the diagonal in either direction, together with the rows to the left/right and the columns above/below those overlap elements. You want to get the entire block (including the overlap) as a single matrix, as that improves performance by needing just a single apply call.
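The span arithmetic described above can be sketched as follows (the `Span` type and helper are illustrative, not part of Ginkgo's interface):

```cpp
#include <algorithm>
#include <cassert>

// A half-open index range [begin, end), mirroring a span type.
struct Span {
    int begin;
    int end;
};

// Extend a diagonal block of an n x n matrix by `overlap` entries along the
// diagonal in both directions, clamped to the matrix bounds. Because the
// extended block stays square and diagonal-aligned, the same span can serve
// as both the row span and the column span when extracting it.
Span extend_with_overlap(Span block, int overlap, int n)
{
    return Span{std::max(0, block.begin - overlap),
                std::min(n, block.end + overlap)};
}
```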

@Slaedr, yes, both fine-grained and coarse-grained should be possible. For fine-grained implementations, you probably would not use the core-side implementations, but call the kernels themselves in your algorithms. But for composing algorithms for testing and prototyping purposes, this would nevertheless be useful.

Contributor

Slaedr commented Nov 8, 2021

I missed @upsj 's point about overlaps. If you need to extract overlapping sub-blocks, it would not always be possible to have the subdomains contiguous in the global ordering. I guess that significantly decreases the current interface's utility for domain decomposition?

You might also consider having IndexSet in the interface, but implement it only for a single contiguous range for now, and expand it later if need be.

Member Author

pratikvn commented Nov 9, 2021

@Slaedr, yes, I plan to add some functions that take an IndexSet, but will probably do that in another PR, as it will take some more time.

Contributor

@Slaedr left a comment

Except for the question of IndexSet vs. span, I think this looks good. I agree with you that this is relevant for all matrix formats.

In general, I think an IndexSet makes sense for this. You could add a small convenience function to get a vector of spans from an index set, and possibly also one to return the first span of an index set.
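Such a convenience function might look like the following sketch (a hypothetical helper, not an existing Ginkgo API), collapsing a sorted list of unique indices into half-open contiguous spans:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Collapse a sorted list of unique indices into half-open contiguous spans
// [begin, end). An index-set-to-spans helper for submatrix extraction could
// follow this pattern, reusing a span-based kernel once per returned span.
std::vector<std::pair<int, int>> indices_to_spans(const std::vector<int>& idxs)
{
    std::vector<std::pair<int, int>> spans;
    for (int i : idxs) {
        if (!spans.empty() && spans.back().second == i) {
            ++spans.back().second;  // index continues the current run
        } else {
            spans.push_back({i, i + 1});  // start a new span
        }
    }
    return spans;
}
```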

Co-authored-by: Tobias Ribizel <[email protected]>
Collaborator

@greole left a comment

Some very minor things from my side.

Member

@upsj left a comment

LGTM! I have only a few naming/readability/type nits for all implementations.

common/cuda_hip/matrix/csr_kernels.hpp.inc Outdated Show resolved Hide resolved
common/cuda_hip/matrix/csr_kernels.hpp.inc Outdated Show resolved Hide resolved
include/ginkgo/core/matrix/csr.hpp Outdated Show resolved Hide resolved
Co-authored-by: Tobias Ribizel <[email protected]>
@pratikvn pratikvn added 1:ST:ready-to-merge This PR is ready to merge. and removed 1:ST:ready-for-review This PR is ready for review labels Nov 11, 2021

sonarcloud bot commented Nov 11, 2021

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 10 (rating A)
Coverage: 70.6%
Duplication: 9.3%

Contributor

@Slaedr left a comment

LGTM. I still don't quite see good uses for this, but I guess that's okay. It might at least come in handy for playing around (initial experiments) with things, and maybe some testing, as you said.

What would be interesting is creating sparse submatrix views. Those could actually be used in production code for things like multi-physics problems, but that's a whole other story. We could have a 4-array (view) version of the CSR format for that, for example.
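The 4-array view idea could look roughly like this sketch (hypothetical types, not an existing Ginkgo format): separate per-row begin and end pointers into the parent's col_idxs/values arrays, so a row-range view shares the parent's nonzeros instead of copying them.

```cpp
#include <cassert>
#include <vector>

// A CSR view with separate row begin/end pointer arrays (4 arrays total).
// The col_idxs/values arrays are borrowed from the parent matrix.
struct Csr4View {
    std::vector<int> row_begin;        // start of each view row
    std::vector<int> row_end;          // one past the end of each view row
    const std::vector<int>* col_idxs;
    const std::vector<double>* values;
};

// Build a view of rows [first, last) of a standard 3-array CSR matrix.
Csr4View row_view(const std::vector<int>& row_ptrs,
                  const std::vector<int>& col_idxs,
                  const std::vector<double>& values, int first, int last)
{
    Csr4View v{{}, {}, &col_idxs, &values};
    for (int r = first; r < last; ++r) {
        v.row_begin.push_back(row_ptrs[r]);
        v.row_end.push_back(row_ptrs[r + 1]);
    }
    return v;
}

// SpMV on the view; x is indexed with the parent's column numbering.
std::vector<double> spmv(const Csr4View& v, const std::vector<double>& x)
{
    std::vector<double> y(v.row_begin.size(), 0.0);
    for (std::size_t r = 0; r < v.row_begin.size(); ++r) {
        for (int k = v.row_begin[r]; k < v.row_end[r]; ++k) {
            y[r] += (*v.values)[k] * x[(*v.col_idxs)[k]];
        }
    }
    return y;
}
```

Note that the column indices stay in the parent's numbering, which is arguably what a multi-physics block view wants; restricting columns as well would additionally require per-row filtering, or narrowing begin/end by binary search if rows are sorted by column.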

@pratikvn pratikvn merged commit d9dcea6 into develop Nov 12, 2021
@pratikvn pratikvn deleted the sparse-submatrix branch November 12, 2021 08:39
tcojean added a commit that referenced this pull request Nov 12, 2022
Advertise release 1.5.0 and last changes

+ Add changelog,
+ Update third party libraries
+ A small fix to a CMake file

See PR: #1195

The Ginkgo team is proud to announce the new Ginkgo minor release 1.5.0. This release brings many important new features such as:
- MPI-based multi-node support for all matrix formats and most solvers,
- full DPC++/SYCL support,
- functionality and interface for GPU-resident sparse direct solvers,
- an interface for wrapping solvers with scaling and reordering applied,
- a new algebraic Multigrid solver/preconditioner,
- improved mixed-precision support,
- support for device matrix assembly,

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.0+
  + DPC++ module: Intel OneAPI 2021.3 with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: GCC 5.5+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


Algorithm and important feature additions:
+ Add MPI-based multi-node support for all matrix formats and solvers (except GMRES and IDR). ([#676](#676), [#908](#908), [#909](#909), [#932](#932), [#951](#951), [#961](#961), [#971](#971), [#976](#976), [#985](#985), [#1007](#1007), [#1030](#1030), [#1054](#1054), [#1100](#1100), [#1148](#1148))
+ Port the remaining algorithms (preconditioners like ISAI, Jacobi, Multigrid, ParILU(T) and ParIC(T)) to DPC++/SYCL, update to SYCL 2020, and improve support and performance ([#896](#896), [#924](#924), [#928](#928), [#929](#929), [#933](#933), [#943](#943), [#960](#960), [#1057](#1057), [#1110](#1110),  [#1142](#1142))
+ Add a Sparse Direct interface supporting GPU-resident numerical LU factorization, symbolic Cholesky factorization, improved triangular solvers, and more ([#957](#957), [#1058](#1058), [#1072](#1072), [#1082](#1082))
+ Add a ScaleReordered interface that can wrap solvers and automatically apply reorderings and scalings ([#1059](#1059))
+ Add a Multigrid solver and improve the aggregation based PGM coarsening scheme ([#542](#542), [#913](#913), [#980](#980), [#982](#982),  [#986](#986))
+ Add infrastructure for unified, lambda-based, backend agnostic, kernels and utilize it for some simple kernels ([#833](#833), [#910](#910), [#926](#926))
+ Merge different CUDA, HIP, DPC++ and OpenMP tests under a common interface ([#904](#904), [#973](#973), [#1044](#1044), [#1117](#1117))
+ Add a device_matrix_data type for device-side matrix assembly ([#886](#886), [#963](#963), [#965](#965))
+ Add support for mixed real/complex BLAS operations ([#864](#864))
+ Add a FFT LinOp for all but DPC++/SYCL ([#701](#701))
+ Add FBCSR support for NVIDIA and AMD GPUs and CPUs with OpenMP ([#775](#775))
+ Add CSR scaling ([#848](#848))
+ Add array::const_view and equivalent to create constant matrices from non-const data ([#890](#890))
+ Add a RowGatherer LinOp supporting mixed precision to gather dense matrix rows ([#901](#901))
+ Add mixed precision SparsityCsr SpMV support ([#970](#970))
+ Allow creating CSR submatrix including from (possibly discontinuous) index sets ([#885](#885), [#964](#964))
+ Add a scaled identity addition (M <- aI + bM) feature interface and impls for Csr and Dense ([#942](#942))


Deprecations and important changes:
+ Deprecate AmgxPgm in favor of the new Pgm name. ([#1149](#1149)).
+ Deprecate specialized residual norm classes in favor of a common `ResidualNorm` class ([#1101](#1101))
+ Deprecate CamelCase non-polymorphic types in favor of snake_case versions (like array, machine_topology, uninitialized_array, index_set) ([#1031](#1031), [#1052](#1052))
+ Bug fix: restrict gko::share to rvalue references (*possible interface break*) ([#1020](#1020))
+ Bug fix: when using cuSPARSE's triangular solvers, specifying the factory parameter `num_rhs` is now required when solving for more than one right-hand side, otherwise an exception is thrown ([#1184](#1184)).
+ Drop official support for old CUDA < 9.2 ([#887](#887))


Improved performance additions:
+ Reuse tmp storage in reductions in solvers and add a mutable workspace to all solvers ([#1013](#1013), [#1028](#1028))
+ Add HIP unsafe atomic option for AMD ([#1091](#1091))
+ Prefer vendor implementations for Dense dot, conj_dot and norm2 when available ([#967](#967)).
+ Tuned OpenMP SellP, COO, and ELL SpMV kernels for a small number of RHS ([#809](#809))


Fixes:
+ Fix various compilation warnings ([#1076](#1076), [#1183](#1183), [#1189](#1189))
+ Fix issues with hwloc-related tests ([#1074](#1074))
+ Fix include headers for GCC 12 ([#1071](#1071))
+ Fix for simple-solver-logging example ([#1066](#1066))
+ Fix for potential memory leak in Logger ([#1056](#1056))
+ Fix logging of mixin classes ([#1037](#1037))
+ Improve value semantics for LinOp types, like moved-from state in cross-executor copy/clones ([#753](#753))
+ Fix some matrix SpMV and conversion corner cases ([#905](#905), [#978](#978))
+ Fix uninitialized data ([#958](#958))
+ Fix CUDA version requirement for cusparseSpSM ([#953](#953))
+ Fix several issues within bash-script ([#1016](#1016))
+ Fixes for `NVHPC` compiler support ([#1194](#1194))


Other additions:
+ Simplify and properly name GMRES kernels ([#861](#861))
+ Improve pkg-config support for non-CMake libraries ([#923](#923), [#1109](#1109))
+ Improve gdb pretty printer ([#987](#987), [#1114](#1114))
+ Add a logger highlighting inefficient allocation and copy patterns ([#1035](#1035))
+ Improved and optimized test random matrix generation ([#954](#954), [#1032](#1032))
+ Better CSR strategy defaults ([#969](#969))
+ Add `move_from` to `PolymorphicObject` ([#997](#997))
+ Remove unnecessary device_guard usage ([#956](#956))
+ Improvements to the generic accessor for mixed-precision ([#727](#727))
+ Add a naive lower triangular solver implementation for CUDA ([#764](#764))
+ Add support for int64 indices from CUDA 11 onward with SpMV and SpGEMM ([#897](#897))
+ Add a L1 norm implementation ([#900](#900))
+ Add reduce_add for arrays ([#831](#831))
+ Add utility to simplify Dense View creation from an existing Dense vector ([#1136](#1136)).
+ Add a custom transpose implementation for Fbcsr and Csr transpose for unsupported vendor types ([#1123](#1123))
+ Make IDR random initialization deterministic ([#1116](#1116))
+ Move the algorithm choice for triangular solvers from Csr::strategy_type to a factory parameter ([#1088](#1088))
+ Update CUDA archCoresPerSM ([#1175](#1175))
+ Add kernels for Csr sparsity pattern lookup ([#994](#994))
+ Differentiate between structural and numerical zeros in Ell/Sellp ([#1027](#1027))
+ Add a binary IO format for matrix data ([#984](#984))
+ Add a tuple zip_iterator implementation ([#966](#966))
+ Simplify kernel stubs and declarations ([#888](#888))
+ Simplify GKO_REGISTER_OPERATION with lambdas ([#859](#859))
+ Simplify copy to device in tests and examples ([#863](#863))
+ More verbose output to array assertions ([#858](#858))
+ Allow parallel compilation for Jacobi kernels ([#871](#871))
+ Change clang-format pointer alignment to left ([#872](#872))
+ Various improvements and fixes to the benchmarking framework ([#750](#750), [#759](#759), [#870](#870), [#911](#911), [#1033](#1033), [#1137](#1137))
+ Various documentation improvements ([#892](#892), [#921](#921), [#950](#950), [#977](#977), [#1021](#1021), [#1068](#1068), [#1069](#1069), [#1080](#1080), [#1081](#1081), [#1108](#1108), [#1153](#1153), [#1154](#1154))
+ Various CI improvements ([#868](#868), [#874](#874), [#884](#884), [#889](#889), [#899](#899), [#903](#903),  [#922](#922), [#925](#925), [#930](#930), [#936](#936), [#937](#937), [#958](#958), [#882](#882), [#1011](#1011), [#1015](#1015), [#989](#989), [#1039](#1039), [#1042](#1042), [#1067](#1067), [#1073](#1073), [#1075](#1075), [#1083](#1083), [#1084](#1084), [#1085](#1085), [#1139](#1139), [#1178](#1178), [#1187](#1187))