
Adds support for MPI distributed matrix #971

Merged 46 commits into distributed-develop from new-distributed-matrix on Jul 8, 2022

Conversation

@MarcelKoch (Member) commented Feb 14, 2022

This PR adds a row-wise distributed matrix. Each rank owns a number of rows of the global matrix, as defined by a partition. Locally, the rows are split into a diagonal part, which contains only the entries whose column indices fall into the local row range.
The remaining entries are gathered into an off-diagonal part, whose columns are renumbered so that empty columns are removed. This renumbering is stored in the matrix, although there is no access to it at the moment.
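
To make the split concrete, here is a minimal sketch in plain C++ of the bookkeeping described above. It is not the actual Ginkgo kernel; the names Entry and split_local_rows are made up, and the real implementation may assign the renumbered column indices in a different order.

#include <map>
#include <vector>

struct Entry {
    long row;  // global row index, owned by this rank
    long col;  // global column index
    double val;
};

// Split the locally owned entries into a diagonal part (columns inside the
// local range, shifted to local indices) and an off-diagonal part whose
// columns are renumbered in order of first appearance, skipping empty columns.
void split_local_rows(const std::vector<Entry>& local_entries,
                      long local_col_begin, long local_col_end,
                      std::vector<Entry>& diag, std::vector<Entry>& offdiag,
                      std::map<long, long>& global_to_offdiag_col)
{
    for (const auto& e : local_entries) {
        if (e.col >= local_col_begin && e.col < local_col_end) {
            diag.push_back({e.row, e.col - local_col_begin, e.val});
        } else {
            // the first time a global column appears, give it the next
            // compressed index; empty columns never get an index
            auto it = global_to_offdiag_col
                          .emplace(e.col, static_cast<long>(
                                              global_to_offdiag_col.size()))
                          .first;
            offdiag.push_back({e.row, it->second, e.val});
        }
    }
}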

Current limitations:

  • only reference and OpenMP read kernels
  • only square global matrices are allowed
  • can't apply to vectors with multiple columns

Todo:

  • evaluate whether we should add a column partition to the interface

Partially addresses #907

@MarcelKoch MarcelKoch added 1:ST:WIP This PR is a work in progress. Not ready for review. mod:mpi This is related to the MPI module 1:ST:need-feedback The PR is somewhat ready but feedback on a blocking topic is required before a proper review. type:distributed-functionality labels Feb 14, 2022
@MarcelKoch MarcelKoch added this to the Ginkgo 1.5.0 milestone Feb 14, 2022
@MarcelKoch MarcelKoch added this to In progress in Distributed Ginkgo via automation Feb 14, 2022
@MarcelKoch MarcelKoch self-assigned this Feb 14, 2022
@ginkgo-bot ginkgo-bot added mod:all This touches all Ginkgo modules. reg:build This is related to the build system. reg:testing This is related to testing. labels Feb 14, 2022
@MarcelKoch MarcelKoch added 1:ST:ready-for-review This PR is ready for review and removed 1:ST:WIP This PR is a work in progress. Not ready for review. 1:ST:need-feedback The PR is somewhat ready but feedback on a blocking topic is required before a proper review. labels Feb 18, 2022
@codecov (bot) commented Feb 22, 2022

Codecov Report

Merging #971 (2379dab) into distributed-develop (34fca79) will increase coverage by 0.08%.
The diff coverage is 96.52%.

@@                   Coverage Diff                   @@
##           distributed-develop     #971      +/-   ##
=======================================================
+ Coverage                91.82%   91.91%   +0.08%     
=======================================================
  Files                      509      517       +8     
  Lines                    43711    44642     +931     
=======================================================
+ Hits                     40139    41034     +895     
- Misses                    3572     3608      +36     
Impacted Files Coverage Δ
core/device_hooks/common_kernels.inc.cpp 0.00% <0.00%> (ø)
include/ginkgo/core/distributed/partition.hpp 100.00% <ø> (ø)
test/mpi/distributed/vector.cpp 100.00% <ø> (ø)
core/distributed/vector.cpp 82.35% <50.00%> (-0.78%) ⬇️
core/distributed/matrix.cpp 89.67% <89.67%> (ø)
core/test/mpi/distributed/matrix.cpp 96.39% <96.39%> (ø)
test/mpi/distributed/matrix.cpp 97.40% <97.40%> (ø)
omp/test/distributed/matrix_kernels.cpp 98.38% <98.38%> (ø)
core/test/utils/matrix_generator.hpp 100.00% <100.00%> (ø)
include/ginkgo/core/base/mpi.hpp 92.54% <100.00%> (+0.23%) ⬆️
... and 8 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 920140d...2379dab.

@sonarcloud (bot) commented Feb 23, 2022

SonarCloud Quality Gate passed!

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 36 (rating A)

Coverage: 89.5%
Duplication: 8.3%

@upsj (Member) left a comment:

First half of my review, mostly interface comments.

Review threads (resolved) on:
include/ginkgo/core/base/device_matrix_data.hpp
core/distributed/matrix.cpp
dpcpp/distributed/matrix_kernels.dp.cpp
hip/distributed/matrix_kernels.hip.cpp
include/ginkgo/core/base/mpi.hpp
include/ginkgo/core/distributed/matrix.hpp
omp/distributed/matrix_kernels.cpp
@MarcelKoch MarcelKoch force-pushed the new-distributed-matrix branch 2 times, most recently from 4acb343 to cdd5bf9 on March 1, 2022 08:33
Base automatically changed from distributed-vector to distributed-develop March 10, 2022 10:05
@pratikvn (Member) left a comment:

First part of the review.

Review threads (resolved) on:
core/test/utils/matrix_generator.hpp
include/ginkgo/core/distributed/matrix.hpp (2)
reference/distributed/matrix_kernels.cpp (2)
reference/test/distributed/matrix_kernels.cpp (2)
@upsj upsj self-requested a review March 17, 2022 11:14
@upsj (Member) left a comment:

Really nice, I like how it's come along. I would probably give this a second pass after the updates before approving, but most things are minor.

Review threads (resolved) on core/distributed/matrix.cpp (3).
Comment on lines 61 to 62
device_matrix_data<ValueType, LocalIndexType>& diag_data, \
device_matrix_data<ValueType, LocalIndexType>& offdiag_data, \
Member:
probably needs access to the individual arrays instead?

Member Author:
I need to set the size/dim of the data, which at least for the offdiag case is only known inside the kernel.

Member:
Good point. I am really not happy with mixing core and kernel code in that fashion; an alternative would be a single out-parameter for the number of offdiag columns?

Member Author:
So I guess the only core code that is acceptable is array code, right? I would still need either the resize or the assign operator.

Member:
Yes. At some point in the future (probably 2.0), we will need to split everything into a core-core part (probably executor-only functionality plus header-only types like Array that depend directly on it) and the regular core (all other Ginkgo types), so that core and kernels depend on core-core, and core depends on kernels.

Member Author:
Perhaps to clarify what I'm doing: I create a new device_matrix_data with the correct size and move that into the input data. So the only implementation I need here is the device_matrix_data constructor that takes the arrays as parameters. Do you expect that we also move that constructor into core?

Member:
Hmm, that's difficult to judge. I would prefer if we kept them separate (especially because on Windows, all members of a class need to be part of the same DLL unless you tag each one individually with different __declspec attributes, which misses some special member functions, etc.), but that would mean that the constructor can't be called from the kernel.
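
For illustration, a sketch of the pattern discussed in this thread, assuming a device_matrix_data constructor that takes the executor, the size, and the three COO arrays (the signature and helper name here are assumptions, not the final kernel code):

#include <utility>
#include <ginkgo/ginkgo.hpp>

template <typename ValueType, typename LocalIndexType>
void count_and_build_offdiag(
    std::shared_ptr<const gko::Executor> exec,
    gko::device_matrix_data<ValueType, LocalIndexType>& offdiag_data)
{
    // ... kernel work that fills these arrays and only now determines how
    // many non-empty off-diagonal columns exist ...
    gko::array<LocalIndexType> row_idxs(exec);
    gko::array<LocalIndexType> col_idxs(exec);
    gko::array<ValueType> values(exec);
    gko::size_type num_offdiag_cols = 0;  // known only after the kernel ran

    // build a device_matrix_data with the now-known size and move it into
    // the out-parameter, so the only core functionality needed inside the
    // kernel is the array-taking constructor (and the move assignment)
    offdiag_data = gko::device_matrix_data<ValueType, LocalIndexType>(
        exec, gko::dim<2>{offdiag_data.get_size()[0], num_offdiag_cols},
        std::move(row_idxs), std::move(col_idxs), std::move(values));
}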

Review thread (resolved) on core/distributed/matrix.cpp.
* by the process. Entries for those rows are discarded.
*
* @param data The device_matrix_data structure.
* @param partition The global row and column partition.
Member:
maybe we should get the term "symmetric" in there somehow?

Member Author:
The matrix itself is not necessarily symmetric, so I don't think we should add it here. It would probably be more confusing.

Member:
I was talking about the partition being symmetric, which is the term I recall seeing most of the time associated with this kind of partition.

Member:
An alternative would be "1D partition".
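
As a side note, a minimal sketch of such a "symmetric"/1D partition, written against the Partition interface as it later shipped in Ginkgo 1.5.0 (namespaces and signatures are assumptions relative to the state of this PR). The same row partition is reused for the columns, so an entry is "diagonal" exactly when its column falls into the locally owned row range.

#include <ginkgo/ginkgo.hpp>

using part_type =
    gko::experimental::distributed::Partition<gko::int32, gko::int64>;

// rank r owns the contiguous global row range [ranges[r], ranges[r + 1]);
// using the same partition for rows and columns makes it "symmetric"/1D
std::shared_ptr<part_type> make_row_partition(
    std::shared_ptr<const gko::Executor> exec, gko::int64 num_global_rows,
    int num_ranks)
{
    gko::array<gko::int64> ranges(exec->get_master(), num_ranks + 1);
    for (int r = 0; r <= num_ranks; ++r) {
        ranges.get_data()[r] = r * num_global_rows / num_ranks;
    }
    return part_type::build_from_contiguous(exec, ranges);
}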

Further review threads (resolved) on:
include/ginkgo/core/distributed/matrix.hpp (2)
omp/test/distributed/matrix_kernels.cpp
reference/test/distributed/matrix_kernels.cpp
MarcelKoch and others added 6 commits July 6, 2022 11:57
This is necessary on some systems to make sure that the row_gather kernel has finished before the data is communicated.
- documentation
- renaming

Co-authored-by: Tobias Ribizel <[email protected]>
- fix omp test
- documentation

Co-authored-by: Yuhsiang Tsai <[email protected]>
- renaming
- formatting

Co-authored-by: Pratik Nayak <[email protected]>
- documentation
- naming
- const correctness
- fix include

Co-authored-by: Gregor Olenik <[email protected]>
@MarcelKoch (Member Author) commented:

format!

Co-authored-by: Marcel Koch <[email protected]>
@MarcelKoch MarcelKoch added 1:ST:ready-to-merge This PR is ready to merge. and removed 1:ST:ready-for-review This PR is ready for review labels Jul 7, 2022
@yhmtsai (Member) left a comment:

I would like to make sure the test matrix's properties are what you want to test before merging. The other comments are about formatting, and I don't mind if you address them in another PR to avoid re-running the CI pipeline.

Comment on lines 508 to 509
 * with other processors.
 * @param local_b The full local vector to be communicated. The subset of

Member:
Suggested change (insert a blank doc-comment line before the @param tag):
 * with other processors.
 *
 * @param local_b The full local vector to be communicated. The subset of

nit

Comment on lines 445 to 446
 * constraints as LocalMatrixType.
 * @param exec Executor associated with this matrix.

Member:
Suggested change (insert a blank doc-comment line before the @param tag):
 * constraints as LocalMatrixType.
 *
 * @param exec Executor associated with this matrix.

Nit

Comment on lines +231 to +232
std::uniform_int_distribution<int>(static_cast<int>(num_cols),
static_cast<int>(num_cols)),
Member:
Generating the fully dense matrix is what you want, right?

Member Author:
I might have wanted to ensure that no parts are empty, so I used a dense matrix here. I don't think it would make any difference though, so I will keep it as it is.
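
For context (not part of the PR): with equal lower and upper bounds, std::uniform_int_distribution degenerates to a constant, so the generator in the snippet above requests exactly num_cols nonzeros in every row, i.e. a fully dense matrix:

#include <iostream>
#include <random>

int main()
{
    std::default_random_engine engine;
    const int num_cols = 5;
    // bounds are equal, so the distribution always returns num_cols
    std::uniform_int_distribution<int> nnz_per_row(num_cols, num_cols);
    std::cout << nnz_per_row(engine) << '\n';  // always prints 5
}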

non_local++;
}
}
// store localonal data to output
Member:
Suggested change (fix the typo):
// store localonal data to output
// store local data to output

I guess this is left over from renaming "diagonal" to "local".

Distributed Ginkgo automation moved this from Review in progress to Reviewer approved Jul 7, 2022
- documentation

Co-authored-by: Yuhsiang Tsai <[email protected]>
@sonarcloud (bot) commented Jul 8, 2022

SonarCloud Quality Gate passed!

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 41 (rating A)

Coverage: 89.8%
Duplication: 8.4%
@MarcelKoch MarcelKoch merged commit 73ed0da into distributed-develop Jul 8, 2022
Distributed Ginkgo automation moved this from Reviewer approved to Done Jul 8, 2022
@MarcelKoch MarcelKoch deleted the new-distributed-matrix branch July 8, 2022 13:49
MarcelKoch added a commit that referenced this pull request Jul 8, 2022
This PR adds support for an MPI distributed matrix. The matrix is distributed row-wise. This means each rank owns a number of rows of the global matrix, as defined in a partition. Locally, the rows are split into a local part, which contains only entries with column indices that belong to the local row range.
The remaining entries are gathered into a non-local part. The non-local columns are renumbered such that empty columns are removed. This renumbering is stored in the matrix, although there is no access to it at the moment.

Currently, only reference and OpenMP kernels are available for reading the matrix. All other operations are supported by all executors.

Related PR: #971
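
A minimal usage sketch matching this description, written against the interface that later shipped in Ginkgo 1.5.0 (namespaces and exact signatures are assumptions relative to this PR's state):

#include <mpi.h>
#include <ginkgo/ginkgo.hpp>

int main(int argc, char* argv[])
{
    gko::experimental::mpi::environment env(argc, argv);
    gko::experimental::mpi::communicator comm(MPI_COMM_WORLD);
    auto exec = gko::ReferenceExecutor::create();

    // 1D row partition: rank 0 owns rows [0, 2), rank 1 owns rows [2, 4)
    gko::array<gko::int64> ranges(exec, {0, 2, 4});
    auto part = gko::share(
        gko::experimental::distributed::Partition<gko::int32, gko::int64>::
            build_from_contiguous(exec, ranges));

    // every rank reads the global data; only its rows are kept, split into
    // the local part and the renumbered non-local part
    gko::matrix_data<double, gko::int64> data{gko::dim<2>{4, 4}};
    // ... fill data.nonzeros ...
    auto mat = gko::experimental::distributed::Matrix<
        double, gko::int32, gko::int64>::create(exec, comm);
    mat->read_distributed(data, part.get());
}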
MarcelKoch added a commit that referenced this pull request Aug 16, 2022, with the same description as the Jul 8, 2022 commit above.
MarcelKoch added a commit that referenced this pull request Sep 28, 2022
This PR will enable using distributed matrices and vectors (#971 and #961) in the following iterative solvers:
- Bicgstab
- Cg
- Cgs
- Fcg
- Ir

Currently not supported are:
- Bicg
- [cb_]Gmres
- Idr
- Multigrid
- Lower/Upper_trs

The handling of the distributed/non-distributed data is done via additional dispatch routines that expand on precision_dispatch_real_complex, and helper routines to extract the underlying dense matrix from either a distributed or dense vector. Also, the residual norm stopping criteria implementation has been changed to also use a similar dispatch approach.

This also contains some fixes regarding the doxygen documentation for the other distributed classes.

Related PR: #976
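
A sketch of the dispatch idea described in the commit message above (a hypothetical helper, not Ginkgo's actual routine; names such as get_local_vector follow the 1.5.0 API and are assumptions here): branch on whether the argument is a distributed vector and hand the callable the underlying dense data either way.

#include <ginkgo/ginkgo.hpp>

template <typename ValueType, typename Fn>
void dense_dispatch(Fn&& fn, const gko::LinOp* in)
{
    using DistVec = gko::experimental::distributed::Vector<ValueType>;
    using DenseVec = gko::matrix::Dense<ValueType>;
    if (auto dist = dynamic_cast<const DistVec*>(in)) {
        // distributed case: operate on the rank-local dense block
        fn(dist->get_local_vector());
    } else if (auto dense = dynamic_cast<const DenseVec*>(in)) {
        fn(dense);
    } else {
        GKO_NOT_SUPPORTED(*in);
    }
}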
MarcelKoch added a commit that referenced this pull request Oct 5, 2022, with the same description as the Jul 8, 2022 commit above.
MarcelKoch added a commit that referenced this pull request Oct 5, 2022, with the same description as the Sep 28, 2022 commit above.
MarcelKoch added a commit that referenced this pull request Oct 26, 2022, with the same description as the Jul 8, 2022 commit above.
MarcelKoch added a commit that referenced this pull request Oct 26, 2022, with the same description as the Sep 28, 2022 commit above.
MarcelKoch added a commit that referenced this pull request Oct 31, 2022, with the same description as the Jul 8, 2022 commit above.
MarcelKoch added a commit that referenced this pull request Oct 31, 2022, with the same description as the Sep 28, 2022 commit above.
MarcelKoch added a commit that referenced this pull request Oct 31, 2022
This PR will add basic distributed data structures (matrix and vector) and enable some solvers for these types. This PR contains the following PRs:
- #961
- #971 
- #976 
- #985 
- #1007 
- #1030 
- #1054

# Additional Changes

- moves new types into experimental namespace
- moves existing Partition class into experimental namespace
- moves existing mpi namespace into experimental namespace
- makes generic_scoped_device_id_guard destructor noexcept by terminating if restoring the original device id fails
- switches to blocking communication in the SpMV if OpenMPI version 4.0.x is used
- disables Horeka mpi tests and uses nla-gpu instead

Related PR: #1133
tcojean added a commit that referenced this pull request Nov 12, 2022
Advertise release 1.5.0 and last changes

+ Add changelog
+ Update third-party libraries
+ A small fix to a CMake file

See PR: #1195

The Ginkgo team is proud to announce the new Ginkgo minor release 1.5.0. This release brings many important new features such as:
- MPI-based multi-node support for all matrix formats and most solvers;
- full DPC++/SYCL support;
- functionality and interface for GPU-resident sparse direct solvers;
- an interface for wrapping solvers with scaling and reordering applied;
- a new algebraic Multigrid solver/preconditioner;
- improved mixed-precision support;
- support for device matrix assembly;

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.0+
  + DPC++ module: Intel OneAPI 2021.3 with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: GCC 5.5+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


Algorithm and important feature additions:
+ Add MPI-based multi-node support for all matrix formats and solvers (except GMRES and IDR). ([#676](#676), [#908](#908), [#909](#909), [#932](#932), [#951](#951), [#961](#961), [#971](#971), [#976](#976), [#985](#985), [#1007](#1007), [#1030](#1030), [#1054](#1054), [#1100](#1100), [#1148](#1148))
+ Porting the remaining algorithms (preconditioners like ISAI, Jacobi, Multigrid, ParILU(T) and ParIC(T)) to DPC++/SYCL, update to SYCL 2020, and improve support and performance ([#896](#896), [#924](#924), [#928](#928), [#929](#929), [#933](#933), [#943](#943), [#960](#960), [#1057](#1057), [#1110](#1110),  [#1142](#1142))
+ Add a Sparse Direct interface supporting GPU-resident numerical LU factorization, symbolic Cholesky factorization, improved triangular solvers, and more ([#957](#957), [#1058](#1058), [#1072](#1072), [#1082](#1082))
+ Add a ScaleReordered interface that can wrap solvers and automatically apply reorderings and scalings ([#1059](#1059))
+ Add a Multigrid solver and improve the aggregation based PGM coarsening scheme ([#542](#542), [#913](#913), [#980](#980), [#982](#982),  [#986](#986))
+ Add infrastructure for unified, lambda-based, backend agnostic, kernels and utilize it for some simple kernels ([#833](#833), [#910](#910), [#926](#926))
+ Merge different CUDA, HIP, DPC++ and OpenMP tests under a common interface ([#904](#904), [#973](#973), [#1044](#1044), [#1117](#1117))
+ Add a device_matrix_data type for device-side matrix assembly ([#886](#886), [#963](#963), [#965](#965))
+ Add support for mixed real/complex BLAS operations ([#864](#864))
+ Add a FFT LinOp for all but DPC++/SYCL ([#701](#701))
+ Add FBCSR support for NVIDIA and AMD GPUs and CPUs with OpenMP ([#775](#775))
+ Add CSR scaling ([#848](#848))
+ Add array::const_view and equivalent to create constant matrices from non-const data ([#890](#890))
+ Add a RowGatherer LinOp supporting mixed precision to gather dense matrix rows ([#901](#901))
+ Add mixed precision SparsityCsr SpMV support ([#970](#970))
+ Allow creating CSR submatrix including from (possibly discontinuous) index sets ([#885](#885), [#964](#964))
+ Add a scaled identity addition (M <- aI + bM) feature interface and impls for Csr and Dense ([#942](#942))


Deprecations and important changes:
+ Deprecate AmgxPgm in favor of the new Pgm name. ([#1149](#1149)).
+ Deprecate specialized residual norm classes in favor of a common `ResidualNorm` class ([#1101](#1101))
+ Deprecate CamelCase non-polymorphic types in favor of snake_case versions (like array, machine_topology, uninitialized_array, index_set) ([#1031](#1031), [#1052](#1052))
+ Bug fix: restrict gko::share to rvalue references (*possible interface break*) ([#1020](#1020))
+ Bug fix: when using cuSPARSE's triangular solvers, specifying the factory parameter `num_rhs` is now required when solving for more than one right-hand side, otherwise an exception is thrown ([#1184](#1184)).
+ Drop official support for old CUDA < 9.2 ([#887](#887))


Improved performance additions:
+ Reuse tmp storage in reductions in solvers and add a mutable workspace to all solvers ([#1013](#1013), [#1028](#1028))
+ Add HIP unsafe atomic option for AMD ([#1091](#1091))
+ Prefer vendor implementations for Dense dot, conj_dot and norm2 when available ([#967](#967)).
+ Tuned OpenMP SellP, COO, and ELL SpMV kernels for a small number of RHS ([#809](#809))


Fixes:
+ Fix various compilation warnings ([#1076](#1076), [#1183](#1183), [#1189](#1189))
+ Fix issues with hwloc-related tests ([#1074](#1074))
+ Fix include headers for GCC 12 ([#1071](#1071))
+ Fix for simple-solver-logging example ([#1066](#1066))
+ Fix for potential memory leak in Logger ([#1056](#1056))
+ Fix logging of mixin classes ([#1037](#1037))
+ Improve value semantics for LinOp types, like moved-from state in cross-executor copy/clones ([#753](#753))
+ Fix some matrix SpMV and conversion corner cases ([#905](#905), [#978](#978))
+ Fix uninitialized data ([#958](#958))
+ Fix CUDA version requirement for cusparseSpSM ([#953](#953))
+ Fix several issues within bash-script ([#1016](#1016))
+ Fixes for `NVHPC` compiler support ([#1194](#1194))


Other additions:
+ Simplify and properly name GMRES kernels ([#861](#861))
+ Improve pkg-config support for non-CMake libraries ([#923](#923), [#1109](#1109))
+ Improve gdb pretty printer ([#987](#987), [#1114](#1114))
+ Add a logger highlighting inefficient allocation and copy patterns ([#1035](#1035))
+ Improved and optimized test random matrix generation ([#954](#954), [#1032](#1032))
+ Better CSR strategy defaults ([#969](#969))
+ Add `move_from` to `PolymorphicObject` ([#997](#997))
+ Remove unnecessary device_guard usage ([#956](#956))
+ Improvements to the generic accessor for mixed-precision ([#727](#727))
+ Add a naive lower triangular solver implementation for CUDA ([#764](#764))
+ Add support for int64 indices from CUDA 11 onward with SpMV and SpGEMM ([#897](#897))
+ Add a L1 norm implementation ([#900](#900))
+ Add reduce_add for arrays ([#831](#831))
+ Add utility to simplify Dense View creation from an existing Dense vector ([#1136](#1136)).
+ Add a custom transpose implementation for Fbcsr and Csr transpose for unsupported vendor types ([#1123](#1123))
+ Make IDR random initialization deterministic ([#1116](#1116))
+ Move the algorithm choice for triangular solvers from Csr::strategy_type to a factory parameter ([#1088](#1088))
+ Update CUDA archCoresPerSM ([#1175](#1175))
+ Add kernels for Csr sparsity pattern lookup ([#994](#994))
+ Differentiate between structural and numerical zeros in Ell/Sellp ([#1027](#1027))
+ Add a binary IO format for matrix data ([#984](#984))
+ Add a tuple zip_iterator implementation ([#966](#966))
+ Simplify kernel stubs and declarations ([#888](#888))
+ Simplify GKO_REGISTER_OPERATION with lambdas ([#859](#859))
+ Simplify copy to device in tests and examples ([#863](#863))
+ More verbose output to array assertions ([#858](#858))
+ Allow parallel compilation for Jacobi kernels ([#871](#871))
+ Change clang-format pointer alignment to left ([#872](#872))
+ Various improvements and fixes to the benchmarking framework ([#750](#750), [#759](#759), [#870](#870), [#911](#911), [#1033](#1033), [#1137](#1137))
+ Various documentation improvements ([#892](#892), [#921](#921), [#950](#950), [#977](#977), [#1021](#1021), [#1068](#1068), [#1069](#1069), [#1080](#1080), [#1081](#1081), [#1108](#1108), [#1153](#1153), [#1154](#1154))
+ Various CI improvements ([#868](#868), [#874](#874), [#884](#884), [#889](#889), [#899](#899), [#903](#903),  [#922](#922), [#925](#925), [#930](#930), [#936](#936), [#937](#937), [#958](#958), [#882](#882), [#1011](#1011), [#1015](#1015), [#989](#989), [#1039](#1039), [#1042](#1042), [#1067](#1067), [#1073](#1073), [#1075](#1075), [#1083](#1083), [#1084](#1084), [#1085](#1085), [#1139](#1139), [#1178](#1178), [#1187](#1187))
Labels
1:ST:ready-to-merge This PR is ready to merge. mod:all This touches all Ginkgo modules. mod:mpi This is related to the MPI module reg:build This is related to the build system. reg:testing This is related to testing. type:distributed-functionality
Projects: no open projects
Development: successfully merging this pull request may close these issues: none yet
7 participants