
Add mixed-spmv example #780

Merged
yhmtsai merged 3 commits into develop from mixed-spmv-example on Jun 7, 2021
Conversation

@yhmtsai yhmtsai (Member) commented May 31, 2021

This PR adds a mixed-precision SpMV example based on the ELL format.
It reads the matrix from a file and generates the right-hand side with entries drawn uniformly from (0, 1).
The current matrix is Oberwolfach/LF10 (previously HB/bcspwr02) from SuiteSparse.
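For context, here is a minimal sketch of the setup described above, assuming the usual Ginkgo example layout; the file name `data/A.mtx`, the fixed RNG seed, and the reference executor are illustrative and not necessarily what the example itself uses.

```cpp
#include <ginkgo/ginkgo.hpp>

#include <fstream>
#include <random>

int main()
{
    // The real example selects the executor from the command line;
    // the reference executor keeps this sketch self-contained.
    auto exec = gko::ReferenceExecutor::create();

    // Read the system matrix into the ELL format used by the example.
    auto A = gko::read<gko::matrix::Ell<double>>(std::ifstream("data/A.mtx"),
                                                 exec);

    // Generate the right-hand side with entries drawn uniformly from (0, 1).
    std::default_random_engine engine(42);
    std::uniform_real_distribution<double> dist(0.0, 1.0);
    auto b = gko::matrix::Dense<double>::create(
        exec, gko::dim<2>{A->get_size()[0], 1});
    for (gko::size_type i = 0; i < b->get_size()[0]; ++i) {
        b->at(i, 0) = dist(engine);
    }
}
```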

@yhmtsai yhmtsai added the reg:example and 1:ST:ready-for-review labels May 31, 2021
@yhmtsai yhmtsai self-assigned this May 31, 2021
@ginkgo-bot ginkgo-bot added the reg:build label May 31, 2021
@yhmtsai yhmtsai requested a review from hartwiganzt May 31, 2021 13:12
codecov bot commented May 31, 2021

Codecov Report

Merging #780 (47b8199) into develop (2f61846) will decrease coverage by 0.09%.
The diff coverage is n/a.

❗ Current head 47b8199 differs from the pull request's most recent head ac39b9a. Consider uploading reports for commit ac39b9a to get more accurate results.

@@             Coverage Diff             @@
##           develop     #780      +/-   ##
===========================================
- Coverage    94.26%   94.17%   -0.10%     
===========================================
  Files          400      400              
  Lines        31578    31080     -498     
===========================================
- Hits         29768    29270     -498     
  Misses        1810     1810              
| Impacted Files | Coverage Δ |
|---|---|
| include/ginkgo/core/base/array.hpp | 89.56% <0.00%> (-4.51%) ⬇️ |
| core/base/extended_float.hpp | 91.26% <0.00%> (-0.98%) ⬇️ |
| core/test/base/utils.cpp | 95.45% <0.00%> (-0.26%) ⬇️ |
| core/matrix/dense.cpp | 99.43% <0.00%> (-0.09%) ⬇️ |
| omp/test/matrix/dense_kernels.cpp | 99.78% <0.00%> (-0.04%) ⬇️ |
| omp/matrix/dense_kernels.cpp | 98.30% <0.00%> (-0.02%) ⬇️ |
| core/test/base/array.cpp | 100.00% <0.00%> (ø) |
| core/test/matrix/identity.cpp | 100.00% <0.00%> (ø) |
| reference/matrix/dense_kernels.cpp | 100.00% <0.00%> (ø) |
| reference/test/matrix/identity.cpp | 100.00% <0.00%> (ø) |
| ... and 4 more | |

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update b6fa8bf...ac39b9a.

@tcojean tcojean (Member) left a comment

LGTM.

@@ -22,6 +22,7 @@ set(EXAMPLES_LIST
custom-stopping-criterion
ginkgo-overhead
minimal-cuda-solver
mixed-spmv
Member

I think it should be put above (in the EXAMPLES_EXEC_LIST), since it uses an executor.

@yhmtsai yhmtsai (Member Author) May 31, 2021

I think EXAMPLES_EXEC_LIST is for validating the examples?

@@ -0,0 +1 @@
basic
Member

This could go under techniques (advanced techniques) or we could create a new category for mixed precision in doc/scripts/examples.pl:193.

Comment on lines 6 to 16
High Precision time(s): 1.2272200000e-05
High Precision result norm: 1.1581546305e+01
Low Precision time(s): 1.2360400000e-05
Low Precision relative error: 4.4554843634e-08
Hp * Lp -> Hp time(s): 1.2302300000e-05
Hp * Lp -> Hp relative error: 1.4490698720e-08
Lp * Lp -> Hp time(s): 1.2614800000e-05
Lp * Lp -> Hp relative error: 1.4490698720e-08
@endcode
Member

Is there always no difference like this? How about using CUDA? Or a more representative matrix?

@yhmtsai (Member Author)

It depends on the matrix. I am not sure whether I can find a good one.

@hartwiganzt hartwiganzt (Collaborator) Jun 3, 2021

ah, the rand vectors are already generated in lp for the LPLP and the LPLP experiment - then it makes sense, I think.

@yhmtsai (Member Author)

do you mean LP * LP -> HP and HP * LP -> HP?

@pratikvn pratikvn (Member) left a comment

LGTM! Good idea creating a new type called mixed-precision for the examples. Maybe you can also add the adaptiveprecision-blockjacobi example to it as well?

@upsj upsj (Member) left a comment

LGTM, only minor nits

Comment on lines 92 to 94
std::shared_ptr<gko::LinOp> A, std::shared_ptr<gko::LinOp> b,
std::shared_ptr<gko::LinOp> x)
Member

How about using plain pointers here? Then you could get rid of all the share() calls below.

@yhmtsai (Member Author)

Alternatively, it would use lend when calling the function, right?
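
For concreteness, a rough sketch of the two alternatives discussed in this thread; the function names are made up and this is not the code of the example itself.

```cpp
#include <ginkgo/ginkgo.hpp>

#include <memory>

// Variant A: shared_ptr parameters, as in the snippet above. Callers have to
// own the operands through shared_ptr (e.g. wrap them with gko::share).
void spmv_shared(std::shared_ptr<gko::LinOp> A, std::shared_ptr<gko::LinOp> b,
                 std::shared_ptr<gko::LinOp> x)
{
    A->apply(gko::lend(b), gko::lend(x));
}

// Variant B: plain pointers. Callers keep ownership and pass
// gko::lend(smart_ptr), which simply yields the underlying raw pointer.
void spmv_plain(gko::LinOp* A, const gko::LinOp* b, gko::LinOp* x)
{
    A->apply(b, x);
}
```

With variant B, a call site holding smart pointers would look like `spmv_plain(gko::lend(A), gko::lend(b), gko::lend(x));`.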

@upsj upsj added this to the Ginkgo 1.4.0 milestone Jun 4, 2021
@yhmtsai yhmtsai (Member Author) commented Jun 4, 2021

@pratikvn For adaptiveprecision-blockjacobi, does it support multiple kinds (categories) for one example?

@yhmtsai yhmtsai (Member Author) commented Jun 4, 2021

I added another case, LP * HP -> HP, and got the following results.

root@75354bac1c28:~/ginkgo/build_gnu8/examples/mixed-spmv# ./mixed-spmv
High Precision time(s): 1.7603000000e-06
High Precision result norm: 1.1581546305e+01
Low Precision time(s): 1.7575000000e-06
Low Precision relative error: 4.4554843634e-08
Hp * Lp -> Hp time(s): 2.0508000000e-06
Hp * Lp -> Hp relative error: 1.4490698720e-08
Lp * Lp -> Hp time(s): 2.0208000000e-06
Lp * Lp -> Hp relative error: 1.4490698720e-08
Lp * Hp -> Hp time(s): 2.0208000000e-06
Lp * Hp -> Hp relative error: 0.0000000000e+00
root@75354bac1c28:~/ginkgo/build_gnu8/examples/mixed-spmv# ./mixed-spmv omp
High Precision time(s): 1.2019345000e-03
High Precision result norm: 1.1581546305e+01
Low Precision time(s): 1.1902475000e-03
Low Precision relative error: 4.4554843634e-08
Hp * Lp -> Hp time(s): 1.6711660000e-03
Hp * Lp -> Hp relative error: 1.4490698720e-08
Lp * Lp -> Hp time(s): 1.0782604000e-03
Lp * Lp -> Hp relative error: 1.4490698720e-08
Lp * Hp -> Hp time(s): 1.0782604000e-03
Lp * Hp -> Hp relative error: 0.0000000000e+00
root@75354bac1c28:~/ginkgo/build_gnu8/examples/mixed-spmv# ./mixed-spmv hip
High Precision time(s): 3.3548600000e-05
High Precision result norm: 1.1581546305e+01
Low Precision time(s): 3.2933300000e-05
Low Precision relative error: 4.8531901730e-08
Hp * Lp -> Hp time(s): 3.3914300000e-05
Hp * Lp -> Hp relative error: 1.4490698711e-08
Lp * Lp -> Hp time(s): 2.9439700000e-05
Lp * Lp -> Hp relative error: 1.4490698711e-08
Lp * Hp -> Hp time(s): 2.9439700000e-05
Lp * Hp -> Hp relative error: 6.9126625015e-17

float is enough to represent the current matrix.
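
For reference, a minimal sketch of what the Hp * Lp -> Hp case above boils down to with Ginkgo's mixed-precision ELL apply; the file names are illustrative, and the actual example additionally times the apply and compares against the high-precision result.

```cpp
#include <ginkgo/ginkgo.hpp>

#include <fstream>

int main()
{
    auto exec = gko::ReferenceExecutor::create();
    // Hp matrix: double-precision ELL.
    auto A_hp = gko::read<gko::matrix::Ell<double>>(
        std::ifstream("data/A.mtx"), exec);
    // Lp input vector: single precision.
    auto b_lp = gko::read<gko::matrix::Dense<float>>(
        std::ifstream("data/b.mtx"), exec);
    // Hp output vector: double precision.
    auto x_hp = gko::matrix::Dense<double>::create(
        exec, gko::dim<2>{A_hp->get_size()[0], 1});
    // Mixed-precision apply: matrix, input, and output value types may differ.
    A_hp->apply(gko::lend(b_lp), gko::lend(x_hp));
}
```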

@yhmtsai yhmtsai (Member Author) commented Jun 4, 2021

I found another matrix, LF10, from SuiteSparse:

High Precision time(s): 9.9190000000e-07
High Precision result norm: 2.1979863878e+05
Low Precision time(s): 9.7780000000e-07
Low Precision relative error: 4.3780295322e-08
Hp * Lp -> Hp time(s): 1.3637000000e-06
Hp * Lp -> Hp relative error: 2.3346749028e-08
Lp * Lp -> Hp time(s): 1.3477000000e-06
Lp * Lp -> Hp relative error: 5.4146586932e-08
Lp * Hp -> Hp time(s): 1.3477000000e-06
Lp * Hp -> Hp relative error: 3.7059522494e-08
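
As a side note, a relative error like the ones reported above can be computed along these lines, assuming error = ||x_mixed - x_hp||_2 / ||x_hp||_2; the helper function and variable names are hypothetical and may differ from what the example actually does.

```cpp
#include <ginkgo/ginkgo.hpp>

#include <memory>

using vec = gko::matrix::Dense<double>;

// x_hp: high-precision reference result, x_mixed: mixed-precision result
// already converted to double; both live on the executor `exec`.
double relative_error(std::shared_ptr<const gko::Executor> exec,
                      const vec* x_hp, const vec* x_mixed)
{
    auto host = exec->get_master();
    auto neg_one = gko::initialize<vec>({-1.0}, exec);
    auto norm_ref = gko::initialize<vec>({0.0}, exec);
    auto norm_err = gko::initialize<vec>({0.0}, exec);
    x_hp->compute_norm2(gko::lend(norm_ref));
    auto diff = gko::clone(x_mixed);                        // diff = x_mixed
    diff->add_scaled(gko::lend(neg_one), gko::lend(x_hp));  // diff -= x_hp
    diff->compute_norm2(gko::lend(norm_err));
    // Copy the scalar norms to the host before reading them.
    auto norm_ref_host = gko::clone(host, gko::lend(norm_ref));
    auto norm_err_host = gko::clone(host, gko::lend(norm_err));
    return norm_err_host->at(0, 0) / norm_ref_host->at(0, 0);
}
```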

@pratikvn pratikvn (Member) commented Jun 7, 2021

adaptiveprecision-blockjacobi does use multiple precisions underneath, and the user can select the conversions through a parameter, so I would say that it belongs to mixed-precision, but I do see that it is slightly different from the proper mixed precision that this example has.

@yhmtsai yhmtsai (Member Author) commented Jun 7, 2021

This is the current generation of the documentation:
[screenshot of the generated examples documentation page]

@yhmtsai yhmtsai added the 1:ST:ready-to-merge label and removed the 1:ST:ready-for-review label Jun 7, 2021
Co-authored-by: Pratik Nayak <[email protected]>
Co-authored-by: Tobias Ribizel <[email protected]>
@yhmtsai yhmtsai merged commit 9359008 into develop Jun 7, 2021
@yhmtsai yhmtsai deleted the mixed-spmv-example branch June 7, 2021 14:43
sonarcloud bot commented Jun 7, 2021

Kudos, SonarCloud Quality Gate passed!

Bugs: A (0 bugs)
Vulnerabilities: A (0 vulnerabilities)
Security Hotspots: A (0 security hotspots)
Code Smells: A (0 code smells)

No coverage information
No duplication information

tcojean added a commit that referenced this pull request Aug 20, 2021
Ginkgo release 1.4.0

The Ginkgo team is proud to announce the new Ginkgo minor release 1.4.0. This
release brings most of the Ginkgo functionality to the Intel DPC++ ecosystem
which enables Intel-GPU and CPU execution. The only Ginkgo features which have
not been ported yet are some preconditioners.

Ginkgo's mixed-precision support is greatly enhanced thanks to:
1. The new Accessor concept, which allows writing kernels featuring on-the-fly
memory compression, among other features. The accessor can be used as
header-only, see the [accessor BLAS benchmarks repository](https://github.com/ginkgo-project/accessor-BLAS/tree/develop) as a usage example.
2. All LinOps now transparently support mixed-precision execution. By default,
this is done through a temporary copy which may have a performance impact but
already allows mixed-precision research.

Native mixed-precision ELL kernels are implemented which do not see this cost.
The accessor is also leveraged in a new CB-GMRES solver which allows for
performance improvements by compressing the Krylov basis vectors. Many other
features have been added to Ginkgo, such as reordering support, a new IDR
solver, Incomplete Cholesky preconditioner, matrix assembly support (only CPU
for now), machine topology information, and more!

Supported systems and requirements:
+ For all platforms, cmake 3.13+
+ C++14 compliant compiler
+ Linux and MacOS
  + gcc: 5.3+, 6.3+, 7.3+, all versions after 8.1+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + CUDA module: CUDA 9.0+
  + HIP module: ROCm 3.5+
  + DPC++ module: Intel OneAPI 2021.3. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: gcc 5.3+, 6.3+, 7.3+, all versions after 8.1+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.0+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


Algorithm and important feature additions:
+ Add a new DPC++ Executor for SYCL execution and other base utilities
  [#648](#648), [#661](#661), [#757](#757), [#832](#832)
+ Port matrix formats, solvers and related kernels to DPC++. For some kernels,
  also make use of a shared kernel implementation for all executors (except
  Reference). [#710](#710), [#799](#799), [#779](#779), [#733](#733), [#844](#844), [#843](#843), [#789](#789), [#845](#845), [#849](#849), [#855](#855), [#856](#856)
+ Add accessors which allow multi-precision kernels, among other things.
  [#643](#643), [#708](#708)
+ Add support for mixed precision operations through apply in all LinOps. [#677](#677)
+ Add incomplete Cholesky factorizations and preconditioners as well as some
  improvements to ILU. [#672](#672), [#837](#837), [#846](#846)
+ Add an AMGX implementation and kernels on all devices but DPC++.
  [#528](#528), [#695](#695), [#860](#860)
+ Add a new mixed-precision capability solver, Compressed Basis GMRES
  (CB-GMRES). [#693](#693), [#763](#763)
+ Add the IDR(s) solver. [#620](#620)
+ Add a new fixed-size block CSR matrix format (for the Reference executor).
  [#671](#671), [#730](#730)
+ Add native mixed-precision support to the ELL format. [#717](#717), [#780](#780)
+ Add Reverse Cuthill-McKee reordering [#500](#500), [#649](#649)
+ Add matrix assembly support on CPUs. [#644](#644)
+ Extend ISAI from triangular to general and SPD matrices. [#690](#690)

Other additions:
+ Add the possibility to apply real matrices to complex vectors.
  [#655](#655), [#658](#658)
+ Add functions to compute the absolute of a matrix format. [#636](#636)
+ Add symmetric permutation and improve existing permutations.
  [#684](#684), [#657](#657), [#663](#663)
+ Add a MachineTopology class with HWLOC support [#554](#554), [#697](#697)
+ Add an implicit residual norm criterion. [#702](#702), [#818](#818), [#850](#850)
+ Row-major accessor is generalized to more than 2 dimensions and a new
  "block column-major" accessor has been added. [#707](#707)
+ Add a heat equation example. [#698](#698), [#706](#706)
+ Add ccache support in CMake and CI. [#725](#725), [#739](#739)
+ Allow tuning and benchmarking variables non-intrusively. [#692](#692)
+ Add triangular solver benchmark [#664](#664)
+ Add benchmarks for BLAS operations [#772](#772), [#829](#829)
+ Add support for different precisions and consistent index types in benchmarks.
  [#675](#675), [#828](#828)
+ Add a Github bot system to facilitate development and PR management.
  [#667](#667), [#674](#674), [#689](#689), [#853](#853)
+ Add Intel (DPC++) CI support and enable CI on HPC systems. [#736](#736), [#751](#751), [#781](#781)
+ Add ssh debugging for Github Actions CI. [#749](#749)
+ Add pipeline segmentation for better CI speed. [#737](#737)


Changes:
+ Add a Scalar Jacobi specialization and kernels. [#808](#808), [#834](#834), [#854](#854)
+ Add implicit residual log for solvers and benchmarks. [#714](#714)
+ Change handling of the conjugate in the dense dot product. [#755](#755)
+ Improved Dense stride handling. [#774](#774)
+ Multiple improvements to the OpenMP kernels performance, including COO,
an exclusive prefix sum, and more. [#703](#703), [#765](#765), [#740](#740)
+ Allow specialization of submatrix and other dense creation functions in solvers. [#718](#718)
+ Improved Identity constructor and treatment of rectangular matrices. [#646](#646)
+ Allow CUDA/HIP executors to select allocation mode. [#758](#758)
+ Check if executors share the same memory. [#670](#670)
+ Improve test install and smoke testing support. [#721](#721)
+ Update the JOSS paper citation and add publications in the documentation.
  [#629](#629), [#724](#724)
+ Improve the version output. [#806](#806)
+ Add some utilities for dim and span. [#821](#821)
+ Improved solver and preconditioner benchmarks. [#660](#660)
+ Improve benchmark timing and output. [#669](#669), [#791](#791), [#801](#801), [#812](#812)


Fixes:
+ Sorting fix for the Jacobi preconditioner. [#659](#659)
+ Also log the first residual norm in CGS [#735](#735)
+ Fix BiCG and HIP CSR to work with complex matrices. [#651](#651)
+ Fix Coo SpMV on strided vectors. [#807](#807)
+ Fix segfault of extract_diagonal, add short-and-fat test. [#769](#769)
+ Fix device_reset issue by moving counter/mutex to device. [#810](#810)
+ Fix `EnableLogging` superclass. [#841](#841)
+ Support ROCm 4.1.x and breaking HIP_PLATFORM changes. [#726](#726)
+ Decreased test size for a few device tests. [#742](#742)
+ Fix multiple issues with our CMake HIP and RPATH setup.
  [#712](#712), [#745](#745), [#709](#709)
+ Cleanup our CMake installation step. [#713](#713)
+ Various simplification and fixes to the Windows CMake setup. [#720](#720), [#785](#785)
+ Simplify third-party integration. [#786](#786)
+ Improve Ginkgo device arch flags management. [#696](#696)
+ Other fixes and improvements to the CMake setup.
  [#685](#685), [#792](#792), [#705](#705), [#836](#836)
+ Clarification of dense norm documentation [#784](#784)
+ Various development tools fixes and improvements [#738](#738), [#830](#830), [#840](#840)
+ Make multiple operators/constructors explicit. [#650](#650), [#761](#761)
+ Fix some issues, memory leaks and warnings found by MSVC.
  [#666](#666), [#731](#731)
+ Improved solver memory estimates and consistent iteration counts [#691](#691)
+ Various logger improvements and fixes [#728](#728), [#743](#743), [#754](#754)
+ Fix for ForwardIterator requirements in iterator_factory. [#665](#665)
+ Various benchmark fixes. [#647](#647), [#673](#673), [#722](#722)
+ Various CI fixes and improvements. [#642](#642), [#641](#641), [#795](#795), [#783](#783), [#793](#793), [#852](#852)


Related PR: #857
tcojean added a commit that referenced this pull request Aug 23, 2021
Release 1.4.0 to master

Related PR: #866
Labels: 1:ST:ready-to-merge (This PR is ready to merge.), reg:build (This is related to the build system.), reg:example (This is related to the examples.)

6 participants