dpcpp porting the rest of matrix format #845

Merged: 13 commits into develop, Aug 6, 2021

Conversation

@yhmtsai (Member) commented Jul 23, 2021

This PR ports the rest of the matrix formats.

TODO:

  • refine the history so that the unchanged parts remain unchanged in the history
    (the git history already looks good)
  • Csr DPC++ subgroup size (use 16 for now)
  • Csr apply with unsorted Csr: delete the test, because CUDA/we do not guarantee that it always works

@yhmtsai yhmtsai added the 1:ST:WIP This PR is a work in progress. Not ready for review. label Jul 23, 2021
@yhmtsai yhmtsai self-assigned this Jul 23, 2021
@ginkgo-bot ginkgo-bot added mod:core This is related to the core module. mod:dpcpp This is related to the DPC++ module. reg:build This is related to the build system. reg:testing This is related to testing. type:matrix-format This is related to the Matrix formats labels Jul 23, 2021
@upsj upsj added this to the Ginkgo 1.4.0 milestone Jul 24, 2021
@yhmtsai yhmtsai added 1:ST:ready-for-review This PR is ready for review and removed 1:ST:WIP This PR is a work in progress. Not ready for review. labels Jul 26, 2021
@yhmtsai yhmtsai requested a review from a team July 26, 2021 12:13
@Slaedr (Contributor) left a comment

Nice work! I have a few comments and questions below. Most of them are about reducing the number of layers of functions, for code-size reasons.

In addition, have you thought about whether the DPC++ kernel implementations can be unified with those in common? Right now, it looks like most DPC++ kernels are essentially the common kernel, plus an nd_item parameter, plus find-and-replace. Perhaps we could write a simple nd_item implementation for CUDA and make sure it is optimized away in release builds using constexpr functions and the like. There could also be a GKO_KERNEL macro defined to __global__ for CUDA and to nothing for DPC++, etc. Since you already did the work of unifying the CUDA and DPC++ cooperative groups, this would be nice for reducing code duplication in these complicated kernels. I guess local shared memory is perhaps a significant barrier to doing this. Of course, even if it is possible, this will be a different PR.
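
A minimal sketch of that idea, assuming hypothetical names (GKO_KERNEL, GKO_COMPILING_CUDA, and nd_item_adapter are invented for illustration, not existing Ginkgo API):

#include <cstddef>

#if defined(GKO_COMPILING_CUDA)
#define GKO_KERNEL __global__
// CUDA stand-in for sycl::nd_item, meant to be optimized away in release builds
struct nd_item_adapter {
    __device__ std::size_t get_global_linear_id() const
    {
        return static_cast<std::size_t>(blockIdx.x) * blockDim.x + threadIdx.x;
    }
};
#else
#include <CL/sycl.hpp>
#define GKO_KERNEL  // plain function, invoked from cgh.parallel_for
using nd_item_adapter = sycl::nd_item<1>;
#endif

// the same kernel body would then serve both backends
template <typename ValueType>
GKO_KERNEL void scale_kernel(std::size_t n, ValueType alpha, ValueType* x,
                             nd_item_adapter item)
{
    const auto i = item.get_global_linear_id();
    if (i < n) {
        x[i] *= alpha;
    }
}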

cuda/test/matrix/csr_kernels.cpp (resolved)
dpcpp/components/format_conversion.dp.hpp (outdated, resolved)
* @internal
* Returns the number of set bits in the given mask.
*/
__dpct_inline__ int popcnt(uint32 mask) { return sycl::popcount(mask); }
Contributor:

According to the SYCL 2020 standard, it seems to me that the input and output types of popcount should be the same. Ideally we would use bit_cast here, but since we don't have that in C++14:

Suggested change
__dpct_inline__ int popcnt(uint32 mask) { return sycl::popcount(mask); }
__dpct_inline__ int popcnt(uint32 mask) { return sycl::popcount(reinterpret_cast<int>(mask)); }

Or both could be uint32. But maybe it's fine; perhaps someone can take a look at the standard (section 4.19) and give their opinion too.

Member:

IMO bitmasks should always be unsigned and indices preferably signed; I am not sure I like what the SYCL standard is doing here (compare C++20's popcount). I think the implicit uint32 -> int cast for the return value here is fine, since the outputs only range from 0 to 32/64.
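
For reference, C++20's std::popcount only accepts unsigned integer types and returns int, so the comparison looks like this (C++20 snippet, for illustration only):

#include <bit>
#include <cstdint>

// unsigned argument in, int out; e.g. 0xFF has 8 set bits
static_assert(std::popcount(std::uint32_t{0xFF}) == 8, "");
// std::popcount(-1) would not compile: signed arguments are rejected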

dpcpp/components/intrinsics.dp.hpp (outdated, resolved)
dpcpp/components/intrinsics.dp.hpp (outdated, resolved)
dpcpp/matrix/csr_kernels.dp.cpp (outdated, resolved)
dpcpp/matrix/diagonal_kernels.dp.cpp (outdated, resolved)
dpcpp/matrix/hybrid_kernels.dp.cpp (outdated, resolved)
dpcpp/matrix/csr_kernels.dp.cpp (resolved)
dpcpp/test/matrix/sellp_kernels.cpp (outdated, resolved)
@yhmtsai force-pushed the ginkgo_complete_matrix branch 2 times, most recently from f2eb470 to 5eccf69, July 28, 2021 21:48
codecov bot commented Jul 29, 2021

Codecov Report

Merging #845 (247b165) into develop (3ff5118) will decrease coverage by 0.02%.
The diff coverage is 72.22%.

❗ Current head 247b165 differs from the pull request's most recent head 099c2a4. Consider uploading reports for commit 099c2a4 to get more accurate results.

@@             Coverage Diff             @@
##           develop     #845      +/-   ##
===========================================
- Coverage    94.53%   94.51%   -0.03%     
===========================================
  Files          411      412       +1     
  Lines        33134    33117      -17     
===========================================
- Hits         31322    31299      -23     
- Misses        1812     1818       +6     
Impacted Files Coverage Δ
core/test/matrix/fbcsr_sample.hpp 90.81% <ø> (ø)
omp/matrix/coo_kernels.cpp 100.00% <ø> (ø)
omp/matrix/diagonal_kernels.cpp 100.00% <ø> (ø)
include/ginkgo/core/matrix/csr.hpp 43.94% <4.00%> (-3.79%) ⬇️
common/matrix/diagonal_kernels.cpp 92.85% <92.85%> (ø)
common/matrix/coo_kernels.cpp 100.00% <100.00%> (ø)
omp/test/matrix/sellp_kernels.cpp 100.00% <100.00%> (ø)
core/matrix/diagonal.cpp 79.04% <0.00%> (-1.31%) ⬇️
core/base/extended_float.hpp 91.26% <0.00%> (-0.98%) ⬇️
reference/test/matrix/diagonal_kernels.cpp 91.08% <0.00%> (-0.64%) ⬇️
... and 12 more

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 3ff5118...099c2a4.

@upsj (Member) left a comment

LGTM, great job! I think there are a few kernels for which the 3D indexing scheme doesn't necessarily make much sense (conversions especially, and everything with flat indexing and no particular performance tuning), and a few kernels could be moved to common.
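
For illustration, a conversion-style kernel with flat indexing only needs a 1D range instead of the 3D nd_range scheme. A minimal sketch, assuming queue, num_elems, and the device pointers in and out are set up elsewhere:

queue.submit([&](sycl::handler& cgh) {
    cgh.parallel_for(sycl::range<1>(num_elems), [=](sycl::id<1> i) {
        out[i] = in[i];  // flat index, no block/subgroup tuning involved
    });
});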


if (tidx < nnz) {
if (orig_row_idxs[tidx] == orig_col_idxs[tidx]) {
diag[orig_row_idxs[tidx]] = orig_values[tidx];
Member:

This might be a candidate for a common kernel?

* A compile-time list of the numbers of items per thread for which the spmv
* kernel should be compiled.
*/
using compiled_kernels = syn::value_list<int, 6>;
Member:

Was this obtained by tuning?

Member Author:

No, I used the smallest value from the CUDA merge_spmv.
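
For background, such a value list drives compile-time instantiation of the kernel for every listed configuration. A self-contained sketch of the mechanism (illustrative only, not Ginkgo's actual syn machinery):

#include <cstdio>

template <int... Values>
struct value_list {};

template <int items_per_thread>
void launch_spmv(int n)  // stand-in for a templated kernel launch
{
    std::printf("spmv with %d items per thread, n = %d\n", items_per_thread, n);
}

// pick the first listed value that is large enough, falling back to the last
template <int First>
void select_spmv(value_list<First>, int, int n)
{
    launch_spmv<First>(n);
}

template <int First, int Second, int... Rest>
void select_spmv(value_list<First, Second, Rest...>, int desired, int n)
{
    if (desired <= First) {
        launch_spmv<First>(n);
    } else {
        select_spmv(value_list<Second, Rest...>{}, desired, n);
    }
}

int main()
{
    // instantiates launch_spmv<3>, <4>, <6> and <8>; launches the <6> variant
    select_spmv(value_list<3, 4, 6, 8>{}, 5, 1000);
}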

Comment on lines 79 to 80
result_values[row * result_stride + col] =
source_values[row * source_stride + col] * diag[row];
Member:

common kernels?

dpcpp/matrix/sparsity_csr_kernels.dp.cpp (resolved)

*/
template <int group_size, typename ValueType, typename IndexType,
typename Group, typename Callback>
__dpct_inline__ void group_merge(const ValueType *__restrict__ a,
Member:

TBH I'm not sure whether it makes sense to have these also for SYCL. Do random-access shuffles give good performance on Intel GPUs?

Member Author:

I don't know actually.

@yhmtsai (Member Author) commented Jul 29, 2021

@upsj @Slaedr Sorry, I have not deleted the unused functions yet. The functions (merging and some instructions like popcnt, etc.) should not be in this PR. I am still using @upsj's #799, so there is no test covering these functions.
Thanks for the comments on them. I will try to reply to some of them, but my answers may not be correct because I do not have experience with these; we can come back to them when we have another PR for it.

@thoasm (Member) left a comment

Part 1/2 of my review

dpcpp/components/format_conversion.dp.hpp (outdated, resolved)
dpcpp/components/intrinsics.dp.hpp (outdated, resolved)
Comment on lines -432 to -448
TEST_F(Csr, SimpleApplyToCsrMatrixIsEquivalentToRefUnsorted)
{
set_up_apply_data(std::make_shared<Mtx::automatical>());
auto trans = mtx->transpose();
auto d_trans = dmtx->transpose();
gko::test::unsort_matrix(static_cast<Mtx *>(dmtx.get()), rand_engine);
gko::test::unsort_matrix(static_cast<Mtx *>(d_trans.get()), rand_engine);

mtx->apply(trans.get(), square_mtx.get());
dmtx->apply(d_trans.get(), square_dmtx.get());

GKO_ASSERT_MTX_NEAR(square_dmtx, square_mtx, 1e-14);
GKO_ASSERT_MTX_EQ_SPARSITY(square_dmtx, square_mtx);
ASSERT_TRUE(square_dmtx->is_sorted_by_column_index());
}


Member:

What exactly did you mean by "delete the test, because cuda/we do not ensure it always works"? Does CUDA not give any guarantee that an unsorted CSR apply works? We should definitely make sure that it works for us, because we allow CSR to be unsorted!

Member Author:

In cuSPARSE, the CSR format is defined as sorted by row and then by column:
https://docs.nvidia.com/cuda/cusparse/index.html#csr-format
They may implement the SpGEMM with a hash table, so it works in CUDA, but the current implementation in DPC++ does not support unsorted input.

Member:

To quote from the Csr documentation:

 * Both the SpGEMM and SpGEAM operation require the input matrices to be sorted
 * by column index, otherwise the algorithms will produce incorrect results.
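
On the user side, the safe pattern before an SpGEMM-style apply is therefore to sort first. A short sketch, assuming a, b, c are Csr pointers and using the Csr members is_sorted_by_column_index (seen in the deleted test above) and sort_by_column_index:

// ensure sorted input before computing C = A * B
if (!a->is_sorted_by_column_index()) {
    a->sort_by_column_index();
}
a->apply(b.get(), c.get());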

dpcpp/matrix/coo_kernels.dp.cpp (outdated, resolved)
dpcpp/matrix/csr_kernels.dp.cpp (outdated, resolved)
constexpr int minimal_num =
ceildiv(sizeof(IndexType) + sizeof(ValueType), sizeof(IndexType));
int items_per_thread = num_item * 4 / sizeof(IndexType);
return std::max(minimal_num, items_per_thread);
Member:

nit: you mix std::max and max (actually gko::max) in this file. I would replace std::max with max (and the same for std::min, in case that is used) in this file.

Member Author:

I need to think about this part. We have these in CUDA because we use gko::max on the device, not on the host, but in DPC++ there may be no difference.
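
For context, a device-side max helper is typically needed because std::max is not usable in CUDA device code by default; what gko::max amounts to is presumably along these lines (a sketch, with GKO_ATTRIBUTES redefined locally to stand for __host__ __device__ under CUDA):

#if defined(__CUDACC__)
#define GKO_ATTRIBUTES __host__ __device__
#else
#define GKO_ATTRIBUTES
#endif

// callable on both host and device, unlike std::max under nvcc
template <typename T>
GKO_ATTRIBUTES constexpr T max(const T& a, const T& b)
{
    return a > b ? a : b;
}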

dpcpp/matrix/csr_kernels.dp.cpp (outdated, resolved)
dpcpp/matrix/diagonal_kernels.dp.cpp (outdated, resolved)
Comment on lines 184 to 191
const auto tidx = thread::get_thread_id_flat(item_ct1);

if (tidx >= num_nnz) {
return;
}

result_values[tidx] *= diag[col_idxs[tidx]];
}
Member:

Again, inlining this (by hand) should be less code.
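
A sketch of the hand-inlined version, reusing the names from the snippet above (the enclosing queue->submit and the sycl_nd_range helper are assumed from the surrounding launcher code):

cgh.parallel_for(sycl_nd_range(grid, block), [=](sycl::nd_item<3> item_ct1) {
    const auto tidx = thread::get_thread_id_flat(item_ct1);
    if (tidx < num_nnz) {
        result_values[tidx] *= diag[col_idxs[tidx]];
    }
});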

dpcpp/matrix/ell_kernels.dp.cpp (outdated, resolved)
dpcpp/components/intrinsics.dp.hpp (outdated, resolved)
* Performs a suffix sum. Works on the source array and returns whether the thread
* holds the first element of its segment with the same `ind`.
*/
template <unsigned subwarp_size, typename ValueType, typename IndexType>
Member:

Since there is no subwarp, what is this using? The subgroup size?

Member Author:

Yes, it is the subgroup size.

@yhmtsai force-pushed the ginkgo_complete_matrix branch 2 times, most recently from 00075c0 to 247b165, July 30, 2021 16:15
@Slaedr (Contributor) left a comment

I see that you're using the GKO_ENABLE_DEFAULT_HOST macro to reduce the source code, which is better than before. But would it not be possible to have an actual function rather than a macro? If shared memory is needed, couldn't you have an if statement in there that checks whether the requested size is greater than zero? If so, it would create the shared memory buffer and a corresponding cgh.parallel_for launch, and if not, it would just use what you already have in the macro. I don't think there would be any performance implications. Such a function would then be called from the final "kernel" function, very similar to hipLaunchKernelGGL. If something can be done with reasonable ease within the language, we should prefer that over macros. If that turns out to be too complicated, then I think what you have right now is fine.

@yhmtsai (Member Author) commented Aug 2, 2021

@Slaedr the shared memory contains both static and dynamic allocations. From dynamic_shared_memory, I can only know the dynamic one. The function also needs to know whether to pass the pointers to the kernel. dpct will not help with that.

__global__ void kernel(Args args)
{
    __shared__ float a[32];     // static shared memory (illustrative size)
    __shared__ double b[32];    // static shared memory (illustrative size)
    extern __shared__ int s[];  // dynamic shared memory, sized at launch
}
kernel<<<grid, block, k>>>(args);  // k = dynamic shared memory size in bytes

In DPC++:

queue->submit([&](sycl::handler& cgh) {
    // dynamic shared memory: extent k is only known at launch time
    sycl::local_accessor<int> s(sycl::range<1>(k), cgh);
    // static shared memory
    sycl::local_accessor<float> a(sycl::range<1>(32), cgh);
    sycl::local_accessor<double> b(sycl::range<1>(32), cgh);
    cgh.parallel_for(nd_range, [=](sycl::nd_item<3> item_ct1) {
        kernel(item_ct1, args, a, b, s);
    });
});

@Slaedr (Contributor) left a comment

@yhmtsai I see; I guess I need to read up a bit on this. Alright, if you think it's not worth going there for now, this is good to go from my side.

@thoasm (Member) left a comment

I mainly have nits about formatting; otherwise, it looks good to me.

dpcpp/matrix/sparsity_csr_kernels.dp.cpp (resolved)
dpcpp/test/matrix/fbcsr_kernels.cpp (resolved)
dpcpp/test/matrix/sellp_kernels.cpp (outdated, resolved)
dpcpp/test/matrix/sellp_kernels.cpp (outdated, resolved)
dpcpp/test/matrix/sellp_kernels.cpp (outdated, resolved)
cuda/test/matrix/sellp_kernels.cpp (outdated, resolved)
cuda/test/matrix/sellp_kernels.cpp (outdated, resolved)
cuda/test/matrix/sellp_kernels.cpp (outdated, resolved)
hip/test/matrix/sellp_kernels.hip.cpp (outdated, resolved)
@@ -493,6 +512,12 @@ class Csr : public EnableLinOp<Csr<ValueType, IndexType>>,
/* Use the imbalance strategy when the matrix has more than 1e8 nonzeros
 * on AMD hardware */
const index_type amd_nnz_limit{static_cast<index_type>(1e8)};
/* Use the imbalance strategy when the maximum number of nonzeros per row
 * is more than 25600 on Intel hardware. */
const index_type intel_row_len_limit = 25600;
Member:

small nit: these should be static (since all instances should have the same value) and potentially even constexpr (if you don't actually need the address of these variables).
Since this is in the public interface, I assume making it constexpr might not be ideal, but IMO making it static (to have only one const variable, and not one per object instance) should be possible.

Member Author:

I found that it is in the public interface. Is it okay to change it?

Member:

Actually, if we made this static const, we would have to instantiate each variable once (so only one instance is created), which is what I forgot when commenting on this issue...
I guess, for now, it is fine to keep it as it is. However, if we don't use intel_row_len_limit outside, it might make sense to make it private and static constexpr.
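
A small sketch of the trade-off under C++14 (hypothetical class, not the actual Csr declaration):

struct limits_demo {
    // current state: one copy of the constant per object instance
    const int intel_row_len_limit = 25600;

    // alternative: one copy per class; but if it is odr-used (e.g. its
    // address is taken), C++14 also requires an out-of-class definition
    static constexpr int intel_row_len_limit_alt = 25600;
};

// required in exactly one translation unit under C++14 if odr-used:
constexpr int limits_demo::intel_row_len_limit_alt;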

@tcojean (Member) left a comment

LGTM.

Comment on lines 588 to 600
if (!cuda_strategy_) {
nnz_limit = intel_nnz_limit;
row_len_limit = intel_row_len_limit;
}
#if GINKGO_HIP_PLATFORM_HCC
if (!cuda_strategy_) {
nnz_limit = amd_nnz_limit;
Member:

This depends on not compiling HIP together with Intel. Maybe add a flag intel_strategy to the constructor so that the behavior can be controlled here if needed?

Member Author:

Maybe use an enum? Is the cuda_strategy flag already in the previous release?
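
One possible shape for the enum suggestion (all names here are hypothetical, sketching the idea only):

#include <cstdint>

// instead of the single bool cuda_strategy_ flag:
enum class strategy_platform { cuda, amd, intel };

// limit selection; the three limits correspond to the class members above
std::int64_t pick_nnz_limit(strategy_platform p, std::int64_t cuda_limit,
                            std::int64_t amd_limit, std::int64_t intel_limit)
{
    switch (p) {
    case strategy_platform::amd:
        return amd_limit;
    case strategy_platform::intel:
        return intel_limit;
    default:
        return cuda_limit;
    }
}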

@@ -61,11 +70,311 @@ namespace dpcpp {
namespace coo {


constexpr int default_block_size = 256;
constexpr int warps_in_block = 4;
Member:

Is this correct? Don't you need 7 or 8 warps/subgroups (the number of EUs per subslice), times 2 (two ALUs per EU), each of size 16 or whichever?
See https://software.intel.com/content/www/us/en/develop/download/architecture-overview-for-intel-processor-graphics-gen11.html

Member Author:

I think 7 or 8 would be used for oversubscription?

dpcpp/matrix/coo_kernels.dp.cpp (outdated, resolved)
@@ -57,6 +66,145 @@ namespace dpcpp {
namespace hybrid {


constexpr int default_block_size = 256;
constexpr int warps_in_block = 4;
Member:

Same question, is 4 a good number?

dpcpp/matrix/hybrid_kernels.dp.cpp (outdated, resolved)
dpcpp/matrix/hybrid_kernels.dp.cpp (outdated, resolved)
@@ -71,11 +79,1524 @@ namespace dpcpp {
namespace csr {


constexpr int default_block_size = 256;
constexpr int warps_in_block = 4;
Member:

Same comment.

dpcpp/matrix/csr_kernels.dp.cpp (outdated, resolved)
@yhmtsai yhmtsai added 1:ST:ready-to-merge This PR is ready to merge. and removed 1:ST:ready-for-review This PR is ready for review labels Aug 4, 2021
@tcojean (Member) commented Aug 5, 2021

rebase!

yhmtsai and others added 12 commits August 6, 2021 02:37
also relax an error bound for the ell small mtx test
- move some kernels to common
- use GKO_ENABLE_DEFAULT_HOST when possible
- use explicit types in initial declarations
- delete unused functions

Co-authored-by: Aditya Kashi <[email protected]>
Co-authored-by: Terry Cojean <[email protected]>
Co-authored-by: Thomas Grützmacher <[email protected]>
Co-authored-by: Tobias Ribizel <[email protected]>
- use UA reference
- delete no_exit on exec due to an issue on devcloud;
  it still exits if it encounters an issue, so we do not miss it
- collect some of csr into common
- refine format

Co-authored-by: Aditya Kashi <[email protected]>
Co-authored-by: Terry Cojean <[email protected]>
Co-authored-by: Thomas Grützmacher <[email protected]>
Co-authored-by: Tobias Ribizel <[email protected]>
@yhmtsai force-pushed the ginkgo_complete_matrix branch 2 times, most recently from c4879fb to e609519, August 6, 2021 09:56
@yhmtsai yhmtsai merged commit d9789d7 into develop Aug 6, 2021
@yhmtsai yhmtsai deleted the ginkgo_complete_matrix branch August 6, 2021 15:39
sonarcloud bot commented Aug 7, 2021

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 0 (rating A)

No Coverage information
No Duplication information

tcojean added a commit that referenced this pull request Aug 20, 2021
Ginkgo release 1.4.0

The Ginkgo team is proud to announce the new Ginkgo minor release 1.4.0. This
release brings most of the Ginkgo functionality to the Intel DPC++ ecosystem
which enables Intel-GPU and CPU execution. The only Ginkgo features which have
not been ported yet are some preconditioners.

Ginkgo's mixed-precision support is greatly enhanced thanks to:
1. The new Accessor concept, which allows writing kernels featuring on-the-fly
memory compression, among other features. The accessor can be used as
header-only, see the [accessor BLAS benchmarks repository](https://github.com/ginkgo-project/accessor-BLAS/tree/develop) as a usage example.
2. All LinOps now transparently support mixed-precision execution. By default,
this is done through a temporary copy which may have a performance impact but
already allows mixed-precision research.

Native mixed-precision ELL kernels are implemented which do not see this cost.
The accessor is also leveraged in a new CB-GMRES solver which allows for
performance improvements by compressing the Krylov basis vectors. Many other
features have been added to Ginkgo, such as reordering support, a new IDR
solver, Incomplete Cholesky preconditioner, matrix assembly support (only CPU
for now), machine topology information, and more!

Supported systems and requirements:
+ For all platforms, cmake 3.13+
+ C++14 compliant compiler
+ Linux and MacOS
  + gcc: 5.3+, 6.3+, 7.3+, all versions after 8.1+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + CUDA module: CUDA 9.0+
  + HIP module: ROCm 3.5+
  + DPC++ module: Intel OneAPI 2021.3. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: gcc 5.3+, 6.3+, 7.3+, all versions after 8.1+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.0+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


Algorithm and important feature additions:
+ Add a new DPC++ Executor for SYCL execution and other base utilities
  [#648](#648), [#661](#661), [#757](#757), [#832](#832)
+ Port matrix formats, solvers and related kernels to DPC++. For some kernels,
  also make use of a shared kernel implementation for all executors (except
  Reference). [#710](#710), [#799](#799), [#779](#779), [#733](#733), [#844](#844), [#843](#843), [#789](#789), [#845](#845), [#849](#849), [#855](#855), [#856](#856)
+ Add accessors which allow multi-precision kernels, among other things.
  [#643](#643), [#708](#708)
+ Add support for mixed precision operations through apply in all LinOps. [#677](#677)
+ Add incomplete Cholesky factorizations and preconditioners as well as some
  improvements to ILU. [#672](#672), [#837](#837), [#846](#846)
+ Add an AMGX implementation and kernels on all devices but DPC++.
  [#528](#528), [#695](#695), [#860](#860)
+ Add a new mixed-precision capability solver, Compressed Basis GMRES
  (CB-GMRES). [#693](#693), [#763](#763)
+ Add the IDR(s) solver. [#620](#620)
+ Add a new fixed-size block CSR matrix format (for the Reference executor).
  [#671](#671), [#730](#730)
+ Add native mixed-precision support to the ELL format. [#717](#717), [#780](#780)
+ Add Reverse Cuthill-McKee reordering [#500](#500), [#649](#649)
+ Add matrix assembly support on CPUs. [#644](#644)
+ Extends ISAI from triangular to general and spd matrices. [#690](#690)

Other additions:
+ Add the possibility to apply real matrices to complex vectors.
  [#655](#655), [#658](#658)
+ Add functions to compute the absolute of a matrix format. [#636](#636)
+ Add symmetric permutation and improve existing permutations.
  [#684](#684), [#657](#657), [#663](#663)
+ Add a MachineTopology class with HWLOC support [#554](#554), [#697](#697)
+ Add an implicit residual norm criterion. [#702](#702), [#818](#818), [#850](#850)
+ Row-major accessor is generalized to more than 2 dimensions and a new
  "block column-major" accessor has been added. [#707](#707)
+ Add a heat equation example. [#698](#698), [#706](#706)
+ Add ccache support in CMake and CI. [#725](#725), [#739](#739)
+ Allow tuning and benchmarking variables non-intrusively. [#692](#692)
+ Add triangular solver benchmark [#664](#664)
+ Add benchmarks for BLAS operations [#772](#772), [#829](#829)
+ Add support for different precisions and consistent index types in benchmarks.
  [#675](#675), [#828](#828)
+ Add a Github bot system to facilitate development and PR management.
  [#667](#667), [#674](#674), [#689](#689), [#853](#853)
+ Add Intel (DPC++) CI support and enable CI on HPC systems. [#736](#736), [#751](#751), [#781](#781)
+ Add ssh debugging for Github Actions CI. [#749](#749)
+ Add pipeline segmentation for better CI speed. [#737](#737)


Changes:
+ Add a Scalar Jacobi specialization and kernels. [#808](#808), [#834](#834), [#854](#854)
+ Add implicit residual log for solvers and benchmarks. [#714](#714)
+ Change handling of the conjugate in the dense dot product. [#755](#755)
+ Improved Dense stride handling. [#774](#774)
+ Multiple improvements to the OpenMP kernels performance, including COO,
an exclusive prefix sum, and more. [#703](#703), [#765](#765), [#740](#740)
+ Allow specialization of submatrix and other dense creation functions in solvers. [#718](#718)
+ Improved Identity constructor and treatment of rectangular matrices. [#646](#646)
+ Allow CUDA/HIP executors to select allocation mode. [#758](#758)
+ Check if executors share the same memory. [#670](#670)
+ Improve test install and smoke testing support. [#721](#721)
+ Update the JOSS paper citation and add publications in the documentation.
  [#629](#629), [#724](#724)
+ Improve the version output. [#806](#806)
+ Add some utilities for dim and span. [#821](#821)
+ Improved solver and preconditioner benchmarks. [#660](#660)
+ Improve benchmark timing and output. [#669](#669), [#791](#791), [#801](#801), [#812](#812)


Fixes:
+ Sorting fix for the Jacobi preconditioner. [#659](#659)
+ Also log the first residual norm in CGS [#735](#735)
+ Fix BiCG and HIP CSR to work with complex matrices. [#651](#651)
+ Fix Coo SpMV on strided vectors. [#807](#807)
+ Fix segfault of extract_diagonal, add short-and-fat test. [#769](#769)
+ Fix device_reset issue by moving counter/mutex to device. [#810](#810)
+ Fix `EnableLogging` superclass. [#841](#841)
+ Support ROCm 4.1.x and breaking HIP_PLATFORM changes. [#726](#726)
+ Decreased test size for a few device tests. [#742](#742)
+ Fix multiple issues with our CMake HIP and RPATH setup.
  [#712](#712), [#745](#745), [#709](#709)
+ Cleanup our CMake installation step. [#713](#713)
+ Various simplification and fixes to the Windows CMake setup. [#720](#720), [#785](#785)
+ Simplify third-party integration. [#786](#786)
+ Improve Ginkgo device arch flags management. [#696](#696)
+ Other fixes and improvements to the CMake setup.
  [#685](#685), [#792](#792), [#705](#705), [#836](#836)
+ Clarification of dense norm documentation [#784](#784)
+ Various development tools fixes and improvements [#738](#738), [#830](#830), [#840](#840)
+ Make multiple operators/constructors explicit. [#650](#650), [#761](#761)
+ Fix some issues, memory leaks and warnings found by MSVC.
  [#666](#666), [#731](#731)
+ Improved solver memory estimates and consistent iteration counts [#691](#691)
+ Various logger improvements and fixes [#728](#728), [#743](#743), [#754](#754)
+ Fix for ForwardIterator requirements in iterator_factory. [#665](#665)
+ Various benchmark fixes. [#647](#647), [#673](#673), [#722](#722)
+ Various CI fixes and improvements. [#642](#642), [#641](#641), [#795](#795), [#783](#783), [#793](#793), [#852](#852)


Related PR: #857
tcojean added a commit that referenced this pull request Aug 23, 2021
Release 1.4.0 to master


Related PR: #866