Dpcpp port ParILU(T)/IC(T) #928

Merged: 11 commits, Jun 22, 2022
Conversation

@yhmtsai (Member) commented Nov 25, 2021

This PR ports the ParILU(T)/IC(T) factorizations to DPC++.

TODO:

@yhmtsai yhmtsai self-assigned this Nov 25, 2021
@ginkgo-bot ginkgo-bot added labels: mod:core, mod:cuda, mod:dpcpp, mod:hip, reg:build, reg:helper-scripts, reg:testing, type:factorization, type:matrix-format, type:preconditioner, type:solver (Nov 25, 2021)
@yhmtsai yhmtsai mentioned this pull request Nov 25, 2021
4 tasks
Comment on lines 125 to 137
/*
DPCT1007:2: Migration of this CUDA API is not supported by the
Intel(R) DPC++ Compatibility Tool.
*/
sum += l_vals[l_row_begin] * conj(l_vals[lh_col_begin]);
}
l_row_begin += l_col <= lh_row;
lh_col_begin += l_col >= lh_row;
}
/*
DPCT1064:3: Migrated sqrt call is used in a macro definition and is not
valid for all macro uses. Adjust the code.
*/
Member:
Note: there seem to have been a few issues with DPCT.

@upsj upsj mentioned this pull request Feb 1, 2022
65 tasks
@tcojean tcojean added this to the Ginkgo 1.5.0 milestone Feb 10, 2022
@yhmtsai yhmtsai force-pushed the dpcpp_port_par_ilu_ic branch 2 times, most recently from 6d4de72 to 14052de Compare March 23, 2022 23:39
codecov bot commented Apr 8, 2022

Codecov Report

Merging #928 (821fb7b) into develop (ff576ee) will increase coverage by 0.00%.
The diff coverage is 0.00%.

@@           Coverage Diff            @@
##           develop     #928   +/-   ##
========================================
  Coverage    91.77%   91.78%           
========================================
  Files          499      499           
  Lines        42971    42971           
========================================
+ Hits         39435    39439    +4     
+ Misses        3536     3532    -4     
Impacted Files Coverage Δ
include/ginkgo/core/matrix/csr.hpp 43.36% <0.00%> (ø)
omp/factorization/par_ic_kernels.cpp 100.00% <ø> (ø)
omp/factorization/par_ilu_kernels.cpp 100.00% <ø> (ø)
devices/machine_topology.cpp 82.71% <0.00%> (+4.93%) ⬆️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@yhmtsai yhmtsai added the 1:ST:ready-for-review This PR is ready for review label Apr 20, 2022
@tcojean tcojean self-requested a review April 21, 2022 08:30
@ginkgo-bot ginkgo-bot requested a review from a team April 25, 2022 09:15
@upsj upsj self-requested a review May 2, 2022 08:35
@upsj (Member) left a comment:

LGTM! A blast from the past for me 😄

Comment on lines 88 to 99
last_operation = zero<ValueType>();
if (l_col == u_col) {
last_operation = l_values[l_idx] * u_values[u_idx];
sum -= last_operation;
}
l_idx += (l_col <= u_col);
u_idx += (u_col <= l_col);
}
sum += last_operation; // undo the last operation
if (row > col) {
Member:
This can be simplified a bit since we know that last_operation is the one that picks up a diagonal entry from L or U.

@@ -52,7 +52,7 @@ struct config {
/**
* The number of threads within a Dpcpp subgroup.
*/
static constexpr uint32 warp_size = 16;
static constexpr uint32 warp_size = 32;
Member:
I guess this will become obsolete soon anyway?

Member Author:
I think it will stay for some time.

Comment on lines 74 to 109
template <bool IsSorted>
struct find_helper {
template <typename Group, typename IndexType>
static __dpct_inline__ bool find(Group subwarp_grp, const IndexType* first,
const IndexType* last, IndexType value)
{
auto subwarp_idx = subwarp_grp.thread_rank();
bool found{false};
for (auto curr_start = first; curr_start < last;
curr_start += subwarp_grp.size()) {
const auto curr = curr_start + subwarp_idx;
found = (curr < last && *curr == value);
found = subwarp_grp.any(found);
if (found) {
break;
}
}
return found;
}
};


// Improved version in case the CSR matrix is sorted
template <>
struct find_helper<true> {
template <typename Group, typename IndexType>
static __dpct_inline__ bool find(Group subwarp_grp, const IndexType* first,
const IndexType* last, IndexType value)
{
const auto length = static_cast<IndexType>(last - first);
const auto pos =
group_wide_search(IndexType{}, length, subwarp_grp,
[&](IndexType i) { return first[i] >= value; });
return pos < length && first[pos] == value;
}
};
Member:
With IsSorted being true everywhere AFAIK, I think we could remove a bit of complexity here.

Comment on lines 144 to 149
/*
DPCT1084:0: The function call has multiple migration results in
different template instantiations that could not be unified. You may
need to adjust the code.
*/
Member:
TODO

Comment on lines +326 to +328
l_row_nnz += (col < row);
u_row_nnz += (row < col);
Member:
nit:

Suggested change
- l_row_nnz += (col < row);
- u_row_nnz += (row < col);
+ l_row_nnz += col < row ? 1 : 0;
+ u_row_nnz += row < col ? 1 : 0;

for (auto idx = row_ptrs[row]; idx < row_ptrs[row + 1]; ++idx) {
auto col = col_idxs[idx];
// skip the diagonal entry
l_row_nnz += col < row;
Member:
Suggested change
- l_row_nnz += col < row;
+ l_row_nnz += col < row ? 1 : 0;



template <int subwarp_size, typename ValueType, typename IndexType>
void tri_spgeam_init(const IndexType* __restrict__ lu_row_ptrs,
Member:
Honestly, it's very much possible that you could get similar or better performance from a thread-per-row parallelization of this algorithm. It might be interesting to investigate that at some point. This implementation was very fun to write, but it may also be overkill.

Comment on lines 126 to 127
Member:
Suggested change: remove the extra blank lines.

dpcpp/factorization/par_ilut_select_kernels.hpp.inc (outdated, resolved)
#pragma unroll
for (int i = 0; i < sampleselect_oversampling; ++i) {
auto lidx = idx * sampleselect_oversampling + i;
auto val = input[static_cast<IndexType>(lidx * size / sample_size)];
Member:
This may overflow for large inputs. A better alternative would be to switch between the size < sample_size and size >= sample_size cases with different techniques (stride = size / sample_size vs. lidx * size / sample_size).

Member Author:
I restored the stride computation, so it should no longer be an issue, right?

@tcojean (Member) left a comment:

LGTM in general. I'm wondering whether the optimizations make sense for DPC++ without subwarps, etc., but this is fine as a first version.

Comment on lines +118 to +119
* This is a recursive implementation of a bitonic sorting network,
* executed in parallel within a warp using lane shuffle instructions.
Member:
Minor: maybe some update to the documentation here.

Comment on lines +130 to +131
config::warp_size % num_threads == 0 &&
num_threads <= config::warp_size,
Member:
Would we also need a template parameter for the warp_size (subgroup_size) at some point?

sycl::nd_item<3> item_ct1)
{
constexpr auto num_threads = num_elements / num_local;
constexpr auto num_warps = num_threads / config::warp_size;
Member:
Same question: we might need subgroup configurability.

Comment on lines -48 to +50
* @brief The parallel ILU factorization namespace.
* @brief The parallel ilu factorization namespace.
Member:
Minor & unrelated: since it's all acronyms, I don't know why we didn't keep the capitalization.

namespace kernel {


template <int subwarp_size, typename IndexType>
Member:
Minor, but maybe we should update this to subgroup or subsubgroup, especially since there's no subwarp here?



// subwarp sizes for all warp-parallel kernels (filter, add_candidates)
using compiled_kernels = syn::value_list<int, 1, 8, 16, 32>;
Member:
Does 1 actually work? In many cases it's not a valid size?

Member Author:
1 means single-thread parallelization. Although the sub_group size may be 8, 16, or 32, we use only a single thread, with no communication between threads via the cooperative group interface.



// subwarp sizes for all warp-parallel kernels (filter, add_candidates)
using compiled_kernels = syn::value_list<int, 1, 8, 16, 32>;
Member:
Same question about the 1. I think only the host queue allows this?

@yhmtsai yhmtsai added 1:ST:ready-to-merge This PR is ready to merge. and removed 1:ST:ready-for-review This PR is ready for review labels Jun 20, 2022
sonarcloud bot commented Jun 22, 2022

Kudos, SonarCloud Quality Gate passed!

Bugs: A, 0 Bugs
Vulnerabilities: A, 0 Vulnerabilities
Security Hotspots: A, 0 Security Hotspots
Code Smells: A, 7 Code Smells

Coverage: 0.0%
Duplication: 7.4%

@yhmtsai (Member Author) commented Jun 22, 2022

I put those suggestions related to performance in the TODO list.

@yhmtsai yhmtsai merged commit 6873586 into develop Jun 22, 2022
@yhmtsai yhmtsai deleted the dpcpp_port_par_ilu_ic branch June 22, 2022 07:31
tcojean added a commit that referenced this pull request Nov 12, 2022
Advertise release 1.5.0 and last changes

+ Add changelog,
+ Update third party libraries
+ A small fix to a CMake file

See PR: #1195

The Ginkgo team is proud to announce the new Ginkgo minor release 1.5.0. This release brings many important new features such as:
- MPI-based multi-node support for all matrix formats and most solvers,
- full DPC++/SYCL support,
- functionality and interface for GPU-resident sparse direct solvers,
- an interface for wrapping solvers with scaling and reordering applied,
- a new algebraic Multigrid solver/preconditioner,
- improved mixed-precision support,
- support for device matrix assembly,

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.0+
  + DPC++ module: Intel OneAPI 2021.3 with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: GCC 5.5+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


Algorithm and important feature additions:
+ Add MPI-based multi-node for all matrix formats and solvers (except GMRES and IDR). ([#676](#676), [#908](#908), [#909](#909), [#932](#932), [#951](#951), [#961](#961), [#971](#971), [#976](#976), [#985](#985), [#1007](#1007), [#1030](#1030), [#1054](#1054), [#1100](#1100), [#1148](#1148))
+ Porting the remaining algorithms (preconditioners like ISAI, Jacobi, Multigrid, ParILU(T) and ParIC(T)) to DPC++/SYCL, update to SYCL 2020, and improve support and performance ([#896](#896), [#924](#924), [#928](#928), [#929](#929), [#933](#933), [#943](#943), [#960](#960), [#1057](#1057), [#1110](#1110),  [#1142](#1142))
+ Add a Sparse Direct interface supporting GPU-resident numerical LU factorization, symbolic Cholesky factorization, improved triangular solvers, and more ([#957](#957), [#1058](#1058), [#1072](#1072), [#1082](#1082))
+ Add a ScaleReordered interface that can wrap solvers and automatically apply reorderings and scalings ([#1059](#1059))
+ Add a Multigrid solver and improve the aggregation based PGM coarsening scheme ([#542](#542), [#913](#913), [#980](#980), [#982](#982),  [#986](#986))
+ Add infrastructure for unified, lambda-based, backend agnostic, kernels and utilize it for some simple kernels ([#833](#833), [#910](#910), [#926](#926))
+ Merge different CUDA, HIP, DPC++ and OpenMP tests under a common interface ([#904](#904), [#973](#973), [#1044](#1044), [#1117](#1117))
+ Add a device_matrix_data type for device-side matrix assembly ([#886](#886), [#963](#963), [#965](#965))
+ Add support for mixed real/complex BLAS operations ([#864](#864))
+ Add a FFT LinOp for all but DPC++/SYCL ([#701](#701))
+ Add FBCSR support for NVIDIA and AMD GPUs and CPUs with OpenMP ([#775](#775))
+ Add CSR scaling ([#848](#848))
+ Add array::const_view and equivalent to create constant matrices from non-const data ([#890](#890))
+ Add a RowGatherer LinOp supporting mixed precision to gather dense matrix rows ([#901](#901))
+ Add mixed precision SparsityCsr SpMV support ([#970](#970))
+ Allow creating CSR submatrix including from (possibly discontinuous) index sets ([#885](#885), [#964](#964))
+ Add a scaled identity addition (M <- aI + bM) feature interface and impls for Csr and Dense ([#942](#942))


Deprecations and important changes:
+ Deprecate AmgxPgm in favor of the new Pgm name. ([#1149](#1149)).
+ Deprecate specialized residual norm classes in favor of a common `ResidualNorm` class ([#1101](#1101))
+ Deprecate CamelCase non-polymorphic types in favor of snake_case versions (like array, machine_topology, uninitialized_array, index_set) ([#1031](#1031), [#1052](#1052))
+ Bug fix: restrict gko::share to rvalue references (*possible interface break*) ([#1020](#1020))
+ Bug fix: when using cuSPARSE's triangular solvers, specifying the factory parameter `num_rhs` is now required when solving for more than one right-hand side, otherwise an exception is thrown ([#1184](#1184)).
+ Drop official support for old CUDA < 9.2 ([#887](#887))


Improved performance additions:
+ Reuse tmp storage in reductions in solvers and add a mutable workspace to all solvers ([#1013](#1013), [#1028](#1028))
+ Add HIP unsafe atomic option for AMD ([#1091](#1091))
+ Prefer vendor implementations for Dense dot, conj_dot and norm2 when available ([#967](#967)).
+ Tuned OpenMP SellP, COO, and ELL SpMV kernels for a small number of RHS ([#809](#809))


Fixes:
+ Fix various compilation warnings ([#1076](#1076), [#1183](#1183), [#1189](#1189))
+ Fix issues with hwloc-related tests ([#1074](#1074))
+ Fix include headers for GCC 12 ([#1071](#1071))
+ Fix for simple-solver-logging example ([#1066](#1066))
+ Fix for potential memory leak in Logger ([#1056](#1056))
+ Fix logging of mixin classes ([#1037](#1037))
+ Improve value semantics for LinOp types, like moved-from state in cross-executor copy/clones ([#753](#753))
+ Fix some matrix SpMV and conversion corner cases ([#905](#905), [#978](#978))
+ Fix uninitialized data ([#958](#958))
+ Fix CUDA version requirement for cusparseSpSM ([#953](#953))
+ Fix several issues within bash-script ([#1016](#1016))
+ Fixes for `NVHPC` compiler support ([#1194](#1194))


Other additions:
+ Simplify and properly name GMRES kernels ([#861](#861))
+ Improve pkg-config support for non-CMake libraries ([#923](#923), [#1109](#1109))
+ Improve gdb pretty printer ([#987](#987), [#1114](#1114))
+ Add a logger highlighting inefficient allocation and copy patterns ([#1035](#1035))
+ Improved and optimized test random matrix generation ([#954](#954), [#1032](#1032))
+ Better CSR strategy defaults ([#969](#969))
+ Add `move_from` to `PolymorphicObject` ([#997](#997))
+ Remove unnecessary device_guard usage ([#956](#956))
+ Improvements to the generic accessor for mixed-precision ([#727](#727))
+ Add a naive lower triangular solver implementation for CUDA ([#764](#764))
+ Add support for int64 indices from CUDA 11 onward with SpMV and SpGEMM ([#897](#897))
+ Add a L1 norm implementation ([#900](#900))
+ Add reduce_add for arrays ([#831](#831))
+ Add utility to simplify Dense View creation from an existing Dense vector ([#1136](#1136)).
+ Add a custom transpose implementation for Fbcsr and Csr transpose for unsupported vendor types ([#1123](#1123))
+ Make IDR random initialization deterministic ([#1116](#1116))
+ Move the algorithm choice for triangular solvers from Csr::strategy_type to a factory parameter ([#1088](#1088))
+ Update CUDA archCoresPerSM ([#1175](#1175))
+ Add kernels for Csr sparsity pattern lookup ([#994](#994))
+ Differentiate between structural and numerical zeros in Ell/Sellp ([#1027](#1027))
+ Add a binary IO format for matrix data ([#984](#984))
+ Add a tuple zip_iterator implementation ([#966](#966))
+ Simplify kernel stubs and declarations ([#888](#888))
+ Simplify GKO_REGISTER_OPERATION with lambdas ([#859](#859))
+ Simplify copy to device in tests and examples ([#863](#863))
+ More verbose output to array assertions ([#858](#858))
+ Allow parallel compilation for Jacobi kernels ([#871](#871))
+ Change clang-format pointer alignment to left ([#872](#872))
+ Various improvements and fixes to the benchmarking framework ([#750](#750), [#759](#759), [#870](#870), [#911](#911), [#1033](#1033), [#1137](#1137))
+ Various documentation improvements ([#892](#892), [#921](#921), [#950](#950), [#977](#977), [#1021](#1021), [#1068](#1068), [#1069](#1069), [#1080](#1080), [#1081](#1081), [#1108](#1108), [#1153](#1153), [#1154](#1154))
+ Various CI improvements ([#868](#868), [#874](#874), [#884](#884), [#889](#889), [#899](#899), [#903](#903),  [#922](#922), [#925](#925), [#930](#930), [#936](#936), [#937](#937), [#958](#958), [#882](#882), [#1011](#1011), [#1015](#1015), [#989](#989), [#1039](#1039), [#1042](#1042), [#1067](#1067), [#1073](#1073), [#1075](#1075), [#1083](#1083), [#1084](#1084), [#1085](#1085), [#1139](#1139), [#1178](#1178), [#1187](#1187))
4 participants