
dpcpp ports amgx_pgm #933

Merged: 6 commits from dpcpp_port_amgx_pgm into develop on Feb 11, 2022

Conversation

@yhmtsai (Member) commented Nov 29, 2021

This PR ports the amgx_pgm to dpcpp.
The current thrust::tie cannot be translated to dpcpp by dpct.

@upsj Do you know of something like thrust::tie inside oneDPL?

@yhmtsai yhmtsai added the 1:ST:ready-for-review This PR is ready for review label Nov 29, 2021
@yhmtsai yhmtsai requested a review from a team November 29, 2021 16:38
@yhmtsai yhmtsai self-assigned this Nov 29, 2021
@ginkgo-bot ginkgo-bot added mod:dpcpp This is related to the DPC++ module. reg:build This is related to the build system. reg:testing This is related to testing. type:multigrid This is related to multigrid labels Nov 29, 2021
@upsj (Member) commented Nov 29, 2021

std::tie from <tuple>
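
For illustration, a minimal sketch of that replacement, where the helper find_strongest and its return values are hypothetical and only show the unpacking pattern that thrust::tie provides on the device side:

#include <tuple>

// Hypothetical helper returning two related values as a tuple.
std::tuple<int, float> find_strongest() { return {3, 0.5f}; }

int main()
{
    int neighbor;
    float weight;
    // std::tie from <tuple> unpacks the tuple, playing the role of thrust::tie.
    std::tie(neighbor, weight) = find_strongest();
}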

@Slaedr (Contributor) left a comment

LGTM! Looks completely consistent with the cuda implementation. Just one comment below.

Comment on lines 362 to 364
kernel::match_edge_kernel(grid, default_block_size, 0, exec->get_queue(),
num, strongest_neighbor.get_const_data(),
agg.get_data());
Contributor

Just putting the kernel call directly here makes the code easier to navigate and reduces code size. Is it simply that DPCT creates the extra function by default?

Suggested change
kernel::match_edge_kernel(grid, default_block_size, 0, exec->get_queue(),
num, strongest_neighbor.get_const_data(),
agg.get_data());
exec->get_queue()->parallel_for(
sycl_nd_range(grid, block), [&](sycl::nd_item<3> item_ct1) {
kernel::match_edge_kernel(num, strongest_neighbor.get_const_data(),
agg_vals.get_data(), item_ct1);
});

@upsj (Member) left a comment

LGTM as well! I agree with @Slaedr, and would even go one step further: All of these kernels use one thread per row. So you could simplify them heavily by putting them directly into a 1D parallel_for
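
For illustration, a minimal sketch of that 1D parallel_for form, assuming SYCL 2020 queue shortcuts and USM pointers; the kernel body is a placeholder, not the actual match_edge logic:

#include <sycl/sycl.hpp>

#include <cstddef>
#include <cstdint>

void run_one_work_item_per_row(sycl::queue& q, std::int64_t* agg,
                               const std::int64_t* strongest_neighbor,
                               std::size_t num_rows)
{
    // One work-item per row: no grid/block size bookkeeping needed.
    q.parallel_for(sycl::range<1>(num_rows), [=](sycl::id<1> idx) {
         const auto row = idx[0];
         // Placeholder body: each work-item only touches its own row.
         if (agg[row] == -1) {
             agg[row] = strongest_neighbor[row];
         }
     }).wait();
}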

@yhmtsai (Member, Author) commented Jan 5, 2022

No, the extra function is our own addition, not something dpct generates by default.
It gives the same structure as the CUDA and HIP calls, including the shared memory usage.
When a kernel is called many times, inlining the parallel_for in dpcpp makes it hard to keep the code aligned with the other executors.

@upsj upsj mentioned this pull request Feb 1, 2022
65 tasks
@tcojean tcojean self-requested a review February 7, 2022 09:31
@yhmtsai yhmtsai force-pushed the dpcpp_port_amgx_pgm branch 2 times, most recently from 7b4b76c to dc768b4 on February 8, 2022 14:37
@codecov (bot) commented Feb 9, 2022

Codecov Report

Merging #933 (905f6c1) into develop (4a7e36f) will increase coverage by 0.00%.
The diff coverage is 92.85%.

❗ Current head 905f6c1 differs from pull request most recent head 1d16ba1. Consider uploading reports for the commit 1d16ba1 to get more accurate results


@@            Coverage Diff            @@
##           develop     #933    +/-   ##
=========================================
  Coverage    93.36%   93.37%            
=========================================
  Files          476      476            
  Lines        39368    39244   -124     
=========================================
- Hits         36755    36643   -112     
+ Misses        2613     2601    -12     
Impacted Files Coverage Δ
omp/base/kernel_launch.hpp 88.46% <ø> (ø)
common/unified/multigrid/amgx_pgm_kernels.cpp 84.61% <84.61%> (ø)
test/multigrid/amgx_pgm_kernels.cpp 100.00% <100.00%> (ø)
include/ginkgo/core/base/matrix_data.hpp 97.35% <0.00%> (-0.23%) ⬇️
core/base/device_matrix_data.cpp 100.00% <0.00%> (ø)
omp/base/device_matrix_data_kernels.cpp 100.00% <0.00%> (ø)
core/device_hooks/common_kernels.inc.cpp 0.00% <0.00%> (ø)
test/base/device_matrix_data_kernels.cpp 100.00% <0.00%> (ø)
reference/base/device_matrix_data_kernels.cpp 100.00% <0.00%> (ø)
include/ginkgo/core/base/device_matrix_data.hpp 62.50% <0.00%> (ø)
... and 3 more

Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update e1f17e4...1d16ba1.

@yhmtsai (Member, Author) commented Feb 9, 2022

@upsj @Slaedr I moved those kernels to unified and added device_std, which maps to thrust for CUDA/HIP and to std for the other backends.
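
For illustration, a sketch of that device_std idea; the macro names are assumptions and this is not the exact Ginkgo header:

#if defined(GKO_COMPILING_CUDA) || defined(GKO_COMPILING_HIP)
#include <thrust/tuple.h>
// On CUDA/HIP, device_std::tie and device_std::tuple come from thrust.
namespace device_std = thrust;
#else
#include <tuple>
// On the other backends (reference, OpenMP, DPC++), they come from std.
namespace device_std = std;
#endif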

@Slaedr (Contributor) left a comment

Excellent work!

Did you try benchmarking your previous implementations against these ones? Since we seem to be discussing multigrid performance quite often, it would be good to document the performance difference between the separate kernels and the unified ones. I guess it's unlikely there's a big difference, but it would still be good to have it on record.

It might make sense to develop more fine-grained parallel kernels for the aggregation kernels, primarily to get good coalesced access, though I understand that is far from trivial. That can be left to another PR.

if (agg[row] != -1) {
return;
}
for (auto idx = row_ptrs[row]; idx < row_ptrs[row + 1]; idx++) {
Contributor

Right now, this algorithm (a) does not give coalesced access to weight matrix values and (b) processes the nonzeros in each row sequentially. Is that fine? Would you look into a more fine-grain parallel algorithm at some point? I guess this kernel is an important one, though I see that it's difficult to parallelize this further; it looks like you need a creative reduction. Similar for the next kernel.

Member

I tend to agree here - there is a clear path to using warp-parallelism here in the future. It is a cheap way to implement it in DPC++. There may be a few abstract primitives we could use for the GPU kernels: Csr row-wise reduction + counting, Csr row-wise filtering, ... Might make sense to put that into a file in components? But for now I wouldn't block the PR for this.
Probably should take a look at specializing it before the release?

@yhmtsai (Member, Author)

Yes, it should be parallelized with a warp per row.
We have not tested the performance of the generation step yet.
At the least, the RAP computation also needs improvement.
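
For illustration, a hedged sketch of that cooperative pattern, written here with one small work-group per CSR row and a SYCL group reduction (a subgroup version would look similar). This is not the Ginkgo kernel; the names and types are illustrative, USM pointers are assumed, and ties are resolved arbitrarily:

#include <sycl/sycl.hpp>

#include <cstddef>
#include <limits>

void find_strongest_neighbor_per_row(sycl::queue& q, std::size_t num_rows,
                                     const int* row_ptrs, const int* col_idxs,
                                     const float* weights, int* strongest)
{
    constexpr int group_size = 32;  // plays the role of a warp
    q.parallel_for(
         sycl::nd_range<1>(sycl::range<1>(num_rows * group_size),
                           sycl::range<1>(group_size)),
         [=](sycl::nd_item<1> item) {
             const auto row = item.get_group(0);
             const auto lane = static_cast<int>(item.get_local_id(0));
             float best_w = -std::numeric_limits<float>::infinity();
             int best_col = -1;
             // The lanes cover the row's nonzeros in a strided loop, which also
             // gives coalesced access to col_idxs and weights.
             for (auto idx = row_ptrs[row] + lane; idx < row_ptrs[row + 1];
                  idx += group_size) {
                 if (weights[idx] > best_w) {
                     best_w = weights[idx];
                     best_col = col_idxs[idx];
                 }
             }
             // Cooperative reduction: the maximum weight across the group.
             const auto max_w = sycl::reduce_over_group(
                 item.get_group(), best_w, sycl::maximum<float>());
             // A lane holding the maximum writes the matching column.
             if (best_w == max_w && best_col != -1) {
                 strongest[row] = best_col;
             }
         })
        .wait();
}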

Contributor

My only concern is we might have to go back to backend-specific kernels if we need more advanced algorithms. But for now, I think we should move ahead with this.

namespace {


class AmgxPgm : public ::testing::Test {
Contributor

All these tests can now be moved to test/multigrid, I guess? Then you could remove hip/test/multigrid/amgx_pgm_kernels.cpp etc.

@yhmtsai (Member, Author) commented Feb 9, 2022

unified ~107.86ms

4.81%  3.1193ms         4  779.82us  129.02us  1.8408ms find_strongest_neighbor
0.05%  33.488us         4  8.3720us  6.4610us  10.141us match_edge

original ~107.108ms (62.4452ms with skip_sorting)

4.82%  3.1674ms         4  791.86us  146.08us  1.8393ms find_strongest_neighbor
0.06%  36.687us         4  9.1710us  6.8120us  10.939us match_edge

I only checked one level of generation for the MFEM L-shape problem on a V100; there is no noticeable difference between the two versions.
The main bottleneck is spgemm and sorting: sorting takes ~40-45% of the generation time and spgemm takes ~10%.
Without sorting, spgemm (the sum of RadixSort and spgemm_core) takes ~20% and find_strongest_neighbor takes ~10%.

@yhmtsai yhmtsai force-pushed the dpcpp_port_amgx_pgm branch 2 times, most recently from aba653c to 13492da on February 10, 2022 14:29
@tcojean tcojean added this to the Ginkgo 1.5.0 milestone Feb 10, 2022

std::default_random_engine rand_engine;
std::ranlux48 rand_engine;
Member

I guess this should be reverted?

Suggested change
std::ranlux48 rand_engine;
std::default_random_engine rand_engine;

@yhmtsai (Member, Author)

good catch!

@yhmtsai yhmtsai added 1:ST:ready-to-merge This PR is ready to merge. and removed 1:ST:ready-for-review This PR is ready for review labels Feb 10, 2022
@yhmtsai yhmtsai merged commit a9608ea into develop Feb 11, 2022
@yhmtsai yhmtsai deleted the dpcpp_port_amgx_pgm branch February 11, 2022 12:04
@sonarcloud (bot) commented Feb 11, 2022

SonarCloud Quality Gate failed.

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 65 (rating B)

Coverage: 91.8%
Duplication: 7.2%

tcojean added a commit that referenced this pull request Nov 12, 2022
Advertise release 1.5.0 and last changes

+ Add changelog,
+ Update third party libraries
+ A small fix to a CMake file

See PR: #1195

The Ginkgo team is proud to announce the new Ginkgo minor release 1.5.0. This release brings many important new features such as:
- MPI-based multi-node support for all matrix formats and most solvers,
- full DPC++/SYCL support,
- functionality and interface for GPU-resident sparse direct solvers,
- an interface for wrapping solvers with scaling and reordering applied,
- a new algebraic Multigrid solver/preconditioner,
- improved mixed-precision support,
- support for device matrix assembly,

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.0+
  + DPC++ module: Intel OneAPI 2021.3 with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: GCC 5.5+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


Algorithm and important feature additions:
+ Add MPI-based multi-node for all matrix formats and solvers (except GMRES and IDR). ([#676](#676), [#908](#908), [#909](#909), [#932](#932), [#951](#951), [#961](#961), [#971](#971), [#976](#976), [#985](#985), [#1007](#1007), [#1030](#1030), [#1054](#1054), [#1100](#1100), [#1148](#1148))
+ Port the remaining algorithms (preconditioners like ISAI, Jacobi, Multigrid, ParILU(T) and ParIC(T)) to DPC++/SYCL, update to SYCL 2020, and improve support and performance ([#896](#896), [#924](#924), [#928](#928), [#929](#929), [#933](#933), [#943](#943), [#960](#960), [#1057](#1057), [#1110](#1110), [#1142](#1142))
+ Add a Sparse Direct interface supporting GPU-resident numerical LU factorization, symbolic Cholesky factorization, improved triangular solvers, and more ([#957](#957), [#1058](#1058), [#1072](#1072), [#1082](#1082))
+ Add a ScaleReordered interface that can wrap solvers and automatically apply reorderings and scalings ([#1059](#1059))
+ Add a Multigrid solver and improve the aggregation-based PGM coarsening scheme ([#542](#542), [#913](#913), [#980](#980), [#982](#982), [#986](#986))
+ Add infrastructure for unified, lambda-based, backend agnostic, kernels and utilize it for some simple kernels ([#833](#833), [#910](#910), [#926](#926))
+ Merge different CUDA, HIP, DPC++ and OpenMP tests under a common interface ([#904](#904), [#973](#973), [#1044](#1044), [#1117](#1117))
+ Add a device_matrix_data type for device-side matrix assembly ([#886](#886), [#963](#963), [#965](#965))
+ Add support for mixed real/complex BLAS operations ([#864](#864))
+ Add a FFT LinOp for all but DPC++/SYCL ([#701](#701))
+ Add FBCSR support for NVIDIA and AMD GPUs and CPUs with OpenMP ([#775](#775))
+ Add CSR scaling ([#848](#848))
+ Add array::const_view and equivalent to create constant matrices from non-const data ([#890](#890))
+ Add a RowGatherer LinOp supporting mixed precision to gather dense matrix rows ([#901](#901))
+ Add mixed precision SparsityCsr SpMV support ([#970](#970))
+ Allow creating CSR submatrix including from (possibly discontinuous) index sets ([#885](#885), [#964](#964))
+ Add a scaled identity addition (M <- aI + bM) feature interface and impls for Csr and Dense ([#942](#942))


Deprecations and important changes:
+ Deprecate AmgxPgm in favor of the new Pgm name. ([#1149](#1149)).
+ Deprecate specialized residual norm classes in favor of a common `ResidualNorm` class ([#1101](#1101))
+ Deprecate CamelCase non-polymorphic types in favor of snake_case versions (like array, machine_topology, uninitialized_array, index_set) ([#1031](#1031), [#1052](#1052))
+ Bug fix: restrict gko::share to rvalue references (*possible interface break*) ([#1020](#1020))
+ Bug fix: when using cuSPARSE's triangular solvers, specifying the factory parameter `num_rhs` is now required when solving for more than one right-hand side, otherwise an exception is thrown ([#1184](#1184)).
+ Drop official support for old CUDA < 9.2 ([#887](#887))


Improved performance additions:
+ Reuse tmp storage in reductions in solvers and add a mutable workspace to all solvers ([#1013](#1013), [#1028](#1028))
+ Add HIP unsafe atomic option for AMD ([#1091](#1091))
+ Prefer vendor implementations for Dense dot, conj_dot and norm2 when available ([#967](#967)).
+ Tuned OpenMP SellP, COO, and ELL SpMV kernels for a small number of RHS ([#809](#809))


Fixes:
+ Fix various compilation warnings ([#1076](#1076), [#1183](#1183), [#1189](#1189))
+ Fix issues with hwloc-related tests ([#1074](#1074))
+ Fix include headers for GCC 12 ([#1071](#1071))
+ Fix for simple-solver-logging example ([#1066](#1066))
+ Fix for potential memory leak in Logger ([#1056](#1056))
+ Fix logging of mixin classes ([#1037](#1037))
+ Improve value semantics for LinOp types, like moved-from state in cross-executor copy/clones ([#753](#753))
+ Fix some matrix SpMV and conversion corner cases ([#905](#905), [#978](#978))
+ Fix uninitialized data ([#958](#958))
+ Fix CUDA version requirement for cusparseSpSM ([#953](#953))
+ Fix several issues within bash-script ([#1016](#1016))
+ Fixes for `NVHPC` compiler support ([#1194](#1194))


Other additions:
+ Simplify and properly name GMRES kernels ([#861](#861))
+ Improve pkg-config support for non-CMake libraries ([#923](#923), [#1109](#1109))
+ Improve gdb pretty printer ([#987](#987), [#1114](#1114))
+ Add a logger highlighting inefficient allocation and copy patterns ([#1035](#1035))
+ Improved and optimized test random matrix generation ([#954](#954), [#1032](#1032))
+ Better CSR strategy defaults ([#969](#969))
+ Add `move_from` to `PolymorphicObject` ([#997](#997))
+ Remove unnecessary device_guard usage ([#956](#956))
+ Improvements to the generic accessor for mixed-precision ([#727](#727))
+ Add a naive lower triangular solver implementation for CUDA ([#764](#764))
+ Add support for int64 indices from CUDA 11 onward with SpMV and SpGEMM ([#897](#897))
+ Add a L1 norm implementation ([#900](#900))
+ Add reduce_add for arrays ([#831](#831))
+ Add utility to simplify Dense View creation from an existing Dense vector ([#1136](#1136)).
+ Add a custom transpose implementation for Fbcsr and Csr transpose for unsupported vendor types ([#1123](#1123))
+ Make IDR random initialization deterministic ([#1116](#1116))
+ Move the algorithm choice for triangular solvers from Csr::strategy_type to a factory parameter ([#1088](#1088))
+ Update CUDA archCoresPerSM ([#1175](#1175))
+ Add kernels for Csr sparsity pattern lookup ([#994](#994))
+ Differentiate between structural and numerical zeros in Ell/Sellp ([#1027](#1027))
+ Add a binary IO format for matrix data ([#984](#984))
+ Add a tuple zip_iterator implementation ([#966](#966))
+ Simplify kernel stubs and declarations ([#888](#888))
+ Simplify GKO_REGISTER_OPERATION with lambdas ([#859](#859))
+ Simplify copy to device in tests and examples ([#863](#863))
+ More verbose output to array assertions ([#858](#858))
+ Allow parallel compilation for Jacobi kernels ([#871](#871))
+ Change clang-format pointer alignment to left ([#872](#872))
+ Various improvements and fixes to the benchmarking framework ([#750](#750), [#759](#759), [#870](#870), [#911](#911), [#1033](#1033), [#1137](#1137))
+ Various documentation improvements ([#892](#892), [#921](#921), [#950](#950), [#977](#977), [#1021](#1021), [#1068](#1068), [#1069](#1069), [#1080](#1080), [#1081](#1081), [#1108](#1108), [#1153](#1153), [#1154](#1154))
+ Various CI improvements ([#868](#868), [#874](#874), [#884](#884), [#889](#889), [#899](#899), [#903](#903),  [#922](#922), [#925](#925), [#930](#930), [#936](#936), [#937](#937), [#958](#958), [#882](#882), [#1011](#1011), [#1015](#1015), [#989](#989), [#1039](#1039), [#1042](#1042), [#1067](#1067), [#1073](#1073), [#1075](#1075), [#1083](#1083), [#1084](#1084), [#1085](#1085), [#1139](#1139), [#1178](#1178), [#1187](#1187))
Labels
1:ST:ready-to-merge This PR is ready to merge. mod:dpcpp This is related to the DPC++ module. reg:build This is related to the build system. reg:testing This is related to testing. type:multigrid This is related to multigrid

5 participants