
Two level and use type as config #960

Merged
merged 11 commits from two_level into develop on Jul 11, 2022

Conversation


@yhmtsai yhmtsai commented Jan 30, 2022

This PR adds some macro helpers, and the kernels will use a type for the device config. Moreover, it adds two-level selection, i.e. first for device attributes and second for the kernel's own attributes.
Macro detail:
There are two ways of selecting the config type.

  1. Use ConfigSet to select the kernel; the selection converts the ConfigSet into a device_config type.
    pro: keeps the kernel call like CUDA/HIP, with only an additional first argument for the device config. The kernel only does kernel things, and the config uint can be reused or kept the same.
    con: uint-to-type is not a direct usage.
  2. Use type selection.
    pro: only handles types.
    con: the kernel call becomes a function containing one kernel, or multiple kernels if they need the same config.

In my opinion, when porting CUDA to DPC++, I would definitely go for option 1 to reduce the difference between CUDA and DPC++ (from the device-core view). However, for self-implemented kernels or later changes, I think both should be okay, if type selection is easier to use or uint selection does not reduce the effort.

@yhmtsai yhmtsai added the 1:ST:need-feedback The PR is somewhat ready but feedback on a blocking topic is required before a proper review. label Jan 30, 2022
@yhmtsai yhmtsai self-assigned this Jan 30, 2022
@ginkgo-bot ginkgo-bot added mod:core This is related to the core module. mod:dpcpp This is related to the DPC++ module. reg:testing This is related to testing. type:matrix-format This is related to the Matrix formats labels Jan 30, 2022
@tcojean tcojean added this to the Ginkgo 1.5.0 milestone Feb 10, 2022
@sonarcloud

sonarcloud bot commented Feb 15, 2022

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 1 (rating A)

No coverage information
Duplication: 9.0%

@codecov

codecov bot commented Feb 15, 2022

Codecov Report

Merging #960 (f04bb7b) into develop (78678c2) will increase coverage by 2.24%.
The diff coverage is n/a.

@@             Coverage Diff             @@
##           develop     #960      +/-   ##
===========================================
+ Coverage    91.57%   93.81%   +2.24%     
===========================================
  Files          504      504              
  Lines        43502    43255     -247     
===========================================
+ Hits         39836    40581     +745     
+ Misses        3666     2674     -992     
Impacted Files Coverage Δ
omp/reorder/rcm_kernels.cpp 97.53% <0.00%> (-0.61%) ⬇️
core/matrix/hybrid.cpp 96.22% <0.00%> (+0.62%) ⬆️
core/matrix/ell.cpp 97.12% <0.00%> (+0.74%) ⬆️
core/matrix/fbcsr.cpp 94.70% <0.00%> (+1.32%) ⬆️
include/ginkgo/core/matrix/csr.hpp 44.98% <0.00%> (+1.61%) ⬆️
include/ginkgo/core/matrix/hybrid.hpp 98.36% <0.00%> (+1.63%) ⬆️
core/matrix/coo.cpp 100.00% <0.00%> (+1.98%) ⬆️
core/matrix/dense.cpp 96.90% <0.00%> (+3.55%) ⬆️
common/unified/matrix/dense_kernels.cpp 100.00% <0.00%> (+4.34%) ⬆️
core/matrix/csr.cpp 98.07% <0.00%> (+4.67%) ⬆️
... and 17 more

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 78678c2...f04bb7b. Read the comment docs.

@thoasm thoasm self-requested a review March 17, 2022 11:20
@upsj upsj self-requested a review April 12, 2022 08:39

@thoasm thoasm left a comment


Review Part 1/2

dpcpp/base/config.hpp (resolved)
dpcpp/base/config.hpp (outdated, resolved)
dpcpp/base/config.hpp (outdated, resolved)
dpcpp/base/config.hpp (outdated, resolved)
dpcpp/base/config.hpp (outdated, resolved)
dpcpp/base/config.hpp (outdated, resolved)
dpcpp/base/helper.hpp (outdated, resolved)
dpcpp/base/helper.hpp (resolved)
dpcpp/base/helper.hpp (outdated, resolved)
@ginkgo-bot ginkgo-bot requested a review from a team April 25, 2022 09:16
@yhmtsai yhmtsai added the 1:ST:ready-for-review This PR is ready for review label Apr 25, 2022

@upsj upsj left a comment


LGTM! I would be happiest if we had only a single solution (int or config, not both), since that blows up implementation_selection.hpp quite a bit. Maybe also add tests for the config selection via type?

@@ -50,18 +51,15 @@ namespace dpcpp {
namespace components {


using BlockCfg = ConfigSet<11>;
constexpr auto block_cfg_list = block_cfg_list_t();
no need to emit a symbol for this (also might cause ODR issues if we have different values for the same variable in different files)

Suggested change
constexpr auto block_cfg_list = block_cfg_list_t();
static constexpr auto block_cfg_list = block_cfg_list_t();

thoasm previously approved these changes May 12, 2022

@thoasm thoasm left a comment


Mostly comments about the device_config type naming.

dpcpp/base/config.hpp (outdated, resolved)
dpcpp/base/config.hpp (outdated, resolved)
dpcpp/base/kernel_launch_reduction.dp.hpp (outdated, resolved)
dpcpp/components/prefix_sum.dp.hpp (outdated, resolved)
dpcpp/components/reduction.dp.hpp (outdated, resolved)
dpcpp/matrix/dense_kernels.dp.cpp (outdated, resolved)
dpcpp/matrix/ell_kernels.dp.cpp (outdated, resolved)
dpcpp/matrix/ell_kernels.dp.cpp (outdated, resolved)
dpcpp/test/components/cooperative_groups.dp.cpp (outdated, resolved)
dpcpp/synthesizer/implementation_selection.hpp (outdated, resolved)
@thoasm thoasm dismissed their stale review May 12, 2022 08:14

Dismissed until debug output is removed.


@thoasm thoasm left a comment


Some cfg -> DeviceConfig changes were missed.
Other than that, LGTM.

@@ -302,7 +299,7 @@ void run_kernel_reduction_cached(std::shared_ptr<const DpcppExecutor> exec,
namespace {


template <std::uint32_t cfg, int ssg_size, typename ValueType,
template <typename cfg, int ssg_size, typename ValueType,
Suggested change
template <typename cfg, int ssg_size, typename ValueType,
template <typename DeviceConfig, int ssg_size, typename ValueType,

@@ -358,16 +355,16 @@ GKO_ENABLE_IMPLEMENTATION_SELECTION(select_generic_kernel_row_reduction_2d,
generic_kernel_row_reduction_2d);


template <std::uint32_t cfg, int ssg_size, typename ValueType,
template <typename cfg, int ssg_size, typename ValueType,
Suggested change
template <typename cfg, int ssg_size, typename ValueType,
template <typename DeviceConfig, int ssg_size, typename ValueType,

@@ -433,16 +430,16 @@ void generic_kernel_col_reduction_2d_small(
}


template <std::uint32_t cfg, typename ValueType, typename KernelFunction,
template <typename cfg, typename ValueType, typename KernelFunction,
Suggested change
template <typename cfg, typename ValueType, typename KernelFunction,
template <typename DeviceConfig, typename ValueType, typename KernelFunction,

@@ -502,7 +499,7 @@ void generic_kernel_reduction_finalize_2d(
}


template <std::uint32_t cfg, int ssg_size, typename ValueType,
template <typename cfg, int ssg_size, typename ValueType,
Suggested change
template <typename cfg, int ssg_size, typename ValueType,
template <typename DeviceConfig, int ssg_size, typename ValueType,

@@ -551,7 +548,7 @@ GKO_ENABLE_IMPLEMENTATION_SELECTION(select_generic_col_reduction_small,
run_generic_col_reduction_small);


template <std::uint32_t cfg, typename ValueType, typename KernelFunction,
template <typename cfg, typename ValueType, typename KernelFunction,
Suggested change
template <typename cfg, typename ValueType, typename KernelFunction,
template <typename DeviceConfig, typename ValueType, typename KernelFunction,



template <std::uint32_t cfg, typename ValueType, typename KernelFunction,
template <typename cfg, typename ValueType, typename KernelFunction,
Suggested change
template <typename cfg, typename ValueType, typename KernelFunction,
template <typename DeviceConfig, typename ValueType, typename KernelFunction,

GKO_ENABLE_DEFAULT_CONFIG_CALL(cg_any_call, cg_any, default_config_list)

TEST_P(CooperativeGroups, Any) { test_all_subgroup(cg_any_call<bool*>); }


template <std::uint32_t config>
template <typename cfg>
Suggested change
template <typename cfg>
template <typename DeviceConfig>

@yhmtsai yhmtsai force-pushed the two_level branch 3 times, most recently from 3dfaae7 to 694a516 Compare July 3, 2022 19:53
@yhmtsai yhmtsai added 1:ST:ready-to-merge This PR is ready to merge. and removed 1:ST:ready-for-review This PR is ready for review labels Jul 11, 2022
@sonarcloud

sonarcloud bot commented Jul 11, 2022

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 0 (rating A)

No coverage information
Duplication: 9.8%

@yhmtsai yhmtsai merged commit 4726fed into develop Jul 11, 2022
@yhmtsai yhmtsai deleted the two_level branch July 11, 2022 21:37
tcojean added a commit that referenced this pull request Nov 12, 2022
Advertise release 1.5.0 and last changes

+ Add changelog
+ Update third party libraries
+ A small fix to a CMake file

See PR: #1195

The Ginkgo team is proud to announce the new Ginkgo minor release 1.5.0. This release brings many important new features such as:
- MPI-based multi-node support for all matrix formats and most solvers,
- full DPC++/SYCL support,
- functionality and interface for GPU-resident sparse direct solvers,
- an interface for wrapping solvers with scaling and reordering applied,
- a new algebraic Multigrid solver/preconditioner,
- improved mixed-precision support,
- support for device matrix assembly,

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.0+
  + DPC++ module: Intel OneAPI 2021.3 with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: GCC 5.5+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


Algorithm and important feature additions:
+ Add MPI-based multi-node for all matrix formats and solvers (except GMRES and IDR). ([#676](#676), [#908](#908), [#909](#909), [#932](#932), [#951](#951), [#961](#961), [#971](#971), [#976](#976), [#985](#985), [#1007](#1007), [#1030](#1030), [#1054](#1054), [#1100](#1100), [#1148](#1148))
+ Porting the remaining algorithms (preconditioners like ISAI, Jacobi, Multigrid, ParILU(T) and ParIC(T)) to DPC++/SYCL, update to SYCL 2020, and improve support and performance ([#896](#896), [#924](#924), [#928](#928), [#929](#929), [#933](#933), [#943](#943), [#960](#960), [#1057](#1057), [#1110](#1110),  [#1142](#1142))
+ Add a Sparse Direct interface supporting GPU-resident numerical LU factorization, symbolic Cholesky factorization, improved triangular solvers, and more ([#957](#957), [#1058](#1058), [#1072](#1072), [#1082](#1082))
+ Add a ScaleReordered interface that can wrap solvers and automatically apply reorderings and scalings ([#1059](#1059))
+ Add a Multigrid solver and improve the aggregation based PGM coarsening scheme ([#542](#542), [#913](#913), [#980](#980), [#982](#982),  [#986](#986))
+ Add infrastructure for unified, lambda-based, backend agnostic, kernels and utilize it for some simple kernels ([#833](#833), [#910](#910), [#926](#926))
+ Merge different CUDA, HIP, DPC++ and OpenMP tests under a common interface ([#904](#904), [#973](#973), [#1044](#1044), [#1117](#1117))
+ Add a device_matrix_data type for device-side matrix assembly ([#886](#886), [#963](#963), [#965](#965))
+ Add support for mixed real/complex BLAS operations ([#864](#864))
+ Add a FFT LinOp for all but DPC++/SYCL ([#701](#701))
+ Add FBCSR support for NVIDIA and AMD GPUs and CPUs with OpenMP ([#775](#775))
+ Add CSR scaling ([#848](#848))
+ Add array::const_view and equivalent to create constant matrices from non-const data ([#890](#890))
+ Add a RowGatherer LinOp supporting mixed precision to gather dense matrix rows ([#901](#901))
+ Add mixed precision SparsityCsr SpMV support ([#970](#970))
+ Allow creating CSR submatrix including from (possibly discontinuous) index sets ([#885](#885), [#964](#964))
+ Add a scaled identity addition (M <- aI + bM) feature interface and impls for Csr and Dense ([#942](#942))


Deprecations and important changes:
+ Deprecate AmgxPgm in favor of the new Pgm name. ([#1149](#1149)).
+ Deprecate specialized residual norm classes in favor of a common `ResidualNorm` class ([#1101](#1101))
+ Deprecate CamelCase non-polymorphic types in favor of snake_case versions (like array, machine_topology, uninitialized_array, index_set) ([#1031](#1031), [#1052](#1052))
+ Bug fix: restrict gko::share to rvalue references (*possible interface break*) ([#1020](#1020))
+ Bug fix: when using cuSPARSE's triangular solvers, specifying the factory parameter `num_rhs` is now required when solving for more than one right-hand side, otherwise an exception is thrown ([#1184](#1184)).
+ Drop official support for old CUDA < 9.2 ([#887](#887))


Improved performance additions:
+ Reuse tmp storage in reductions in solvers and add a mutable workspace to all solvers ([#1013](#1013), [#1028](#1028))
+ Add HIP unsafe atomic option for AMD ([#1091](#1091))
+ Prefer vendor implementations for Dense dot, conj_dot and norm2 when available ([#967](#967)).
+ Tuned OpenMP SellP, COO, and ELL SpMV kernels for a small number of RHS ([#809](#809))


Fixes:
+ Fix various compilation warnings ([#1076](#1076), [#1183](#1183), [#1189](#1189))
+ Fix issues with hwloc-related tests ([#1074](#1074))
+ Fix include headers for GCC 12 ([#1071](#1071))
+ Fix for simple-solver-logging example ([#1066](#1066))
+ Fix for potential memory leak in Logger ([#1056](#1056))
+ Fix logging of mixin classes ([#1037](#1037))
+ Improve value semantics for LinOp types, like moved-from state in cross-executor copy/clones ([#753](#753))
+ Fix some matrix SpMV and conversion corner cases ([#905](#905), [#978](#978))
+ Fix uninitialized data ([#958](#958))
+ Fix CUDA version requirement for cusparseSpSM ([#953](#953))
+ Fix several issues within bash-script ([#1016](#1016))
+ Fixes for `NVHPC` compiler support ([#1194](#1194))


Other additions:
+ Simplify and properly name GMRES kernels ([#861](#861))
+ Improve pkg-config support for non-CMake libraries ([#923](#923), [#1109](#1109))
+ Improve gdb pretty printer ([#987](#987), [#1114](#1114))
+ Add a logger highlighting inefficient allocation and copy patterns ([#1035](#1035))
+ Improved and optimized test random matrix generation ([#954](#954), [#1032](#1032))
+ Better CSR strategy defaults ([#969](#969))
+ Add `move_from` to `PolymorphicObject` ([#997](#997))
+ Remove unnecessary device_guard usage ([#956](#956))
+ Improvements to the generic accessor for mixed-precision ([#727](#727))
+ Add a naive lower triangular solver implementation for CUDA ([#764](#764))
+ Add support for int64 indices from CUDA 11 onward with SpMV and SpGEMM ([#897](#897))
+ Add a L1 norm implementation ([#900](#900))
+ Add reduce_add for arrays ([#831](#831))
+ Add utility to simplify Dense View creation from an existing Dense vector ([#1136](#1136)).
+ Add a custom transpose implementation for Fbcsr and Csr transpose for unsupported vendor types ([#1123](#1123))
+ Make IDR random initialization deterministic ([#1116](#1116))
+ Move the algorithm choice for triangular solvers from Csr::strategy_type to a factory parameter ([#1088](#1088))
+ Update CUDA archCoresPerSM ([#1175](#1175))
+ Add kernels for Csr sparsity pattern lookup ([#994](#994))
+ Differentiate between structural and numerical zeros in Ell/Sellp ([#1027](#1027))
+ Add a binary IO format for matrix data ([#984](#984))
+ Add a tuple zip_iterator implementation ([#966](#966))
+ Simplify kernel stubs and declarations ([#888](#888))
+ Simplify GKO_REGISTER_OPERATION with lambdas ([#859](#859))
+ Simplify copy to device in tests and examples ([#863](#863))
+ More verbose output to array assertions ([#858](#858))
+ Allow parallel compilation for Jacobi kernels ([#871](#871))
+ Change clang-format pointer alignment to left ([#872](#872))
+ Various improvements and fixes to the benchmarking framework ([#750](#750), [#759](#759), [#870](#870), [#911](#911), [#1033](#1033), [#1137](#1137))
+ Various documentation improvements ([#892](#892), [#921](#921), [#950](#950), [#977](#977), [#1021](#1021), [#1068](#1068), [#1069](#1069), [#1080](#1080), [#1081](#1081), [#1108](#1108), [#1153](#1153), [#1154](#1154))
+ Various CI improvements ([#868](#868), [#874](#874), [#884](#884), [#889](#889), [#899](#899), [#903](#903),  [#922](#922), [#925](#925), [#930](#930), [#936](#936), [#937](#937), [#958](#958), [#882](#882), [#1011](#1011), [#1015](#1015), [#989](#989), [#1039](#1039), [#1042](#1042), [#1067](#1067), [#1073](#1073), [#1075](#1075), [#1083](#1083), [#1084](#1084), [#1085](#1085), [#1139](#1139), [#1178](#1178), [#1187](#1187))