
Oneapi6 fix #1251

Merged: 9 commits merged into develop from oneapi6_fix on Mar 15, 2023

Conversation

@yhmtsai (Member) commented on Jan 3, 2023

oneAPI changed its internal structure after major version 6 and started to drop subgroup size 8 on some GPUs.
Note: the version printed after the compiler is just a date and does not indicate any version information.
The DPCPP version we originally detected was the build date of the package, so I cannot use it here.
oneAPI 6 changes (see the sketch after this list):

  • sycl is no longer under the cl namespace.
    They provide an alias sycl = ::sycl in namespace cl for backward compatibility, but we have forward declarations and specializations, which leads to compiler errors, so we need to adjust our namespaces.
  • oneMKL gemm only works on non-empty matrices, like the other vendors' libraries.
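A minimal sketch of the namespace issue, assuming the GINKGO_DPCPP_MAJOR_VERSION macro referenced later in this PR; illustrative only, not the exact Ginkgo code:

```cpp
// Forward-declaring a SYCL type across the oneAPI 6 namespace change.
#if GINKGO_DPCPP_MAJOR_VERSION >= 6
// oneAPI 6+: sycl lives at global scope, wrapped in the inline namespace _V1,
// so the forward declaration has to spell that namespace out.
namespace sycl {
inline namespace _V1 {
class queue;
}  // namespace _V1
}  // namespace sycl
#else
// Older releases: sycl lives under the cl namespace.
namespace cl {
namespace sycl {
class queue;
}  // namespace sycl
}  // namespace cl
#endif
```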

@yhmtsai yhmtsai requested a review from a team January 3, 2023 14:11
@yhmtsai yhmtsai self-assigned this Jan 3, 2023
@ginkgo-bot ginkgo-bot added labels on Jan 3, 2023: mod:core, mod:dpcpp, reg:build, type:factorization, type:matrix-format
@yhmtsai yhmtsai mentioned this pull request Jan 12, 2023
@tcojean (Member) left a comment

Minor nit, I didn't look at the code.

As I originally feared with these magic numbers, we might need a hardware SIMD table somewhere to keep 7 for some hardware while updating it for newer ones?
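Purely as an illustration of that idea (nothing like this exists in the PR; the device name and widths below are invented), such a table might look like:

```cpp
#include <string_view>

// Hypothetical lookup: SIMD width (subgroup size) per device family.
// All entries are made-up placeholders, not real Ginkgo configuration.
constexpr int default_simd_width = 8;

constexpr int simd_width_for(std::string_view device_name)
{
    // Devices that only expose the narrower width would be listed here.
    if (device_name == "some_older_gpu") {
        return 7;
    }
    return default_simd_width;
}
```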

@@ -393,7 +393,7 @@ class Csr : public EnableLinOp<Csr<ValueType, IndexType>>,
* number of threads in a SIMD unit is 7
Suggested change:
- * number of threads in a SIMD unit is 7
+ * number of threads in a SIMD unit is 8

@@ -583,7 +583,7 @@ class Csr : public EnableLinOp<Csr<ValueType, IndexType>>,
* number of threads in a SIMD unit is 7
Suggested change:
- * number of threads in a SIMD unit is 7
+ * number of threads in a SIMD unit is 8

@upsj upsj self-requested a review February 6, 2023 13:31
@yhmtsai yhmtsai force-pushed the oneapi6_fix branch 3 times, most recently from 9e631fe to 1ed4027 on February 8, 2023 07:55
@yhmtsai yhmtsai added the 1:ST:ready-for-review label on Feb 8, 2023
@yhmtsai yhmtsai requested review from tcojean and a team February 8, 2023 10:03

// After the intel/llvm September '22 release, which uses major version 6, they
// introduced another inline namespace _V1.
#if GINKGO_DPCPP_MAJOR_VERSION >= 6
Member
Do we need to keep supporting the old version? It sounds like their ecosystem is slowly stabilizing, maybe we can follow that?

Member
For the next release I would consider it, though this _V1 namespace thing is concerning, it's not stabilizing that much...

Member Author
Yes, the _V1 is annoying to have to keep on our side. They do it this way to keep the interface non-breaking across future changes (_V1 -> _V2 -> ...). Thus, it will not show up in the release documentation, I think. I have tried a few things to avoid _V1 but failed; maybe I am missing something.

include/ginkgo/core/base/executor.hpp (outdated, resolved)
dpcpp/matrix/dense_kernels.dp.cpp (resolved)
dpcpp/matrix/csr_kernels.dp.cpp (outdated, resolved)
@@ -274,15 +274,38 @@ void DpcppExecutor::set_device_property(dpcpp_queue_property property)
}
this->get_exec_info().max_workgroup_size = static_cast<int>(
device.get_info<sycl::info::device::max_work_group_size>());
// They changed max_work_item_size to take a template parameter Dimension after
// major version 6, and the default (= 3) was not added in the same release.
Member
Would 3 be a sensible default at all? It sounds like that is a pretty CUDA-specific view. Maybe just state that this is the dimensionality we use.

Member Author
I think so. At least, it's the default choice for the Dimension template parameter in the spec.
They just forgot to add the default in some versions when updating the code to the updated spec.
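A minimal sketch of the version-gated query being discussed, assuming the GINKGO_DPCPP_MAJOR_VERSION macro from this PR and a sycl::device handle; illustrative only, not the exact Ginkgo code:

```cpp
#include <sycl/sycl.hpp>  // older oneAPI releases use <CL/sycl.hpp> instead

#include <array>

// Query the maximum work-item sizes for the 3-dimensional case.
std::array<int, 3> query_max_work_item_sizes(const sycl::device& device)
{
#if GINKGO_DPCPP_MAJOR_VERSION >= 6
    // The descriptor gained a Dimension template parameter; since the
    // default (= 3) only arrived in a later release, it is spelled out.
    auto sizes =
        device.get_info<sycl::info::device::max_work_item_sizes<3>>();
#else
    auto sizes = device.get_info<sycl::info::device::max_work_item_sizes>();
#endif
    return {static_cast<int>(sizes[0]), static_cast<int>(sizes[1]),
            static_cast<int>(sizes[2])};
}
```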

dpcpp/components/cooperative_groups.dp.hpp (resolved)
@yhmtsai (Member, Author) commented on Feb 11, 2023

rebase!

@sonarcloud bot commented on Feb 11, 2023

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 1 (rating A)

Coverage: 0.0%
Duplication: 0.0%

@pratikvn (Member) left a comment

Is something holding this PR back? Maybe we can push this?

include/ginkgo/core/base/executor.hpp (outdated, resolved)
@pratikvn (Member) left a comment

LGTM!

@tcojean (Member) left a comment

LGTM

include/ginkgo/config.hpp.in (resolved)
dpcpp/CMakeLists.txt (outdated, resolved)
dpcpp/base/executor.dp.cpp (resolved)
dpcpp/matrix/csr_kernels.dp.cpp (outdated, resolved)
include/ginkgo/core/base/executor.hpp (outdated, resolved)

include/ginkgo/core/base/executor.hpp (outdated, resolved)
@ginkgo-bot (Member) commented:
Note: This PR changes the Ginkgo ABI:

Functions changes summary: 1 Removed, 0 Changed, 1 Added functions
Variables changes summary: 0 Removed, 0 Changed, 0 Added variable

For details, check the full ABI diff under the CI Artifacts.

@yhmtsai yhmtsai added the 1:ST:ready-to-merge label and removed the 1:ST:ready-for-review label on Mar 14, 2023
@sonarcloud bot commented on Mar 15, 2023

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 1 (rating A)

Coverage: 0.0%
Duplication: 6.9%

@upsj (Member) left a comment

LGTM!

@codecov bot commented on Mar 15, 2023

Codecov Report

Patch coverage has no change and project coverage change: -0.06% ⚠️

Comparison is base (f050721) 90.83% compared to head (ec8981a) 90.77%.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #1251      +/-   ##
===========================================
- Coverage    90.83%   90.77%   -0.06%     
===========================================
  Files          570      570              
  Lines        48621    48557      -64     
===========================================
- Hits         44164    44079      -85     
- Misses        4457     4478      +21     
Impacted Files                           Coverage Δ
include/ginkgo/core/base/executor.hpp    75.23% <0.00%> (-0.36%) ⬇️
include/ginkgo/core/base/math.hpp        100.00% <ø> (ø)
include/ginkgo/core/matrix/csr.hpp       44.33% <0.00%> (+0.28%) ⬆️

... and 17 files with indirect coverage changes


@yhmtsai yhmtsai merged commit c97d33e into develop Mar 15, 2023
@yhmtsai yhmtsai deleted the oneapi6_fix branch March 15, 2023 15:39
tcojean added a commit that referenced this pull request Jun 16, 2023
Release 1.6.0 of Ginkgo.

The Ginkgo team is proud to announce the new Ginkgo minor release 1.6.0. This release brings new features such as:
- Several building blocks for GPU-resident sparse direct solvers like symbolic
  and numerical LU and Cholesky factorization, ...,
- A distributed Schwarz preconditioner,
- New FGMRES and GCR solvers,
- Distributed benchmarks for the SpMV operation, solvers, ...
- Support for non-default streams in the CUDA and HIP backends,
- Mixed precision support for the CSR SpMV,
- A new profiling logger which integrates with NVTX, ROCTX, TAU and VTune to
  provide internal Ginkgo knowledge to most HPC profilers!

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple Clang: 14.0 is tested. Earlier versions might also work.
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.5+
  + DPC++ module: Intel OneAPI 2021.3+ with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW: GCC 5.5+
  + Microsoft Visual Studio: VS 2019+
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW.

### Version Support Changes
+ ROCm 4.0+ -> 4.5+ after [#1303](#1303)
+ Removed Cygwin pipeline and support [#1283](#1283)

### Interface Changes
+ Due to internal changes, `ConcreteExecutor::run` will now always throw if the corresponding module for the `ConcreteExecutor` is not built [#1234](#1234)
+ The constructor of `experimental::distributed::Vector` was changed to only accept local vectors as `std::unique_ptr` [#1284](#1284)
+ The default parameters for the `solver::MultiGrid` were improved. In particular, the smoother defaults to one iteration of `Ir` with `Jacobi` preconditioner, and the coarse grid solver uses the new direct solver with LU factorization. [#1291](#1291) [#1327](#1327)
+ The `iteration_complete` event gained a more expressive overload with additional parameters, the old overloads were deprecated. [#1288](#1288) [#1327](#1327)

### Deprecations
+ Deprecated less expressive `iteration_complete` event. Users are advised to now implement the function `void iteration_complete(const LinOp* solver, const LinOp* b, const LinOp* x, const size_type& it, const LinOp* r, const LinOp* tau, const LinOp* implicit_tau_sq, const array<stopping_status>* status, bool stopped)` [#1288](#1288)

### Added Features
+ A distributed Schwarz preconditioner. [#1248](#1248)
+ A GCR solver [#1239](#1239)
+ Flexible Gmres solver [#1244](#1244)
+ Enable Gmres solver for distributed matrices and vectors [#1201](#1201)
+ An example that uses Kokkos to assemble the system matrix [#1216](#1216)
+ A symbolic LU factorization allowing the `gko::experimental::factorization::Lu` and `gko::experimental::solver::Direct` classes to be used for matrices with non-symmetric sparsity pattern [#1210](#1210)
+ A numerical Cholesky factorization [#1215](#1215)
+ Symbolic factorizations in host-side operations are now wrapped in a host-side `Operation` to make their execution visible to loggers. This means that profiling loggers and benchmarks are no longer missing a separate entry for their runtime [#1232](#1232)
+ Symbolic factorization benchmark [#1302](#1302)
+ The `ProfilerHook` logger allows annotating the Ginkgo execution (apply, operations, ...) for profiling frameworks like NVTX, ROCTX and TAU. [#1055](#1055)
+ `ProfilerHook::created_(nested_)summary` allows the generation of a lightweight runtime profile over all Ginkgo functions written to a user-defined stream [#1270](#1270) for both host and device timing functionality [#1313](#1313)
+ It is now possible to enable host buffers for MPI communications at runtime even if the compile option `GINKGO_FORCE_GPU_AWARE_MPI` is set. [#1228](#1228)
+ A stencil matrix generator (5-pt, 7-pt, 9-pt, and 27-pt) for benchmarks [#1204](#1204)
+ Distributed benchmarks (multi-vector blas, SpMV, solver) [#1204](#1204)
+ Benchmarks for CSR sorting and lookup [#1219](#1219)
+ A timer for MPI benchmarks that reports the longest time [#1217](#1217)
+ A `timer_method=min|max|average|median` flag for benchmark timing summary [#1294](#1294)
+ Support for non-default streams in CUDA and HIP executors [#1236](#1236)
+ METIS integration for nested dissection reordering [#1296](#1296)
+ SuiteSparse AMD integration for fillin-reducing reordering [#1328](#1328)
+ Csr mixed-precision SpMV support [#1319](#1319)
+ A `with_loggers` function for all `Factory` parameters [#1337](#1337)

### Improvements
+ Improve naming of kernel operations for loggers [#1277](#1277)
+ Annotate solver iterations in `ProfilerHook` [#1290](#1290)
+ Allow using the profiler hooks and inline input strings in benchmarks [#1342](#1342)
+ Allow passing smart pointers in place of raw pointers to most matrix functions. This means that things like `vec->compute_norm2(x.get())` or `vec->compute_norm2(lend(x))` can be simplified to `vec->compute_norm2(x)` [#1279](#1279) [#1261](#1261)
+ Catch overflows in prefix sum operations, which makes Ginkgo's operations much less likely to crash. This also improves the performance of the prefix sum kernel [#1303](#1303)
+ Make the installed GinkgoConfig.cmake file relocatable and follow more best practices [#1325](#1325)

### Fixes
+ Fix OpenMPI version check [#1200](#1200)
+ Fix the MPI C++ type binding by using the C binding [#1306](#1306)
+ Fix runtime failures for one-sided MPI wrapper functions observed on some OpenMPI versions [#1249](#1249)
+ Disable thread pinning with GPU executors due to poor performance [#1230](#1230)
+ Fix hwloc version detection [#1266](#1266)
+ Fix PAPI detection in non-implicit include directories [#1268](#1268)
+ Fix PAPI support for newer PAPI versions: [#1321](#1321)
+ Fix pkg-config file generation for library paths outside prefix [#1271](#1271)
+ Fix various build failures with ROCm 5.4, CUDA 12, and OneAPI 6 [#1214](#1214), [#1235](#1235), [#1251](#1251)
+ Fix incorrect read for skew-symmetric MatrixMarket files with explicit diagonal entries [#1272](#1272)
+ Fix handling of missing diagonal entries in symbolic factorizations [#1263](#1263)
+ Fix segmentation fault in benchmark matrix construction [#1299](#1299)
+ Fix the stencil matrix creation for benchmarking [#1305](#1305)
+ Fix the additional residual check in IR [#1307](#1307)
+ Fix the cuSPARSE CSR SpMM issue on single strided vector when cuda >= 11.6 [#1322](#1322) [#1331](#1331)
+ Fix Isai generation for large sparsity powers [#1327](#1327)
+ Fix Ginkgo compilation and test with NVHPC >= 22.7 [#1331](#1331)
+ Fix Ginkgo compilation of 32 bit binaries with MSVC [#1349](#1349)
Labels: 1:ST:ready-to-merge, mod:core, mod:dpcpp, reg:build, type:factorization, type:matrix-format

6 participants