Add METIS integration #1296

Merged: 6 commits merged on Mar 21, 2023
Conversation

@upsj (Member) commented Mar 13, 2023

This adds a METIS CMake find module and a simple integration. It also serves as an example of how to use LinOpFactory to represent a reordering algorithm by creating a Permutation matrix from a Csr matrix, which might be a potential future interface for reorderings. I guess we also need to add METIS to a handful of containers to test this.

Since METIS gets hiccups from diagonal entries, I added sparsity_csr::remove_diagonal_elements kernels for all backends.
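For illustration, here is a minimal sketch of the factory usage described above. The class and namespace names (`gko::experimental::reorder::NestedDissection`) are assumptions based on the "move to experimental" commit further down, so the details may differ from the merged code:

```cpp
#include <iostream>
#include <ginkgo/ginkgo.hpp>

int main()
{
    auto exec = gko::ReferenceExecutor::create();
    // The sparsity pattern of the Csr matrix is the graph handed to METIS.
    auto mtx = gko::share(
        gko::read<gko::matrix::Csr<double, int>>(std::cin, exec));
    // Reordering as a LinOpFactory: generate() consumes the matrix and
    // returns a Permutation matrix encoding the nested dissection ordering.
    auto factory =
        gko::experimental::reorder::NestedDissection<double, int>::build()
            .on(exec);
    auto perm = factory->generate(mtx);
    std::cout << "permutation size: " << perm->get_size()[0] << "\n";
}
```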

@upsj added the 1:ST:ready-for-review label on Mar 13, 2023
@upsj requested review from pratikvn and a team on Mar 13, 2023, 21:30
@upsj self-assigned this on Mar 13, 2023
@ginkgo-bot added the labels reg:build, reg:testing, type:matrix-format, type:reordering, and mod:all on Mar 13, 2023
@upsj (Member, Author) commented Mar 15, 2023

format-rebase!

@ginkgo-bot (Member) commented:

Formatting rebase introduced changes, see the Artifacts to review them.

Resolved review threads: cmake/Modules/FindMETIS.cmake (outdated), test/reorder/CMakeLists.txt (outdated)
@upsj (Member, Author) commented Mar 17, 2023

format-rebase!

@ginkgo-bot (Member) commented:

Formatting rebase introduced changes, see the Artifacts to review them.

@pratikvn (Member) left a comment:
LGTM!

At some point, we should maybe think about having a single interface for METIS together with ParMETIS; the latter's MPI support might be useful for distributed matrix partitioning.

vector<idx_t> tmp_perm(num_rows, {host_exec});
vector<idx_t> tmp_iperm(num_rows, {host_exec});
auto result = METIS_NodeND(&nvtxs, tmp_row_ptrs.data(), tmp_col_idxs.data(),
                           nullptr, const_cast<idx_t*>(options.data()),
                           tmp_perm.data(), tmp_iperm.data());
@pratikvn (Member) commented:
Can we have the option of providing weights as well? It might be interesting to see if we can provide weights here in such a way that we minimize row interchanges when doing partial pivoting.

@pratikvn (Member) added:

Just a random idea, not sure if it makes a lot of sense. :)
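For reference, the `nullptr` argument in the `METIS_NodeND` call above is METIS's vertex weight array `vwgt`, so a weighted variant is mechanically straightforward. A minimal sketch against the plain METIS C API (the weighting scheme itself is hypothetical, not something this PR implements):

```cpp
#include <metis.h>

// Sketch: nested dissection with per-vertex weights. xadj/adjncy is the
// CSR adjacency of the symmetric, diagonal-free graph; vwgt biases the
// separator selection. Returns METIS_OK on success.
int weighted_node_nd(idx_t nvtxs, idx_t* xadj, idx_t* adjncy, idx_t* vwgt,
                     idx_t* perm, idx_t* iperm)
{
    idx_t options[METIS_NOPTIONS];
    METIS_SetDefaultOptions(options);
    return METIS_NodeND(&nvtxs, xadj, adjncy, vwgt, options, perm, iperm);
}
```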

Resolved review threads: cmake/Modules/FindMETIS.cmake, test/reorder/nested_dissection.cpp
@MarcelKoch (Member) commented:
+1 for ParMETIS, but not in this PR.

@pratikvn (Member) commented:

But I believe they also have a multi-threaded version of METIS, which might be useful to speed up the reordering on a single node as well.

@codecov bot commented Mar 18, 2023

Codecov Report

Patch coverage: 83.80%; project coverage change: +0.60% 🎉

Comparison is base (ccc569b) 90.73% compared to head (3a3a38a) 91.34%.

❗ The current head 3a3a38a differs from the pull request's most recent head be9be3d. Consider uploading reports for commit be9be3d to get more accurate results.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #1296      +/-   ##
===========================================
+ Coverage    90.73%   91.34%   +0.60%     
===========================================
  Files          570      575       +5     
  Lines        48631    48666      +35     
===========================================
+ Hits         44125    44453     +328     
+ Misses        4506     4213     -293     
Impacted Files Coverage Δ
core/device_hooks/common_kernels.inc.cpp 0.00% <ø> (ø)
include/ginkgo/core/base/exception.hpp 96.84% <0.00%> (-3.16%) ⬇️
omp/matrix/sparsity_csr_kernels.cpp 0.00% <ø> (-35.07%) ⬇️
include/ginkgo/core/reorder/nested_dissection.hpp 25.00% <25.00%> (ø)
core/reorder/nested_dissection.cpp 64.58% <64.58%> (ø)
common/unified/matrix/sparsity_csr_kernels.cpp 100.00% <100.00%> (ø)
core/matrix/sparsity_csr.cpp 80.35% <100.00%> (+6.96%) ⬆️
reference/matrix/sparsity_csr_kernels.cpp 100.00% <100.00%> (+7.84%) ⬆️
reference/test/matrix/sparsity_csr_kernels.cpp 98.36% <100.00%> (+0.05%) ⬆️
test/matrix/sparsity_csr_kernels.cpp 100.00% <100.00%> (ø)
... and 1 more

... and 7 files with indirect coverage changes


@yhmtsai (Member) left a comment:

The reference test and core test are missing. Also, is NestedDissection only available through METIS?

Comment on lines +91 to +95
for (auto nz = begin; nz < end; nz++) {
if (col_idxs[nz] == row) {
count++;
}
}
@yhmtsai (Member) commented:

Don't repeated diagonal elements destroy the SparsityCsr property, since the repeated elements do not have the same value as the others?

@upsj (Member, Author) replied:

Yes, I just wanted to make the algorithm robust against broken inputs. METIS crashes with diagonal elements, so we should avoid that at all costs 😆
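The counting loop quoted above is the first pass of a standard count-then-copy pattern. For orientation, a self-contained sketch of the full idea in plain C++ (simplified; the actual kernels run on each backend and operate on Ginkgo's internal arrays):

```cpp
#include <vector>

// Sketch: drop all diagonal entries from a CSR sparsity pattern.
// Pass 1 counts surviving entries per row, a prefix sum builds the new
// row pointers, and pass 2 copies the off-diagonal column indices.
void remove_diagonal(const std::vector<int>& row_ptrs,
                     const std::vector<int>& col_idxs,
                     std::vector<int>& out_row_ptrs,
                     std::vector<int>& out_col_idxs)
{
    const int num_rows = static_cast<int>(row_ptrs.size()) - 1;
    out_row_ptrs.assign(num_rows + 1, 0);
    for (int row = 0; row < num_rows; row++) {
        for (int nz = row_ptrs[row]; nz < row_ptrs[row + 1]; nz++) {
            out_row_ptrs[row + 1] += (col_idxs[nz] != row) ? 1 : 0;
        }
    }
    for (int row = 0; row < num_rows; row++) {  // prefix sum over counts
        out_row_ptrs[row + 1] += out_row_ptrs[row];
    }
    out_col_idxs.resize(out_row_ptrs[num_rows]);
    for (int row = 0; row < num_rows; row++) {
        int out = out_row_ptrs[row];
        for (int nz = row_ptrs[row]; nz < row_ptrs[row + 1]; nz++) {
            if (col_idxs[nz] != row) {
                out_col_idxs[out++] = col_idxs[nz];
            }
        }
    }
}
```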

Three resolved review threads on core/reorder/nested_dissection.cpp (outdated)
TypenameNameGenerator);


TYPED_TEST(NestedDissection, ResultIsEquivalentToRef)
@yhmtsai (Member) commented:

It tests against the reference results, but the reference test itself is missing.

@upsj (Member, Author) replied:

Building a ground truth for METIS is pretty hard, since there are many orderings that may produce equivalent fill-in. What we could do is test that the permutation reduces Cholesky fill-in in a pathological case?

@yhmtsai (Member) replied:

Do they provide some example matrix together with the expected result? If they do, we can use it to ensure we indeed run through METIS.

@upsj (Member, Author) replied:

No, and graph partitioning has many degrees of freedom that produce equivalent results, which is why I used the simple star graph as an unambiguous example.
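To make the star graph argument concrete (an illustrative sketch; the actual test matrix in the PR may differ): in a star graph the center is the only balanced separator, so any fill-reducing nested dissection ordering must number it last, and eliminating the leaves first produces zero fill-in. The ordering is therefore unambiguous up to the irrelevant order of the leaves.

```cpp
#include <vector>

// Star graph on n = 5 vertices: vertex 0 is the center, 1..4 are leaves.
// CSR adjacency without diagonal entries, as METIS expects it:
// row 0 lists all leaves, each leaf row lists only the center.
std::vector<int> row_ptrs{0, 4, 5, 6, 7, 8};
std::vector<int> col_idxs{1, 2, 3, 4, 0, 0, 0, 0};
// Expected nested dissection property: the permutation places vertex 0 last.
```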

Resolved review thread: core/reorder/nested_dissection.cpp (outdated)
@upsj (Member, Author) commented Mar 21, 2023

rebase!

@upsj (Member, Author) commented Mar 21, 2023

rebase!

upsj and others added 6 commits on March 21, 2023 at 16:46:

- fix failing tests
- move to experimental
- output unknown METIS error IDs
- fail when users provide an invalid NUMBERING option
- add core and reference tests

Co-authored-by: Yuhsiang M. Tsai <[email protected]>
@sonarcloud bot commented Mar 22, 2023

SonarCloud Quality Gate passed!

- Bugs: 0 (rating A)
- Vulnerabilities: 0 (rating A)
- Security Hotspots: 0 (rating A)
- Code Smells: 0 (rating A)
- No coverage information
- No duplication information

tcojean added a commit that referenced this pull request on Jun 16, 2023:
Release 1.6.0 of Ginkgo.

The Ginkgo team is proud to announce the new Ginkgo minor release 1.6.0. This release brings new features such as:
- Several building blocks for GPU-resident sparse direct solvers like symbolic
  and numerical LU and Cholesky factorization, ...,
- A distributed Schwarz preconditioner,
- New FGMRES and GCR solvers,
- Distributed benchmarks for the SpMV operation, solvers, ...
- Support for non-default streams in the CUDA and HIP backends,
- Mixed precision support for the CSR SpMV,
- A new profiling logger which integrates with NVTX, ROCTX, TAU and VTune to
  provide internal Ginkgo knowledge to most HPC profilers!

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple Clang: 14.0 is tested. Earlier versions might also work.
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.5+
  + DPC++ module: Intel OneAPI 2021.3+ with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW: GCC 5.5+
  + Microsoft Visual Studio: VS 2019+
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW.

### Version Support Changes
+ ROCm 4.0+ -> 4.5+ after [#1303](#1303)
+ Removed Cygwin pipeline and support [#1283](#1283)

### Interface Changes
+ Due to internal changes, `ConcreteExecutor::run` will now always throw if the corresponding module for the `ConcreteExecutor` is not built [#1234](#1234)
+ The constructor of `experimental::distributed::Vector` was changed to only accept local vectors as `std::unique_ptr` [#1284](#1284)
+ The default parameters for the `solver::MultiGrid` were improved. In particular, the smoother defaults to one iteration of `Ir` with `Jacobi` preconditioner, and the coarse grid solver uses the new direct solver with LU factorization. [#1291](#1291) [#1327](#1327)
+ The `iteration_complete` event gained a more expressive overload with additional parameters, the old overloads were deprecated. [#1288](#1288) [#1327](#1327)

### Deprecations
+ Deprecated less expressive `iteration_complete` event. Users are advised to now implement the function `void iteration_complete(const LinOp* solver, const LinOp* b, const LinOp* x, const size_type& it, const LinOp* r, const LinOp* tau, const LinOp* implicit_tau_sq, const array<stopping_status>* status, bool stopped)` [#1288](#1288)

### Added Features
+ A distributed Schwarz preconditioner. [#1248](#1248)
+ A GCR solver [#1239](#1239)
+ Flexible Gmres solver [#1244](#1244)
+ Enable Gmres solver for distributed matrices and vectors [#1201](#1201)
+ An example that uses Kokkos to assemble the system matrix [#1216](#1216)
+ A symbolic LU factorization allowing the `gko::experimental::factorization::Lu` and `gko::experimental::solver::Direct` classes to be used for matrices with non-symmetric sparsity pattern [#1210](#1210)
+ A numerical Cholesky factorization [#1215](#1215)
+ Symbolic factorizations in host-side operations are now wrapped in a host-side `Operation` to make their execution visible to loggers. This means that profiling loggers and benchmarks are no longer missing a separate entry for their runtime [#1232](#1232)
+ Symbolic factorization benchmark [#1302](#1302)
+ The `ProfilerHook` logger allows annotating the Ginkgo execution (apply, operations, ...) for profiling frameworks like NVTX, ROCTX and TAU. [#1055](#1055)
+ `ProfilerHook::create_(nested_)summary` allows the generation of a lightweight runtime profile over all Ginkgo functions written to a user-defined stream [#1270](#1270) for both host and device timing functionality [#1313](#1313)
+ It is now possible to enable host buffers for MPI communications at runtime even if the compile option `GINKGO_FORCE_GPU_AWARE_MPI` is set. [#1228](#1228)
+ A stencil matrices generator (5-pt, 7-pt, 9-pt, and 27-pt) for benchmarks [#1204](#1204)
+ Distributed benchmarks (multi-vector blas, SpMV, solver) [#1204](#1204)
+ Benchmarks for CSR sorting and lookup [#1219](#1219)
+ A timer for MPI benchmarks that reports the longest time [#1217](#1217)
+ A `timer_method=min|max|average|median` flag for benchmark timing summary [#1294](#1294)
+ Support for non-default streams in CUDA and HIP executors [#1236](#1236)
+ METIS integration for nested dissection reordering [#1296](#1296)
+ SuiteSparse AMD integration for fill-in reducing reordering [#1328](#1328)
+ Csr mixed-precision SpMV support [#1319](#1319)
+ A `with_loggers` function for all `Factory` parameters [#1337](#1337)

### Improvements
+ Improve naming of kernel operations for loggers [#1277](#1277)
+ Annotate solver iterations in `ProfilerHook` [#1290](#1290)
+ Allow using the profiler hooks and inline input strings in benchmarks [#1342](#1342)
+ Allow passing smart pointers in place of raw pointers to most matrix functions. This means that things like `vec->compute_norm2(x.get())` or `vec->compute_norm2(lend(x))` can be simplified to `vec->compute_norm2(x)` [#1279](#1279) [#1261](#1261)
+ Catch overflows in prefix sum operations, which makes Ginkgo's operations much less likely to crash. This also improves the performance of the prefix sum kernel [#1303](#1303)
+ Make the installed GinkgoConfig.cmake file relocatable and follow more best practices [#1325](#1325)

### Fixes
+ Fix OpenMPI version check [#1200](#1200)
+ Fix the MPI C++ type binding by using the C binding [#1306](#1306)
+ Fix runtime failures for one-sided MPI wrapper functions observed on some OpenMPI versions [#1249](#1249)
+ Disable thread pinning with GPU executors due to poor performance [#1230](#1230)
+ Fix hwloc version detection [#1266](#1266)
+ Fix PAPI detection in non-implicit include directories [#1268](#1268)
+ Fix PAPI support for newer PAPI versions: [#1321](#1321)
+ Fix pkg-config file generation for library paths outside prefix [#1271](#1271)
+ Fix various build failures with ROCm 5.4, CUDA 12, and OneAPI 6 [#1214](#1214), [#1235](#1235), [#1251](#1251)
+ Fix incorrect read for skew-symmetric MatrixMarket files with explicit diagonal entries [#1272](#1272)
+ Fix handling of missing diagonal entries in symbolic factorizations [#1263](#1263)
+ Fix segmentation fault in benchmark matrix construction [#1299](#1299)
+ Fix the stencil matrix creation for benchmarking [#1305](#1305)
+ Fix the additional residual check in IR [#1307](#1307)
+ Fix the cuSPARSE CSR SpMM issue on a single strided vector when CUDA >= 11.6 [#1322](#1322) [#1331](#1331)
+ Fix Isai generation for large sparsity powers [#1327](#1327)
+ Fix Ginkgo compilation and test with NVHPC >= 22.7 [#1331](#1331)
+ Fix Ginkgo compilation of 32 bit binaries with MSVC [#1349](#1349)
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels: 1:ST:ready-to-merge, 1:ST:run-full-test, mod:all, reg:build, reg:testing, type:matrix-format, type:reordering