
Fix a few MPI related issues. #1249

Merged: pratikvn merged 4 commits into develop from mpi-win-fixes on Jan 9, 2023

Conversation

Member

@pratikvn pratikvn commented Dec 22, 2022

This PR fixes a few MPI-related issues reported on some clusters with certain MPI versions. Most of them concern the one-sided (RMA) functionality:

  • Flush was called on the calling rank rather than the target rank.
  • Const correctness for window functions.
  • Use the correct MPI datatypes for C++ complex types (see the sketch below).
  • Make sure a valid pointer exists for the window before calling any window-related MPI functions.

Closes #1064
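
A minimal sketch of the datatype point above (not the actual Ginkgo implementation; the `mpi_type` helper name is hypothetical): the C++ complex types map onto the dedicated MPI-3 datatypes rather than the C or Fortran complex datatypes.

```cpp
#include <complex>
#include <mpi.h>

// Hypothetical helper mapping C++ types to MPI datatypes; Ginkgo's actual
// mapping lives in include/ginkgo/core/base/mpi.hpp. This only illustrates
// the idea behind "use the correct MPI datatypes for C++ complex types".
template <typename T>
struct mpi_type;  // intentionally undefined: unsupported types fail to compile

template <>
struct mpi_type<std::complex<float>> {
    static MPI_Datatype get() { return MPI_CXX_FLOAT_COMPLEX; }
};

template <>
struct mpi_type<std::complex<double>> {
    static MPI_Datatype get() { return MPI_CXX_DOUBLE_COMPLEX; }
};
```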

@pratikvn pratikvn added 1:ST:ready-for-review This PR is ready for review mod:mpi This is related to the MPI module 1:ST:run-full-test is:bugfix This fixes a bug labels Dec 22, 2022
@pratikvn pratikvn self-assigned this Dec 22, 2022
@ginkgo-bot ginkgo-bot added mod:core This is related to the core module. reg:testing This is related to testing. labels Dec 22, 2022
@pratikvn pratikvn requested a review from a team December 22, 2022 08:33
@MarcelKoch MarcelKoch self-requested a review December 22, 2022 13:14
@codecov

codecov bot commented Dec 22, 2022

Codecov Report

Base: 91.50% // Head: 91.62% // Increases project coverage by +0.11% 🎉

Coverage data is based on head (e90285e) compared to base (99c8d73).
Patch coverage: 100.00% of modified lines in pull request are covered.

❗ Current head e90285e differs from pull request most recent head ec63086. Consider uploading reports for the commit ec63086 to get more accurate results

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #1249      +/-   ##
===========================================
+ Coverage    91.50%   91.62%   +0.11%     
===========================================
  Files          556      556              
  Lines        47456    47460       +4     
===========================================
+ Hits         43425    43483      +58     
+ Misses        4031     3977      -54     
Impacted Files Coverage Δ
core/test/mpi/base/bindings.cpp 100.00% <100.00%> (ø)
include/ginkgo/core/base/mpi.hpp 91.66% <100.00%> (ø)
reference/base/index_set_kernels.cpp 94.11% <0.00%> (-0.09%) ⬇️
test/mpi/solver/solver.cpp 94.32% <0.00%> (+0.02%) ⬆️
omp/reorder/rcm_kernels.cpp 98.13% <0.00%> (+0.60%) ⬆️
devices/machine_topology.cpp 82.71% <0.00%> (+4.93%) ⬆️
core/test/mpi/gtest/mpi_listener.cpp 93.28% <0.00%> (+37.55%) ⬆️


Member

@yhmtsai yhmtsai left a comment

LGTM for this PR. I think we should have another test for flush: put a value with lock/unlock (passive target), like CanPutValuesWithExclusiveLock.
With only an unlock/flush on the source, we cannot tell at which point (which line of code) the data has actually arrived in the target memory, right?
We can only be sure of it once the window is freed (or with some other active synchronization), because then all processes wait for each other, so the source process finishes its RMA and we can check the data afterwards.
Is my understanding correct?
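
To illustrate the active-synchronization case mentioned above, a minimal raw-MPI sketch (not from this PR; window and buffer setup assumed): with a collective fence, every rank knows the data has arrived once the closing fence returns.

```cpp
#include <mpi.h>

// Active target synchronization sketch: rank 0 puts `count` doubles into the
// window of rank 1. The second, collective fence closes the epoch, so after
// it returns the data is guaranteed to be in the target memory.
void put_with_active_sync(double* send_buf, int count, int my_rank, MPI_Win win)
{
    MPI_Win_fence(0, win);  // open an access/exposure epoch on all ranks
    if (my_rank == 0) {
        MPI_Put(send_buf, count, MPI_DOUBLE, /*target_rank=*/1,
                /*target_disp=*/0, count, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);  // close the epoch: the put is complete everywhere
}
```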

@@ -227,7 +229,7 @@ TYPED_TEST(MpiBindings, CanPutValuesWithExclusiveLock)
         if (rank != my_rank) {
             win.lock(rank, window::lock_type::exclusive);
             win.put(this->ref, data.data(), 4, rank, 0, 4);
-            win.flush(0);
+            win.flush(rank);
             win.unlock(rank);

Member

I think unlock also completes the operation?

Member Author

@pratikvn pratikvn Dec 31, 2022

No, unlock does not complete the operation; it just denotes the end of the communication epoch. With a passive target, a flush or flush_all is necessary to enforce completion of the outstanding operations. The operations might complete without the flush, but using a flush guarantees completion.
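
As a plain-MPI illustration of this point (a sketch only, assuming the window and buffers are already set up; the Ginkgo wrapper calls in the diff above map onto these functions):

```cpp
#include <mpi.h>

// Passive-target epoch sketch: put `count` doubles into `target_rank` and use
// a flush on the *target* rank to guarantee the operation has completed at
// the target before the epoch ends.
void put_with_flush(double* send_buf, int count, int target_rank, MPI_Win win)
{
    MPI_Win_lock(MPI_LOCK_EXCLUSIVE, target_rank, 0, win);
    MPI_Put(send_buf, count, MPI_DOUBLE, target_rank, /*target_disp=*/0,
            count, MPI_DOUBLE, win);
    MPI_Win_flush(target_rank, win);  // complete the put at origin and target
    MPI_Win_unlock(target_rank, win); // end the access epoch
}
```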

Member

According to https://www.mpi-forum.org/docs/mpi-3.1/mpi31-report/node289.htm, doesn't unlock also guarantee that?
Rule 1 ensures an RMA operation can be completed with UNLOCK at the origin,
and rule 3 says that when the RMA operation is completed at the origin by the UNLOCK call, it is also completed at the target by the same call.

Member Author

Yes, I think that is valid for a single operation, but maybe not for multiple operations? Also, to be honest, I am not sure all implementations handle this properly; the previous bug was partly caused by this issue as well. So a flush is usually a good idea to ensure synchronization.

Member

Since we're only testing the wrapper, IMO it's fine to be overly cautious with the semantics, so I would also leave it in.

include/ginkgo/core/base/mpi.hpp (resolved review thread)
Comment on lines +167 to 168
win.flush_all();
win.unlock_all();

Member

Same here, unlock seems to be enough.
BTW, we do not have an individual test for flush.

Member Author

Same as above. I am not sure what you mean by a separate test for flush. With passive-target RMA operations, you need to create an epoch and perform the communication and synchronization operations inside it, so it is difficult to split this into separate tests for epoch creation and for synchronization. That is why there is one test with flush and one test with flush_all.
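
For completeness, a minimal raw-MPI sketch of the flush_all variant (same assumptions as above, not Ginkgo code), where a single epoch covers all ranks:

```cpp
#include <mpi.h>

// Shared passive-target epoch sketch: lock all ranks at once, issue a put to
// every other rank, then flush_all completes all outstanding operations
// before unlock_all ends the epoch. Each origin writes its own slot
// (disp = my_rank * count) so accesses do not conflict; the window is assumed
// to hold at least num_ranks * count doubles.
void put_to_all_with_flush_all(double* send_buf, int count, int num_ranks,
                               int my_rank, MPI_Win win)
{
    MPI_Win_lock_all(0, win);
    for (int rank = 0; rank < num_ranks; ++rank) {
        if (rank != my_rank) {
            MPI_Put(send_buf, count, MPI_DOUBLE, rank,
                    /*target_disp=*/(MPI_Aint)my_rank * count,
                    count, MPI_DOUBLE, win);
        }
    }
    MPI_Win_flush_all(win);   // complete all puts at their targets
    MPI_Win_unlock_all(win);  // end the shared epoch
}
```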

Member

@upsj upsj left a comment

LGTM, thanks for fixing this!

@yhmtsai yhmtsai added 1:ST:ready-to-merge This PR is ready to merge. and removed 1:ST:ready-for-review This PR is ready for review labels Jan 3, 2023
@pratikvn pratikvn merged commit d1b52fe into develop Jan 9, 2023
@pratikvn pratikvn deleted the mpi-win-fixes branch January 9, 2023 11:19
@sonarcloud

sonarcloud bot commented Jan 20, 2023

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 0 (rating A)

No Coverage information
No Duplication information

tcojean added a commit that referenced this pull request Jun 16, 2023
Release 1.6.0 of Ginkgo.

The Ginkgo team is proud to announce the new Ginkgo minor release 1.6.0. This release brings new features such as:
- Several building blocks for GPU-resident sparse direct solvers like symbolic
  and numerical LU and Cholesky factorization, ...,
- A distributed Schwarz preconditioner,
- New FGMRES and GCR solvers,
- Distributed benchmarks for the SpMV operation, solvers, ...
- Support for non-default streams in the CUDA and HIP backends,
- Mixed precision support for the CSR SpMV,
- A new profiling logger which integrates with NVTX, ROCTX, TAU and VTune to
  provide internal Ginkgo knowledge to most HPC profilers!

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple Clang: 14.0 is tested. Earlier versions might also work.
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.5+
  + DPC++ module: Intel OneAPI 2021.3+ with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW: GCC 5.5+
  + Microsoft Visual Studio: VS 2019+
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW.

### Version Support Changes
+ ROCm 4.0+ -> 4.5+ after [#1303](#1303)
+ Removed Cygwin pipeline and support [#1283](#1283)

### Interface Changes
+ Due to internal changes, `ConcreteExecutor::run` will now always throw if the corresponding module for the `ConcreteExecutor` is not built [#1234](#1234)
+ The constructor of `experimental::distributed::Vector` was changed to only accept local vectors as `std::unique_ptr` [#1284](#1284)
+ The default parameters for the `solver::MultiGrid` were improved. In particular, the smoother defaults to one iteration of `Ir` with `Jacobi` preconditioner, and the coarse grid solver uses the new direct solver with LU factorization. [#1291](#1291) [#1327](#1327)
+ The `iteration_complete` event gained a more expressive overload with additional parameters, the old overloads were deprecated. [#1288](#1288) [#1327](#1327)

### Deprecations
+ Deprecated less expressive `iteration_complete` event. Users are advised to now implement the function `void iteration_complete(const LinOp* solver, const LinOp* b, const LinOp* x, const size_type& it, const LinOp* r, const LinOp* tau, const LinOp* implicit_tau_sq, const array<stopping_status>* status, bool stopped)` [#1288](#1288)

### Added Features
+ A distributed Schwarz preconditioner. [#1248](#1248)
+ A GCR solver [#1239](#1239)
+ Flexible Gmres solver [#1244](#1244)
+ Enable Gmres solver for distributed matrices and vectors [#1201](#1201)
+ An example that uses Kokkos to assemble the system matrix [#1216](#1216)
+ A symbolic LU factorization allowing the `gko::experimental::factorization::Lu` and `gko::experimental::solver::Direct` classes to be used for matrices with non-symmetric sparsity pattern [#1210](#1210)
+ A numerical Cholesky factorization [#1215](#1215)
+ Symbolic factorizations in host-side operations are now wrapped in a host-side `Operation` to make their execution visible to loggers. This means that profiling loggers and benchmarks are no longer missing a separate entry for their runtime [#1232](#1232)
+ Symbolic factorization benchmark [#1302](#1302)
+ The `ProfilerHook` logger allows annotating the Ginkgo execution (apply, operations, ...) for profiling frameworks like NVTX, ROCTX and TAU. [#1055](#1055)
+ `ProfilerHook::created_(nested_)summary` allows the generation of a lightweight runtime profile over all Ginkgo functions written to a user-defined stream [#1270](#1270) for both host and device timing functionality [#1313](#1313)
+ It is now possible to enable host buffers for MPI communications at runtime even if the compile option `GINKGO_FORCE_GPU_AWARE_MPI` is set. [#1228](#1228)
+ A stencil matrices generator (5-pt, 7-pt, 9-pt, and 27-pt) for benchmarks [#1204](#1204)
+ Distributed benchmarks (multi-vector blas, SpMV, solver) [#1204](#1204)
+ Benchmarks for CSR sorting and lookup [#1219](#1219)
+ A timer for MPI benchmarks that reports the longest time [#1217](#1217)
+ A `timer_method=min|max|average|median` flag for benchmark timing summary [#1294](#1294)
+ Support for non-default streams in CUDA and HIP executors [#1236](#1236)
+ METIS integration for nested dissection reordering [#1296](#1296)
+ SuiteSparse AMD integration for fillin-reducing reordering [#1328](#1328)
+ Csr mixed-precision SpMV support [#1319](#1319)
+ A `with_loggers` function for all `Factory` parameters [#1337](#1337)

### Improvements
+ Improve naming of kernel operations for loggers [#1277](#1277)
+ Annotate solver iterations in `ProfilerHook` [#1290](#1290)
+ Allow using the profiler hooks and inline input strings in benchmarks [#1342](#1342)
+ Allow passing smart pointers in place of raw pointers to most matrix functions. This means that things like `vec->compute_norm2(x.get())` or `vec->compute_norm2(lend(x))` can be simplified to `vec->compute_norm2(x)` [#1279](#1279) [#1261](#1261)
+ Catch overflows in prefix sum operations, which makes Ginkgo's operations much less likely to crash. This also improves the performance of the prefix sum kernel [#1303](#1303)
+ Make the installed GinkgoConfig.cmake file relocatable and follow more best practices [#1325](#1325)

### Fixes
+ Fix OpenMPI version check [#1200](#1200)
+ Fix the MPI C++ type binding by using the C binding [#1306](#1306)
+ Fix runtime failures for one-sided MPI wrapper functions observed on some OpenMPI versions [#1249](#1249)
+ Disable thread pinning with GPU executors due to poor performance [#1230](#1230)
+ Fix hwloc version detection [#1266](#1266)
+ Fix PAPI detection in non-implicit include directories [#1268](#1268)
+ Fix PAPI support for newer PAPI versions [#1321](#1321)
+ Fix pkg-config file generation for library paths outside prefix [#1271](#1271)
+ Fix various build failures with ROCm 5.4, CUDA 12, and OneAPI 6 [#1214](#1214), [#1235](#1235), [#1251](#1251)
+ Fix incorrect read for skew-symmetric MatrixMarket files with explicit diagonal entries [#1272](#1272)
+ Fix handling of missing diagonal entries in symbolic factorizations [#1263](#1263)
+ Fix segmentation fault in benchmark matrix construction [#1299](#1299)
+ Fix the stencil matrix creation for benchmarking [#1305](#1305)
+ Fix the additional residual check in IR [#1307](#1307)
+ Fix the cuSPARSE CSR SpMM issue for single strided vectors with CUDA >= 11.6 [#1322](#1322) [#1331](#1331)
+ Fix Isai generation for large sparsity powers [#1327](#1327)
+ Fix Ginkgo compilation and test with NVHPC >= 22.7 [#1331](#1331)
+ Fix Ginkgo compilation of 32 bit binaries with MSVC [#1349](#1349)

Development

Successfully merging this pull request may close these issues.

make test fails for tests 128 (MPI) and 195 (numerical discrepancy)