
Extend iteration complete event #1288

Merged
merged 19 commits into develop from extend-iteration-complete on Apr 25, 2023

Conversation

@MarcelKoch commented Feb 22, 2023 (Member)

This PR extends the iteration_complete event to also capture the stopping status at the end of the iteration. More specifically, it adds the two parameters const gko::array<stopping_status>* status and bool all_stopped to the function. The other two overloads are marked as deprecated and just call the new overload. All of our loggers support the new overload.
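A minimal sketch of a logger using the extended overload (the class name StatusLogger is illustrative; the parameter names follow the deprecation note in the release notes below, and the on_<event> hook convention is assumed rather than quoted verbatim):

```cpp
#include <iostream>
#include <ginkgo/ginkgo.hpp>

// Hypothetical logger that only listens to iteration_complete and inspects
// the new status/all_stopped parameters added by this PR.
class StatusLogger : public gko::log::Logger {
public:
    StatusLogger()
        : gko::log::Logger(gko::log::Logger::iteration_complete_mask)
    {}

    void on_iteration_complete(
        const gko::LinOp* solver, const gko::LinOp* b, const gko::LinOp* x,
        const gko::size_type& it, const gko::LinOp* r, const gko::LinOp* tau,
        const gko::LinOp* implicit_tau_sq,
        const gko::array<gko::stopping_status>* status,
        bool all_stopped) const override
    {
        // status holds one stopping_status per right-hand side;
        // all_stopped is true once every right-hand side has stopped.
        std::cout << "iteration " << it
                  << (all_stopped ? " (all stopped)" : " (running)") << '\n';
    }
};
```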

The big difference from our current state is that this PR allows attaching the Convergence logger directly to the solver. IMO, this is much more user-friendly. I changed the examples to reflect this.
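For illustration, a sketch of the new usage (assuming the post-PR behavior; solver, b, and x stand in for an already set-up Ginkgo solver and dense vectors):

```cpp
// Attach the Convergence logger to the solver itself, instead of to the
// stopping criterion as before this PR.
auto logger = gko::share(gko::log::Convergence<double>::create());
solver->add_logger(logger);
solver->apply(b, x);
std::cout << "converged: " << std::boolalpha << logger->has_converged()
          << ", iterations: " << logger->get_num_iterations() << '\n';
```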

Evaluate:

  • rename event to iteration_completed to fit in the naming scheme
  • capture the right-hand-side in the event. This would allow reconstructing the residual if necessary (e.g. multigrid)
  • perhaps only the gko::array<stopping_status>* status is enough; I'm not sure whether all_stopped adds information beyond it

@MarcelKoch MarcelKoch added the 1:ST:ready-for-review This PR is ready for review label Feb 22, 2023
@MarcelKoch MarcelKoch requested review from a team February 22, 2023 15:24
@MarcelKoch MarcelKoch self-assigned this Feb 22, 2023
@MarcelKoch MarcelKoch added this to In progress in Release 1.6.0 via automation Feb 22, 2023
@ginkgo-bot ginkgo-bot added reg:testing This is related to testing. mod:core This is related to the core module. reg:example This is related to the examples. reg:benchmarking This is related to benchmarking. type:solver This is related to the solvers labels Feb 22, 2023
@upsj commented Feb 22, 2023 (Member)

If we're already touching the iteration events, would it be much more effort to add a true iteration_started/completed pair like we discussed in the context of profiler hooks? Also, the residual can theoretically be queried based on the workspace.

@MarcelKoch (Member Author)

You're right, that fits the scope of this PR. So I will work on that.

@MarcelKoch (Member Author)

@upsj how helpful do you think an iteration_started event could be? Thinking about it, except for the very first iteration, it would get exactly the same data as the iteration_complete event. Literally nothing happens between an iteration_complete and an iteration_started event.
So, for me, the only use would be in the profiler logger.

@upsj commented Feb 23, 2023 (Member)

@MarcelKoch yes, that is a valid point - we could also consider adding "markers", which are at least supported by NVTX and maybe VTune, to annotate the iterations without an associated range. In that case, I would rename it to iteration instead of iteration_complete/... in the long term.

@MarcelKoch (Member Author)

I mean in principle you could also get a profiler range just from the iteration_complete, but of course without the first iteration.

@upsj commented Feb 23, 2023 (Member)

yes, that would require additional logic for dealing with iterative solvers specifically, so I'm not 100% happy about it. We could maybe detect whether the solver has a stopping criterion, and based on that begin and end the first/last iteration range with apply

@MarcelKoch (Member Author)

I don't think this event would/should be called from anything else than an iterative solver, or maybe I misunderstood you there. But of course you're right, the additional logic might not be worth it in the end.

@upsj commented Feb 23, 2023 (Member)

It's simpler than I thought, see #1290 :)

@yhmtsai left a comment (Member)

IR will not compute the residual in the final iteration if it stops due to the maximum iteration count.

core/log/convergence.cpp (resolved)
core/log/record.cpp (resolved)
core/log/stream.cpp (resolved)
Comment on lines +218 to +219
// Calculate residual explicitly, because the residual is not
// available inside of the multigrid solver
Member

If you add the residual norm check, it should be available.

Member Author

But that is normally not available in the iteration_complete event. I may try to get it from the stopping criterion.

Member

Shouldn't this work now? Multigrid has SolverBase and can at least fall back to the path using b and x?

@yhmtsai left a comment (Member)

Should the convergence check be applied to the solver?
AFAIR, this has been mentioned several times before, but we declined it because it does not fit the logger concept well.
Maybe @tcojean can comment.

core/log/stream.cpp (resolved)
core/log/stream.cpp (resolved)
include/ginkgo/core/log/logger.hpp (resolved)
include/ginkgo/core/log/record.hpp (resolved)
@@ -378,7 +378,6 @@ class EnablePreconditionable : public Preconditionable {
* @ingroup solver
* @ingroup LinOp
*/
template <typename MatrixType = LinOp>
Member

It's a public interface break, and we use it for LowerTrs at least.

Member Author

It breaks the interface, yes, but IMO that interface was pretty useless, and I've fixed the LowerTrs use case. So for us the change is fine, since EnableSolverBase has the concrete LinOp type.

Member

Does it introduce any bug? If not, we should keep the interface.

Member Author

With the current interface, it is not possible to get the stored system matrix from a solver passed in as a LinOp: you would need to try casting the LinOp to SolverBase<T> for all possible types T. And for a custom solver with a custom user matrix type, it would be completely impossible to get the operator.
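To illustrate the problem, a sketch assuming the pre-PR SolverBase<MatrixType>::get_system_matrix() interface (the enumerated matrix types and the helper name try_get_system_matrix are arbitrary examples):

```cpp
#include <ginkgo/ginkgo.hpp>

// Given only a LinOp*, recovering the system matrix means guessing the
// template parameter of SolverBase one type at a time.
const gko::LinOp* try_get_system_matrix(const gko::LinOp* solver)
{
    using Csr = gko::matrix::Csr<double>;
    using Dense = gko::matrix::Dense<double>;
    if (auto s = dynamic_cast<const gko::solver::SolverBase<Csr>*>(solver)) {
        return s->get_system_matrix().get();
    }
    if (auto s = dynamic_cast<const gko::solver::SolverBase<Dense>*>(solver)) {
        return s->get_system_matrix().get();
    }
    // ... one branch per possible MatrixType; hopeless for solvers using a
    // custom user-defined matrix type.
    return nullptr;
}
```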

Member

Without breaking the interface, you could make SolverBase<MatrixType> inherit from a SolverLinOpBase, where SolverLinOpBase provides the corresponding type-erased interface (maybe you need a _linop suffix).
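A hypothetical sketch of that non-breaking layout (the name SolverLinOpBase and the _linop suffix come from the suggestion above, not from the final code):

```cpp
#include <memory>
#include <ginkgo/ginkgo.hpp>

// Type-erased base: usable through a plain LinOp* without knowing MatrixType.
class SolverLinOpBase {
public:
    virtual ~SolverLinOpBase() = default;
    virtual std::shared_ptr<const gko::LinOp> get_system_matrix_linop()
        const = 0;
};

// The existing templated interface stays intact and only gains a base class.
template <typename MatrixType = gko::LinOp>
class SolverBase : public SolverLinOpBase {
public:
    std::shared_ptr<const MatrixType> get_system_matrix() const
    {
        return system_matrix_;
    }

    std::shared_ptr<const gko::LinOp> get_system_matrix_linop() const override
    {
        return system_matrix_;
    }

protected:
    std::shared_ptr<const MatrixType> system_matrix_;
};
```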

Member Author

I can do that, but it is also not ideal. In that case, I would deprecate SolverBase<...>, because it no longer serves a purpose: the base logic (e.g. the workspace) is handled in SolverBaseLinOp, while the concrete matrix type is known in EnableSolverBase. But we would still need to derive from SolverBase<...> in EnableSolverBase to keep the interface stability you propose, which results in a lot of deprecation warnings.
IMO, breaking the interface here is the cleaner approach. I doubt that this part of our interface has been used anyway, because of the issue I mentioned, and users should not derive directly from SolverBase.

Member

You do not need to deprecate SolverBase<> because it's still in use.
It's a public interface outside of any detail/internal namespace, so I cannot say it is not used (unless we change the definition).
If it is a bug, we sadly must fix it by breaking the interface. Apart from bugs, we should keep the public interface as much as possible, although that introduces much pain.
In the current changelog, we only change behavior (multigrid, executor) and experimental features (distributed), not interfaces.

Member Author

IMO SolverBase<...> needs to be deprecated, because it should eventually be removed. We could also remove it without deprecation, but I find that would cause even more issues on the user side.
I know that it might be used, but as I argued above, I find that extremely unlikely.

Member Author

I've now kept the old interface and deprecated it. To reduce clutter, I disabled the corresponding warning for the header.

include/ginkgo/core/log/stream.hpp (resolved)
@MarcelKoch (Member Author)

> should the convergence check be applied to the solver?

Could you expand on what you mean here? Do you mean the convergence logger?

@yhmtsai commented Mar 21, 2023 (Member)

@MarcelKoch I sent it to you in a private chat.
After reading the discussion, it seems we declined two approaches:

  1. recording num_iterations directly as a solver member -> problematic because the solver may be called from multiple threads and would need a mutable member,
  2. propagating the logger to the stopping criterion.

This PR avoids both of those concerns. The open question is whether the logger should record data that is not part of the logged object.
For example, the solver might only need the residual, not the residual norm.
Should the Convergence logger still record the norm? The residual norm is required by the stopping criterion, not the solver.

@yhmtsai left a comment (Member)

LGTM, if it does not run into the issue raised above.

Comment on lines 117 to 119
} else if (dynamic_cast<const solver::SolverBase*>(solver) &&
b != nullptr && x != nullptr) {
Member

I think we will also need an IterationLogger for getting only the iteration count.
Convergence will compute the residual norm (with an additional residual computation in some cases).

Member Author

I don't think this is relevant to this PR. The convergence logger should compute the residual norm; that is the point of it. I added this case because sometimes the residual is not available (e.g. in MG, or at the first check in IR). Now the logger should always be able to compute the residual.
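A sketch of that explicit fallback, assuming b and x are dense vectors with a single right-hand side and A is the system matrix taken from the solver (the helper name explicit_residual_norm is hypothetical):

```cpp
#include <memory>
#include <ginkgo/ginkgo.hpp>

// Recompute ||b - A x||_2 when the solver did not log a residual.
std::unique_ptr<gko::matrix::Dense<double>> explicit_residual_norm(
    std::shared_ptr<const gko::LinOp> A,
    const gko::matrix::Dense<double>* b, const gko::matrix::Dense<double>* x)
{
    using vec = gko::matrix::Dense<double>;
    auto exec = A->get_executor();
    auto one = gko::initialize<vec>({1.0}, exec);
    auto neg_one = gko::initialize<vec>({-1.0}, exec);
    auto residual = gko::clone(b);         // residual = b
    A->apply(neg_one, x, one, residual);   // residual = b - A x
    auto norm = gko::initialize<vec>({0.0}, exec->get_master());
    residual->compute_norm2(norm);
    return norm;
}
```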

Member

I mean the IterationLogger should be in another PR.
If the user just needs the iteration count, the Convergence logger does more work than the user needs.

@upsj upsj self-requested a review April 6, 2023 15:50
@MarcelKoch (Member Author)

format-rebase!

@ginkgo-bot (Member)

Error: Rebase failed, see the related Action for details

@MarcelKoch (Member Author)

format!

@MarcelKoch MarcelKoch added 1:ST:ready-to-merge This PR is ready to merge. and removed 1:ST:ready-for-review This PR is ready for review labels Apr 22, 2023
@ginkgo-bot (Member)

Note: This PR changes the Ginkgo ABI:

Functions changes summary: 346 Removed, 321 Changed (58927 filtered out), 253 Added functions
Variables changes summary: 0 Removed, 0 Changed, 0 Added variable

For details check the full ABI diff under Artifacts here

@MarcelKoch MarcelKoch merged commit f4fac6a into develop Apr 25, 2023
Release 1.6.0 automation moved this from Reviewer approved to Done Apr 25, 2023
@MarcelKoch MarcelKoch deleted the extend-iteration-complete branch April 25, 2023 06:49
tcojean added a commit that referenced this pull request Jun 16, 2023
Release 1.6.0 of Ginkgo.

The Ginkgo team is proud to announce the new Ginkgo minor release 1.6.0. This release brings new features such as:
- Several building blocks for GPU-resident sparse direct solvers like symbolic
  and numerical LU and Cholesky factorization, ...,
- A distributed Schwarz preconditioner,
- New FGMRES and GCR solvers,
- Distributed benchmarks for the SpMV operation, solvers, ...
- Support for non-default streams in the CUDA and HIP backends,
- Mixed precision support for the CSR SpMV,
- A new profiling logger which integrates with NVTX, ROCTX, TAU and VTune to
  provide internal Ginkgo knowledge to most HPC profilers!

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple Clang: 14.0 is tested. Earlier versions might also work.
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.5+
  + DPC++ module: Intel OneAPI 2021.3+ with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW: GCC 5.5+
  + Microsoft Visual Studio: VS 2019+
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW.

### Version Support Changes
+ ROCm 4.0+ -> 4.5+ after [#1303](#1303)
+ Removed Cygwin pipeline and support [#1283](#1283)

### Interface Changes
+ Due to internal changes, `ConcreteExecutor::run` will now always throw if the corresponding module for the `ConcreteExecutor` is not built [#1234](#1234)
+ The constructor of `experimental::distributed::Vector` was changed to only accept local vectors as `std::unique_ptr` [#1284](#1284)
+ The default parameters for the `solver::MultiGrid` were improved. In particular, the smoother defaults to one iteration of `Ir` with `Jacobi` preconditioner, and the coarse grid solver uses the new direct solver with LU factorization. [#1291](#1291) [#1327](#1327)
+ The `iteration_complete` event gained a more expressive overload with additional parameters, the old overloads were deprecated. [#1288](#1288) [#1327](#1327)

### Deprecations
+ Deprecated less expressive `iteration_complete` event. Users are advised to now implement the function `void iteration_complete(const LinOp* solver, const LinOp* b, const LinOp* x, const size_type& it, const LinOp* r, const LinOp* tau, const LinOp* implicit_tau_sq, const array<stopping_status>* status, bool stopped)` [#1288](#1288)

### Added Features
+ A distributed Schwarz preconditioner. [#1248](#1248)
+ A GCR solver [#1239](#1239)
+ Flexible Gmres solver [#1244](#1244)
+ Enable Gmres solver for distributed matrices and vectors [#1201](#1201)
+ An example that uses Kokkos to assemble the system matrix [#1216](#1216)
+ A symbolic LU factorization allowing the `gko::experimental::factorization::Lu` and `gko::experimental::solver::Direct` classes to be used for matrices with non-symmetric sparsity pattern [#1210](#1210)
+ A numerical Cholesky factorization [#1215](#1215)
+ Symbolic factorizations in host-side operations are now wrapped in a host-side `Operation` to make their execution visible to loggers. This means that profiling loggers and benchmarks are no longer missing a separate entry for their runtime [#1232](#1232)
+ Symbolic factorization benchmark [#1302](#1302)
+ The `ProfilerHook` logger allows annotating the Ginkgo execution (apply, operations, ...) for profiling frameworks like NVTX, ROCTX and TAU. [#1055](#1055)
+ `ProfilerHook::created_(nested_)summary` allows the generation of a lightweight runtime profile over all Ginkgo functions written to a user-defined stream [#1270](#1270) for both host and device timing functionality [#1313](#1313)
+ It is now possible to enable host buffers for MPI communications at runtime even if the compile option `GINKGO_FORCE_GPU_AWARE_MPI` is set. [#1228](#1228)
+ A stencil matrices generator (5-pt, 7-pt, 9-pt, and 27-pt) for benchmarks [#1204](#1204)
+ Distributed benchmarks (multi-vector blas, SpMV, solver) [#1204](#1204)
+ Benchmarks for CSR sorting and lookup [#1219](#1219)
+ A timer for MPI benchmarks that reports the longest time [#1217](#1217)
+ A `timer_method=min|max|average|median` flag for benchmark timing summary [#1294](#1294)
+ Support for non-default streams in CUDA and HIP executors [#1236](#1236)
+ METIS integration for nested dissection reordering [#1296](#1296)
+ SuiteSparse AMD integration for fillin-reducing reordering [#1328](#1328)
+ Csr mixed-precision SpMV support [#1319](#1319)
+ A `with_loggers` function for all `Factory` parameters [#1337](#1337)

### Improvements
+ Improve naming of kernel operations for loggers [#1277](#1277)
+ Annotate solver iterations in `ProfilerHook` [#1290](#1290)
+ Allow using the profiler hooks and inline input strings in benchmarks [#1342](#1342)
+ Allow passing smart pointers in place of raw pointers to most matrix functions. This means that things like `vec->compute_norm2(x.get())` or `vec->compute_norm2(lend(x))` can be simplified to `vec->compute_norm2(x)` [#1279](#1279) [#1261](#1261)
+ Catch overflows in prefix sum operations, which makes Ginkgo's operations much less likely to crash. This also improves the performance of the prefix sum kernel [#1303](#1303)
+ Make the installed GinkgoConfig.cmake file relocatable and follow more best practices [#1325](#1325)

### Fixes
+ Fix OpenMPI version check [#1200](#1200)
+ Fix the mpi cxx type binding by c binding [#1306](#1306)
+ Fix runtime failures for one-sided MPI wrapper functions observed on some OpenMPI versions [#1249](#1249)
+ Disable thread pinning with GPU executors due to poor performance [#1230](#1230)
+ Fix hwloc version detection [#1266](#1266)
+ Fix PAPI detection in non-implicit include directories [#1268](#1268)
+ Fix PAPI support for newer PAPI versions: [#1321](#1321)
+ Fix pkg-config file generation for library paths outside prefix [#1271](#1271)
+ Fix various build failures with ROCm 5.4, CUDA 12, and OneAPI 6 [#1214](#1214), [#1235](#1235), [#1251](#1251)
+ Fix incorrect read for skew-symmetric MatrixMarket files with explicit diagonal entries [#1272](#1272)
+ Fix handling of missing diagonal entries in symbolic factorizations [#1263](#1263)
+ Fix segmentation fault in benchmark matrix construction [#1299](#1299)
+ Fix the stencil matrix creation for benchmarking [#1305](#1305)
+ Fix the additional residual check in IR [#1307](#1307)
+ Fix the cuSPARSE CSR SpMM issue on single strided vector when cuda >= 11.6 [#1322](#1322) [#1331](#1331)
+ Fix Isai generation for large sparsity powers [#1327](#1327)
+ Fix Ginkgo compilation and test with NVHPC >= 22.7 [#1331](#1331)
+ Fix Ginkgo compilation of 32 bit binaries with MSVC [#1349](#1349)