
Add generic solver test #973

Merged: upsj merged 20 commits into develop from solver_test on Apr 21, 2022

Conversation

@upsj (Member) commented Feb 18, 2022

This adds a generic test that exercises the behavior of different solvers in different configurations on a small input matrix, and fixes a few issues found along the way.
The test will later be used in #753 to test the cross-executor behavior.
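
For illustration, a minimal sketch of what such a generic, typed solver test can look like; the fixture name, matrix values, iteration count, and the final check are illustrative assumptions, not the actual code added in this PR:

```cpp
#include <gtest/gtest.h>

#include <ginkgo/ginkgo.hpp>


template <typename SolverType>
class GenericSolver : public ::testing::Test {};

using SolverTypes = ::testing::Types<gko::solver::Cg<double>,
                                     gko::solver::Bicgstab<double>,
                                     gko::solver::Gmres<double>>;
TYPED_TEST_SUITE(GenericSolver, SolverTypes);


TYPED_TEST(GenericSolver, SolvesSmallSystem)
{
    auto exec = gko::ReferenceExecutor::create();
    // small test system A x = b with zero initial guess
    auto mtx = gko::share(gko::initialize<gko::matrix::Dense<double>>(
        {{4.0, 1.0, 0.0}, {1.0, 3.0, 1.0}, {0.0, 1.0, 2.0}}, exec));
    auto b = gko::initialize<gko::matrix::Dense<double>>({1.0, 2.0, 3.0}, exec);
    auto x = gko::initialize<gko::matrix::Dense<double>>({0.0, 0.0, 0.0}, exec);

    auto solver =
        TypeParam::build()
            .with_criteria(
                gko::stop::Iteration::build().with_max_iters(10u).on(exec))
            .on(exec)
            ->generate(mtx);
    solver->apply(b.get(), x.get());

    // a real test would compare x against a reference solution here
    EXPECT_NE(x->at(0, 0), 0.0);
}
```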

TODO

  • fix IDR issues with strided vectors

@upsj added the 1:ST:ready-for-review label Feb 18, 2022
@upsj added this to the Ginkgo 1.5.0 milestone Feb 18, 2022
@upsj requested a review from a team February 18, 2022 16:28
@upsj self-assigned this Feb 18, 2022
@ginkgo-bot added the labels mod:core, mod:cuda, mod:dpcpp, mod:hip, mod:openmp, reg:build, reg:testing, type:matrix-format, type:solver Feb 18, 2022
@MarcelKoch (Member) commented:

Perhaps you would want to add a test where the initial guess is already the solution. I think gmres and cb_gmres don't work correctly in that case. I have a patch for that, I just haven't gotten around to pushing it yet.
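
For reference, a rough sketch of such a test case; the matrix values and the solver configuration are made up for illustration and are not taken from the PR's test:

```cpp
auto exec = gko::ReferenceExecutor::create();
auto mtx = gko::share(gko::initialize<gko::matrix::Dense<double>>(
    {{2.0, 1.0}, {1.0, 2.0}}, exec));
auto x_exact = gko::initialize<gko::matrix::Dense<double>>({1.0, -1.0}, exec);
// construct b = A * x_exact, so x_exact solves the system exactly
auto b = gko::matrix::Dense<double>::create(exec, gko::dim<2>{2, 1});
mtx->apply(x_exact.get(), b.get());

auto solver =
    gko::solver::Gmres<double>::build()
        .with_criteria(
            gko::stop::Iteration::build().with_max_iters(5u).on(exec))
        .on(exec)
        ->generate(mtx);

// start from the exact solution; the solver should not break it
auto x = gko::clone(x_exact);
solver->apply(b.get(), x.get());
// expect x to still match x_exact up to a small tolerance, with no NaNs
```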

@upsj (Member Author) commented Feb 18, 2022

That's a great idea! I'll get to that right away.

@Slaedr (Contributor) left a comment

Nice job! I have a couple of suggestions below: one related to simplifying the use of matrix_utils, and the other related to testing restart (or subspace dimension) behavior.

core/test/utils/assertions.hpp (outdated, resolved)
Review context (matrix_utils helper signatures):

    template <typename ValueType>
    void make_symmetric(matrix::Dense<ValueType>* mtx)
    template <typename ValueType, typename IndexType>
@Slaedr (Contributor):

It's better to template all of these on the matrix type, I think, to make usage easier in tests. Right now, you have to create a matrix_data object first and then read it everywhere it's needed.

@upsj (Member Author):

The old one is really limited in usability - it only works on Dense matrices, so there will be an intermediate step every time you want to use another matrix type (in my case: solver tests are much faster with Csr than with Dense)

@Slaedr (Contributor):

I mean to say: keep your current implementation, but move the mtx->read(data) and mtx->write(data) inside the respective function, such as make_symmetric, and template them on the matrix type. Right now, there are always read and write calls surrounding your calls to these functions; you might as well move them inside.
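
For concreteness, a rough sketch of the suggested wrapper; the name and the data-based make_symmetric it delegates to are assumptions about the helper layout, not the actual matrix_utils code, and it assumes a matrix type that exposes value_type and index_type (e.g. Csr):

```cpp
template <typename MtxType>
void make_symmetric(MtxType* mtx)
{
    using value_type = typename MtxType::value_type;
    using index_type = typename MtxType::index_type;
    gko::matrix_data<value_type, index_type> data;
    mtx->write(data);      // matrix -> intermediate matrix_data
    make_symmetric(data);  // existing data-based implementation
    mtx->read(data);       // matrix_data -> matrix, in place
}
```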

@upsj (Member Author):

Where is the write happening here? If we took a matrix parameter, we would have generate_random_matrix_data -> read -> write -> make_* -> read; the data-based approach omits a read -> write pair.

To make things more easily navigable, I'll link the other comments here and resolve them: #973 (comment) #973 (comment) #973 (comment)

cuda/test/preconditioner/jacobi_kernels.cpp (resolved)
include/ginkgo/core/base/range.hpp (resolved)
omp/test/preconditioner/jacobi_kernels.cpp (resolved)
test/solver/solver.cpp (resolved)
Review context:

            Config::build_preconditioned(exec, 4).on(exec)->generate(
                mtx.dev)});
        }
    }
@Slaedr (Contributor):

You could also check for an is_restartable property and test with 0 restarts, 1 restart, and some normal restart count. This could also include the subspace dimension for IDR.

@upsj (Member Author):

I tried incorporating this by using two GMRES configurations: one that restarts within the 4 iterations we run, and one that doesn't. How should the test behave differently depending on whether is_restartable is available or not?
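
Roughly, the two configurations described above could look like the following; the Krylov dimensions and the 4-iteration budget are illustrative values, not necessarily the ones used in the test:

```cpp
// restarts within the 4-iteration budget (Krylov dimension < max_iters)
auto gmres_restarting =
    gko::solver::Gmres<double>::build()
        .with_krylov_dim(2u)
        .with_criteria(
            gko::stop::Iteration::build().with_max_iters(4u).on(exec))
        .on(exec);

// never restarts within the budget (Krylov dimension >= max_iters)
auto gmres_non_restarting =
    gko::solver::Gmres<double>::build()
        .with_krylov_dim(100u)
        .with_criteria(
            gko::stop::Iteration::build().with_max_iters(4u).on(exec))
        .on(exec);
```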

@Slaedr (Contributor):

Indeed, ideally there should be no difference. But it would be nice just as a check of how the restart parameter is treated, since it and the maximum iteration count are two separate variables in the implementation.

@MarcelKoch (Member) left a comment

I have just a few remarks on the new test itself. I will also check out the rest of this PR later.

I have a question regarding the tolerance used below. Besides that, we should perhaps add an iterative solver test where the solver runs until convergence. We could then check whether the residual actually reaches (nearly) the prescribed accuracy.
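
A hedged sketch of what such a convergence test could check, assuming exec, mtx, b, and x from the surrounding test setup; the tolerance value is made up:

```cpp
const double tol = 1e-9;
auto solver =
    gko::solver::Cg<double>::build()
        .with_criteria(gko::stop::ResidualNorm<double>::build()
                           .with_reduction_factor(tol)
                           .on(exec))
        .on(exec)
        ->generate(mtx);
solver->apply(b.get(), x.get());

// recompute the residual r = b - A x and compare its norm to the tolerance
auto one = gko::initialize<gko::matrix::Dense<double>>({1.0}, exec);
auto neg_one = gko::initialize<gko::matrix::Dense<double>>({-1.0}, exec);
auto res = gko::clone(b);
auto res_norm = gko::matrix::Dense<double>::create(exec, gko::dim<2>{1, 1});
mtx->apply(neg_one.get(), x.get(), one.get(), res.get());
res->compute_norm2(res_norm.get());
// expect res_norm to be (roughly) below tol times the initial residual norm
```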

test/solver/solver.cpp (outdated, resolved)
test/solver/solver.cpp (outdated, resolved)
test/solver/solver.cpp (resolved)
@MarcelKoch (Member) left a comment

Generally LGTM, but I would like to wait until the issue with the large tolerance is resolved.

Below are some smaller remarks, mainly about the num_rhs == 0 checking.

core/test/utils/matrix_utils_test.cpp (outdated, resolved)
core/test/utils/matrix_utils_test.cpp (outdated, resolved)
cuda/solver/cb_gmres_kernels.cu (resolved)
dpcpp/solver/gmres_kernels.dp.cpp (resolved)
include/ginkgo/core/base/range.hpp (resolved)
MarcelKoch added a commit that referenced this pull request Feb 28, 2022
the changes to matrix_utils.hpp should be merged with #973
MarcelKoch added a commit that referenced this pull request Mar 1, 2022
the changes to matrix_utils.hpp should be merged with #973
@MarcelKoch mentioned this pull request Mar 3, 2022 (1 task)
@upsj mentioned this pull request Mar 4, 2022 (27 tasks)
@upsj (Member Author) commented Mar 4, 2022

rebase!

@MarcelKoch (Member) commented:

I think we can't make progress with this PR until we have figured out the tolerance issue.

If we use float as the value type (as we need for dpcpp single mode), then the test is essentially useless. In that case, the tolerance for some solvers is ~1, so even if the code contains errors, the test will still succeed. I actually tested that with Fcg, where I introduced a bug, and the test succeeded (when using float). If we use double, we should catch the most egregious errors, but I think it still leaves a bad feeling.

I would suggest, instead of comparing the device result with the reference result after a fixed number of iterations, to just check whether the device solver converges to a specific tolerance. TBH, I think that is the only property worth checking.

@upsj (Member Author) commented Mar 9, 2022

I tend to agree; I will probably keep this blocked and use it only to test #753. With the changes suggested by Pratik in #805, we should then be able to make logging and comparing these results much easier. Note that the errors mainly pop up with a large number of rhs, so with such a small input matrix, it may well be that one of the vectors converges very quickly, leading to instabilities.

@MarcelKoch (Member) left a comment

I'm not holding this up anymore. The important issues have been raised, but I guess they will require more work than what would be fitting here.

I do have one request: could you disable the test for GINKGO_DPCPP_SINGLE_MODE? At least one test will give false positives, which I think we should prevent.
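
One possible way to skip such a test in-source, as a sketch only: the fixture and test names are hypothetical, and it assumes GINKGO_DPCPP_SINGLE_MODE is visible as a preprocessor definition. As the commit note below indicates, the tests ended up being disabled from the CMake side instead.

```cpp
TEST_F(Solver, MatchesReferenceAfterFixedIterations)
{
#if GINKGO_DPCPP_SINGLE_MODE
    // single precision tolerances are too loose to catch real errors here
    GTEST_SKIP() << "disabled for GINKGO_DPCPP_SINGLE_MODE";
#endif
    // ... run the device solver and compare against the reference result ...
}
```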

the tests are disabled from CMake's side anyways
needs explicit captures for `this` in lambdas
@sonarcloud (bot) commented Apr 21, 2022

SonarCloud Quality Gate failed.

  • 0 Bugs (A)
  • 0 Vulnerabilities (A)
  • 0 Security Hotspots (A)
  • 56 Code Smells (A)
  • 92.0% Coverage
  • 10.9% Duplication

@upsj upsj merged commit 7d8f86d into develop Apr 21, 2022
@upsj upsj deleted the solver_test branch April 21, 2022 09:26
MarcelKoch, greole, and fritzgoebel added further commits referencing this pull request between Apr 21 and Oct 31, 2022, each noting: the changes to matrix_utils.hpp should be merged with #973
tcojean added a commit that referenced this pull request Nov 12, 2022
Advertise release 1.5.0 and last changes

+ Add changelog,
+ Update third party libraries
+ A small fix to a CMake file

See PR: #1195

The Ginkgo team is proud to announce the new Ginkgo minor release 1.5.0. This release brings many important new features such as:
- MPI-based multi-node support for all matrix formats and most solvers;
- full DPC++/SYCL support,
- functionality and interface for GPU-resident sparse direct solvers,
- an interface for wrapping solvers with scaling and reordering applied,
- a new algebraic Multigrid solver/preconditioner,
- improved mixed-precision support,
- support for device matrix assembly,

and much more.

If you face an issue, please first check our [known issues page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues) and the [open issues list](https://github.com/ginkgo-project/ginkgo/issues) and if you do not find a solution, feel free to [open a new issue](https://github.com/ginkgo-project/ginkgo/issues/new/choose) or ask a question using the [github discussions](https://github.com/ginkgo-project/ginkgo/discussions).

Supported systems and requirements:
+ For all platforms, CMake 3.13+
+ C++14 compliant compiler
+ Linux and macOS
  + GCC: 5.5+
  + clang: 3.9+
  + Intel compiler: 2018+
  + Apple LLVM: 8.0+
  + NVHPC: 22.7+
  + Cray Compiler: 14.0.1+
  + CUDA module: CUDA 9.2+ or NVHPC 22.7+
  + HIP module: ROCm 4.0+
  + DPC++ module: Intel OneAPI 2021.3 with oneMKL and oneDPL. Set the CXX compiler to `dpcpp`.
+ Windows
  + MinGW and Cygwin: GCC 5.5+
  + Microsoft Visual Studio: VS 2019
  + CUDA module: CUDA 9.2+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


Algorithm and important feature additions:
+ Add MPI-based multi-node for all matrix formats and solvers (except GMRES and IDR). ([#676](#676), [#908](#908), [#909](#909), [#932](#932), [#951](#951), [#961](#961), [#971](#971), [#976](#976), [#985](#985), [#1007](#1007), [#1030](#1030), [#1054](#1054), [#1100](#1100), [#1148](#1148))
+ Porting the remaining algorithms (preconditioners like ISAI, Jacobi, Multigrid, ParILU(T) and ParIC(T)) to DPC++/SYCL, update to SYCL 2020, and improve support and performance ([#896](#896), [#924](#924), [#928](#928), [#929](#929), [#933](#933), [#943](#943), [#960](#960), [#1057](#1057), [#1110](#1110),  [#1142](#1142))
+ Add a Sparse Direct interface supporting GPU-resident numerical LU factorization, symbolic Cholesky factorization, improved triangular solvers, and more ([#957](#957), [#1058](#1058), [#1072](#1072), [#1082](#1082))
+ Add a ScaleReordered interface that can wrap solvers and automatically apply reorderings and scalings ([#1059](#1059))
+ Add a Multigrid solver and improve the aggregation based PGM coarsening scheme ([#542](#542), [#913](#913), [#980](#980), [#982](#982),  [#986](#986))
+ Add infrastructure for unified, lambda-based, backend agnostic, kernels and utilize it for some simple kernels ([#833](#833), [#910](#910), [#926](#926))
+ Merge different CUDA, HIP, DPC++ and OpenMP tests under a common interface ([#904](#904), [#973](#973), [#1044](#1044), [#1117](#1117))
+ Add a device_matrix_data type for device-side matrix assembly ([#886](#886), [#963](#963), [#965](#965))
+ Add support for mixed real/complex BLAS operations ([#864](#864))
+ Add a FFT LinOp for all but DPC++/SYCL ([#701](#701))
+ Add FBCSR support for NVIDIA and AMD GPUs and CPUs with OpenMP ([#775](#775))
+ Add CSR scaling ([#848](#848))
+ Add array::const_view and equivalent to create constant matrices from non-const data ([#890](#890))
+ Add a RowGatherer LinOp supporting mixed precision to gather dense matrix rows ([#901](#901))
+ Add mixed precision SparsityCsr SpMV support ([#970](#970))
+ Allow creating CSR submatrix including from (possibly discontinuous) index sets ([#885](#885), [#964](#964))
+ Add a scaled identity addition (M <- aI + bM) feature interface and impls for Csr and Dense ([#942](#942))


Deprecations and important changes:
+ Deprecate AmgxPgm in favor of the new Pgm name. ([#1149](#1149)).
+ Deprecate specialized residual norm classes in favor of a common `ResidualNorm` class ([#1101](#1101))
+ Deprecate CamelCase non-polymorphic types in favor of snake_case versions (like array, machine_topology, uninitialized_array, index_set) ([#1031](#1031), [#1052](#1052))
+ Bug fix: restrict gko::share to rvalue references (*possible interface break*) ([#1020](#1020))
+ Bug fix: when using cuSPARSE's triangular solvers, specifying the factory parameter `num_rhs` is now required when solving for more than one right-hand side, otherwise an exception is thrown ([#1184](#1184)).
+ Drop official support for old CUDA < 9.2 ([#887](#887))


Improved performance additions:
+ Reuse tmp storage in reductions in solvers and add a mutable workspace to all solvers ([#1013](#1013), [#1028](#1028))
+ Add HIP unsafe atomic option for AMD ([#1091](#1091))
+ Prefer vendor implementations for Dense dot, conj_dot and norm2 when available ([#967](#967)).
+ Tuned OpenMP SellP, COO, and ELL SpMV kernels for a small number of RHS ([#809](#809))


Fixes:
+ Fix various compilation warnings ([#1076](#1076), [#1183](#1183), [#1189](#1189))
+ Fix issues with hwloc-related tests ([#1074](#1074))
+ Fix include headers for GCC 12 ([#1071](#1071))
+ Fix for simple-solver-logging example ([#1066](#1066))
+ Fix for potential memory leak in Logger ([#1056](#1056))
+ Fix logging of mixin classes ([#1037](#1037))
+ Improve value semantics for LinOp types, like moved-from state in cross-executor copy/clones ([#753](#753))
+ Fix some matrix SpMV and conversion corner cases ([#905](#905), [#978](#978))
+ Fix uninitialized data ([#958](#958))
+ Fix CUDA version requirement for cusparseSpSM ([#953](#953))
+ Fix several issues within bash-script ([#1016](#1016))
+ Fixes for `NVHPC` compiler support ([#1194](#1194))


Other additions:
+ Simplify and properly name GMRES kernels ([#861](#861))
+ Improve pkg-config support for non-CMake libraries ([#923](#923), [#1109](#1109))
+ Improve gdb pretty printer ([#987](#987), [#1114](#1114))
+ Add a logger highlighting inefficient allocation and copy patterns ([#1035](#1035))
+ Improved and optimized test random matrix generation ([#954](#954), [#1032](#1032))
+ Better CSR strategy defaults ([#969](#969))
+ Add `move_from` to `PolymorphicObject` ([#997](#997))
+ Remove unnecessary device_guard usage ([#956](#956))
+ Improvements to the generic accessor for mixed-precision ([#727](#727))
+ Add a naive lower triangular solver implementation for CUDA ([#764](#764))
+ Add support for int64 indices from CUDA 11 onward with SpMV and SpGEMM ([#897](#897))
+ Add a L1 norm implementation ([#900](#900))
+ Add reduce_add for arrays ([#831](#831))
+ Add utility to simplify Dense View creation from an existing Dense vector ([#1136](#1136)).
+ Add a custom transpose implementation for Fbcsr and Csr transpose for unsupported vendor types ([#1123](#1123))
+ Make IDR random initialization deterministic ([#1116](#1116))
+ Move the algorithm choice for triangular solvers from Csr::strategy_type to a factory parameter ([#1088](#1088))
+ Update CUDA archCoresPerSM ([#1175](#1175))
+ Add kernels for Csr sparsity pattern lookup ([#994](#994))
+ Differentiate between structural and numerical zeros in Ell/Sellp ([#1027](#1027))
+ Add a binary IO format for matrix data ([#984](#984))
+ Add a tuple zip_iterator implementation ([#966](#966))
+ Simplify kernel stubs and declarations ([#888](#888))
+ Simplify GKO_REGISTER_OPERATION with lambdas ([#859](#859))
+ Simplify copy to device in tests and examples ([#863](#863))
+ More verbose output to array assertions ([#858](#858))
+ Allow parallel compilation for Jacobi kernels ([#871](#871))
+ Change clang-format pointer alignment to left ([#872](#872))
+ Various improvements and fixes to the benchmarking framework ([#750](#750), [#759](#759), [#870](#870), [#911](#911), [#1033](#1033), [#1137](#1137))
+ Various documentation improvements ([#892](#892), [#921](#921), [#950](#950), [#977](#977), [#1021](#1021), [#1068](#1068), [#1069](#1069), [#1080](#1080), [#1081](#1081), [#1108](#1108), [#1153](#1153), [#1154](#1154))
+ Various CI improvements ([#868](#868), [#874](#874), [#884](#884), [#889](#889), [#899](#899), [#903](#903),  [#922](#922), [#925](#925), [#930](#930), [#936](#936), [#937](#937), [#958](#958), [#882](#882), [#1011](#1011), [#1015](#1015), [#989](#989), [#1039](#1039), [#1042](#1042), [#1067](#1067), [#1073](#1073), [#1075](#1075), [#1083](#1083), [#1084](#1084), [#1085](#1085), [#1139](#1139), [#1178](#1178), [#1187](#1187))
tcojean pushed a commit that referenced this pull request Nov 12, 2022
the changes to matrix_utils.hpp should be merged with #973
Labels
1:ST:ready-to-merge, mod:core, mod:cuda, mod:dpcpp, mod:hip, mod:openmp, reg:build, reg:testing, type:matrix-format, type:solver