
Fixed slow COO SpMV for the OpenMP executor #340

Merged
thoasm merged 2 commits into develop from omp_coo_runtime_fix on Aug 29, 2019

Conversation

thoasm
Member

@thoasm thoasm commented Aug 29, 2019

Changed the order of the loops in the OpenMP COO SpMV, so the pragma is in the outer loop.

I ran some tests on my laptop (matrix: 5320 x 2310, 100 right hand sides), with 5 consecutive applies for each executor (not averaged; the time shown is the sum over all 5 passes):

Before this PR (current develop):
Duration ref [us]: 1464703
Duration omp [us]: 76439644

With this PR:
Duration ref [us]: 1421583
Duration omp [us]: 3172356

I was also able to run it on the CI system (OMP_NUM_THREADS=10) and got these results (same settings as on my laptop, listed above):

Before this PR:
Duration ref [us]: 3134603
Duration omp [us]: 70496128

With this PR:
Duration ref [us]: 3246514
Duration omp [us]: 1227014

Now, same test with OMP_NUM_THREADS=40:

Before this PR:
Duration ref [us]: 3132254
Duration omp [us]: 315196010

With this PR:
Duration ref [us]: 3129671
Duration omp [us]: 881428

So I would say we keep the pragma on the loop over the number of right hand sides.

Closes #339

Moved the `omp parallel for` to the outermost loop of the apply,
so it is parallelized over the matrix entries instead of over the number
of right hand sides for every single matrix entry.
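
For reference, a minimal standalone sketch of the loop structure this first commit introduced, assuming hypothetical names (coo_spmv_entries_outer, row_idxs, col_idxs, values) and a row-major dense layout rather than Ginkgo's actual kernel interface; the `omp parallel for` sits on the outermost loop over the stored COO entries:

// Sketch only: illustrative names, not the real Ginkgo kernel.
#include <cstddef>
#include <vector>

// c(row, j) += val * b(col, j) for every stored entry and every right hand side.
void coo_spmv_entries_outer(std::size_t num_stored_elements, std::size_t num_rhs,
                            const std::vector<int> &row_idxs,
                            const std::vector<int> &col_idxs,
                            const std::vector<double> &values,
                            const std::vector<double> &b,  // row-major, num_rhs columns
                            std::vector<double> &c)        // row-major, num_rhs columns
{
#pragma omp parallel for
    for (std::size_t i = 0; i < num_stored_elements; ++i) {
        for (std::size_t j = 0; j < num_rhs; ++j) {
            // Different entries i can share the same row, so this update races
            // on c - exactly the problem raised in the review below.
            c[row_idxs[i] * num_rhs + j] += values[i] * b[col_idxs[i] * num_rhs + j];
        }
    }
}
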
@thoasm thoasm self-assigned this Aug 29, 2019
@codecov

codecov bot commented Aug 29, 2019

Codecov Report

Merging #340 into develop will not change coverage.
The diff coverage is 100%.

Impacted file tree graph

@@           Coverage Diff            @@
##           develop     #340   +/-   ##
========================================
  Coverage    98.22%   98.22%           
========================================
  Files          237      237           
  Lines        17993    17993           
========================================
  Hits         17673    17673           
  Misses         320      320
| Impacted Files | Coverage Δ |
| --- | --- |
| omp/matrix/coo_kernels.cpp | 100% <100%> (ø) ⬆️ |

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 1600c6d...704e008.

Member

@yhmtsai yhmtsai left a comment

It leads to a race condition because c->at(coo_row[i], j) with different i can refer to the same entry.
Changing the for-loop order avoids this problem.

#pragma omp parallel for
for (size_type j = 0; j < num_cols; j++) {
    for (size_type i = 0; i < a->get_num_stored_elements(); i++) {
        ...
    }
}

If the previous implementation is slow due to issuing too many OpenMP constructs, this implementation should also be faster than the previous one.

@thoasm
Member Author

thoasm commented Aug 29, 2019

@yhmtsai You are correct, good catch.

Since the previous version contained race conditions, the order of the
for-loops was swapped (first over number of right hand sides, then
matrix values). For a single right hand side, it performs close to the
reference implementation, while for multiple right hand sides, it is
faster in most cases (at least in the ones I tested).
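
A rough sketch of the merged loop order, again with hypothetical names and a row-major dense layout rather than Ginkgo's actual types: the parallel loop now runs over the right hand sides, so each thread owns one column of c and no two threads write the same output entry:

// Sketch only: illustrative names, not the real Ginkgo kernel.
#include <cstddef>
#include <vector>

void coo_spmv_rhs_outer(std::size_t num_stored_elements, std::size_t num_rhs,
                        const std::vector<int> &row_idxs,
                        const std::vector<int> &col_idxs,
                        const std::vector<double> &values,
                        const std::vector<double> &b,  // row-major, num_rhs columns
                        std::vector<double> &c)        // row-major, num_rhs columns
{
    // Parallelizing over the right hand sides keeps each output column private to
    // one thread, at the cost of sweeping the COO arrays once per right hand side.
#pragma omp parallel for
    for (std::size_t j = 0; j < num_rhs; ++j) {
        for (std::size_t i = 0; i < num_stored_elements; ++i) {
            c[row_idxs[i] * num_rhs + j] += values[i] * b[col_idxs[i] * num_rhs + j];
        }
    }
}
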
@thoasm thoasm added type:matrix-format This is related to the Matrix formats mod:openmp This is related to the OpenMP module. labels Aug 29, 2019
@thoasm
Member Author

thoasm commented Aug 29, 2019

@yhmtsai I am currently testing the variant you proposed, but on my laptop, it is still slower than the reference version.
I think I might go without any pragmas in the end, depending on the result of the CI system.

@thoasm thoasm added the 1:ST:ready-for-review This PR is ready for review label Aug 29, 2019
@thoasm
Member Author

thoasm commented Aug 29, 2019

I did quite a lot of benchmarks (see the description), and I think the current version should be acceptable (at least on clusters). On my laptop, it is still slower.
It might be because the laptop has fewer memory channels, a higher CPU frequency, or something similar.

But I would go with the solution that works best on the CI system since it represents HPC clusters better than a laptop.

@thoasm thoasm requested a review from yhmtsai August 29, 2019 15:56
@thoasm thoasm merged commit 263a455 into develop Aug 29, 2019
@thoasm thoasm added 1:ST:ready-to-merge This PR is ready to merge. and removed 1:ST:ready-for-review This PR is ready for review labels Aug 29, 2019
@thoasm thoasm deleted the omp_coo_runtime_fix branch August 29, 2019 16:03
tcojean added a commit that referenced this pull request Oct 20, 2019
tcojean added a commit that referenced this pull request Oct 21, 2019
The Ginkgo team is proud to announce the new minor release of Ginkgo version
1.1.0. This release brings several performance improvements, adds Windows support,
adds support for factorizations inside Ginkgo and a new ILU preconditioner
based on the ParILU algorithm, among other things. For detailed information, check the respective issue.

Supported systems and requirements:
+ For all platforms, cmake 3.9+
+ Linux and MacOS
  + gcc: 5.3+, 6.3+, 7.3+, 8.1+
  + clang: 3.9+
  + Intel compiler: 2017+
  + Apple LLVM: 8.0+
  + CUDA module: CUDA 9.0+
+ Windows
  + MinGW and Cygwin: gcc 5.3+, 6.3+, 7.3+, 8.1+
  + Microsoft Visual Studio: VS 2017 15.7+
  + CUDA module: CUDA 9.0+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


The current known issues can be found in the [known issues
page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues).


### Additions
+ Upper and lower triangular solvers ([#327](#327), [#336](#336), [#341](#341), [#342](#342)) 
+ New factorization support in Ginkgo, and addition of the ParILU
  algorithm ([#305](#305), [#315](#315), [#319](#319), [#324](#324))
+ New ILU preconditioner ([#348](#348), [#353](#353))
+ Windows MinGW and Cygwin support ([#347](#347))
+ Windows Visual Studio support ([#351](#351))
+ New example showing how to use ParILU as a preconditioner ([#358](#358))
+ New example on using loggers for debugging ([#360](#360))
+ Add two new 9pt and 27pt stencil examples ([#300](#300), [#306](#306))
+ Allow benchmarking CuSPARSE spmv formats through Ginkgo's benchmarks ([#303](#303))
+ New benchmark for sparse matrix format conversions ([#312](#312))
+ Add conversions between CSR and Hybrid formats ([#302](#302), [#310](#310))
+ Support for sorting rows in the CSR format by column indices ([#322](#322))
+ Addition of a CUDA COO SpMM kernel for improved performance ([#345](#345))
+ Addition of a LinOp to handle perturbations of the form (identity + scalar *
  basis * projector) ([#334](#334))
+ New sparsity matrix representation format with Reference and OpenMP
  kernels ([#349](#349), [#350](#350))

### Fixes
+ Accelerate GMRES solver for CUDA executor ([#363](#363))
+ Fix BiCGSTAB solver convergence ([#359](#359))
+ Fix CGS logging by reporting the residual for every sub iteration ([#328](#328))
+ Fix CSR,Dense->Sellp conversion's memory access violation ([#295](#295))
+ Accelerate CSR->Ell,Hybrid conversions on CUDA ([#313](#313), [#318](#318))
+ Fixed slowdown of COO SpMV on OpenMP ([#340](#340))
+ Fix gcc 6.4.0 internal compiler error ([#316](#316))
+ Fix compilation issue on Apple clang++ 10 ([#322](#322))
+ Make Ginkgo able to compile on Intel 2017 and above ([#337](#337))
+ Make the benchmarks spmv/solver use the same matrix formats ([#366](#366))
+ Fix self-written isfinite function ([#348](#348))
+ Fix Jacobi issues shown by cuda-memcheck

### Tools and ecosystem improvements
+ Multiple improvements to the CI system and tools ([#296](#296), [#311](#311), [#365](#365))
+ Multiple improvements to the Ginkgo containers ([#328](#328), [#361](#361))
+ Add sonarqube analysis to Ginkgo ([#304](#304), [#308](#308), [#309](#309))
+ Add clang-tidy and iwyu support to Ginkgo ([#298](#298))
+ Improve Ginkgo's support of xSDK M12 policy by adding the `TPL_` arguments
  to CMake ([#300](#300))
+ Add support for the xSDK R7 policy ([#325](#325))
+ Fix examples in html documentation ([#367](#367))


Related PR: #370