
Add cusp benchmark #303

Merged
merged 13 commits into from
May 16, 2019

Conversation

@tcojean (Member) commented May 9, 2019

Move CuSPARSE benchmarking capabilities to the spmv benchmark.

  • Add all Cusp benchmarks to a file cuda_linops.hpp,
  • Make all the Cusp classes be gko::LinOp for easy integration;
  • Control usage of these classes with the flag HAS_CUDA, defined only when
    the benchmark is compiled with CUDA support;
  • Use dynamic pointer cast of std::shared_ptr<const gko::Executor> to ensure
    the Cusp classes are called with a std::shared_ptr<const gko::CudaExecutor>.
  • Remove the spmv_comparison_cuda benchmark.
  • Replace CUDA_VERSION with actual CMAKE_CUDA_COMPILER_VERSION.
  • Catch exceptions in benchmarks by const reference to get their actual type.
  • Add a function with specializations to manage the cudaDataType_t such as
    CUDA_R_64F.
  • Use pointer mode HOST for cuspCsrEx as it seems to be required.
  • Create CuSPARSE bindings for different precisions.
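The executor check in the fourth bullet can be sketched with `std::dynamic_pointer_cast`. The snippet below is a minimal illustration, not Ginkgo's actual code: the `Executor`/`CudaExecutor` hierarchy and the `as_cuda` helper are stand-ins.

```cpp
#include <memory>
#include <stdexcept>

// Minimal stand-ins for gko::Executor / gko::CudaExecutor (hypothetical,
// only to illustrate the cast; the real classes live in Ginkgo).
struct Executor { virtual ~Executor() = default; };
struct OmpExecutor : Executor {};
struct CudaExecutor : Executor {};

// Returns the executor as a CudaExecutor, or throws if it is not one --
// the kind of guard the benchmark can use before constructing a Cusp wrapper.
std::shared_ptr<const CudaExecutor> as_cuda(
    std::shared_ptr<const Executor> exec)
{
    auto cuda_exec = std::dynamic_pointer_cast<const CudaExecutor>(exec);
    if (!cuda_exec) {
        throw std::runtime_error("Cusp formats require a CudaExecutor");
    }
    return cuda_exec;
}
```

Casting the `std::shared_ptr` itself (rather than the raw pointer) keeps the executor alive for as long as the wrapper holds the result.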

@tcojean tcojean self-assigned this May 9, 2019
@tcojean tcojean added reg:benchmarking This is related to benchmarking. mod:cuda This is related to the CUDA module. is:enhancement An improvement of an existing feature. 1:ST:ready-for-review This PR is ready for review labels May 9, 2019
pratikvn previously approved these changes May 9, 2019

@pratikvn (Member) left a comment

LGTM!

+ `Csr_` variables become `csr_`;
+ Add a forgotten CUDA `GKO_ASSERT`.
+ Add a `#else` case for `cusparseAlgMode_t` using an older accepted value for
  this variable; see
  https://docs.nvidia.com/cuda/cusparse/index.html#cusparsealgmode_t for details.
@yhmtsai (Member) commented May 10, 2019

Before calling the cuSPARSE API, one needs to either set CUSPARSE_POINTER_MODE_HOST or move alpha and beta to the device.
I prefer the second way, because the Ginkgo kernels use device parameters.

@tcojean (Member, Author) commented May 13, 2019

@yhmtsai I am using the handles we already have in Ginkgo for each CUDA executor, which set the pointer mode to DEVICE, and everything seems to work properly. Why should it be HOST mode?

inline cusparseHandle_t init()
{
    cusparseHandle_t handle{};
    GKO_ASSERT_NO_CUSPARSE_ERRORS(cusparseCreate(&handle));
    GKO_ASSERT_NO_CUSPARSE_ERRORS(
        cusparseSetPointerMode(handle, CUSPARSE_POINTER_MODE_DEVICE));
    return handle;
}

By default, Ginkgo handles use `POINTER_MODE_DEVICE`, which means all scalar
pointers should be allocated on the GPU. This pointer mode has the advantage
of avoiding implicit synchronizations with the CPU. For more information see:
https://docs.nvidia.com/cuda/cublas/index.html#scalar-parameters.
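Since the benchmark reuses Ginkgo's DEVICE-mode handle while some routines (such as the cuspCsrEx case mentioned in the description) need HOST mode, one way to reconcile the two is a guard that switches the mode for the duration of one call and restores it on exit. The sketch below uses stand-in types (`pointer_mode`, `handle`) instead of the real `cusparseHandle_t`/`cusparseSetPointerMode`, purely to illustrate the RAII pattern:

```cpp
// Stand-ins for cusparsePointerMode_t / cusparseHandle_t (hypothetical
// types, used here only to keep the example free of a CUDA dependency).
enum class pointer_mode { device, host };
struct handle { pointer_mode mode = pointer_mode::device; };

// RAII guard: switch a handle to HOST pointer mode for the scope of one
// call, then restore whatever mode it had before.
class host_mode_guard {
public:
    explicit host_mode_guard(handle &h) : h_(h), saved_(h.mode)
    {
        h_.mode = pointer_mode::host;
    }
    ~host_mode_guard() { h_.mode = saved_; }

    // non-copyable: exactly one guard is responsible for the restoration
    host_mode_guard(const host_mode_guard &) = delete;
    host_mode_guard &operator=(const host_mode_guard &) = delete;

private:
    handle &h_;
    pointer_mode saved_;
};
```

With the real API, the constructor and destructor bodies would each call `cusparseSetPointerMode`, so the surrounding DEVICE-mode code is unaffected by the HOST-mode call in the middle.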
@thoasm (Member) left a comment

Mostly nits about using `const &`, or hiding constructors and `_impl` functions with `protected`.
My concern is with `const gko::CudaExecutor *gpu_exec` in `CuspBase` when using `operator=`. Do you actually need `operator=`? If you don't, just delete it and everything is fine; however, if you require it, it would be better to store `gpu_exec` as an `std::shared_ptr<const CudaExecutor>` to ensure it is not deleted before the class.

@yhmtsai (Member) left a comment

Could you also add some description of these formats in spmv?

public:
    void apply_impl(const gko::LinOp *, const gko::LinOp *,
                    const gko::LinOp *, gko::LinOp *) const override
    {}
A reviewer (Member) commented:

Should it add `GKO_NOT_IMPLEMENTED;`?
Except for cusp_csrmm, the formats only allow nrhs = 1, so they need to handle that condition in their apply function.

The author (Member) replied:

The benchmark only runs with a single nrhs, so I put this as NOT_IMPLEMENTED.
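In Ginkgo, `GKO_NOT_IMPLEMENTED` throws a `gko::NotImplemented` exception; a stand-in version of the pattern discussed here (using a plain `std::runtime_error` and a hypothetical `CuspWrapper` class instead of the real types) could look like:

```cpp
#include <stdexcept>

// Stand-in for Ginkgo's GKO_NOT_IMPLEMENTED macro (the real one throws
// gko::NotImplemented with file/line information).
#define NOT_IMPLEMENTED throw std::runtime_error("not implemented")

struct LinOp {
    virtual ~LinOp() = default;
};

// Advanced apply (alpha * op(b) + beta * x): the benchmark never exercises
// it, so the wrapper simply reports "not implemented" instead of silently
// doing nothing.
struct CuspWrapper : LinOp {
    void apply_impl(const LinOp *, const LinOp *, const LinOp *,
                    LinOp *) const
    {
        NOT_IMPLEMENTED;
    }
};
```

Throwing here is safer than an empty body, since an accidental call fails loudly rather than producing wrong benchmark results.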

+ Catch all exceptions as const reference.
+ Put all `apply_impl` functions and constructors as `protected`.
+ Add a function with specializations to manage the `cudaDataType_t` such as
  `CUDA_R_64F`.
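The `cudaDataType_t` dispatch in the last bullet is typically done with a class template plus full specializations. The sketch below mirrors the idea with a stand-in enum (`cuda_data_type`) and a hypothetical `cuda_data_type_of` trait rather than the real CUDA enum or Ginkgo's actual helper:

```cpp
#include <complex>

// Stand-in for cudaDataType_t (the real enum lives in CUDA's
// library_types.h as CUDA_R_32F, CUDA_R_64F, CUDA_C_32F, CUDA_C_64F, ...).
enum class cuda_data_type { r_32f, r_64f, c_32f, c_64f };

// Primary template intentionally left undefined: requesting an
// unsupported ValueType becomes a compile-time error.
template <typename ValueType>
struct cuda_data_type_of;

template <>
struct cuda_data_type_of<float> {
    static constexpr cuda_data_type value = cuda_data_type::r_32f;
};
template <>
struct cuda_data_type_of<double> {
    static constexpr cuda_data_type value = cuda_data_type::r_64f;
};
template <>
struct cuda_data_type_of<std::complex<float>> {
    static constexpr cuda_data_type value = cuda_data_type::c_32f;
};
template <>
struct cuda_data_type_of<std::complex<double>> {
    static constexpr cuda_data_type value = cuda_data_type::c_64f;
};
```

A templated binding can then pass `cuda_data_type_of<ValueType>::value` to routines such as `cusparseCsrmvEx` instead of hard-coding `CUDA_R_64F`.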
@tcojean (Member, Author) commented May 14, 2019

@yhmtsai I cannot get cusparseCsrmvEx with the CUSPARSE_ALG_MERGE_PATH algorithm to work, no matter what I do. I'm a bit puzzled, since I believe everything is done according to the documentation.

https://docs.nvidia.com/cuda/archive/9.2/cusparse/index.html#cusparse-csrmvEx

We use the double type, CUSPARSE_OPERATION_NON_TRANSPOSE plus otherwise default parameters, and all data is allocated through cudaMalloc (so properly aligned). All other CSR functions work, and this one works with the NAIVE algorithm. Since the buffer is not used, even allocating a dummy buffer and skipping the cusparseCsrmvEx_bufferSize call results in the same problem in cusparseCsrmvEx.

Do you have any idea?

@yhmtsai (Member) commented May 14, 2019

@tcojean I guess this function only allows CUSPARSE_POINTER_MODE_HOST.
I set the scalars in host memory, set the pointer mode accordingly, and then the function works.

Some other things I tried: setting buffersize to the value obtained above also failed; setting the buffer pointer to nullptr also failed.

@tcojean (Member, Author) commented May 15, 2019

@yhmtsai Thanks for the feedback, I will try with CUSPARSE_POINTER_MODE_HOST and see if it works.

@tcojean tcojean requested a review from pratikvn May 15, 2019 09:38
@thoasm (Member) left a comment

LGTM!

@yhmtsai (Member) commented May 15, 2019

@tcojean Could you also add the description of these CUDA formats in

DEFINE_string(
    formats, "coo",
    "A comma-separated list of formats to run. "
    "Supported values are: coo, csr, ell, sellp, hybrid, hybrid0, "
    "hybrid25, hybrid33, hybridlimit0, hybridlimit25, hybridlimit33, "
    "hybridminstorage.\n"
    "coo: Coordinate storage. The CUDA kernel uses the load-balancing "
    "approach suggested in Flegar et al.: Overcoming Load Imbalance for "
    "Irregular Sparse Matrices.\n"
    "csr: Compressed Sparse Row storage. The CUDA kernel invokes NVIDIA's "
    "cuSPARSE CSR routine.\n"
    "ell: Ellpack format according to Bell and Garland: Efficient Sparse "
    "Matrix-Vector Multiplication on CUDA.\n"
    "sellp: Sliced Ellpack uses a default block size of 32.\n"
    "hybrid: Hybrid uses ell and coo to represent the matrix.\n"
    "hybrid0, hybrid25, hybrid33: Hybrid uses the row distribution to decide "
    "the partition.\n"
    "hybridlimit0, hybridlimit25, hybridlimit33: Add the upper bound on the "
    "ell part of hybrid0, hybrid25, hybrid33.\n"
    "hybridminstorage: Hybrid uses the minimal storage to store the matrix.");

@tcojean (Member, Author) commented May 16, 2019

@yhmtsai this is done, but I did a little bit more. Here is what is in the last commit. You might also want to check it, @thoasm.

Create CuSPARSE bindings for different precisions.

  • Support IndexType int32.
  • Support ValueType float, double, complex float and complex double.
  • Add a description of the CuSPARSE benchmarks in the spmv file.

@thoasm (Member) left a comment

LGTM!
Good job on abstracting the CuSparse calls!

+ Support IndexType int32
+ Support ValueType float, double, complex float and complex double.
+ Add a description of the CuSPARSE benchmarks in the spmv file.
@yhmtsai (Member) left a comment

LGTM

@tcojean tcojean added 1:ST:ready-to-merge This PR is ready to merge. and removed 1:ST:ready-for-review This PR is ready for review labels May 16, 2019
@tcojean tcojean merged commit 5e0ca65 into develop May 16, 2019
@tcojean tcojean deleted the add_cusp_benchmark branch May 16, 2019 20:36
tcojean added a commit that referenced this pull request Oct 20, 2019
The Ginkgo team is proud to announce the new minor release of Ginkgo version
1.1.0. This release brings several performance improvements, adds Windows support,
adds support for factorizations inside Ginkgo, and a new ILU preconditioner
based on the ParILU algorithm, among other things. For detailed information, check the respective issues.

Supported systems and requirements:
+ For all platforms, cmake 3.9+
+ Linux and MacOS
  + gcc: 5.3+, 6.3+, 7.3+, 8.1+
  + clang: 3.9+
  + Intel compiler: 2017+
  + Apple LLVM: 8.0+
  + CUDA module: CUDA 9.0+
+ Windows
  + MinGW and Cygwin: gcc 5.3+, 6.3+, 7.3+, 8.1+
  + Microsoft Visual Studio: VS 2017 15.7+
  + CUDA module: CUDA 9.0+, Microsoft Visual Studio
  + OpenMP module: MinGW or Cygwin.


The current known issues can be found in the [known issues
page](https://github.com/ginkgo-project/ginkgo/wiki/Known-Issues).


Additions:
+ Upper and lower triangular solvers ([#327](#327), [#336](#336), [#341](#341), [#342](#342)) 
+ New factorization support in Ginkgo, and addition of the ParILU
  algorithm ([#305](#305), [#315](#315), [#319](#319), [#324](#324))
+ New ILU preconditioner ([#348](#348), [#353](#353))
+ Windows MinGW and Cygwin support ([#347](#347))
+ Windows Visual Studio support ([#351](#351))
+ New example showing how to use ParILU as a preconditioner ([#358](#358))
+ New example on using loggers for debugging ([#360](#360))
+ Add two new 9pt and 27pt stencil examples ([#300](#300), [#306](#306))
+ Allow benchmarking CuSPARSE spmv formats through Ginkgo's benchmarks ([#303](#303))
+ New benchmark for sparse matrix format conversions ([#312](#312))
+ Add conversions between CSR and Hybrid formats ([#302](#302), [#310](#310))
+ Support for sorting rows in the CSR format by column indices ([#322](#322))
+ Addition of a CUDA COO SpMM kernel for improved performance ([#345](#345))
+ Addition of a LinOp to handle perturbations of the form (identity + scalar *
  basis * projector) ([#334](#334))
+ New sparsity matrix representation format with Reference and OpenMP
  kernels ([#349](#349), [#350](#350))

Fixes:
+ Accelerate GMRES solver for CUDA executor ([#363](#363))
+ Fix BiCGSTAB solver convergence ([#359](#359))
+ Fix CGS logging by reporting the residual for every sub iteration ([#328](#328))
+ Fix CSR,Dense->Sellp conversion's memory access violation ([#295](#295))
+ Accelerate CSR->Ell,Hybrid conversions on CUDA ([#313](#313), [#318](#318))
+ Fixed slowdown of COO SpMV on OpenMP ([#340](#340))
+ Fix gcc 6.4.0 internal compiler error ([#316](#316))
+ Fix compilation issue on Apple clang++ 10 ([#322](#322))
+ Make Ginkgo able to compile on Intel 2017 and above ([#337](#337))
+ Make the benchmarks spmv/solver use the same matrix formats ([#366](#366))
+ Fix self-written isfinite function ([#348](#348))
+ Fix Jacobi issues shown by cuda-memcheck

Tools and ecosystem:
+ Multiple improvements to the CI system and tools ([#296](#296), [#311](#311), [#365](#365))
+ Multiple improvements to the Ginkgo containers ([#328](#328), [#361](#361))
+ Add sonarqube analysis to Ginkgo ([#304](#304), [#308](#308), [#309](#309))
+ Add clang-tidy and iwyu support to Ginkgo ([#298](#298))
+ Improve Ginkgo's support of xSDK M12 policy by adding the `TPL_` arguments
  to CMake ([#300](#300))
+ Add support for the xSDK R7 policy ([#325](#325))
+ Fix examples in html documentation ([#367](#367))
tcojean added a commit that referenced this pull request Oct 21, 2019
(Same 1.1.0 release notes as the Oct 20, 2019 commit above.)
Related PR: #370