CV-CUDA

CV-CUDA is an open-source project that enables building efficient cloud-scale Artificial Intelligence (AI) imaging and computer vision (CV) applications. It uses graphics processing unit (GPU) acceleration to help developers build highly efficient pre- and post-processing pipelines. CV-CUDA originated as a collaborative effort between NVIDIA and ByteDance.

Refer to our Developer Guide for more information on the operators available.
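
As a quick illustration of the kind of pipeline CV-CUDA accelerates, the sketch below wraps a GPU image batch created with PyTorch and resizes it with a CV-CUDA operator. It assumes the cvcuda and torch Python packages are installed; the operator names and signatures shown here are illustrative, so please check the Developer Guide for the exact interface.

import cvcuda
import torch

# Allocate an NHWC uint8 image batch directly on the GPU with PyTorch.
images = torch.zeros((1, 480, 640, 3), dtype=torch.uint8, device="cuda")

# Wrap the PyTorch tensor as a CV-CUDA tensor (no copy, via the CUDA array interface).
src = cvcuda.as_tensor(images, "NHWC")

# Run a GPU-accelerated resize to 224x224 using linear interpolation.
dst = cvcuda.resize(src, (1, 224, 224, 3), cvcuda.Interp.LINEAR)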

Getting Started

To get a local copy up and running, follow these steps.

Compatibility

| CV-CUDA Build | Platform | CUDA Version | CUDA Compute Capability | Hardware Architectures | Nvidia Driver | Python Versions | Supported Compilers (build from source) | API compatibility with prebuilt binaries | OS/Linux distributions tested with prebuilt packages |
|---|---|---|---|---|---|---|---|---|---|
| x86_64_cu11 | x86_64 | 11.7 or later | SM7 and later | Volta, Turing, Ampere, Hopper, Ada Lovelace | r525 or later*** | 3.8, 3.9, 3.10, 3.11 | gcc>=9*, gcc>=11** | gcc>=9 | Ubuntu >= 20.04, WSL2/Ubuntu >= 20.04 |
| x86_64_cu12 | x86_64 | 12.2 or later | SM7 and later | Volta, Turing, Ampere, Hopper, Ada Lovelace | r525 or later*** | 3.8, 3.9, 3.10, 3.11 | gcc>=9*, gcc>=11** | gcc>=9 | Ubuntu >= 20.04, WSL2/Ubuntu >= 20.04 |
| aarch64_cu11 | aarch64 | 11.4 | SM7 and later | Jetson AGX Orin | JetPack 5.1 | 3.8 | gcc>=9*, gcc>=11** | gcc>=9 | Jetson Linux 35.x |
| aarch64_cu12 | aarch64 | 12.2 | SM7 and later | Jetson AGX Orin, IGX Orin + Ampere RTX6000, IGX Orin + ADA RTX6000 | JetPack 6.0 DP, r535 (IGX OS v0.6) | 3.10 | gcc>=9*, gcc>=11** | gcc>=9 | Jetson Linux 36.2, IGX OS v0.6 |

* partial build, no test module (see Known Limitations)
** full build, including test module
*** samples require driver r535 or later to run and are only officially supported with CUDA 12.

Known limitations and issues

  • For GCC versions lower than 11.0, C++17 support needs to be enabled when compiling CV-CUDA.
  • The C++ test module cannot build with gcc<11 (it requires specific C++20 features). With gcc-9 or gcc-10, please build with the option -DBUILD_TESTS=0.
  • CV-CUDA Samples require driver r535 or later to run and are only officially supported with CUDA 12.
  • Only one CUDA version (CUDA 11.x or CUDA 12.x) of CV-CUDA packages (Debian packages, tarballs, Python Wheels) can be installed at a time. Please uninstall all packages from a given CUDA version before installing packages from a different version.
  • Documentation built on Ubuntu 20.04 needs an up-to-date version of sphinx (pip install --upgrade sphinx) as well as explicitly passing the system's default Python version: ./ci/build_docs path/to/build -DPYTHON_VERSIONS="<py_ver>".
  • The Resize and RandomResizedCrop operators incorrectly interpolate pixel values near the boundary of an image or tensor when using cubic interpolation. This will be fixed in an upcoming release.

Installation

For convenience, we provide pre-built packages for various combinations of CUDA versions, Python versions and architectures here. The following steps describe how to install CV-CUDA from such pre-built packages.

We support two main alternative pathways:

  • Standalone Python Wheels (containing C++/CUDA Libraries and Python bindings)
  • DEB or Tar archive installation (C++/CUDA Libraries, Headers, Python bindings)

Choose the installation method that meets your environment needs.

Python Wheel File Installation

Download the appropriate .whl file for your computer architecture, Python and CUDA version from the release assets of the current CV-CUDA release. Release information of all CV-CUDA releases can be found here. Once downloaded, execute the pip install command to install the Python wheel. For example:

pip install cvcuda_<cu_ver>-<x.x.x>-cp<py_ver>-cp<py_ver>-linux_<arch>.whl

where <cu_ver> is the desired CUDA version, <x.x.x> is the CV-CUDA release version, <py_ver> is the desired Python version and <arch> is the desired architecture.

Please note that the Python wheels are standalone: they include both the C++/CUDA libraries and the Python bindings.
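
To verify the installation, a minimal import check such as the one below can be used. This is a sketch only: the __version__ attribute is an assumption and may not be present in every release.

import cvcuda

# Print the installed version if the attribute exists, otherwise confirm the import worked.
print(getattr(cvcuda, "__version__", "cvcuda imported successfully"))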

DEB File Installation

Install C++/CUDA libraries (cvcuda-lib*) and development headers (cvcuda-dev*) using apt:

sudo apt install -y ./cvcuda-lib-<x.x.x>-<cu_ver>-<arch>-linux.deb ./cvcuda-dev-<x.x.x>-<cu_ver>-<arch>-linux.deb

Install Python bindings (cvcuda-python*) using apt:

sudo apt install -y ./cvcuda-python<py_ver>-<x.x.x>-<cu_ver>-<arch>-linux.deb

where <cu_ver> is the desired CUDA version, <py_ver> is the desired Python version and <arch> is the desired architecture.

Tar File Installation

Install C++/CUDA libraries (cvcuda-lib*) and development headers (cvcuda-dev*):

tar -xvf cvcuda-lib-<x.x.x>-<cu_ver>-<arch>-linux.tar.xz
tar -xvf cvcuda-dev-<x.x.x>-<cu_ver>-<arch>-linux.tar.xz

Install Python bindings (cvcuda-python*)

tar -xvf cvcuda-python<py_ver>-<x.x.x>-<cu_ver>-<arch>-linux.tar.xz

where <cu_ver> is the desired CUDA version, <py_ver> is the desired Python version and <arch> is the desired architecture.

Build from Source

Follow these instructions to build CV-CUDA from source:

1. Set up your local CV-CUDA repository

Install the dependencies needed to set up the repository:

  • git
  • git-lfs: to retrieve binary files from the remote repository

On Ubuntu >= 20.04, install the following packages using apt:

sudo apt install -y git git-lfs

Clone the repository

git clone https://github.com/CVCUDA/CV-CUDA.git

Assuming the repository was cloned into ~/cvcuda, configure it by running the init_repo.sh script (this only needs to be done once):

cd ~/cvcuda
./init_repo.sh

2. Build CV-CUDA

Install the dependencies required to build CV-CUDA:

  • g++-11: compiler to be used
  • cmake (>= 3.20), ninja-build (optional): manage build rules
  • python3-dev: for python bindings
  • libssl-dev: needed by the testsuite (MD5 hashing utilities)
  • CUDA toolkit
  • patchelf

On Ubuntu >= 20.04, install the following packages using apt:

sudo apt install -y g++-11 cmake ninja-build python3-dev libssl-dev patchelf

Any version of the 11.x or 12.x CUDA toolkit should work. CV-CUDA was tested with 11.7 and 12.2, so these versions are recommended.

sudo apt install -y cuda-11-7
# or
sudo apt install -y cuda-12-2

Build the project:

ci/build.sh [release|debug] [output build tree path] [-DBUILD_TESTS=1|0] [-DPYTHON_VERSIONS='3.8;3.9;3.10;3.11'] [-DPUBLIC_API_COMPILERS='gcc-9;gcc-11;clang-11;clang-14']
  • The default build type is 'release'.
  • If the output build tree path isn't specified, it will be build-rel for release builds and build-deb for debug builds.
  • The library is in build-rel/lib and executables (tests, etc...) are in build-rel/bin.
  • The -DBUILD_TESTS option can be used to disable/enable building the tests (enabled by default, see Known Limitations).
  • The -DPYTHON_VERSIONS option can be used to select Python versions to build bindings and Wheels for. By default, only the default system Python3 version will be selected.
  • The -DPUBLIC_API_COMPILERS option can be used to select the compilers used to check public API compatibility. By default, gcc-11, gcc-9, clang-11, and clang-14 are selected and checked if available.

3. Build Documentation

Known limitation: Documentation built on Ubuntu 20.04 needs an up-to-date version of sphinx (pip install --upgrade sphinx) as well as explicitly passing the system's default Python version: ./ci/build_docs path/to/build -DPYTHON_VERSIONS="<py_ver>".

Install the dependencies required to build the documentation:

  • doxygen: parse header files for reference documentation
  • python3, python3-pip: to install required Python packages
  • sphinx, breathe, recommonmark, graphviz: to render the documentation
  • sphinx-rtd-theme: documentation theme used

On Ubuntu, install the following packages using apt and pip:

sudo apt install -y doxygen graphviz python3 python3-pip sphinx
python3 -m pip install breathe recommonmark graphviz sphinx-rtd-theme

Build the documentation:

ci/build_docs.sh [build folder]

The default build folder is 'build'.

4. Build and run Samples

For instructions on how to build samples from source and run them, see the Samples documentation.

5. Run Tests

Install the dependencies required for running the tests:

  • python3, python3-pip: to run the Python bindings tests
  • torch: dependency needed by the Python bindings tests

On Ubuntu >= 20.04, install the following packages using apt and pip:

sudo apt install -y python3 python3-pip
python3 -m pip install pytest torch numpy==1.26

The tests are in <buildtree>/bin. You can run the script below to run all tests at once. Here's an example when the build tree is created in build-rel:

build-rel/bin/run_tests.sh

6. Package installers and Python Wheels

Package installers

Installers can be generated using the following cpack command once you have successfully built the project:

cd build-rel
cpack .

This will generate both Debian installers and tarballs (*.tar.xz) in the build directory; the tarballs are needed for integration into other distros.

For a fine-grained choice of what installers to generate, the full syntax is:

cpack . -G [DEB|TXZ]
  • DEB for Debian packages
  • TXZ for *.tar.xz tarballs.

Python Wheels

By default during the release build, Python bindings and wheels are created for the available CUDA version and the specified Python version(s). The wheels are stored in the build-rel/pythonX.Y/wheel folder, where build-rel is the build directory used for the release build and X.Y is the Python version.

The built wheels can be installed using pip. For example, to install the Python wheel built for CUDA 12.x, Python 3.10 on Linux x86_64 systems:

pip install cvcuda_cu12-<x.x.x>-cp310-cp310-linux_x86_64.whl

Contributing

CV-CUDA is an open source project. As part of the Open Source Community, we are committed to the cycle of learning, improving, and updating that makes this community thrive. However, CV-CUDA is not yet ready for external contributions.

To understand the process for contributing to CV-CUDA, see our Contributing page. To understand our commitment to the Open Source Community, and to providing an environment that both supports and respects the efforts of all contributors, please read our Code of Conduct.

CV-CUDA Make Operator Tool

The mkop.sh script is a powerful tool for creating a scaffold for new operators in the CV-CUDA library. It automates several tasks, ensuring consistency and saving time.

Features of mkop.sh:

  1. Operator Stub Creation: Generates no-op (no-operation) operator templates, which serve as a starting point for implementing new functionalities.

  2. File Customization: Modifies template files to include the new operator's name, ensuring consistent naming conventions across the codebase.

  3. CMake Integration: Adds the new operator files to the appropriate CMakeLists, facilitating seamless compilation and integration into the build system.

  4. Python Bindings: Creates Python wrapper stubs for the new operator, allowing it to be used within Python environments.

  5. Test Setup: Generates test files for both C++ and Python, enabling immediate development of unit tests for the new operator.

How to Use mkop.sh:

Run the script with the desired operator name. The script assumes it's located in ~/cvcuda/tools/mkop.

./mkop.sh [Operator Name]

If the script is run from a different location, provide the path to the CV-CUDA root directory.

./mkop.sh [Operator Name] [CV-CUDA root]

NOTE: The first letter of the new operator name is capitalized where needed to match the rest of the file structure.

Process Details:

  • Initial Setup: The script begins by validating the input and setting up necessary variables. It then capitalizes the first letter of the operator name to adhere to naming conventions.

  • Template Modification: It processes various template files (Public.h, PrivateImpl.cpp, etc.), replacing placeholders with the new operator name. This includes adjusting file headers, namespaces, and function signatures.

  • CMake and Python Integration: The script updates CMakeLists.txt files and Python module files to include the new operator, ensuring it's recognized by the build system and Python interface.

  • Testing Framework: Finally, it sets up test files for both C++ and Python, allowing developers to immediately start writing tests for the new operator.

License

CV-CUDA operates under the Apache-2.0 license.

Security

CV-CUDA, as an NVIDIA program, is committed to secure development practices. Please read our Security page to learn more.

Acknowledgements

CV-CUDA is developed jointly by NVIDIA and ByteDance.