- What is cuTAGI ?
- Python Installation
- C++/CUDA Installation
- Directory Structure
- License
- Related Papers
- Citation
cuTAGI is an open-source Bayesian neural network library based on the Tractable Approximate Gaussian Inference (TAGI) theory. cuTAGI includes several common neural network layer architectures such as fully-connected, convolutional, and transposed convolutional layers, as well as skip connections, pooling, and normalization layers. cuTAGI can perform different tasks such as supervised learning, unsupervised learning, and reinforcement learning. The library includes advanced features such as the capacity to propagate uncertainties from the input to the output layer using the full covariance mode for hidden layers, the capacity to estimate the derivative of a neural network, and the capacity to quantify heteroscedastic aleatory uncertainty.
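To give a flavor of the inference TAGI builds on, the sketch below performs the exact conditioning of a Gaussian prior on a single scalar weight given one noisy linear observation. It is an illustration of the underlying idea only (the scalar setting and the variable names are ours), not the cuTAGI/pytagi API.

```python
# Illustration only: closed-form Gaussian conditioning for one scalar weight.
# TAGI applies moment-based updates of this kind layer by layer; this is NOT the cuTAGI API.

# Prior belief about the weight: w ~ N(mu_w, var_w)
mu_w, var_w = 0.0, 1.0

# Observation model: y = w * x + v, with noise v ~ N(0, var_v)
x, var_v = 2.0, 0.25
y = 1.2  # observed output

# Moments of the joint Gaussian over (w, y)
mu_y = mu_w * x
var_y = x * var_w * x + var_v
cov_wy = var_w * x

# Conditional (posterior) mean and variance of w given y
gain = cov_wy / var_y
mu_w_post = mu_w + gain * (y - mu_y)
var_w_post = var_w - gain * cov_wy

print(f"posterior for w: N({mu_w_post:.3f}, {var_w_post:.3f})")
```

TAGI chains closed-form Gaussian updates of this kind across layers, with either a diagonal or a full covariance for the hidden units, which is what allows parameters and hidden states to be inferred without gradient-based optimization.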
cuTAGI is under development and new features will be added as they are ready. Currently supported tasks are:
- Supervised learning
- Regression
- Long Short-Term Memory (LSTM)
- Classification using fully-connected, convolutional and residual architectures
- Unsupervised learning
- Autoencoders
Coming soon...
- Unsupervised learning: GANs
- Reinforcement learning: DQN
- +++
Examples of a regression task using the diagonal (top left) or full (top right) covariance mode for hidden layers, an example of heteroscedastic aleatory uncertainty inference (bottom left), and an example of the estimation of the derivative of a function modeled by a neural network (bottom right).
Requirements (a quick way to check them is sketched below):
- Compiler with C++14 support
- CMake >= 3.23
- CUDA toolkit (optional)
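The following short script is not part of cuTAGI; it is only a convenience sketch that checks whether a C++ compiler, CMake, and (optionally) the CUDA toolkit are on your PATH and prints their versions.

```python
# check_prereqs.py -- convenience sketch, not part of cuTAGI.
# Prints the version of each build prerequisite found on PATH.
import shutil
import subprocess

# nvcc is only needed for the CUDA build; clang++ covers typical macOS setups.
tools = ["g++", "clang++", "cmake", "nvcc"]

for tool in tools:
    if shutil.which(tool) is None:
        print(f"{tool}: not found")
        continue
    # All of these tools accept the --version flag.
    result = subprocess.run([tool, "--version"], capture_output=True, text=True)
    print(f"{tool}: {result.stdout.splitlines()[0]}")
```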
pytagi is a Python wrapper of the C++/CUDA backend for the TAGI method. Developers can install either the distributed (PyPI) or the local version of pytagi. Currently, pytagi only supports Python >= 3.9 on macOS and Ubuntu.
We recommend installing miniconda for managing Python environments, yet pytagi works well with other alternatives.
- Install miniconda by following these instructions
- Create a conda environment
conda create --name your_env_name python=3.10
- Activate conda environment
conda activate your_env_name
- Install requirements
pip install -r requirements.txt
- Install pytagi
pip install pytagi
- Test the pytagi package
python -m python_examples.regression_runner
NOTE: This PyPI-distributed version does not require the codebase in this repository. Developers can create their own applications on top of it (see python_examples).
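As a quick sanity check that the distributed package is usable from the active environment, the following minimal snippet (the file name is ours) only assumes that pytagi can be imported:

```python
# smoke_test.py -- minimal check that the installed pytagi package is importable.
import pytagi

print("pytagi imported from:", pytagi.__file__)
```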
To install the local version of pytagi from source:
- Clone this repository. Note that the git submodule command clones pybind11, the package used to bind the C++/CUDA backend to Python.
git clone https://github.com/lhnguyen102/cuTAGI.git
cd cuTAGI
git submodule update --init --recursive
- Create conda environment
- Install requirements
pip install -r requirements.txt
- Install the pytagi package
pip install .
- Test the pytagi package
python -m python_examples.regression_runner
cutagi is the native version of the TAGI method implemented in C++/CUDA. We highly recommend installing cuTAGI with Docker to simplify the installation.
- Install Docker by following these instructions
- Build docker image
- CPU build
bash bin/build.sh
- CUDA build
bash bin/build.sh -d cuda
*NOTE: During the build and run, make sure that the Docker Desktop application is open. The commands for running tasks such as classification and regression can be found here.
On Ubuntu:
- Install the CUDA toolkit (>= 10.1) in /usr/local/ and add the CUDA location to PATH, for example by adding the following to your ~/.bashrc:
export PATH="/usr/local/cuda-10.1/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda-10.1/lib64:$LD_LIBRARY_PATH"
- Install the GCC compiler by entering this line in Terminal:
sudo apt install g++
- Install CMake by following these instructions
- Build the project using CMake by navigating to the folder cuTAGI and entering these lines in Terminal:
cmake . -B build
cmake --build build --config RelWithDebInfo -j 16
On Windows:
- Download and install MS Visual Studio 2019 Community with C/C++ support by following these instructions
- Install the CUDA toolkit >= 10.1 and add the CUDA location to Environment Variables (see Step 5.3)
- Copy all extension files from CUDA to MS Visual Studio. See this link for further details.
Copy from: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\extras\visual_studio_integration\MSBuildExtensions
Copy to: C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\MSBuild\Microsoft\VC\v160\BuildCustomizations
- Download and install the CMake Windows x64 Installer and add the install directory (e.g., C:\Program Files\CMake\bin) to PATH in Environment Variables
- Add the CMake CUDA compiler to Environment Variables:
variable = CMAKE_CUDA_COMPILER
value = C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin\nvcc.exe
- Build the project using CMake by navigating to the folder cuTAGI and entering these lines in Command Prompt:
cmake . -B build
cmake --build build --config RelWithDebInfo -j 16
*NOTE: Users must use the CUDA version installed on their machine. Here, the installation is illustrated with CUDA v10.1 (see Step 1 for Ubuntu and Steps 3 & 5 for Windows).
On macOS:
- Install gcc and g++ via Terminal:
brew install gcc
- Install CMake by following these instructions
- Add CMake to PATH by adding the following line to your .zshrc file:
export PATH="/Applications/CMake.app/Contents/bin/:$PATH"
- Build the project using CMake by navigating to the folder cuTAGI and entering these lines in Terminal:
cmake . -B build
cmake --build build --config RelWithDebInfo -j 16
Alternatively, to build with VS Code:
- Install gcc and g++ for your operating system (Ubuntu, Windows, or macOS)
- Install CMake
- Install the following prerequisites
- Visual Studio Code
- C++ extension for VS Code
- CMake Tools extension for VS Code
Classification on MNIST:
- Two fully-connected layers (cfg/2fc.txt)
build/main cfg_mnist_2fc.txt
- Two convolutional layers (cfg/2conv.txt)
build/main cfg_mnist_2conv.txt
- Two convolutional layers, each followed by a batch-normalization layer (cfg/2conv_bn.txt)
build/main cfg_mnist_2conv_bn.txt
Autoencoder on MNIST:
build/main cfg_mnist_ae.txt
Regression:
- UCI dataset
build/main cfg_bh_2fc.txt
- 1D toy example using a diagonal covariance matrix
build/main cfg_toy_example_fc.txt
- 1D toy example using a full covariance matrix
build/main cfg_toy_full_cov_fc.txt
- Heteroscedastic noise inference for a 1D toy example
build/main cfg_toy_ni_fc.txt
All of the above tasks can be run in a Docker container using the following commands:
- Docker with CPU build
bash bin/run.sh -c cfg_mnist_2fc.txt
- Docker with CUDA build
bash bin/run.sh -c cfg_mnist_2fc.txt -d cuda
.
├── bin # Bash scripts for building and running docker images
├── cfg # User input (.txt)
├── data # Database
├── include # Header files
├── saved_param # Saved network's parameters (.csv)
├── saved_results # Saved network's inference (.csv)
├── src # Source files
│ ├── activation_fun.cu # Activation functions
│ ├── activation_fun_cpu.cpp # CPU version of activation functions
│ ├── common.cpp # Common functionalities
│ ├── cost.cpp # Performance metric
│ ├── dataloader.cpp # Load train and test data
│ ├── data_transfer.cu # Transfer data between host and device
│ ├── data_transfer_cpu.cpp # Transfer data within cpus
│ ├── derivative_calcul.cu # Derivative calculation for fully-connected layer
│ ├── derivative_calcul_cpu.cpp # CPU version for computing derivatives of fully-connected layer
│ ├── feature_availability.cpp # Feature verification
│ ├── feed_forward.cu # Prediction
│ ├── feed_forward_cpu.cpp # CPU version for prediction
│ ├── global_param_update.cu # Update network's parameters
│ ├── global_param_update_cpu.cpp # CPU version for updating network's parameters
│ ├── indices.cpp # Pre-compute indices for network
│ ├── lstm_feed_backward.cu # Feed backward of lstm layer
│ ├── lstm_feed_backward_cpu.cpp # CPU version for feed backward of lstm layer
│ ├── lstm_feed_forward.cu # Feed forward of lstm layer
│ ├── lstm_feed_forward_cpu.cpp # CPU version for feed forward of lstm layer
│ ├── net_init.cpp # Initialize the network
│ ├── net_prop.cpp # Network's properties
│ ├── param_feed_backward.cu # Learn network's parameters
│ ├── param_feed_backward_cpu.cpp # CPU version for learning network's parameters
│ ├── state_feed_backward.cu # Learn network's hidden states
│ ├── state_feed_backward_cpu.cpp # CPU version for learning network's hidden states
│ ├── task.cu # Task command
│ ├── task_cpu.cpp # CPU version for task command
│ ├── user_input.cpp # User input variables
│ └── utils.cpp # Different tools
├── config.py # Generate network architecture (.txt)
├── main.cu # User interface (UI)
├── main.cpp # CPU version of the user interface
cuTAGI is released under the MIT license.
THIS IS AN OPEN SOURCE SOFTWARE FOR RESEARCH PURPOSES ONLY. THIS IS NOT A PRODUCT. NO WARRANTY EXPRESSED OR IMPLIED.
- Tractable approximate Gaussian inference for Bayesian neural networks (James-A. Goulet, Luong-Ha Nguyen, and Said Amiri. JMLR, 2021, 20-1009, Volume 22, Number 251, pp. 1-23)
- Analytically tractable hidden-states inference in Bayesian neural networks (Luong-Ha Nguyen and James-A. Goulet. JMLR, 2022, 21-0758, Volume 23, pp. 1-33)
- Analytically tractable inference in deep neural networks (Luong-Ha Nguyen and James-A. Goulet. 2021, arXiv:2103.05461)
- Analytically tractable Bayesian deep Q-Learning (Luong-Ha Nguyen and James-A. Goulet. 2021, arXiv:2106.1108)
@misc{cutagi2022,
  author = {Luong-Ha Nguyen and James-A. Goulet},
  title = {cu{TAGI}: a {CUDA} library for {B}ayesian neural networks with Tractable Approximate {G}aussian Inference},
  year = {2022},
  journal = {GitHub repository},
  howpublished = {https://github.com/lhnguyen102/cuTAGI}
}