MXNet is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity. At its core, MXNet contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. MXNet is portable and lightweight, scaling effectively to multiple GPUs and multiple machines.
MXNet is more than a deep learning project. It is also a collection of blueprints and guidelines for building deep learning systems, and offers interesting insights into deep learning systems for hackers.
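As a minimal sketch of this mix (assuming the Python binding is installed, as described in the steps below), an imperatively computed NDArray can feed a symbolic graph, and both styles share the same execution engine:

import mxnet as mx

# Imperative style: each statement executes immediately.
a = mx.nd.ones((2, 3))
b = a * 2 + 1

# Symbolic style: declare a computation graph first, then bind and run it.
x = mx.sym.Variable('x')
y = x * 2 + 1
executor = y.bind(ctx=mx.cpu(), args={'x': b})  # feed the imperative result
print(executor.forward()[0].asnumpy())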
What's New
- Version 0.9.3 Release - First 0.9 official release.
- Version 0.9.1 Release (NNVM refactor) - NNVM branch is merged into master now. An official release will be made soon.
- Version 0.8.0 Release
- Updated Image Classification with new Pre-trained Models
- Python Notebooks for How to Use MXNet
- MKLDNN for Faster CPU Performance
- MXNet Memory Monger, Training Deeper Nets with Sublinear Memory Cost
- Tutorial for NVIDIA GTC 2016
- Embedding Torch layers and functions in MXNet
- MXNet.js: JavaScript Package for Deep Learning in Browser (without server)
- Design Note: Design Efficient Deep Learning Data Loading Module
- MXNet on Mobile Device
- Distributed Training
- Guide to Creating New Operators (Layers)
- Go binding for inference
- Amalgamation and Go Binding for Predictors - Outdated
- Training Deep Net on 14 Million Images on A Single Machine
Contents
- Documentation and Tutorials
- Design Notes
- Code Examples
- Installation
- Pretrained Models
- Contribute to MXNet
- Frequently Asked Questions
Features
- Design notes providing useful insights that can be reused by other DL projects
- Flexible configuration for arbitrary computation graph
- Mix and match imperative and symbolic programming to maximize flexibility and efficiency
- Lightweight, memory-efficient, and portable to smart devices
- Scales up to multiple GPUs and distributed settings with automatic parallelism (see the sketch after this list)
- Support for Python, R, C++ and Julia
- Cloud-friendly and directly compatible with S3, HDFS, and Azure
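For instance (a minimal sketch, again assuming the Python binding from the installation steps below), the dependency scheduler overlaps independent imperative operations without any explicit threading code:

import mxnet as mx

# c and d share no data dependency, so the dependency scheduler
# is free to execute the two matrix products in parallel.
a = mx.nd.ones((1024, 1024))
b = mx.nd.ones((1024, 1024))
c = mx.nd.dot(a, a)
d = mx.nd.dot(b, b)
mx.nd.waitall()  # block until all queued operations have finished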
Generic Installation Steps:
Install the system requirements by following ROCm's installation guide
Installation Steps on the HCC and NVCC Platforms
Prerequisites
Install CUDA 8.0 by following NVIDIA's installation guide to set up MXNet with GPU support (needed for the NVCC platform)
Note: Make sure to add the CUDA install path to LD_LIBRARY_PATH. Example - export LD_LIBRARY_PATH=/usr/local/cuda/lib64/:$LD_LIBRARY_PATH
Building MXNet from source is a two-step process.
- Build the MXNet core shared library, libmxnet.so, from the C++ sources.
- Build the language-specific bindings. Example - Python bindings, Scala bindings.
Minimum Requirements
ROCm installation
Step 1: Add the ROCm apt repository. For Debian-based systems such as Ubuntu, configure the Debian ROCm repository as follows:
$ wget -qO - https://repo.radeon.com/rocm/apt/debian/rocm.gpg.key | sudo apt-key add -
$ sudo sh -c 'echo deb [arch=amd64] https://repo.radeon.com/rocm/apt/debian/ xenial main > /etc/apt/sources.list.d/rocm.list'
Step 2: Install or update ROCm. Update the apt repository list and install or update the rocm package.
Warning: Before proceeding, make sure to completely uninstall any previous ROCm packages.
$ sudo apt-get update
$ sudo apt-get install rocm
Step 3: Install dependent libraries
$ sudo apt-get install rocm-device-libs rocblas rocm-libs
For detailed installation steps, refer to ROCm's installation guide.
Build the MXNet core shared library
Step 1: Install build tools and git.
$ sudo apt-get update
$ sudo apt-get install -y build-essential git
Step 2: Install OpenCV.
MXNet uses OpenCV for efficient image loading and augmentation operations.
$ sudo apt-get install -y libopencv-dev
Step 3: Build MXNet with Thrust. Clone the ROCm port of Thrust:
$ git clone --recursive https://github.com/ROCmSoftwarePlatform/Thrust
Then add the Thrust path to the Makefile:
ifeq ($(HIP_PLATFORM), hcc)
    HIPINCLUDE += -I<Root path of Thrust>
    # Example: HIPINCLUDE += -I../Thrust
endif
Step 4: Download the MXNet sources and build the MXNet core shared library.
$ git clone --recursive https://github.com/ROCmSoftwarePlatform/mxnet
$ cd mxnet
To compile on the HCC platform:
$ export HIP_PLATFORM=hcc
$ make -jN (where N is the number of CPU cores)
To compile on the NVCC platform:
$ export HIP_PLATFORM=nvcc
$ make -jN (where N is the number of CPU cores)
Note:
- USE_OPENCV, USE_BLAS, USE_CUDA, and USE_CUDA_PATH are makefile flags that set compilation options for the OpenCV, BLAS, and CUDA libraries. You can explore and use more compilation options in make/config.mk. Make sure to set USE_CUDA_PATH to the right CUDA installation path; in most cases it is /usr/local/cuda. Example - make -j4 USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda
- MXNet uses the rocBLAS, hcFFT, hcRNG, and LAPACK libraries for accelerated numerical computation. cuDNN is not enabled, as it is being migrated to MIOpen.
Install the MXNet Python binding
Step 1: Install the prerequisites - python, setuptools, python-pip, and numpy.
$ sudo apt-get install -y python-dev python-setuptools python-numpy python-pip
Step 2: Install the MXNet Python binding.
$ cd python
$ sudo python setup.py install
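To verify the installation, you can run a quick sanity check from Python (a minimal sketch, not part of the official steps; the commented line assumes a working GPU build):

import mxnet as mx

a = mx.nd.ones((2, 3))           # allocated on the CPU by default
print((a * 2).asnumpy())         # expect a 2x3 array of 2.0
# b = mx.nd.ones((2, 3), ctx=mx.gpu(0))  # uncomment to test a GPU context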
- Please use mxnet/issues for questions about using MXNet and for reporting bugs
© Contributors, 2015-2017. Licensed under the Apache-2.0 license.
Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems, 2015.
MXNet emerged from a collaboration among the authors of cxxnet, minerva, and purine2, and reflects what we learned from those projects. MXNet combines aspects of each to achieve flexibility, speed, and memory efficiency.