MXNet for Deep Learning


MXNet is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity. At its core, MXNet contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. MXNet is portable and lightweight, scaling effectively to multiple GPUs and multiple machines.

MXNet is more than a deep learning project: it is also a collection of blueprints and guidelines for building deep learning systems, and a source of interesting insights into DL systems for hackers.

Join the chat at https://gitter.im/dmlc/mxnet

Features

  • Design notes providing useful insights that can be re-used by other DL projects
  • Flexible configuration for arbitrary computation graph
  • Mix and match imperative and symbolic programming to maximize flexibility and efficiency
  • Lightweight, memory efficient and portable to smart devices
  • Scales up to multiple GPUs and distributed settings with automatic parallelism
  • Support for Python, R, C++ and Julia
  • Cloud-friendly and directly compatible with S3, HDFS, and Azure

Installation Guide for the HIP Port

Generic Installation Steps:

Install the system requirements by following ROCm's installation guide.

Installation Steps on the HCC and NVCC Platforms

Prerequisites

Install CUDA 8.0 by following NVIDIA's installation guide to set up MXNet with GPU support.

Note: Make sure to add the CUDA install path to LD_LIBRARY_PATH, for example:

$ export LD_LIBRARY_PATH=/usr/local/cuda/lib64/:$LD_LIBRARY_PATH

Building MXNet from source is a two-step process.

  1. Build the MXNet core shared library, libmxnet.so, from the C++ sources.
  2. Build the language-specific bindings, for example the Python or Scala bindings.

Minimum Requirements

  1. GCC 4.8 or later to compile C++11.
  2. GNU Make

ROCm installation

Step 1: Add the ROCm apt repository. For Debian-based systems such as Ubuntu, configure the Debian ROCm repository as follows:

$ wget -qO - https://repo.radeon.com/rocm/apt/debian/rocm.gpg.key | sudo apt-key add -
$ sudo sh -c 'echo deb [arch=amd64] https://repo.radeon.com/rocm/apt/debian/ xenial main > /etc/apt/sources.list.d/rocm.list'

Step 2: Install or update ROCm. Update the apt repository list and install or update the rocm package. Warning: before proceeding, make sure to completely uninstall any previous ROCm package:

$ sudo apt-get update
$ sudo apt-get install rocm

Step 3: Install dependent libraries

$ sudo apt-get install rocm-device-libs rocblas rocm-libs 
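
Optionally, verify that the ROCm stack can see your GPU. A minimal check, assuming ROCm is installed in its default location of /opt/rocm:

$ /opt/rocm/bin/rocminfo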

For detailed installation steps, refer to the ROCm installation guide.

Build the MXNet core shared library

Step 1: Install build tools and git.

$ sudo apt-get update
$ sudo apt-get install -y build-essential git

Step 2: Install OpenCV

MXNet uses OpenCV for efficient image loading and augmentation operations.

$ sudo apt-get install -y libopencv-dev

Step 3: To build MXNet with Thrust, clone the ROCm port of Thrust:

$ git clone --recursive https://github.com/ROCmSoftwarePlatform/Thrust

Then add the Thrust include path to the Makefile:

ifeq ($(HIP_PLATFORM), hcc)
    HIPINCLUDE += -I<Root path of Thrust>
    # Example: HIPINCLUDE += -I../Thrust
endif

Step 4: Download the MXNet sources and build the MXNet core shared library.

$ git clone --recursive https://github.com/ROCmSoftwarePlatform/mxnet
$ cd mxnet

To compile for the HCC platform:

$ export HIP_PLATFORM=hcc
$ make -jN    # N = number of CPU cores

To compile for the NVCC platform:

$ export HIP_PLATFORM=nvcc
$ make -jN    # N = number of CPU cores
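
On either platform, a successful build should produce the core shared library under lib/ in the repository root. A quick check, assuming you are still in the mxnet directory:

$ ls -lh lib/libmxnet.so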

Note:

  1. USE_OPENCV, USE_BLAS, USE_CUDA, and USE_CUDA_PATH are makefile flags that control whether OpenCV and the CUDA libraries are used. You can explore and use more compilation options in make/config.mk. Make sure to set USE_CUDA_PATH to the correct CUDA installation path; in most cases this is /usr/local/cuda (see the example command after this list).
  2. MXNet uses the rocBLAS, hcFFT, hcRNG, and LAPACK libraries for accelerated numerical computation. cuDNN is not enabled, as that functionality is being migrated to MIOpen.
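
As an illustration of these flags, an NVCC-platform build might look like the following. The flag values are only a sketch; adjust USE_CUDA_PATH and the BLAS choice to match your setup:

$ export HIP_PLATFORM=nvcc
$ make -jN USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1 USE_CUDA_PATH=/usr/local/cuda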

Install the MXNet Python binding

Step 1: Install the prerequisites: Python, setuptools, pip, and NumPy.

$ sudo apt-get install -y python-dev python-setuptools python-numpy python-pip

Step 2: Install the MXNet Python binding.

$ cd python
$ sudo python setup.py install 
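
As an optional sanity check, assuming the install completed without errors, import the package and create a small array:

$ python -c "import mxnet as mx; print(mx.nd.ones((2, 3)))"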

Ask Questions

  • Please use mxnet/issues for questions about using MXNet and for reporting bugs

License

© Contributors, 2015-2017. Licensed under the Apache-2.0 license.

Reference Paper

Tianqi Chen, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. In Neural Information Processing Systems, Workshop on Machine Learning Systems, 2015

History

MXNet emerged from a collaboration by the authors of cxxnet, minerva, and purine2. The project reflects what we have learned from the past projects. MXNet combines aspects of each of these projects to achieve flexibility, speed, and memory efficiency.
