EXLA

Google's XLA (Accelerated Linear Algebra) compiler/backend for Nx. It supports just-in-time (JIT) compilation to GPUs (both CUDA and ROCm) and TPUs.

See the documentation.
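As a quick illustration (a minimal sketch, not taken verbatim from the docs), a numerical definition written with Nx.Defn can be JIT-compiled through EXLA:

defmodule Softmax do
  import Nx.Defn

  # A numerical definition; EXLA can JIT-compile it for CPU, GPU, or TPU.
  defn softmax(t) do
    Nx.exp(t) / Nx.sum(Nx.exp(t))
  end
end

# Build a JIT-compiled version of the function using the EXLA compiler.
jitted = Nx.Defn.jit(&Softmax.softmax/1, compiler: EXLA)
jitted.(Nx.iota({4}))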

Installation

To use EXLA, you will need Elixir installed. Then create an Elixir project via the mix build tool:

$ mix new my_app

Then you can add EXLA as a dependency in your mix.exs:

def deps do
  [
    {:exla, "~> 0.5"}
  ]
end

If you are using Livebook or IEx, you can instead run:

Mix.install([
  {:exla, "~> 0.5"}
])

Once installed, you must configure Nx to use EXLA by default. Check out the "Configuration" section in the docs to learn how to do so.
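As a sketch of what that typically looks like (confirm against the "Configuration" docs), you can set EXLA as the default backend in config/config.exs:

import Config

# Make EXLA the default backend for all Nx operations.
config :nx, :default_backend, EXLA.Backend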

XLA binaries

EXLA relies on the XLA package to provide the necessary XLA binaries. Whenever possible, it downloads precompiled builds, but you may need to build from source if there is no version matching your target environment. For more details, including GPU/TPU support, see the usage section.
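For example, to request a specific precompiled target before fetching dependencies (an illustration only, using a target that appears later in this README; see the XLA repository for the full list):

XLA_TARGET=cuda102 mix deps.get
mix compile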

Common installation issues

  • Missing Dependencies
    • Some Erlang installations do not include all of the dependencies needed to compile the EXLA NIF. You may need to install the erlang-dev package separately.
  • Incompatible protocol buffer versions
    • Error message: "this file was generated by an older version of protoc which is incompatible with your Protocol Buffer headers".
    • If you have protoc installed on your machine, it may conflict with the protoc precompiled inside XLA. Uninstall, unlink, or remove protoc from your path to continue.

Usage with Nerves

For cross-compilation, you need to set the XLA_TARGET_PLATFORM variable to the correct target platform value (e.g. aarch64-linux-gnu for the Raspberry Pi 4).
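For instance, when targeting a Raspberry Pi 4 you might export the variable before building your firmware (a sketch; adapt it to your Nerves workflow):

export XLA_TARGET_PLATFORM=aarch64-linux-gnu
mix deps.get
mix firmware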

Contributing

Building locally

EXLA is a regular Elixir project, so to run it locally:

mix deps.get
mix test

By default, EXLA passes ["-jN"] as a Make argument, where N is System.schedulers_online() - 2, with a minimum of 1. The config :exla, :make_args, ... setting can be used to override this default.
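For instance, to limit the NIF build to two parallel jobs on a memory-constrained machine, you could add something like this to your config (an illustrative value; pick what suits your machine):

config :exla, :make_args, ["-j2"]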

In order to run tests on a specific device, use the EXLA_TARGET environment variable, which is a dev-only variable for this project (it has no effect when using EXLA as a dependency). For example, EXLA_TARGET=cuda or EXLA_TARGET=rocm. Make sure to also specify XLA_TARGET to fetch or compile a proper version of the XLA binary.
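For example, to run the suite on a CUDA device (using the targets mentioned elsewhere in this README):

XLA_TARGET=cuda102 EXLA_TARGET=cuda mix test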

Building with Docker

The easiest way to build is with Docker. For GPU support, you'll also need to set up the NVIDIA Container Toolkit.

To build, clone this repo, pick the Dockerfile that matches your target, and run the corresponding command:

docker build --rm -t exla:host . # Host Docker image
docker build --rm -t exla:cuda10.2 . # CUDA 10.2 Docker image
docker build --rm -t exla:rocm . # ROCm Docker image

Then, to run without CUDA:

docker run -it \
  -v $PWD:$PWD \
  -e TEST_TMPDIR=$PWD/tmp/bazel_cache \
  -e BUILD_CACHE=$PWD/tmp/xla_extension_cache \
  -w $PWD \
  --rm exla:host bash

With CUDA enabled:

Note: XLA_TARGET should match your CUDA version. See: https://github.com/elixir-nx/xla#xla_target

docker run -it \
  -v $PWD:$PWD \
  -e TEST_TMPDIR=$PWD/tmp/bazel_cache \
  -e BUILD_CACHE=$PWD/tmp/xla_extension_cache \
  -e XLA_TARGET=cuda102 \
  -e EXLA_TARGET=cuda \
  -w $PWD \
  --gpus=all \
  --rm exla:cuda10.2 bash

With ROCm enabled:

docker run -it \
  -v $PWD:$PWD \
  -e TEST_TMPDIR=$PWD/tmp/bazel_cache \
  -e BUILD_CACHE=$PWD/tmp/xla_extension_cache \
  -e XLA_TARGET=rocm \
  -e EXLA_TARGET=rocm \
  -w $PWD \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --rm exla:rocm bash

Inside the container you can interact with the API from IEx using:

iex -S mix
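Once in IEx, a quick smoke test could look like this (a minimal sketch; the exact output depends on your target device):

Nx.default_backend(EXLA.Backend)
Nx.tensor([1.0, 2.0, 3.0]) |> Nx.multiply(2)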

Or you can run an example:

mix run examples/regression.exs

To run tests:

mix test

License

Copyright (c) 2020 Sean Moriarity

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.