
# ColossalAI Implementation for BLOOM Inference

Under development. This repo will support BLOOM inference with optimizations such as tensor parallelism and int8 quantization, with the help of ColossalAI.
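For illustration, here is a minimal PyTorch sketch of the idea behind tensor parallelism: a linear layer's weight is split across shards so each device computes a slice of the output independently. This is a toy example on CPU, not this repo's or ColossalAI's actual implementation.

```python
import torch

# Toy illustration of tensor (column) parallelism for a linear layer.
# In a real setup each shard lives on a different GPU/process; here both
# shards stay on CPU for clarity.
torch.manual_seed(0)
hidden, out_features = 8, 4

full_weight = torch.randn(out_features, hidden)
# Split the output dimension across two "devices".
w0, w1 = full_weight.chunk(2, dim=0)

x = torch.randn(1, hidden)
# Each shard computes its slice of the output independently...
y0 = x @ w0.t()
y1 = x @ w1.t()
# ...and the slices are concatenated (an all-gather in the distributed case).
y = torch.cat([y0, y1], dim=-1)

assert torch.allclose(y, x @ full_weight.t())
```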

# Fast Inference Solutions for BLOOM

This repo provides demos and packages for fast BLOOM inference. Some of the solutions have their own repos, in which case a link to the corresponding repo is provided instead.

Some of the solutions provide both half-precision and int8-quantized variants.
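With the Hugging Face transformers library, the two variants typically correspond to loading the weights in fp16 versus int8 (the latter requires the bitsandbytes package). A generic sketch follows, using the small bigscience/bloom-560m checkpoint for illustration; it is not necessarily the exact entry point of any solution linked here:

```python
import torch
from transformers import AutoModelForCausalLM

model_name = "bigscience/bloom-560m"  # small checkpoint for illustration

# Half-precision variant: weights loaded as fp16.
model_fp16 = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# Int8-quantized variant: roughly halves memory again vs fp16,
# at some throughput cost; requires bitsandbytes and accelerate.
model_int8 = AutoModelForCausalLM.from_pretrained(
    model_name, load_in_8bit=True, device_map="auto"
)
```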

## Client-side solutions

Solutions developed to perform large-batch inference locally (a minimal batched-generation sketch follows this list):

PyTorch:

JAX:
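The sketch below shows what local batched generation looks like with plain transformers, again assuming the small bigscience/bloom-560m checkpoint; the linked solutions add optimizations on top of this basic pattern:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # small checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompts = ["DeepSpeed is a machine learning framework", "BLOOM is a"]

# Left-padding so generation continues from the end of each prompt.
tokenizer.padding_side = "left"
inputs = tokenizer(prompts, return_tensors="pt", padding=True)

# Generate for the whole batch in one forward pass per step.
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```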

## Server solutions

Solutions developed to be used in server mode (i.e., varied batch size, varied request rate); a minimal serving sketch follows this list:

PyTorch:

Rust:
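As a rough illustration of server mode, here is a minimal Flask endpoint (a hypothetical /generate route, not any of the servers linked above) that accepts an arbitrary number of prompts per request; the real server solutions add dynamic batching, streaming, and other optimizations:

```python
from flask import Flask, jsonify, request
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # small checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

app = Flask(__name__)

@app.route("/generate", methods=["POST"])
def generate():
    # Requests may arrive at any rate and carry any number of prompts.
    prompts = request.json["prompts"]
    tokenizer.padding_side = "left"
    inputs = tokenizer(prompts, return_tensors="pt", padding=True)
    outputs = model.generate(**inputs, max_new_tokens=32)
    completions = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    return jsonify({"completions": completions})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```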