
GPT-NeoX

An implementation of model parallel GPT-3-like models on GPUs, based on the DeepSpeed library. Designed to be able to train models in the hundreds of billions of parameters or larger.

Requirements

$ pip install -r requirements.txt

Test DeepSpeed locally

$ deepspeed train_enwik8.py \
	--deepspeed \
	--deepspeed_config ./configs/base_deepspeed.json
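The JSON file passed with --deepspeed_config controls settings such as batch size, fp16, and ZeRO. Inside the training script, the model is handed off to DeepSpeed via deepspeed.initialize. The following is only a minimal sketch of that pattern, not the actual train_enwik8.py; the stand-in model and step names are illustrative.

# Minimal sketch of how a training script hands a model to DeepSpeed.
# Illustrative only; not the actual train_enwik8.py.
import argparse
import torch
import deepspeed

parser = argparse.ArgumentParser()
parser.add_argument('--local_rank', type=int, default=-1)  # set by the deepspeed launcher
parser = deepspeed.add_config_arguments(parser)            # adds --deepspeed / --deepspeed_config
args = parser.parse_args()

model = torch.nn.Linear(512, 512)  # stand-in for the real GPTNeoX model

# DeepSpeed wraps the model and optimizer according to the JSON config
model_engine, optimizer, _, _ = deepspeed.initialize(
    args=args,
    model=model,
    model_parameters=model.parameters(),
)

# A training step then looks like:
#   loss = compute_loss(model_engine, batch)
#   model_engine.backward(loss)
#   model_engine.step()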

Sparse Attention

To use sparse attention in your GPTNeoX model, you first need to make sure DeepSpeed is installed with sparse attention enabled. The following script installs all the dependencies and reinstalls DeepSpeed with sparse attention support:

$ ./install_deepspeed.sh

Then pass sparse_attn = True when constructing the model:

model = GPTNeoX(
    num_tokens = 20000,   # vocabulary size
    dim = 512,            # embedding / model dimension
    seq_len = SEQ_LEN,    # maximum sequence length (e.g. 2048)
    depth = 12,           # number of transformer layers
    heads = 8,            # attention heads per layer
    sparse_attn = True,   # use sparse attention in every layer
)

Or, to enable sparse attention only in specific layers, pass a tuple of booleans with one entry per layer:

model = GPTNeoX(
    num_tokens = 20000,
    dim = 512,
    seq_len = SEQ_LEN,
    depth = 12,
    heads = 8,
    sparse_attn = (True, False) * 6, # alternate sparse and dense attention across the 12 layers
)
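The constructed model can then be driven like any autoregressive transformer. Below is a minimal usage sketch that assumes the forward pass takes a (batch, seq_len) tensor of token ids and returns per-token logits; the shapes and the dummy batch are assumptions for illustration, not the repo's documented API.

import torch

SEQ_LEN = 2048  # illustrative value; match the seq_len used to build the model

tokens = torch.randint(0, 20000, (1, SEQ_LEN))  # dummy batch of token ids
logits = model(tokens)                          # assumed shape: (1, SEQ_LEN, 20000)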
