Implement distributed training using Kubernetes #77

Merged
merged 17 commits into from
Jan 23, 2021
Update README.md
StellaAthena committed Jan 23, 2021
commit 221d73a3833054c42fcb4e8f99c63adb79832f01
55 changes: 22 additions & 33 deletions README.md
@@ -1,57 +1,46 @@
# GPT-NeoX
An implementation of model parallel GPT-3-like models on GPUs, based on the DeepSpeed library. Designed to be able to train models in the hundreds of billions of parameters or larger.
An implementation of model parallel GPT-3-like models on GPUs, based on the DeepSpeed library. Designed to be able to train models in the hundreds of billions of parameters or larger. This repository is under development and may change rapidly without warning.

## Requirements

```bash
$ pip install -r requirements.txt
```

Test deepspeed locally
## Running the code

The anatomy of a call to the DeepSpeed engine is as follows:
```bash
$ deepspeed train_enwik8.py \
$ deepspeed --hostfile=host_path train_script.py \
--deepspeed \
--deepspeed_config ./configs/base_deepspeed.json
```
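
The `--deepspeed_config` flag points at a DeepSpeed JSON config that controls batch size, optimizer, precision, and ZeRO settings. The snippet below is a minimal illustrative sketch, not the contents of the repository's actual `base_deepspeed.json`; the keys are standard DeepSpeed options, but the values are placeholders you would tune for your hardware.

```json
{
  "train_batch_size": 256,
  "gradient_accumulation_steps": 1,
  "optimizer": {
    "type": "Adam",
    "params": {
      "lr": 0.00015,
      "betas": [0.9, 0.95]
    }
  },
  "fp16": {
    "enabled": true
  },
  "zero_optimization": {
    "stage": 1
  }
}
```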

## Sparse Attention
### Running the code locally

To use sparse attention in your GPTNeoX model, you first need to make sure DeepSpeed is installed with sparse attention enabled. You can use the following script to install all the dependencies as well as reinstall DeepSpeed.
### Running the code on a server

```bash
$ ./install_deepspeed.sh
```
This code is set up to run automatically on as many GPUs as are available. To run across multiple machines, you need to make use of a hostfile, which lists the IP address of each machine you wish to run the code on followed by the number of GPUs to use. For example, `123.45.67.890 slots=8` instructs the code to run on all eight GPUs of the machine at `123.45.67.890`. Each machine should be listed on a separate line with no end-of-line punctuation. It is officially recommended that you set up passwordless SSH, but we have had success entering the password at run-time. To have your hostfile used by GPT-NeoX automatically, store it at `~/jobs/hostfile`. Otherwise, you can provide it as an argument as shown above.
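
For example, a hostfile describing two eight-GPU machines (with placeholder IP addresses) would look like:

```
123.45.67.890 slots=8
123.45.67.891 slots=8
```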

Then

```python
model = GPTNeoX(
num_tokens = 20000,
dim = 512,
seq_len = SEQ_LEN,
depth = 12,
heads = 8,
sparse_attn = True,
)
```
**EleutherAI members:**

Or if you want it for specific layers

```python
model = GPTNeoX(
num_tokens = 20000,
dim = 512,
seq_len = SEQ_LEN,
depth = 12,
heads = 8,
sparse_attn = (True, False) * 6, # interleaved
)
```
### ~/scripts/

The directory `~/scripts/` stores various scripts for automatically starting runs with particular settings and configs that we have found useful. They can be run using `sh scripts/script_name.sh`, but should not be relied upon. We do not guarantee forward compatibility of any scripts.
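
As an illustration, a hypothetical convenience script (not one of the actual scripts in `~/scripts/`) might simply wrap the launch command from above:

```bash
#!/bin/bash
# Hypothetical example wrapper; the real scripts in ~/scripts/ may differ.
# Launches training on every machine listed in the hostfile using the base config.
deepspeed --hostfile="$HOME/jobs/hostfile" train_script.py \
    --deepspeed \
    --deepspeed_config ./configs/base_deepspeed.json
```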

## Datasets

### Tokenizers

### Using our data

### Using your data

## Advanced Options

## Contribute

If you want to get involved, check out our repo projects. Anything that is listed as "todo" or has not been assigned to anyone is fair game, but please leave a comment so that we know you're working on it!

## Resources
If you have trouble getting the model to run, consider consulting [this guide](https://gist.github.com/kevinwatkins/232b88bfecbeca8d48d612a3e9cf65e4) to installing in a GCE virtual machine.
If you have trouble getting the model to run, consider consulting [this guide](https://gist.github.com/kevinwatkins/232b88bfecbeca8d48d612a3e9cf65e4) to installing it in a GCE virtual machine. You may also find the (very sparse) [DeepSpeed docs](https://www.deepspeed.ai) helpful.