Merge pull request #1 from EleutherAI/main
update from original
ShivanshuPurohit committed Jan 26, 2021
2 parents 72aee05 + cb37b36 commit 5d31cab
Showing 25 changed files with 812 additions and 84 deletions.
32 changes: 32 additions & 0 deletions .github/ISSUE_TEMPLATE/bug_report.md
@@ -0,0 +1,32 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''

---

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**What hardware are you running on?**
- DGX-1 Machine
- NVIDIA 970
- Microsoft Azure

**Additional context**
Add any other context about the problem here.
20 changes: 20 additions & 0 deletions .github/ISSUE_TEMPLATE/feature_request.md
@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: feature request
assignees: ''

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
25 changes: 25 additions & 0 deletions Dockerfile
@@ -0,0 +1,25 @@
FROM atlanticcrypto/cuda-ssh-server:10.2-cudnn

RUN apt-get update && \
apt-get install -y git python3.8 python3.8-dev python3-pip sudo pdsh && \
update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1 && \
update-alternatives --install /usr/bin/python python /usr/bin/python3.8 1 && \
python3 -m pip install --upgrade pip && \
pip3 install torch pipx && \
python3 -m pipx ensurepath

RUN mkdir -p ~/.ssh /app && \
echo 'Host *' > ~/.ssh/config && \
echo ' StrictHostKeyChecking no' >> ~/.ssh/config && \
echo 'AuthorizedKeysFile .ssh/authorized_keys' >> /etc/ssh/sshd_config && \
echo 'PasswordAuthentication yes' >> /etc/ssh/sshd_config

WORKDIR /app

COPY install_deepspeed.sh /app
RUN sh ./install_deepspeed.sh

COPY requirements.txt /app
RUN pip install -r requirements.txt

COPY . /app
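
A minimal usage sketch for this image (the `gpt-neox` tag, detached mode, and host port below are illustrative assumptions, not defined by the repository):

```bash
# Assumes the NVIDIA Container Toolkit is installed; the base image runs an
# SSH server on port 22, exposed here on a hypothetical host port 2222.
$ docker build -t gpt-neox .
$ docker run --gpus all -d -p 2222:22 gpt-neox
```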
59 changes: 26 additions & 33 deletions README.md
@@ -1,53 +1,46 @@
# GPT-NeoX
An implementation of model parallel GPT-3-like models on GPUs, based on the DeepSpeed library. Designed to be able to train models in the hundreds of billions of parameters or larger.
An implementation of model parallel GPT-3-like models on GPUs, based on the DeepSpeed library. Designed to be able to train models in the hundreds of billions of parameters or larger. This repository is under development and may change rapidly without warning.

## Requirements

```bash
$ pip install -r requirements.txt
```

Test deepspeed locally
## Running the code

The anatomy of a call to the DeepSpeed engine is the following:
```bash
$ deepspeed train_enwik8.py \
$ deepspeed --hostfile=host_path train_script.py \
--deepspeed \
--deepspeed_config ./configs/base_deepspeed.json
```

## Sparse Attention
### Running the code locally

To use sparse attention in your GPTNeoX model, you first need to make sure DeepSpeed is installed with sparse attention enabled. You can use the following script to install all the dependencies as well as reinstall DeepSpeed.
### Running the code on a server

```bash
$ ./install_deepspeed.sh
```
This code is set up to run automatically on as many GPUs as are available. To run across multiple machines, you need a hostfile that lists the IP address of each machine you wish to run the code on, followed by the number of GPUs to use. For example, `123.45.67.890 slots=8` instructs the code to run on all eight GPUs of the machine at `123.45.67.890`. Each machine should be listed on a separate line with no end-of-line punctuation. It is officially recommended that you set up passwordless SSH, but we have had success entering the password at run time. To have your hostfile picked up by GPT-NeoX automatically, store it at `~/jobs/hostfile`; otherwise, you can provide it as an argument as shown above.
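
For illustration, a hostfile covering two eight-GPU machines might look like the following (the IP addresses are placeholders); store it at `~/jobs/hostfile` or pass it via `--hostfile` as above:

```
123.45.67.890 slots=8
123.45.67.891 slots=8
```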

Then

```python
model = GPTNeoX(
num_tokens = 20000,
dim = 512,
seq_len = SEQ_LEN,
depth = 12,
heads = 8,
sparse_attn = True,
)
```
**EleutherAI members:**

Or if you want it for specific layers

```python
model = GPTNeoX(
num_tokens = 20000,
dim = 512,
seq_len = SEQ_LEN,
depth = 12,
heads = 8,
sparse_attn = (True, False) * 6, # interleaved
)
```
### ~/scripts/

The directory `~/scripts/` stores various scripts that automatically start runs with particular settings and configs we have found useful. They can be run using `sh scripts/script_name.sh`, but should not be relied upon: we do not guarantee forward compatibility of any scripts.

## Datasets

### Tokenizers

### Using our data

### Using your data

## Advanced Options

## Contribute

If you want to get involved, check out our repo projects. Anything that is listed as "todo" or has not been assigned to anyone is fair game, but please leave a comment so that we know you're working on it!

## Resources
If you have trouble getting the model to run, consider consulting [this guide](https://gist.github.com/kevinwatkins/232b88bfecbeca8d48d612a3e9cf65e4) to installing in a GCE virtual machine.
If you have trouble getting the model to run, consider consulting [this guide](https://gist.github.com/kevinwatkins/232b88bfecbeca8d48d612a3e9cf65e4) to installing in a GCE virtual machine. You may also find the (very sparse) [DeepSpeed docs](https://www.deepspeed.ai) helpful.
44 changes: 44 additions & 0 deletions configs/deepspeed_zero1.json
@@ -0,0 +1,44 @@
{
"train_batch_size": 1280,
"gradient_accumulation_steps": 80,
"gradient_clipping": 1.0,
"wall_clock_breakdown": true,
"tensorboard": {
"enabled": true,
"output_path": "./logs",
"job_name": "gptneox"
},
"optimizer": {
"type": "OneBitAdam",
"params": {
"lr": 2e-4,
"freeze_step":2,
"cuda-aware":true
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 0.00015,
"warmup_num_steps": 5000
}
},
"fp16": {
"enabled": true
},
"zero_optimization": {
"stage": 1,
"contiguous_gradients" : true,
"cpu_offload": false
},
"activation_checkpointing": {
"partition_activations": true,
"cpu_checkpointing": false,
"contiguous_memory_optimization": false,
"number_checkpoints": 1,
"synchronize_checkpoint_boundary": false,
"profile": false
}

}
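
A hedged launch sketch using this config, following the call anatomy shown in the README (`train_script.py` and the hostfile path are placeholders):

```bash
$ deepspeed --hostfile ~/jobs/hostfile train_script.py \
    --deepspeed \
    --deepspeed_config ./configs/deepspeed_zero1.json
```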
49 changes: 49 additions & 0 deletions configs/deepspeed_zero2.json
@@ -0,0 +1,49 @@
{
"train_batch_size": 1028,
"gradient_accumulation_steps": 1,
"gradient_clipping": 1.0,
"tensorboard": {
"enabled": true,
"output_path": "./logs",
"job_name": "gptneox"
},
"optimizer": {
"type": "OneBitAdam",
"params": {
"lr": 2e-4,
"freeze_step":2,
"cuda-aware":true
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 0.00015,
"warmup_num_steps": 5000
}
},
"fp16": {
"enabled": true
},
"wall_clock_breakdown": true,
"zero_optimization": {
"stage": 2,
"contiguous_gradients" : true,
"cpu_offload": true,
"overlap_comm": true
},
"logging": {
"steps_per_print": 100,
"wall_clock_breakdown": true
},
"activation_checkpointing": {
"comment": "to turn on activation checkpointing, set this to a positive integer. Do not touch other params.",
"partition_activations": false,
"cpu_checkpointing": false,
"contiguous_memory_optimization": false,
"number_checkpoints": null,
"synchronize_checkpoint_boundary": false,
"profile": false
}
}
15 changes: 8 additions & 7 deletions configs/gpt3_small.json
@@ -5,22 +5,23 @@
"add_padding_token": false
},
"dataset": {
"name": "owt2",
"train_path": "./data/owt2/train/*",
"eval_path": "./data/owt2/eval/*",
"name": "enron_tfr",
"train_path": "./data/enron_tfr/tokenized/*.tfrecords",
"eval_path": "./data/enron_tfr/tokenized/*.tfrecords",
"seed": 1,
"shuffle_input_filenames": true,
"pretokenized": true,
"filetype": "tfrecords",
"mode": "chunks"
"filetype": "tfrecords"
},
"num_epochs": 10,
"train_steps": 572300,
"eval_batch_size": 32,
"learning_rate": 0.0006,
"generate_every": 500,
"generate_length": 256,
"seq_len": 1024,
"hidden_dim": 768,
"n_layers": 12,
"n_heads": 12,
"dim_head": 64
"dim_head": 64,
"train_batch_size": 256
}
24 changes: 24 additions & 0 deletions deploy_k8s.sh
@@ -0,0 +1,24 @@
kubectl delete deploy/eleuther-neox
kubectl apply -f kubernetes/deploy_k8s.yml
ssh-keygen -t rsa -f id_rsa -N ""
echo Waiting for deploy to complete...
kubectl wait --for=condition=available --timeout=600s deployment/eleuther-neox || exit

kubectl get pods -o wide | grep eleuther-neox | awk '{print $6 " slots=8"}' > hosts
export MASTER_ID=$(kubectl get pods | grep eleuther-neox | awk '{print $1}' | head -n 1)
echo $MASTER_ID
kubectl cp $PWD/hosts $MASTER_ID:/app
kubectl cp $PWD/id_rsa $MASTER_ID:/root/.ssh

mv id_rsa.pub authorized_keys

for id in $(kubectl get pods | grep eleuther-neox | awk '{print $1}')
do
echo copying keys to $id
kubectl cp $PWD/authorized_keys $id:/root/.ssh/
echo 'chmod 600 ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chown -R root /root/.ssh' | kubectl exec --stdin $id -- /bin/bash
done
rm authorized_keys hosts
rm id_rsa*

kubectl exec --stdin --tty $MASTER_ID -- /bin/bash
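
A hedged usage sketch, assuming `kubectl` is already configured for the target cluster and the script is run from the repository root:

```bash
# Redeploys eleuther-neox, distributes SSH keys to every pod, and drops you
# into a shell on the master pod (all steps are performed by the script above).
$ sh deploy_k8s.sh
```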