
Implement distributed training using Kubernetes #77

Merged · 17 commits · Jan 23, 2021
25 changes: 25 additions & 0 deletions Dockerfile
@@ -0,0 +1,25 @@
FROM atlanticcrypto/cuda-ssh-server:10.2-cudnn

RUN apt-get update && \
    apt-get install -y git python3.8 python3.8-dev python3-pip sudo pdsh && \
    update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.8 1 && \
    update-alternatives --install /usr/bin/python python /usr/bin/python3.8 1 && \
    python3 -m pip install --upgrade pip && \
    pip3 install torch pipx && \
    python3 -m pipx ensurepath

RUN mkdir -p ~/.ssh /app && \
    echo 'Host *' > ~/.ssh/config && \
    echo '    StrictHostKeyChecking no' >> ~/.ssh/config && \
    echo 'AuthorizedKeysFile .ssh/authorized_keys' >> /etc/ssh/sshd_config && \
    echo 'PasswordAuthentication yes' >> /etc/ssh/sshd_config

WORKDIR /app

COPY install_deepspeed.sh /app
RUN sh ./install_deepspeed.sh

COPY requirements.txt /app
RUN pip install -r requirements.txt

COPY . /app
55 changes: 22 additions & 33 deletions README.md
@@ -1,57 +1,46 @@
# GPT-NeoX
An implementation of model parallel GPT-3-like models on GPUs, based on the DeepSpeed library. Designed to be able to train models in the hundreds of billions of parameters or larger. This repository is under development and may change rapidly without warning.

## Requirements

```bash
$ pip install -r requirements.txt
```

## Running the code

The anatomy of a call to the DeepSpeed engine is as follows:
```bash
$ deepspeed --hostfile=host_path train_script.py \
--deepspeed \
--deepspeed_config ./configs/base_deepspeed.json
```
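For a sense of what the `--deepspeed_config` file contains, the shape of a minimal config can be sketched in Python. The keys below are standard DeepSpeed options, but the values are illustrative placeholders; the repo's actual `configs/base_deepspeed.json` may differ:

```python
import json

# Illustrative minimal DeepSpeed config. The keys are standard DeepSpeed
# options, but the values are placeholders and the repo's actual
# configs/base_deepspeed.json may differ.
minimal_config = {
    "train_batch_size": 256,
    "gradient_accumulation_steps": 1,
    "gradient_clipping": 1.0,
    "fp16": {"enabled": True},
}

# Serialize to a JSON file of the kind passed via --deepspeed_config.
with open("base_deepspeed_sketch.json", "w") as f:
    json.dump(minimal_config, f, indent=2)

print(json.dumps(minimal_config, indent=2))
```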

## Sparse Attention
### Running the code locally

To use sparse attention in your GPTNeoX model, you first need to make sure DeepSpeed is installed with sparse attention enabled. You can use the following script to install all the dependencies as well as reinstall DeepSpeed.
### Running the code on a server

```bash
$ ./install_deepspeed.sh
```
This code is set up to run automatically on as many GPUs as are available. To run across multiple machines, you need to make use of a hostfile, which lists the IP address of each machine you wish to run the code on followed by the number of GPUs to use. For example, `123.45.67.890 slots=8` instructs the code to run on all eight GPUs of the machine at `123.45.67.890`. Each machine should be listed on a separate line with no end-of-line punctuation. It is officially recommended that you set up passwordless SSH, but we have had success entering the password at run time. To have your hostfile used by GPT-NeoX automatically, store it at `~/jobs/hostfile`. Otherwise, you can provide it as an argument as shown above.
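For concreteness, a hostfile for two such machines could be written as follows (the IP addresses are the placeholders from the example above, not real hosts):

```shell
# Sketch: write a DeepSpeed hostfile listing two machines with 8 GPUs each.
# The IP addresses are placeholders, not real hosts.
cat > hostfile <<'EOF'
123.45.67.890 slots=8
123.45.67.891 slots=8
EOF
# Each line is "<ip> slots=<num_gpus>", one machine per line, with no
# end-of-line punctuation. Store the file at ~/jobs/hostfile to have it
# picked up automatically, or pass it via --hostfile as shown above.
cat hostfile
```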

Then

```python
model = GPTNeoX(
    num_tokens = 20000,
    dim = 512,
    seq_len = SEQ_LEN,
    depth = 12,
    heads = 8,
    sparse_attn = True,
)
```
**EleutherAI members:**

Or if you want it for specific layers

```python
model = GPTNeoX(
    num_tokens = 20000,
    dim = 512,
    seq_len = SEQ_LEN,
    depth = 12,
    heads = 8,
    sparse_attn = (True, False) * 6, # interleaved
)
```
### ~/scripts/

The directory `~/scripts/` stores various scripts for automatically starting runs with particular settings and configs that we have found useful. They can be run using `sh scripts/script_name.sh` but should not be relied upon. We do not guarantee forward compatibility of any scripts.

## Datasets

### Tokenizers

### Using our data

### Using your data

## Advanced Options

## Contribute

If you want to get involved, check out our repo projects. Anything that is listed as "todo" or has not been assigned to anyone is fair game, but please leave a comment so that we know you're working on it!

## Resources
If you have trouble getting the model to run, consider consulting [this guide](https://gist.github.com/kevinwatkins/232b88bfecbeca8d48d612a3e9cf65e4) to installing in a GCE virtual machine. You may also find the (very sparse) [DeepSpeed docs](https://www.deepspeed.ai) helpful.
6 changes: 5 additions & 1 deletion configs/deepspeed_zero2.json
@@ -1,5 +1,5 @@
{
  "train_batch_size": 1028,
  "gradient_accumulation_steps": 1,
  "gradient_clipping": 1.0,
  "tensorboard": {
@@ -31,6 +31,10 @@
    "contiguous_gradients": false,
    "cpu_offload": false
  },
  "logging": {
    "steps_per_print": 100,
    "wall_clock_breakdown": true
  },
  "activation_checkpointing": {
    "comment": "to turn on activation checkpointing, set this to a positive integer. Do not touch other params.",
    "partition_activations": false,
24 changes: 24 additions & 0 deletions deploy_k8s.sh
@@ -0,0 +1,24 @@
#!/bin/bash
# Tear down any previous deployment, then create a fresh one.
kubectl delete deploy/eleuther-neox
kubectl apply -f kubernetes/deploy_k8s.yml
# Generate a passwordless SSH keypair for inter-pod communication.
ssh-keygen -t rsa -f id_rsa -N ""
echo Waiting for deploy to complete...
kubectl wait --for=condition=available --timeout=600s deployment/eleuther-neox || exit

# Build a DeepSpeed hostfile from the pod IPs, with 8 GPU slots per pod.
kubectl get pods -o wide | grep eleuther-neox | awk '{print $6 " slots=8"}' > hosts
# Use the first pod as the master node.
export MASTER_ID=$(kubectl get pods | grep eleuther-neox | awk '{print $1}' | head -n 1)
echo $MASTER_ID
kubectl cp $PWD/hosts $MASTER_ID:/app
kubectl cp $PWD/id_rsa $MASTER_ID:/root/.ssh

mv id_rsa.pub authorized_keys

# Distribute the public key to every pod so the master can SSH into each.
for id in $(kubectl get pods | grep eleuther-neox | awk '{print $1}')
do
    echo copying keys to $id
    kubectl cp $PWD/authorized_keys $id:/root/.ssh/
    echo 'chmod 600 ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chown -R root /root/.ssh' | kubectl exec --stdin $id -- /bin/bash
done
# Clean up local copies of the key material.
rm authorized_keys hosts
rm id_rsa*

# Open an interactive shell on the master pod.
kubectl exec --stdin --tty $MASTER_ID -- /bin/bash
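Once the final `kubectl exec` drops you into the master pod, training could then be launched against the hostfile the script copied to `/app`. This sketch only assembles the command rather than running it, since it requires the cluster to exist; `train_script.py` is the placeholder name from the README, not a file guaranteed to be in the repo:

```shell
# Sketch: the launch command one might run inside the master pod, using the
# hostfile copied to /app/hosts by deploy_k8s.sh. train_script.py is a
# placeholder name taken from the README.
launch_cmd='deepspeed --hostfile=/app/hosts train_script.py --deepspeed --deepspeed_config ./configs/base_deepspeed.json'
echo "$launch_cmd"
```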
1 change: 1 addition & 0 deletions gpt_neox/gpt_neox.py
@@ -1,4 +1,5 @@
import torch
import torch.utils.checkpoint
import torch.nn.functional as F
from torch import nn, einsum
from functools import partial
Expand Down
57 changes: 57 additions & 0 deletions kubernetes/deploy_k8s.yml
@@ -0,0 +1,57 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: eleuther-neox
spec:
  strategy:
    type: Recreate
  replicas: 4
  selector:
    matchLabels:
      app.kubernetes.io/name: eleuther-neox
  template:
    metadata:
      labels:
        app.kubernetes.io/name: eleuther-neox
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: neox
        command: ["/usr/sbin/sshd"]
        args: ["-D"]
        tty: true
        image: leogao2/gpt-neox
        ports:
        - name: sshd
          containerPort: 2222
**@amannm** commented on Jan 23, 2021:
shouldn't this be 22 as suggested by the EXPOSE at https://github.com/coreweave/cuda-ssh-server/blob/master/Dockerfile#L37 ?

A project member replied:
That's what I had thought as well, but Leo and I tested it and it seems to work with 2222. Not sure why / what difference it makes though.

          protocol: TCP
        volumeMounts:
        - mountPath: /dev/shm
          name: dshm
        resources:
          requests:
            cpu: 30
            memory: 40Gi
          limits:
            nvidia.com/gpu: 8
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              # Edit for a different GPU model
              - key: gpu.nvidia.com/model
                operator: In
                values:
                - GeForce_RTX_2080_Ti
              - key: failure-domain.beta.kubernetes.io/region
                operator: In
                values:
                - ORD1
      volumes:
      - name: dshm
        emptyDir:
          medium: Memory
      restartPolicy: Always