Error upon finishing a run #216

Closed
sdtblck opened this issue Apr 7, 2021 · 0 comments · Fixed by #248
Labels
bug Something isn't working

Comments

sdtblck commented Apr 7, 2021

Not critical, as it doesn't really affect anything, but whenever a run finishes I get this error:

wandb: ERROR Problem finishing run
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/wandb/sdk/wandb_run.py", line 1454, in _atexit_cleanup
    self._on_finish()
  File "/usr/local/lib/python3.8/dist-packages/wandb/sdk/wandb_run.py", line 1606, in _on_finish
    print("")
  File "/home/mchorse/sampling/gpt-neox/megatron/logging.py", line 41, in write
    self.std.write(data)
OSError: [Errno 9] Bad file descriptor
wandb: ERROR Problem finishing run
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/wandb/sdk/wandb_run.py", line 1454, in _atexit_cleanup
    self._on_finish()
  File "/usr/local/lib/python3.8/dist-packages/wandb/sdk/wandb_run.py", line 1606, in _on_finish
    print("")
  File "/home/mchorse/sampling/gpt-neox/megatron/logging.py", line 41, in write
    self.std.write(data)
OSError: [Errno 9] Bad file descriptor
Killing subprocess 630472
Killing subprocess 630473
Killing subprocess 630474
Killing subprocess 630476
Traceback (most recent call last):
  File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/mchorse/gpt-neox/src/deepspeed/deepspeed/launcher/launch.py", line 179, in <module>
    main()
  File "/home/mchorse/gpt-neox/src/deepspeed/deepspeed/launcher/launch.py", line 169, in main
    sigkill_handler(signal.SIGTERM, None)  # not coming back
  File "/home/mchorse/gpt-neox/src/deepspeed/deepspeed/launcher/launch.py", line 147, in sigkill_handler
    raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python', '-u', 'pretrain_gpt2.py', '--local_rank=3', '--num-layers', '12', '--hidden-size', '768', '--num-attention-heads', '12', '--max-position-embeddings', '2048', '--attention-dropout', '0', '--hidden-dropout', '0', '--weight-decay', '0', '--batch-size', '4', '--checkpoint-activations', '--checkpoint-num-layers', '1', '--train-iters', '50', '--log-interval', '100', '--tensorboard-dir', '/mnt/ssd-cluster/tensorboard', '--no-weight-tying', '--pos-emb', 'none', '--norm', 'rmsnorm', '--lr-decay-style', 'cosine', '--lr-decay-iters', '320000', '--warmup', '0.01', '--save', '/mnt/ssd-cluster/checkpoints', '--save-interval', '10000', '--keep-last-n-checkpoints', '4', '--load', '/mnt/ssd-cluster/checkpoints', '--model-parallel-size', '1', '--pipe-parallel-size', '1', '--distributed-backend', 'nccl', '--eval-iters', '10', '--eval-interval', '1000', '--data-path', '/mnt/ssd-cluster/data/enron/enron_text_document', '--split', '949,50,1', '--vocab-file', '/mnt/ssd-cluster/data/gpt2-vocab.json', '--merge-file', '/mnt/ssd-cluster/data/gpt2-merges.txt', '--seq-length', '2048', '--data-impl', 'mmap', '--log-dir', '/mnt/ssd-cluster/logs', '--partition-activations', '--synchronize-each-layer', '--wandb_group', 'FLCvtT5P3t5CLXdLeivvbF', '--wandb_team', 'eleutherai', '--git_hash', '901d79e', '--deepspeed', '--fp16', '--gas', '1', '--zero-stage', '0', '--zero-reduce-scatter', '--zero-contiguous-gradients', '--zero-reduce-bucket-size', '500000000', '--zero-allgather-bucket-size', '500000000', '--clip-grad', '1.0', '--lr', '0.0006', '--adam-beta1', '0.9', '--adam-beta2', '0.95', '--adam-eps', '1e-08', '--momentum', '0.0', '--deepspeed_config', '{"train_batch_size":16.0,"train_micro_batch_size_per_gpu":4,"gradient_accumulation_steps":1,"optimizer":{"type":"Adam","params":{"lr":0.0006,"max_grad_norm":1.0,"betas":[0.9,0.95]}},"fp16":{"fp16":true,"enabled":true,"loss_scale":0,"loss_scale_window":1000,"hysteresis":2,"min_loss_scale":1},"gradient_clipping":1.0,"zero_optimization":{"stage":0,"allgather_partitions":true,"allgather_bucket_size":500000000,"overlap_comm":true,"reduce_scatter":true,"reduce_bucket_size":500000000,"contiguous_gradients":true,"cpu_offload":false},"steps_per_print":10,"wall_clock_breakdown":true,"deepspeed":true}']' returned non-zero exit status 255.
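For context, the traceback points at a Tee-style writer in megatron/logging.py whose write() forwards to a stdout handle (self.std) that is already closed by the time wandb's atexit handler prints during shutdown, hence the Errno 9. A minimal defensive guard could look like the sketch below; this is only an illustration based on what the traceback shows (a wrapper with file and std attributes), not necessarily what #248 actually changed:

import sys

class Tee:
    # Mirrors writes to a log file and the original stream. The attribute
    # names (file, std) are assumptions taken from the traceback, not from
    # the real megatron/logging.py or the fix in #248.
    def __init__(self, path, err=False):
        self.file = open(path, "w")
        self.std = sys.stderr if err else sys.stdout

    def write(self, data):
        try:
            self.file.write(data)
        except (OSError, ValueError):
            pass  # log file may already be closed during interpreter shutdown
        try:
            self.std.write(data)
        except OSError:
            pass  # underlying fd can be gone when atexit handlers print (Errno 9)

    def flush(self):
        for stream in (self.file, self.std):
            try:
                stream.flush()
            except (OSError, ValueError):
                pass

Swallowing OSError in write() and flush() keeps late atexit prints (like wandb's) from raising once the redirected streams have been torn down.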
sdtblck added the bug label Apr 7, 2021