"RuntimeError: CUDA out of memory" on lm-eval 0.3.0 through GPT-NeoX evaluate past a certain number of nodes #884

AIproj commented Sep 23, 2023

Hello,
I'm using the evaluate.py script from GPT-NeoX, which loads the installed lm-eval 0.3.0 library. I'm running with Megatron and DeeperSpeed on Summit, which has six 16 GB V100s per node.
Whether the evals crash depends on the number of nodes they run on and on the choice of task. For example, hellaswag runs fine with a 410M model evaluated on 2 nodes but crashes on e.g. 8 nodes, and the task list arc_easy truthfulqa_mc hellaswag hendrycksTest-* crashes with the following trace:

Running loglikelihood requests
  0%|          | 48/111262 [00:09<6:11:25,  4.99it/s]Traceback (most recent call last):
  File "/gpfs/alpine/csc499/scratch/adami/gpt-neox/evaluate.py", line 75, in <module>
    main()
  File "/gpfs/alpine/csc499/scratch/adami/gpt-neox/evaluate.py", line 36, in main
    results = run_eval_harness(
  File "/gpfs/alpine/csc499/scratch/adami/gpt-neox/eval_tasks/eval_adapter.py", line 461, in run_eval_harness
    return adapter.run_eval(
  File "/gpfs/alpine/csc499/scratch/adami/miniconda3/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/gpfs/alpine/csc499/scratch/adami/gpt-neox/eval_tasks/eval_adapter.py", line 421, in run_eval
    results = evaluator.evaluate(
  File "/gpfs/alpine/csc499/scratch/adami/miniconda3/lib/python3.9/site-packages/lm_eval/utils.py", line 161, in _wrapper
    return fn(*args, **kwargs)
  File "/gpfs/alpine/csc499/scratch/adami/miniconda3/lib/python3.9/site-packages/lm_eval/evaluator.py", line 247, in evaluate
    resps = getattr(lm, reqtype)([req.args for req in reqs])
  File "/gpfs/alpine/csc499/scratch/adami/miniconda3/lib/python3.9/site-packages/lm_eval/base.py", line 185, in loglikelihood
    return self._loglikelihood_tokens(new_reqs)
  File "/gpfs/alpine/csc499/scratch/adami/gpt-neox/eval_tasks/eval_adapter.py", line 232, in _loglikelihood_tokens
    logits = self._model_call(torch.cat(inps, dim=0))
  File "/gpfs/alpine/csc499/scratch/adami/gpt-neox/eval_tasks/eval_adapter.py", line 351, in _model_call
    logits = self._dp_gather(logits)
  File "/gpfs/alpine/csc499/scratch/adami/gpt-neox/eval_tasks/eval_adapter.py", line 331, in _dp_gather
    torch.distributed.all_gather(
  File "/gpfs/alpine/csc499/scratch/adami/miniconda3/lib/python3.9/site-packages/torch/distributed/distributed_c10d.py", line 2070, in all_gather
    work = group.allgather([tensor_list], [tensor])
RuntimeError: CUDA out of memory. Tried to allocate 3.19 GiB (GPU 0; 15.78 GiB total capacity; 9.12 GiB already allocated; 471.88 MiB free; 10.60 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

(A second rank prints an identical traceback interleaved with the above; deduplicated here for readability.)

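For context, the failing call is the data-parallel gather of logits (_dp_gather at line 331 in the trace). Below is a minimal sketch of what that all_gather amounts to memory-wise; the function and argument names are made up for illustration and this is not the actual GPT-NeoX code:

import torch
import torch.distributed as dist

def dp_gather_sketch(logits: torch.Tensor, dp_group=None) -> torch.Tensor:
    # all_gather needs one output buffer per data-parallel rank, so every rank
    # allocates roughly logits.numel() * dp_world_size elements here, i.e. the
    # allocation grows linearly with the number of nodes when mp = pp = 1.
    dp_world_size = dist.get_world_size(group=dp_group)
    tensor_list = [torch.zeros_like(logits) for _ in range(dp_world_size)]
    dist.all_gather(tensor_list, logits, group=dp_group)  # the trace OOMs inside this call
    return torch.cat(tensor_list, dim=0)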
This model uses pp=mp=1; the same thing happens with other parallelism settings, for example with a 7B model at higher mp and pp. With sciq, the crash happens on 4 nodes but not on 2. Pasting the args below:

INFO:root:NeoXArgs.calculate_derived() Total number of GPUs determined to be: 24
-------------------- arguments --------------------
  attention_config ................ ['global', 'global', 'global', 'global', 'global', 'global', 'global', 'global', 'global', 'global', 'global', 'global', 'global', 'global', 'global', 'global', 'global', 'global', 'global', 'global', 'global', 'global', 'global', 'global']updated
  attention_dropout ............... 0.0.........................updated
  batch_size ...................... 4...........................updated
  bias_gelu_fusion ................ True........................updated
  checkpoint_activations .......... True........................updated
  checkpoint_factor ............... 1000........................updated
  clip_grad ....................... 1.0.........................updated
  config_files .................... {'410M.yml': '# GPT-2 pretraining setup\n{\n  # parallelism settings ( you will want to change these based on your cluster setup, ideally scheduling pipeline stages\n  # across the node boundaries )\n  "pipe-parallel-size": 1,\n  "model-parallel-size": 1, # one copy of the model per node\n\n# model settings\n  "num-layers": 24,\n  "hidden-size": 1024,\n  "num-attention-heads": 16,\n  "seq-length": 2048,\n  "max-position-embeddings": 2048,\n  "pos-emb": "rotary",\n  "rotary-pct": 0.25,\n  "no-weight-tying": true,\n  "gpt-j-residual": true,\n  "output-layer-parallelism": "column",\n\n  # these should provide some speedup but takes a while to build, set to true if desired\n  "scaled-upper-triang-masked-softmax-fusion": true,\n  "bias-gelu-fusion": true,\n\n  # init methods\n  "init_method": "small_init",\n  "output_layer_init_method": "wang_init",\n\n  "optimizer": {\n    "type": "Adam",\n    "params": {\n      "lr": 0.0003,\n      "betas": [0.9, 0.95],\n      "eps": 1.0e-8,\n    }\n  },\n  "min_lr": 0.00003,\n\n  "zero_optimization": {\n    "stage": 1,\n    "allgather_partitions": True,\n    "allgather_bucket_size": 500000000,\n    "overlap_comm": True,\n    "reduce_scatter": True,\n    "reduce_bucket_size": 500000000,\n    "contiguous_gradients": True,\n    "cpu_offload": False\n  },\n\n  # LLAMA Config\n  # batch / data settings\n  "train_batch_size": 96, #1104 # approximately 2.2M batch size across 46 nodes \n  "train_micro_batch_size_per_gpu": 4,\n  "data-impl": "mmap",\n  "split": "949,50,1",\n\n  # activation checkpointing\n  "checkpoint-activations": true,\n  "checkpoint-num-layers": 1,\n  "partition-activations": true,\n  "synchronize-each-layer": true,\n\n  # regularization\n  "gradient_clipping": 1.0,\n  "weight-decay": 0.1,\n  "hidden-dropout": 0.0,\n  "attention-dropout": 0.0,\n\n  # precision settings of LLaMa\n  "fp16": {\n    "enabled": true,\n  #  "type": "bfloat16", # set bf16 as precision\n    "loss_scale": 0,\n    "loss_scale_window": 1000,\n    "hysteresis": 2,\n    "min_loss_scale": 1\n  },\n\n#  "fp32_allreduce": True, # without a patch to torch, bf16 models have to do the allreduce in fp32\n  # misc. 
training settings\n  "train-iters": 2212,\n  "lr-decay-iters": 2212,\n  "distributed-backend": "nccl",\n  "lr-decay-style": "cosine",\n  "warmup": 0.01,\n  "checkpoint-factor": 1000,\n  "eval-interval": 100,\n  "eval-iters": 10,\n\n  # logging\n  "log-interval": 1,\n  "steps_per_print": 1,\n  "keep-last-n-checkpoints": 1000,\n  "wall_clock_breakdown": true,\n}\n', 'local_setup_llama.yml': '# Suggested data paths when using GPT-NeoX locally\n{\n  # "data-path": "data/enwik8/enwik8_text_document",\n\n  # or for weighted datasets:\n  "train-data-paths": [ANONYMISED],\n  "train-data-weights": [\n    2.5,\n    4.5,\n    15.0,\n    4.5,\n    4.5,\n    13.4,\n    13.4,\n    13.4,\n    13.4,\n    13.4\n  ],\n  "test-data-weights": [\n    1.\n  ],\n  "valid-data-weights": [\n    1.\n  ],\n\n  # If weight_by_num_documents is True, Builds dataset weights from a multinomial distribution over groups of data according to the number of documents in each group.\n  # WARNING: setting this to True will override any user provided weights\n  # "weight_by_num_documents": false,\n  # "weighted_sampler_alpha": 0.3,\n\n  "tokenizer-type": "HFTokenizer",\n  "vocab-file": "20B_tokenizer.json",\n  # "merge-file": "data/gpt2-merges.txt",\n\n  "save": "checkpoints",\n  "load": "/gpfs/alpine/csc499/scratch/adami/neox_converted/mp1_pp1/pythia", #fixed_checkpoints/JOB-2963060_pythia-c-410M-iters-181793_warmup-0.01_max-lr-0.00015_min-lr-1.5e-05_pretrain"\n  "checkpoint_validation_with_forward_pass": False,\n\n  "tensorboard-dir": "tensorboard",\n  "log-dir": "ANONYMISED",\n  "use_wandb": False,\n  # "wandb_host": "https://api.wandb.ai",\n  "wandb_project": "red_pajama",\n\n  "launcher": "jsrun",\n  "deepspeed_jsrun": true,\n  "task_check_before_jsrun": true,\n\n\n  "finetune": true,\n\n  "num_workers": 0,\n}'}updated
  data_impl ....................... mmap........................updated
  deepspeed_jsrun ................. True........................updated
  dynamic_loss_scale .............. True........................updated
  eval_interval ................... 100.........................updated
  eval_iters ...................... 10..........................updated
  eval_tasks ...................... ['sciq']....................updated
  finetune ........................ True........................updated
  fp16 ............................ {'enabled': True, 'loss_scale': 0, 'loss_scale_window': 1000, 'hysteresis': 2, 'min_loss_scale': 1}updated
  gas ............................. 1...........................updated
  global_num_gpus ................. 24..........................updated
  gpt_j_residual .................. True........................updated
  gradient_clipping ............... 1.0.........................updated
  hidden_dropout .................. 0.0.........................updated
  hidden_size ..................... 1024........................updated
  init_method ..................... small_init..................updated
  is_pipe_parallel ................ True........................updated
  keep_last_n_checkpoints ......... 1000........................updated
  launcher ........................ jsrun.......................updated
  load ............................ ANONYMISED
  log_dir ......................... ANONYMISED
  log_interval .................... 1...........................updated
  lr .............................. 0.0003......................updated
  lr_decay_iters .................. 2212........................updated
  lr_decay_style .................. cosine......................updated
  max_position_embeddings ......... 2048........................updated
  min_lr .......................... 3e-05.......................updated
  no_weight_tying ................. True........................updated
  num_attention_heads ............. 16..........................updated
  num_layers ...................... 24..........................updated
  num_workers ..................... 0...........................updated
  optimizer ....................... {'type': 'Adam', 'params': {'lr': 0.0003, 'betas': [0.9, 0.95], 'eps': 1e-08}}updated
  optimizer_type .................. Adam........................updated
  output_layer_init_method ........ wang_init...................updated
  output_layer_parallelism ........ column......................updated
  partition_activations ........... True........................updated
  pipe_parallel_size .............. 1...........................updated
  pos_emb ......................... rotary......................updated
  precision ....................... fp16........................updated
  rotary_pct ...................... 0.25........................updated
  save ............................ checkpoints.................updated
  save_iters ...................... [1000, 2000]................updated
  scaled_upper_triang_masked_softmax_fusion  True...............updated
  seq_length ...................... 2048........................updated
  sparsity_config ................. {}..........................updated
  split ........................... 949,50,1....................updated
  steps_per_print ................. 1...........................updated
  synchronize_each_layer .......... True........................updated
  task_check_before_jsrun ......... True........................updated
  tensorboard_dir ................. tensorboard.................updated
  test_data_paths ................. ['/gpfs/alpine/csc499/proj-shared/incite_datasets/red_pajama_data/the_pile/test_tokenized_text_document']updated
  test_data_weights ............... [1.0].......................updated
  text_gen_type ................... unconditional...............updated
  tokenizer_type .................. HFTokenizer.................updated
  train_batch_size ................ 96..........................updated
  train_data_paths ................ ANONYMISED
  train_data_weights .............. [2.5, 4.5, 15.0, 4.5, 4.5, 13.4, 13.4, 13.4, 13.4, 13.4]updated
  train_iters ..................... 2212........................updated
  train_micro_batch_size_per_gpu .. 4...........................updated
  use_wandb ....................... False.......................updated
  user_script ..................... ANONYMISED
  valid_data_paths ................ ANONYMISED
  valid_data_weights .............. [1.0].......................updated
  vocab_file ...................... 20B_tokenizer.jsonupdated
  wall_clock_breakdown ............ True........................updated
  wandb_project ................... red_pajama..................updated
  weight_decay .................... 0.1.........................updated
  zero_allgather_bucket_size ...... 500000000...................updated
  zero_contiguous_gradients ....... True........................updated
  zero_optimization ............... {'stage': 1, 'allgather_partitions': True, 'allgather_bucket_size': 500000000, 'overlap_comm': True, 'reduce_scatter': True, 'reduce_bucket_size': 500000000, 'contiguous_gradients': True, 'cpu_offload': False}updated
  zero_reduce_bucket_size ......... 500000000...................updated
  zero_reduce_scatter ............. True........................updated
  zero_stage ...................... 1...........................updated
  activation ...................... gelu........................default
  adlr_autoresume ................. False.......................default
  adlr_autoresume_interval ........ 1000........................default
  amp ............................. None........................default
  apply_query_key_layer_scaling ... False.......................default
  attention_softmax_in_fp32 ....... False.......................default
  autotuning ...................... None........................default
  autotuning_run .................. None........................default
  base_shapes_file ................ None........................default
  bias_dropout_fusion ............. False.......................default
  char_level_ppl .................. False.......................default
  checkpoint_in_cpu ............... False.......................default
  checkpoint_num_layers ........... 1...........................default
  checkpoint_scale ................ linear......................default
  checkpoint_validation_with_forward_pass  False................default
  comment ......................... None........................default
  contiguous_checkpointing ........ False.......................default
  coord_check ..................... False.......................default
  curriculum_learning ............. None........................default
  curriculum_seqlen ............... 0...........................default
  data_path ....................... None........................default
  deepscale ....................... False.......................default
  deepscale_config ................ None........................default
  deepspeed ....................... True........................default
  deepspeed_activation_checkpointing  True......................default
  deepspeed_mpi ................... False.......................default
  deepspeed_slurm ................. False.......................default
  detect_nvlink_pairs ............. False.......................default
  distributed_backend ............. nccl........................default
  do_test ......................... None........................default
  do_train ........................ None........................default
  do_valid ........................ None........................default
  dump_state ...................... False.......................default
  eod_mask_loss ................... False.......................default
  eval_results_prefix ............. ............................default
  exclude ......................... None........................default
  exit_interval ................... None........................default
  extra_save_iters ................ None........................default
  flops_profiler .................. None........................default
  fp16_lm_cross_entropy ........... False.......................default
  fp32_allreduce .................. False.......................default
  git_hash ........................ cb38760.....................default
  gmlp_attn_dim ................... 64..........................default
  gpt_j_tied ...................... False.......................default
  gradient_accumulation_steps ..... 1...........................default
  gradient_noise_scale_cpu_offload  False.......................default
  gradient_noise_scale_n_batches .. 5...........................default
  gradient_predivide_factor ....... 1.0.........................default
  hostfile ........................ None........................default
  hysteresis ...................... 2...........................default
  include ......................... None........................default
  init_method_std ................. 0.02........................default
  iteration ....................... None........................default
  layernorm_epsilon ............... 1e-05.......................default
  lazy_mpu_init ................... False.......................default
  local_rank ...................... None........................default
  log_grad_norm ................... False.......................default
  log_grad_pct_zeros .............. False.......................default
  log_gradient_noise_scale ........ False.......................default
  log_optimizer_states ............ False.......................default
  log_param_norm .................. False.......................default
  loss_scale ...................... None........................default
  loss_scale_window ............... 1000.0......................default
  make_vocab_size_divisible_by .... 128.........................default
  master_addr ..................... None........................default
  master_port ..................... 29500.......................default
  maximum_tokens .................. 64..........................default
  merge_file ...................... None........................default
  min_scale ....................... 1.0.........................default
  mmap_warmup ..................... False.......................default
  model_parallel_size ............. 1...........................default
  mup_attn_temp ................... 1.0.........................default
  mup_embedding_mult .............. 1.0.........................default
  mup_init_scale .................. 1.0.........................default
  mup_output_temp ................. 1.0.........................default
  mup_rp_embedding_mult ........... 1.0.........................default
  mup_width_scale ................. 2...........................default
  no_load_optim ................... False.......................default
  no_load_rng ..................... False.......................default
  no_save_optim ................... False.......................default
  no_save_rng ..................... False.......................default
  no_ssh_check .................... False.......................default
  norm ............................ layernorm...................default
  num_gpus ........................ None........................default
  num_nodes ....................... -1..........................default
  num_samples ..................... 1...........................default
  num_unique_layers ............... None........................default
  onnx_safe ....................... False.......................default
  opt_pos_emb_offset .............. 0...........................default
  override_lr_scheduler ........... False.......................default
  padded_vocab_size ............... None........................default
  param_sharing_style ............. grouped.....................default
  pipe_partition_method ........... type:transformer|mlp........default
  prescale_gradients .............. False.......................default
  profile_backward ................ False.......................default
  prompt_end ......................
...........................default
  rank ............................ None........................default
  recompute ....................... False.......................default
  return_logits ................... False.......................default
  rms_norm_epsilon ................ 1e-08.......................default
  rotary_emb_base ................. 10000.......................default
  rpe_max_distance ................ 128.........................default
  rpe_num_buckets ................. 32..........................default
  sample_input_file ............... None........................default
  sample_output_file .............. samples.txt.................default
  save_base_shapes ................ False.......................default
  scaled_masked_softmax_fusion .... False.......................default
  scalenorm_epsilon ............... 1e-08.......................default
  scheduler ....................... None........................default
  seed ............................ 1234........................default
  short_seq_prob .................. 0.1.........................default
  soft_prompt_tuning .............. None........................default
  sparse_gradients ................ False.......................default
  temperature ..................... 0.0.........................default
  top_k ........................... 0...........................default
  top_p ........................... 0.0.........................default
  use_bnb_optimizer ............... False.......................default
  use_checkpoint_lr_scheduler ..... False.......................default
  use_cpu_initialization .......... False.......................default
  use_mup ......................... False.......................default
  use_shared_fs ................... True........................default
  wandb_group ..................... None........................default
  wandb_host ...................... https://api.wandb.ai........default
  wandb_init_all_ranks ............ False.......................default
  wandb_team ...................... None........................default
  warmup .......................... 0.01........................default
  weight_by_num_documents ......... False.......................default
  weighted_sampler_alpha .......... 0.3.........................default
  world_size ...................... None........................default
  zero_allow_untested_optimizer ... False.......................default
---------------- end of arguments ----------------

You can replicate this with the Pythia 410M model (at least on Summit). Issue #849 looks similar, even though I believe they don't use GPT-NeoX.
I've been talking to @Quentin-Anthony and @curt-tigges about this issue, so I'm pinging them here in case they have additional information. The OOM is not caused by loading the optimiser: the amount of excess memory scales with the number of nodes used, and every node reports an OOM. My guess is that during the accumulation of log likelihoods each rank receives a fixed amount of data per model replica in the job (i.e. with pp=1 it gets 2x on 2 nodes, 4x on 4 nodes, etc.), which is why the run works up to a certain number of nodes, with the threshold varying by eval task.
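A back-of-the-envelope illustration of that scaling (the shapes below are assumptions, not measurements from the run; the real padded request length varies per batch):

# Each data-parallel rank contributes a [micro_batch, padded_seq, vocab] fp16
# logits tensor to the gather, so the gathered buffer grows linearly with the
# number of ranks, i.e. with the number of Summit nodes at mp = pp = 1.
micro_batch, padded_seq, vocab = 4, 1024, 50304                # assumed shapes
per_rank_gib = micro_batch * padded_seq * vocab * 2 / 2**30    # 2 bytes per fp16 element
for nodes in (2, 4, 8):
    dp_ranks = nodes * 6                                       # 6 V100s per Summit node
    print(f"{nodes} nodes -> dp={dp_ranks}: ~{per_rank_gib * dp_ranks:.1f} GiB gathered")

Whatever the exact shapes, the gathered-logits buffer alone eventually exceeds a 16 GB V100 as the data-parallel world size grows, which would match the node-count dependence described above.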

AIproj added the bug, duplicate, and help wanted labels on Sep 23, 2023
dashstander self-assigned this on Oct 17, 2023