[Usage] ImportError: cannot import name 'ShardedDDPOption' from 'transformers.trainer' #1042
Comments
Same issue, looking for a fix. How does everyone else manage to run this?
Sorry for the confusion. It should now be fixed in the main branch. Please let me know if it works for you, thanks.
@haotian-liu Thank you! The new commit solved the above issue, but warning messages are constantly being printed out during finetuning.

First warning (printed once):
Repeated warnings (continuously printed):
My installed packages are
This appears to be a previous issue you addressed before in #661 (comment). I tried
Would you mind sharing your command? I cannot reproduce the issue on my side. Thanks.
@haotian-liu Of course, here is my bash script:

```bash
#!/bin/bash
deepspeed llava/train/train_mem.py \
--lora_enable True --lora_r 128 --lora_alpha 256 --mm_projector_lr 2e-5 \
--bits 4 \
--deepspeed ./scripts/zero2.json \
--model_name_or_path liuhaotian/llava-v1.5-13b \
--version v1 \
--data_path ./playground/data/my_instruct_82k.json \
--image_folder ./playground/data/my_images \
--vision_tower openai/clip-vit-large-patch14-336 \
--mm_projector_type mlp2x_gelu \
--mm_vision_select_layer -2 \
--mm_use_im_start_end False \
--mm_use_im_patch_token False \
--image_aspect_ratio pad \
--group_by_modality_length True \
--bf16 True \
--output_dir ./checkpoints/llava-v1.5-13b-task-lora \
--num_train_epochs 1 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--gradient_accumulation_steps 1 \
--evaluation_strategy "no" \
--save_strategy "steps" \
--save_steps 50000 \
--save_total_limit 1 \
--learning_rate 2e-4 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--lr_scheduler_type "cosine" \
--logging_steps 1 \
--tf32 True \
--model_max_length 2048 \
--gradient_checkpointing True \
--dataloader_num_workers 4 \
--lazy_preprocess True \
    --report_to wandb
```

Please let me know if there's more info I can provide.
@haotian-liu Sorry, false alarm: I had some bad training examples in my dataset. The tokenization mismatch warnings disappeared once those were removed. Thank you very much for your help!
Nice, closing this issue :)
How do I fix it? Please tell me 0.0
Just disable the ShardedDDPOption import, which is not used.
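For context, a minimal sketch of that workaround, assuming the offending import sits in a trainer module such as llava/train/llava_trainer.py (the exact file and surrounding import list are assumptions): drop ShardedDDPOption from the import statement, since nothing else in the file references it.

```python
# Hypothetical sketch of the workaround described above; the file path and the
# surrounding import list are assumptions, not the exact upstream code.

# Before (fails on newer transformers, which removed ShardedDDPOption):
# from transformers.trainer import has_length, ShardedDDPOption

# After (ShardedDDPOption dropped because it is never used):
from transformers.trainer import has_length
```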
Describe the issue
transformers no longer has ShardedDDPOption after v4.35.0.
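If editing the source is not an option, one hedged alternative (not the fix that landed in the main branch) is to guard the import so the name simply resolves to None on newer transformers versions, assuming the code only imports ShardedDDPOption and never actually uses it at runtime:

```python
# Compatibility sketch under the assumption that ShardedDDPOption is imported
# but never used.
try:
    # Present in transformers up to v4.35.0
    from transformers.trainer import ShardedDDPOption
except ImportError:
    # Removed in later releases; safe to stub out when unused
    ShardedDDPOption = None
```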