Exception after training on Docker image e88040337fa3 #8764
👋 Hello @sstainba, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you. If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email [email protected].

Requirements
Python>=3.7.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:

```bash
git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install
```

Environments
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.
@sstainba this is a PyTorch issue that should be resolved in later versions of torch. You can safely ignore it.
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs. Access additional YOLOv5 🚀 resources:
Access additional Ultralytics ⚡ resources:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed! Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
@glenn-jocher I am curious what the PyTorch issue is. After running into this myself and digging into it a bit more, I found documentation from NVIDIA related to WSL2 limitations: here. Basically, in this scenario, uses of pin_memory=True are very limited. This seems more like an NVIDIA/WSL2 issue than a PyTorch issue. For YOLOv5, I was able to train successfully in Docker + WSL2 + CUDA only after changing pin_memory to False in utils/dataloaders.py. I wonder if you might consider exposing pin_memory as an argument to train, val, detect, etc. to allow for greater flexibility.
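For readers hitting the same problem, here is a minimal, self-contained sketch of the workaround described above. A toy TensorDataset stands in for YOLOv5's actual dataset class (the real loader is built in utils/dataloaders.py); only the pin_memory flag matters here:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset; in YOLOv5 the dataset class is LoadImagesAndLabels.
dataset = TensorDataset(torch.randn(16, 3, 416, 416), torch.zeros(16))

loader = DataLoader(
    dataset,
    batch_size=8,
    num_workers=2,
    pin_memory=False,  # False avoids the limited pinned-memory path under WSL2
)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
for imgs, labels in loader:
    # non_blocking transfers only overlap with compute when memory is pinned,
    # so with pin_memory=False this is a plain synchronous copy.
    imgs = imgs.to(device, non_blocking=True)
    break
```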
@CanisLupus518 got it, thanks for the info! I'll take a look at simply setting this to False. Last time I profiled it, pinning did provide benefits on later GPUs like T4 and V100. What CUDA device are you using?
@CanisLupus518 could we conditionally pin memory depending on the CUDA version or device properties?
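One way such a conditional default could look is sketched below. This is an illustrative heuristic only, with a hypothetical default_pin_memory helper; the fix that was ultimately merged used an environment variable instead (see below):

```python
import platform

import torch


def default_pin_memory() -> bool:
    """Illustrative heuristic: pin memory only when CUDA is available
    and we are not running under WSL2, where pinned-memory support is
    limited. WSL2 kernels report 'microsoft' in the release string."""
    if not torch.cuda.is_available():
        return False
    if 'microsoft' in platform.uname().release.lower():
        return False  # WSL2 detected: avoid pinned allocations
    return True
```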
@CanisLupus518 I profiled 3 epochs of VOC and got 0.121 hours with pin_memory=True and 0.126 hours with pin_memory=False. Epoch times were about 1:51 with True and 1:56 with False, so I think we want to keep it enabled by default, but I'll add an environment variable to allow overriding it.
@CanisLupus518 good news 😃! Your original issue may now be fixed ✅ in PR #9250. This PR allows the user to disable pin_memory during training/val using a new PIN_MEMORY environment variable:

```bash
PIN_MEMORY=False python train.py
```

To receive this update, pull the latest YOLOv5 code.
Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀!
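For context, the PIN_MEMORY toggle described above follows a common pattern of reading a boolean from the environment at import time. This is a sketch of that pattern, not necessarily the exact code merged in PR #9250:

```python
import os

# Defaults to enabled; any value other than 'true' (case-insensitive)
# in the PIN_MEMORY environment variable disables pinned memory.
PIN_MEMORY = str(os.getenv('PIN_MEMORY', True)).lower() == 'true'
```

Running `PIN_MEMORY=False python train.py` then builds the dataloaders with pin_memory=False, which is the workaround needed under WSL2.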
@glenn-jocher wow. Thank you so much. To answer your question, I'm using an RTX 3070 Ti.
@CanisLupus518 you're welcome! 😀 Excellent choice in GPUs, the RTX 3070 Ti is a powerhouse. Let us know if you encounter any other issues or have further suggestions. Happy training with YOLOv5 and the RTX 3070 Ti 🚀!
Search before asking
YOLOv5 Component
Training
Bug
After training two different custom models, both runs end with several exceptions thrown:
This exception is repeated about 20 times.
Environment
Docker image e88040337fa3
Windows 11 Host using WSL2
Nvidia 3060 12GB GPU
Minimal Reproducible Example
Doesn't appear to be specific to my work. This happened with two different data sets.
Dataset 1: 300 images, 1280 x 1024, 1 tag/class
Command:

```bash
train.py --rect --imgsz 1280 --img 1024 --epochs 500 --name test1 --cache --batch 4 --data /usr/src/datasets/data.yaml --weights yolov5m6.pt
```

Dataset 2: 200 images, 416 x 416, 1 tag/class
Command:

```bash
train.py --img 416 --epochs 500 --name splash --cache --batch 8 --data /usr/src/datasets/splash/data.yaml --weights yolov5s.pt
```
Additional
No response
Are you willing to submit a PR?