PyTorch 1.6.0 update with native AMP #573

Merged
merged 7 commits into ultralytics:master from update-to-torch1.6 on Jul 31, 2020
Conversation

@Lornatang (Contributor) commented on Jul 31, 2020

PyTorch 1.6 now has native Automatic Mixed Precision (AMP) training.

Fixes #555 and #557.

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Refinement of mixed precision training in YOLOv5 with native PyTorch support.

📊 Key Changes

  • 🧹 Removed the old APEX-based mixed precision code.
  • ➕ Added native PyTorch mixed precision via torch.cuda.amp (see the sketch after this list).
  • 🛠 Fixed memory reporting to use torch.cuda.memory_reserved instead of the deprecated torch.cuda.memory_cached.
  • 🧱 Replaced the manual mixed precision handling with amp.GradScaler for gradient scaling.
  • 🎚 Removed the CPU-specific conditional from the DDP (Distributed Data Parallel) setup.
  • 🧹 Cleaned up code for readability and simplicity.
  • 🔄 Updated several if conditions to use the new cuda boolean device flag.
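
For reference, below is a minimal sketch of the native AMP pattern this PR switches to (autocast for the forward pass, GradScaler for backpropagation). The model, optimizer, loss, and dataloader are simple placeholders, not YOLOv5's actual training objects:

```python
import torch
from torch.cuda import amp

device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
cuda = device.type != 'cpu'  # boolean device flag, mirroring this PR's usage

# Placeholder model, optimizer, and data; stand-ins for the real YOLOv5 objects
model = torch.nn.Linear(10, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
dataloader = [(torch.randn(4, 10), torch.randn(4, 1)) for _ in range(2)]

scaler = amp.GradScaler(enabled=cuda)  # no-op scaler when running on CPU

for imgs, targets in dataloader:
    imgs, targets = imgs.to(device), targets.to(device)
    optimizer.zero_grad()

    # Forward pass runs in mixed precision only when CUDA is available
    with amp.autocast(enabled=cuda):
        pred = model(imgs)
        loss = torch.nn.functional.mse_loss(pred, targets)  # stand-in loss

    # Scale the loss, backpropagate, then unscale and step via the scaler
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

With enabled=cuda, both autocast and the scaler become no-ops on CPU, so a single code path covers mixed precision and full precision training.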

🎯 Purpose & Impact

  • 💪 Increased Efficiency: Using PyTorch's native mixed precision utilities should improve training efficiency and speed.
  • 🧠 Improved Clarity and Maintenance: Removing the external NVIDIA APEX dependency in favor of built-in methods makes the code cleaner and easier to understand and maintain.
  • 🐞 Bug Fixes: Updates the memory reporting call, which should give more accurate GPU memory usage information (see the snippet after this list).
  • ▶️ User Experience: Users on PyTorch 1.6.0 get a more streamlined setup with no separate APEX installation required.
  • 📈 Consistency and Stability: The refactor should make training behavior more consistent and stable across different hardware setups.
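
To illustrate the memory-reporting fix: torch.cuda.memory_cached() is deprecated as of PyTorch 1.6 in favor of torch.cuda.memory_reserved(), which reports the same quantity without the deprecation warning. A rough sketch of how such a reported string could be built (the format string is illustrative, not necessarily the repository's exact code):

```python
import torch

# memory_cached() is deprecated in PyTorch 1.6; memory_reserved() is the replacement
if torch.cuda.is_available():
    mem = '%.3gG' % (torch.cuda.memory_reserved() / 1E9)  # reserved GPU memory, GB
else:
    mem = '0G'
print('GPU memory reserved: %s' % mem)
```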

@glenn-jocher changed the title from Update to torch1.6 to PyTorch 1.6.0 update with native AMP on Jul 31, 2020
@glenn-jocher merged commit c020875 into ultralytics:master on Jul 31, 2020
@Lornatang deleted the update-to-torch1.6 branch on August 1, 2020 at 02:47
KMint1819 pushed a commit to KMint1819/yolov5 that referenced this pull request May 12, 2021
* PyTorch have Automatic Mixed Precision (AMP) Training.

* Fixed the problem of inconsistent code length indentation

* Fixed the problem of inconsistent code length indentation

* Mixed precision training is turned on by default
BjarneKuehl pushed a commit to fhkiel-mlaip/yolov5 that referenced this pull request Aug 26, 2022
Successfully merging this pull request may close these issues: PyTorch 1.6 function name modification.