[docs][train] Make Train example titles and headings more consistent #39606

Merged
merged 18 commits into from
Sep 14, 2023
Apply suggestions from code review
feedback from code review

Co-authored-by: matthewdeng <[email protected]>
Signed-off-by: angelinalg <[email protected]>
angelinalg and matthewdeng committed Sep 13, 2023
commit 82db390501b22e6d173f9c72b3550b23e380a47f
2 changes: 1 addition & 1 deletion doc/source/train/examples.rst
@@ -28,7 +28,7 @@ Beginner
    * - DeepSpeed
      - :ref:`Train with DeepSpeed ZeRO-3 <deepspeed_example>`
    * - TensorFlow
-     - :ref:`Train with TensorFlow MNIST <tensorflow_mnist_example>`
+     - :ref:`Train an MNIST Image Classifier with TensorFlow <tensorflow_mnist_example>`
    * - Horovod
      - :ref:`Train with Horovod and PyTorch <horovod_example>`

@@ -6,7 +6,7 @@ Distributed Training with Hugging Face Accelerate
 =================================================
 
 This example does distributed data parallel training
-with Hugging Face (HF) Accelerate, Ray Train, and Ray Data.
+with Hugging Face Accelerate, Ray Train, and Ray Data.
 It fine-tunes a BERT model and is adapted from
 https://github.com/huggingface/accelerate/blob/main/examples/nlp_example.py
4 changes: 2 additions & 2 deletions doc/source/train/examples/deepspeed/deepspeed_example.rst
@@ -6,7 +6,7 @@ Train with DeepSpeed ZeRO-3 and Ray Train
 =========================================
 
 This is an intermediate example that shows how to do distributed training with DeepSpeed ZeRO-3 and Ray Train.
-It demonstrates how to use :ref:`Ray Dataset <data>` with DeepSpeed ZeRO-3 and Ray Train.
+It demonstrates how to use :ref:`Ray Data <data>` with DeepSpeed ZeRO-3 and Ray Train.
 If you just want to quickly convert your existing TorchTrainer scripts into Ray Train, you can refer to the :ref:`Train with DeepSpeed <train-deepspeed>`.

@@ -21,4 +21,4 @@ See also
 
 * :ref:`Ray Train Examples <train-examples>` for more use cases.
 
-* :ref:`Get Started with DeepSpeed <train-horovod>` for a tutorial.
+* :ref:`Get Started with DeepSpeed <train-deepspeed>` for a tutorial.
@@ -11,7 +11,7 @@
 "\n",
 ":::{note}\n",
 "\n",
-"This is an intermediate example demonstrates how to use [Ray Dataset](data) with PyTorch Lightning in Ray Train.\n",
+"This is an intermediate example demonstrates how to use [Ray Data](data) with PyTorch Lightning in Ray Train.\n",
 "\n",
 "If you just want to quickly convert your existing PyTorch Lightning scripts into Ray Train, you can refer to the [Lightning Quick Start Guide](train-pytorch-lightning).\n",
 "\n",
@@ -6,8 +6,7 @@ Fine-tune of Stable Diffusion with DreamBooth and Ray Train
 ===========================================================
 
 This is an intermediate example that shows how to do DreamBooth fine-tuning of a Stable Diffusion model using Ray Train.
-It demonstrates how to use :ref:`Ray Dataset <data>` with PyTorch Lightning in Ray Train.
-If you just want to quickly convert your existing Transformer scripts into Ray Train, you can refer to the :ref:`Getting Started with Transformers <train-pytorch-transformers>`.
+It demonstrates how to use :ref:`Ray Data <data>` with PyTorch Lightning in Ray Train.
 
 
 See the original `DreamBooth project homepage <https://dreambooth.github.io/>`_ for more details on what this fine-tuning method achieves.
2 changes: 1 addition & 1 deletion doc/source/train/huggingface-accelerate.rst
@@ -50,7 +50,7 @@ You only need to run your existing training code with a TorchTrainer. You can ex
 Model and data preparation for distributed training is completely handled by the `Accelerator <https://huggingface.co/docs/accelerate/main/en/package_reference/accelerator#accelerate.Accelerator>`_
 object and its `Accelerator.prepare() <https://huggingface.co/docs/accelerate/main/en/package_reference/accelerator#accelerate.Accelerator.prepare>`_ method.
 
-Unlike with native PyTorch, PyTorch Lightning, or HuggingFace Transformers, **don't** call any additional Ray Train utilities
+Unlike with native PyTorch, PyTorch Lightning, or Hugging Face Transformers, **don't** call any additional Ray Train utilities
 like :meth:`~ray.train.torch.prepare_model` or :meth:`~ray.train.torch.prepare_data_loader` in your training function.
 
 Configure Accelerate