[docs] Cleanup for other AIR concepts #39400

Merged · 8 commits · Sep 8, 2023
Changes from 1 commit
Merge branch 'master' into more-air-cleanups
richardliaw committed Sep 8, 2023
commit c9250569dbeb5c134856e394eefd92e23355b133
@@ -51,7 +51,7 @@
"source": [
"## Prepare Dataset and Module\n",
"\n",
"The Pytorch Lightning Trainer takes either `torch.utils.data.DataLoader` or `pl.LightningDataModule` as data inputs. You can keep using them without any changes for the Ray Train LightningTrainer. "
"The Pytorch Lightning Trainer takes either `torch.utils.data.DataLoader` or `pl.LightningDataModule` as data inputs. You can keep using them without any changes with Ray Train. "
]
},
{
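For orientation, here is a hedged sketch (not part of the notebook) of what the reworded sentence above describes: the same `DataLoader` you would hand to a vanilla Lightning `Trainer` is reused, unchanged, inside a Ray Train training function. It assumes Ray >= 2.7 with the `ray.train.lightning` utilities plus `pytorch_lightning` and `torchvision`; `TinyClassifier` and `train_func` are placeholder names, not the tutorial's own.

```python
# Hedged sketch: reusing a plain DataLoader inside a Ray Train training
# function without modification. TinyClassifier is a throwaway placeholder,
# not the tutorial's MNISTClassifier.
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
from torchvision.transforms import ToTensor

from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer
from ray.train.lightning import (
    RayDDPStrategy,
    RayLightningEnvironment,
    RayTrainReportCallback,
    prepare_trainer,
)


class TinyClassifier(pl.LightningModule):
    def __init__(self, lr: float):
        super().__init__()
        self.lr = lr
        self.net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self.net(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)


def train_func(config):
    # The exact DataLoader you would hand to a vanilla Lightning Trainer;
    # no changes are needed to use it with Ray Train.
    train_loader = DataLoader(
        MNIST("./data", train=True, download=True, transform=ToTensor()),
        batch_size=config["batch_size"],
    )
    model = TinyClassifier(lr=config["lr"])
    trainer = pl.Trainer(
        max_epochs=1,
        devices="auto",
        accelerator="auto",
        strategy=RayDDPStrategy(),
        plugins=[RayLightningEnvironment()],
        callbacks=[RayTrainReportCallback()],
        enable_progress_bar=False,
    )
    trainer = prepare_trainer(trainer)
    trainer.fit(model, train_dataloaders=train_loader)


ray_trainer = TorchTrainer(
    train_func,
    train_loop_config={"batch_size": 64, "lr": 1e-3},
    scaling_config=ScalingConfig(num_workers=2, use_gpu=False),
)
result = ray_trainer.fit()
```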
37 changes: 2 additions & 35 deletions doc/source/tune/examples/tune-pytorch-lightning.ipynb
@@ -18,16 +18,7 @@
"\n",
"The main abstraction of PyTorch Lightning is the `LightningModule` class, which should be extended by your application. There is [a great post on how to transfer your models from vanilla PyTorch to Lightning](https://towardsdatascience.com/from-pytorch-to-pytorch-lightning-a-gentle-introduction-b371b7caaf09).\n",
"\n",
"The class structure of PyTorch Lightning makes it very easy to define and tune model\n",
"parameters. This tutorial will show you how to use Tune with {class}`LightningTrainer <ray.train.lightning.LightningTrainer>` to find the best set of\n",
"parameters for your application on the example of training a MNIST classifier. Notably,\n",
"the `LightningModule` does not have to be altered at all for this - so you can\n",
"use it plug and play for your existing models, assuming their parameters are configurable!\n",
"\n",
":::{note}\n",
"If you don't want to use {class}`LightningTrainer <ray.train.lightning.LightningTrainer>` and prefer using vanilla lightning trainer with function trainable, please refer to this document: {ref}`Using vanilla Pytorch Lightning with Tune <tune-vanilla-pytorch-lightning-ref>`.\n",
"\n",
":::\n",
"The class structure of PyTorch Lightning makes it very easy to define and tune model parameters. This tutorial will show you how to use Tune with Ray Train's {class}`TorchTrainer <ray.train.torch.TorchTrainer>` to find the best set of parameters for your application on the example of training a MNIST classifier. Notably, the `LightningModule` does not have to be altered at all for this - so you can use it plug and play for your existing models, assuming their parameters are configurable!\n",
"\n",
":::{note}\n",
"To run this example, you will need to install the following:\n",
@@ -268,18 +259,7 @@
"source": [
"### Configuring the search space\n",
"\n",
"Now we configure the parameter search space using {class}`LightningConfigBuilder <ray.train.lightning.LightningConfigBuilder>`. We would like to choose between three different layer and batch sizes. The learning rate should be sampled uniformly between `0.0001` and `0.1`. The `tune.loguniform()` function is syntactic sugar to make sampling between these different orders of magnitude easier, specifically we are able to also sample small values.\n",
"\n",
":::{note}\n",
"In `LightningTrainer`, the frequency of metric reporting is the same as the frequency of checkpointing. For example, if you set `builder.checkpointing(..., every_n_epochs=2)`, then for every 2 epochs, all the latest metrics will be reported to the Ray Tune session along with the latest checkpoint. Please make sure the target metrics(e.g. metrics specified in `TuneConfig`, schedulers, and searchers) are logged before saving a checkpoint.\n",
"\n",
":::\n",
"\n",
"\n",
":::{note}\n",
"Use `LightningConfigBuilder.checkpointing()` to specify the monitor metric and checkpoint frequency for the Lightning ModelCheckpoint callback. To properly save checkpoints, you must also provide a Train {class}`CheckpointConfig <ray.train.CheckpointConfig>`. Otherwise, LightningTrainer will create a default CheckpointConfig, which saves all the reported checkpoints by default.\n",
"\n",
":::"
"Now we configure the parameter search space. We would like to choose between different layer dimensions, learning rate, and batch sizes. The learning rate should be sampled uniformly between `0.0001` and `0.1`. The `tune.loguniform()` function is syntactic sugar to make sampling between these different orders of magnitude easier, specifically we are able to also sample small values. Similarly for `tune.choice()`, which samples from all the provided options."
]
},
{
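A small sketch of the search space that the rewritten paragraph describes. The layer sizes mirror the values removed from the `LightningConfigBuilder` block later in this diff; the batch-size options are illustrative.

```python
from ray import tune

# Layer sizes and batch size are categorical choices; the learning rate is
# sampled log-uniformly so small values are drawn as readily as large ones.
search_space = {
    "layer_1_size": tune.choice([32, 64, 128]),
    "layer_2_size": tune.choice([64, 128, 256]),
    "lr": tune.loguniform(1e-4, 1e-1),
    "batch_size": tune.choice([32, 64, 128]),  # illustrative options
}
```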
@@ -377,19 +357,6 @@
" num_workers=3, use_gpu=True, resources_per_worker={\"CPU\": 1, \"GPU\": 1}\n",
")\n",
"\n",
"# Searchable configs across different trials\n",
"searchable_lightning_config = (\n",
" LightningConfigBuilder()\n",
" .module(config={\n",
" \"layer_1_size\": tune.choice([32, 64, 128]),\n",
" \"layer_2_size\": tune.choice([64, 128, 256]),\n",
" \"lr\": tune.loguniform(1e-4, 1e-1),\n",
" })\n",
" .build()\n",
")\n",
"\n",
"# Make sure to also define a CheckpointConfig here\n",
"# to properly save checkpoints in a Ray Train Checkpoint format.\n",
"run_config = RunConfig(\n",
" checkpoint_config=CheckpointConfig(\n",
" num_to_keep=2,\n",
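To show how the pieces in this hunk fit together after the cleanup, here is a hedged sketch: the `ScalingConfig` from the context lines above, a `RunConfig` completing the truncated `CheckpointConfig` (the score attribute and order are assumptions), and both attached to a `TorchTrainer` such as the one sketched earlier.

```python
from ray.train import CheckpointConfig, RunConfig, ScalingConfig
from ray.train.torch import TorchTrainer

# From the context lines of this hunk: 3 workers, 1 CPU + 1 GPU each.
scaling_config = ScalingConfig(
    num_workers=3, use_gpu=True, resources_per_worker={"CPU": 1, "GPU": 1}
)

# The RunConfig in the diff is truncated after num_to_keep=2; the score
# attribute and order below are assumptions made for a runnable sketch.
run_config = RunConfig(
    checkpoint_config=CheckpointConfig(
        num_to_keep=2,
        checkpoint_score_attribute="train_loss",
        checkpoint_score_order="min",
    ),
)

# Both configs attach to the TorchTrainer that replaces the old LightningTrainer.
ray_trainer = TorchTrainer(
    train_func,  # the training function sketched earlier
    scaling_config=scaling_config,
    run_config=run_config,
)
```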
You are viewing a condensed version of this merge commit.