diff --git a/doc/source/ray-overview/getting-started.md b/doc/source/ray-overview/getting-started.md
index c017bece99c2e..6e8d456c26f49 100644
--- a/doc/source/ray-overview/getting-started.md
+++ b/doc/source/ray-overview/getting-started.md
@@ -530,12 +530,24 @@ Learn more about Ray Core
 ## Ray Cluster Quickstart
 
-Deploy your applications on Ray clusters, often with minimal code changes to your existing code.
+Deploy your applications on Ray clusters on AWS, GCP, Azure, and more, often with minimal changes to your existing code.
+
 `````{dropdown} ray Clusters: Launching a Ray Cluster on AWS
 :animate: fade-in-slide-down
 
 Ray programs can run on a single machine, or seamlessly scale to large clusters.
+
+:::{note}
+To run this example, install the following:
+
+```bash
+pip install -U "ray[default]" boto3
+```
+
+If you haven't already, configure your credentials as described in the [documentation for boto3](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#guide-credentials).
+:::
+
 Take this simple example that waits for individual nodes to join the cluster.
 
 ````{dropdown} example.py
@@ -545,16 +557,16 @@ Take this simple example that waits for individual nodes to join the cluster.
 :language: python
 ```
 ````
-You can also download this example from our [GitHub repository](https://github.com/ray-project/ray/blob/master/doc/yarn/example.py).
-Go ahead and store it locally in a file called `example.py`.
+You can also download this example from the [GitHub repository](https://github.com/ray-project/ray/blob/master/doc/yarn/example.py).
+Store it locally in a file called `example.py`.
 
-To execute this script in the cloud, just download [this configuration file](https://github.com/ray-project/ray/blob/master/python/ray/autoscaler/aws/example-full.yaml),
+To execute this script in the cloud, download [this configuration file](https://github.com/ray-project/ray/blob/master/python/ray/autoscaler/aws/example-minimal.yaml),
 or copy it here:
 
 ````{dropdown} cluster.yaml
 :animate: fade-in-slide-down
 
-```{literalinclude} ../../../python/ray/autoscaler/aws/example-full.yaml
+```{literalinclude} ../../../python/ray/autoscaler/aws/example-minimal.yaml
 :language: yaml
 ```
 ````
@@ -570,7 +582,7 @@ ray submit cluster.yaml example.py --start
 :outline:
 :expand:
 
-Learn more about launching Ray Clusters
+Learn more about launching Ray Clusters on AWS, GCP, Azure, and more
 ```
 
 `````
diff --git a/python/ray/autoscaler/aws/example-minimal.yaml b/python/ray/autoscaler/aws/example-minimal.yaml
index 09a2727d1311c..c230d37673af4 100644
--- a/python/ray/autoscaler/aws/example-minimal.yaml
+++ b/python/ray/autoscaler/aws/example-minimal.yaml
@@ -5,3 +5,44 @@ cluster_name: aws-example-minimal
 provider:
     type: aws
     region: us-west-2
+
+# The maximum number of worker nodes to launch in addition to the head
+# node.
+max_workers: 3
+
+# Tell the autoscaler the allowed node types and the resources they provide.
+# The key is the name of the node type, which is for debugging purposes.
+# The node config specifies the launch config and physical instance type.
+available_node_types:
+    ray.head.default:
+        # The node type's CPU and GPU resources are auto-detected based on AWS instance type.
+        # If desired, you can override the autodetected CPU and GPU resources advertised to the autoscaler.
+        # You can also set custom resources.
+        # For example, to mark a node type as having 1 CPU, 1 GPU, and 5 units of a resource called "custom", set
+        # resources: {"CPU": 1, "GPU": 1, "custom": 5}
+        resources: {}
+        # Provider-specific config for this node type, e.g., instance type. By default
+        # Ray auto-configures unspecified fields such as SubnetId and KeyName.
+        # For more documentation on available fields, see
+        # http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.ServiceResource.create_instances
+        node_config:
+            InstanceType: m5.large
+    ray.worker.default:
+        # The minimum number of worker nodes of this type to launch.
+        # This number should be >= 0.
+        min_workers: 3
+        # The maximum number of worker nodes of this type to launch.
+        # This parameter takes precedence over min_workers.
+        max_workers: 3
+        # The node type's CPU and GPU resources are auto-detected based on AWS instance type.
+        # If desired, you can override the autodetected CPU and GPU resources advertised to the autoscaler.
+        # You can also set custom resources.
+        # For example, to mark a node type as having 1 CPU, 1 GPU, and 5 units of a resource called "custom", set
+        # resources: {"CPU": 1, "GPU": 1, "custom": 5}
+        resources: {}
+        # Provider-specific config for this node type, e.g., instance type. By default
+        # Ray auto-configures unspecified fields such as SubnetId and KeyName.
+        # For more documentation on available fields, see
+        # http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.ServiceResource.create_instances
+        node_config:
+            InstanceType: m5.large
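+
+# Usage sketch: assuming this file is saved locally as cluster.yaml (the name used
+# in the quickstart above; any name works), the cluster it defines can be managed
+# with the Ray cluster launcher CLI:
+#   ray up cluster.yaml      # start the head node; the autoscaler then launches the workers
+#   ray attach cluster.yaml  # open an interactive shell on the head node
+#   ray down cluster.yaml    # terminate all nodes of this cluster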