[docs] sphinx design migration 2/N (ray-project#34707)
maxpumperla authored Apr 24, 2023
1 parent 770cb74 commit 47014ce
Showing 13 changed files with 1,065 additions and 932 deletions.
@@ -82,30 +82,32 @@
Ensure that the Ray Client port on the head node is reachable from your local machine.
This means opening that port up by configuring security groups or other access controls (on `EC2 <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/authorizing-access-to-an-instance.html>`_)
or proxying from your local machine to the cluster (on `K8s <https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod>`_).

-.. tabbed:: AWS
-
-    With the Ray cluster launcher, you can configure the security group
-    to allow inbound access by defining :ref:`cluster-configuration-security-group`
-    in your `cluster.yaml`.
-
-    .. code-block:: yaml
-
-        # A unique identifier for the head node and workers of this cluster.
-        cluster_name: minimal_security_group
-
-        # Cloud-provider specific configuration.
-        provider:
-            type: aws
-            region: us-west-2
-            security_group:
-                GroupName: ray_client_security_group
-                IpPermissions:
-                    - FromPort: 10001
-                      ToPort: 10001
-                      IpProtocol: TCP
-                      IpRanges:
-                          # This will enable inbound access from ALL IPv4 addresses.
-                          - CidrIp: 0.0.0.0/0
+.. tab-set::
+
+    .. tab-item:: AWS
+
+        With the Ray cluster launcher, you can configure the security group
+        to allow inbound access by defining :ref:`cluster-configuration-security-group`
+        in your `cluster.yaml`.
+
+        .. code-block:: yaml
+
+            # A unique identifier for the head node and workers of this cluster.
+            cluster_name: minimal_security_group
+
+            # Cloud-provider specific configuration.
+            provider:
+                type: aws
+                region: us-west-2
+                security_group:
+                    GroupName: ray_client_security_group
+                    IpPermissions:
+                        - FromPort: 10001
+                          ToPort: 10001
+                          IpProtocol: TCP
+                          IpRanges:
+                              # This will enable inbound access from ALL IPv4 addresses.
+                              - CidrIp: 0.0.0.0/0

Step 3: Run Ray code
~~~~~~~~~~~~~~~~~~~~
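Once port 10001 is reachable, connecting from your local machine is a one-liner. Below is a minimal sketch via Ray Client; ``<head-node-ip>`` is a placeholder you would substitute with the head node's address:

.. code-block:: python

    import ray

    # A sketch: connect via Ray Client, assuming the security group above
    # has opened port 10001 and <head-node-ip> is the head node's address.
    ray.init("ray://<head-node-ip>:10001")

    @ray.remote
    def ping() -> str:
        # Executes on the cluster, not on the local machine.
        return "pong"

    print(ray.get(ping.remote()))  # -> "pong"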
@@ -21,32 +21,34 @@
including the running jobs, actors, workers, nodes, etc.

By default, the :ref:`cluster launcher <vm-cluster-quick-start>` and :ref:`KubeRay operator <kuberay-quickstart>` will launch the dashboard, but will
not publicly expose the port.

-.. tabbed:: If using the VM cluster launcher
-
-    You can securely port-forward local traffic to the dashboard via the ``ray
-    dashboard`` command.
-
-    .. code-block:: shell
-
-        $ ray dashboard [-p <port, 8265 by default>] <cluster config file>
-
-    The dashboard will now be visible at ``https://localhost:8265``.
-
-.. tabbed:: If using Kubernetes
-
-    The KubeRay operator makes the dashboard available via a Service targeting
-    the Ray head pod, named ``<RayCluster name>-head-svc``. You can access the
-    dashboard from within the Kubernetes cluster at ``https://<RayCluster name>-head-svc:8265``.
-
-    You can also view the dashboard from outside the Kubernetes cluster by
-    using port-forwarding:
-
-    .. code-block:: shell
-
-        $ kubectl port-forward service/raycluster-autoscaler-head-svc 8265:8265
-
-    For more information about configuring network access to a Ray cluster on
-    Kubernetes, see the :ref:`networking notes <kuberay-networking>`.
+.. tab-set::
+
+    .. tab-item:: If using the VM cluster launcher
+
+        You can securely port-forward local traffic to the dashboard via the ``ray
+        dashboard`` command.
+
+        .. code-block:: shell
+
+            $ ray dashboard [-p <port, 8265 by default>] <cluster config file>
+
+        The dashboard will now be visible at ``https://localhost:8265``.
+
+    .. tab-item:: If using Kubernetes
+
+        The KubeRay operator makes the dashboard available via a Service targeting
+        the Ray head pod, named ``<RayCluster name>-head-svc``. You can access the
+        dashboard from within the Kubernetes cluster at ``https://<RayCluster name>-head-svc:8265``.
+
+        You can also view the dashboard from outside the Kubernetes cluster by
+        using port-forwarding:
+
+        .. code-block:: shell
+
+            $ kubectl port-forward service/raycluster-autoscaler-head-svc 8265:8265
+
+        For more information about configuring network access to a Ray cluster on
+        Kubernetes, see the :ref:`networking notes <kuberay-networking>`.
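With the dashboard reachable on port 8265, the same endpoint also accepts programmatic job submissions. A sketch, assuming one of the forwarding methods above is active:

.. code-block:: python

    from ray.job_submission import JobSubmissionClient

    # A sketch: the address assumes `ray dashboard` or `kubectl port-forward`
    # is forwarding the dashboard to localhost:8265.
    client = JobSubmissionClient("http://127.0.0.1:8265")
    job_id = client.submit_job(entrypoint="python -c 'import ray; ray.init()'")
    print(client.get_job_status(job_id))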


Using Ray Cluster CLI tools
@@ -63,29 +65,31 @@
These CLI commands can be run on any node in a Ray Cluster. Examples for
executing these commands from a machine outside the Ray Cluster are provided
below.

-.. tabbed:: If using the VM cluster launcher
-
-    Execute a command on the cluster using ``ray exec``:
-
-    .. code-block:: shell
-
-        $ ray exec <cluster config file> "ray status"
-
-.. tabbed:: If using Kubernetes
-
-    Execute a command on the cluster using ``kubectl exec`` and the configured
-    RayCluster name. We will use the Service targeting the Ray head pod to
-    execute a CLI command on the cluster.
-
-    .. code-block:: shell
-
-        # First, find the name of the Ray head pod.
-        $ kubectl get pod | grep <RayCluster name>-head
-        # NAME                           READY   STATUS    RESTARTS   AGE
-        # <RayCluster name>-head-xxxxx   2/2     Running   0          XXs
-
-        # Then, use the name of the Ray head pod to run `ray status`.
-        $ kubectl exec <RayCluster name>-head-xxxxx -- ray status
+.. tab-set::
+
+    .. tab-item:: If using the VM cluster launcher
+
+        Execute a command on the cluster using ``ray exec``:
+
+        .. code-block:: shell
+
+            $ ray exec <cluster config file> "ray status"
+
+    .. tab-item:: If using Kubernetes
+
+        Execute a command on the cluster using ``kubectl exec`` and the configured
+        RayCluster name. We will use the Service targeting the Ray head pod to
+        execute a CLI command on the cluster.
+
+        .. code-block:: shell
+
+            # First, find the name of the Ray head pod.
+            $ kubectl get pod | grep <RayCluster name>-head
+            # NAME                           READY   STATUS    RESTARTS   AGE
+            # <RayCluster name>-head-xxxxx   2/2     Running   0          XXs
+
+            # Then, use the name of the Ray head pod to run `ray status`.
+            $ kubectl exec <RayCluster name>-head-xxxxx -- ray status
.. _multi-node-metrics:

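``ray status`` also has rough programmatic counterparts in the Ray API. A sketch, assuming it runs on a node that is already part of the cluster (for example via ``ray exec`` or ``kubectl exec`` as shown above):

.. code-block:: python

    import ray

    # A sketch: `address="auto"` attaches to the Ray runtime already
    # running on this node rather than starting a new local one.
    ray.init(address="auto")

    print(ray.nodes())                # per-node membership and state
    print(ray.cluster_resources())    # total CPUs/GPUs/memory in the cluster
    print(ray.available_resources())  # resources currently unclaimed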
doc/source/cluster/vms/getting-started.rst (92 changes: 49 additions & 43 deletions)
@@ -31,37 +31,41 @@ Setup

Before we start, you will need to install some Python dependencies as follows:

-.. tabbed:: AWS
-
-    .. code-block:: shell
-
-        $ pip install -U "ray[default]" boto3
-
-.. tabbed:: Azure
-
-    .. code-block:: shell
-
-        $ pip install -U "ray[default]" azure-cli azure-core
-
-.. tabbed:: GCP
-
-    .. code-block:: shell
-
-        $ pip install -U "ray[default]" google-api-python-client
+.. tab-set::
+
+    .. tab-item:: AWS
+
+        .. code-block:: shell
+
+            $ pip install -U "ray[default]" boto3
+
+    .. tab-item:: Azure
+
+        .. code-block:: shell
+
+            $ pip install -U "ray[default]" azure-cli azure-core
+
+    .. tab-item:: GCP
+
+        .. code-block:: shell
+
+            $ pip install -U "ray[default]" google-api-python-client

Next, if you're not set up to use your cloud provider from the command line, you'll have to configure your credentials:

-.. tabbed:: AWS
-
-    Configure your credentials in ``~/.aws/credentials`` as described in `the AWS docs <https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html>`_.
-
-.. tabbed:: Azure
-
-    Log in using ``az login``, then configure your credentials with ``az account set -s <subscription_id>``.
-
-.. tabbed:: GCP
-
-    Set the ``GOOGLE_APPLICATION_CREDENTIALS`` environment variable as described in `the GCP docs <https://cloud.google.com/docs/authentication/getting-started>`_.
+.. tab-set::
+
+    .. tab-item:: AWS
+
+        Configure your credentials in ``~/.aws/credentials`` as described in `the AWS docs <https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html>`_.
+
+    .. tab-item:: Azure
+
+        Log in using ``az login``, then configure your credentials with ``az account set -s <subscription_id>``.
+
+    .. tab-item:: GCP
+
+        Set the ``GOOGLE_APPLICATION_CREDENTIALS`` environment variable as described in `the GCP docs <https://cloud.google.com/docs/authentication/getting-started>`_.
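A quick way to confirm that credentials are picked up is to ask the provider SDK directly. A minimal sketch for the AWS case (the other providers have analogous checks):

.. code-block:: python

    import boto3

    # A sketch, assuming boto3 reads ~/.aws/credentials or the usual
    # AWS_* environment variables.
    session = boto3.Session()
    if session.get_credentials() is None:
        raise RuntimeError("No AWS credentials found; configure ~/.aws/credentials first.")

    # Verify the credentials actually work by asking STS who we are.
    print(boto3.client("sts").get_caller_identity()["Arn"])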

Create a (basic) Python application
-----------------------------------
@@ -154,45 +158,47 @@
To start a Ray Cluster, first we need to define the cluster configuration.

A minimal sample cluster configuration file looks as follows:

-.. tabbed:: AWS
-
-    .. literalinclude:: ../../../../python/ray/autoscaler/aws/example-minimal.yaml
-        :language: yaml
-
-.. tabbed:: Azure
-
-    .. code-block:: yaml
-
-        # A unique identifier for the head node and workers of this cluster.
-        cluster_name: minimal
-
-        # Cloud-provider specific configuration.
-        provider:
-            type: azure
-            location: westus2
-            resource_group: ray-cluster
-
-        # How Ray will authenticate with newly launched nodes.
-        auth:
-            ssh_user: ubuntu
-            # you must specify paths to matching private and public key pair files
-            # use `ssh-keygen -t rsa -b 4096` to generate a new ssh key pair
-            ssh_private_key: ~/.ssh/id_rsa
-            # changes to this should match what is specified in file_mounts
-            ssh_public_key: ~/.ssh/id_rsa.pub
-
-.. tabbed:: GCP
-
-    .. code-block:: yaml
-
-        # A unique identifier for the head node and workers of this cluster.
-        cluster_name: minimal
-
-        # Cloud-provider specific configuration.
-        provider:
-            type: gcp
-            region: us-west1
+.. tab-set::
+
+    .. tab-item:: AWS
+
+        .. literalinclude:: ../../../../python/ray/autoscaler/aws/example-minimal.yaml
+            :language: yaml
+
+    .. tab-item:: Azure
+
+        .. code-block:: yaml
+
+            # A unique identifier for the head node and workers of this cluster.
+            cluster_name: minimal
+
+            # Cloud-provider specific configuration.
+            provider:
+                type: azure
+                location: westus2
+                resource_group: ray-cluster
+
+            # How Ray will authenticate with newly launched nodes.
+            auth:
+                ssh_user: ubuntu
+                # you must specify paths to matching private and public key pair files
+                # use `ssh-keygen -t rsa -b 4096` to generate a new ssh key pair
+                ssh_private_key: ~/.ssh/id_rsa
+                # changes to this should match what is specified in file_mounts
+                ssh_public_key: ~/.ssh/id_rsa.pub
+
+    .. tab-item:: GCP
+
+        .. code-block:: yaml
+
+            # A unique identifier for the head node and workers of this cluster.
+            cluster_name: minimal
+
+            # Cloud-provider specific configuration.
+            provider:
+                type: gcp
+                region: us-west1
Save this configuration file as ``config.yaml``. You can specify a lot more details in the configuration file: instance types to use, minimum and maximum number of workers to start, autoscaling strategy, files to sync, and more. For a full reference on the available configuration properties, please refer to the :ref:`cluster YAML configuration options reference <cluster-config>`.
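After launching the cluster with this file (``ray up config.yaml``), the application you run against it is ordinary Ray code. A minimal sketch of such a script:

.. code-block:: python

    import ray

    # A sketch: on a cluster node, `ray.init()` attaches to the running
    # Ray runtime instead of starting a local one.
    ray.init()

    @ray.remote
    def square(x: int) -> int:
        return x * x

    # Fan the tasks out across whatever nodes the cluster has.
    print(sum(ray.get([square.remote(i) for i in range(100)])))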
