[Azure cluster launcher] Deletion takes ~10 minutes without stopped node caching, fast with stopped node caching #25971
Comments
@gramhagen here's the GitHub issue
Hi, I'm a bot from the Ray team :) To help human contributors focus on more relevant issues, I will automatically add the stale label to issues that have had no activity for more than 4 months. If there is no further activity within the next 14 days, the issue will be closed!
You can always ask for help on our discussion forum or Ray's public Slack channel.
(Context: I'm doing a round of cleanup of waiting-for-triage issues.) I'm marking this as P2 since the docs now clarify that the Azure cluster launcher is community maintained. cc @AmeerHajAli to put in the Infra backlog
…ays (#31645) This reverts prior changes to node naming which led to non-unique names, causing constant node refreshing. Currently the Azure autoscaler blocks on node destruction, so that blocking was removed in this change. Related issue number: Closes #31538, Closes #25971. Signed-off-by: Scott Graham <[email protected]> Co-authored-by: Scott Graham <[email protected]>
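To illustrate why the naming revert matters: the autoscaler matches cloud VMs to the nodes it tracks by name, so non-unique names make existing nodes look "new" on every poll and trigger the constant refreshing described above. A minimal sketch of one way to keep names unique (this naming scheme is hypothetical, not necessarily the one the PR restored):

```python
# Hypothetical unique node-name generator: a random suffix keeps names
# distinct even when many VMs share the same cluster and node type.
import uuid


def unique_node_name(cluster: str, node_type: str) -> str:
    # 12 hex chars (48 bits) of randomness make collisions vanishingly
    # unlikely at cluster scale.
    return f"ray-{cluster}-{node_type}-{uuid.uuid4().hex[:12]}"


# 1000 workers of the same type still get 1000 distinct names.
names = {unique_node_name("demo", "worker") for _ in range(1000)}
print(len(names))
```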
…re configurable and robust (#44100) This PR addresses a few issues when launching clusters with Azure:
1. Any changes made to subnets of the deployed virtual network(s) are overwritten upon redeployment. Any service endpoints, route tables, or delegations are removed when redeploying (which happens on any of the ray CLI calls) due to this open Azure issue. This PR works around the issue by copying the existing subnet configuration into the deployment template if a subnet with the cluster's unique ID already exists in the same resource group.
2. VM termination is extremely lengthy and does not clean up all dependencies. When VMs are provisioned, dependencies such as disks, NICs, and public IP addresses are also provisioned. However, because the termination process does not wait for the VM to be deleted, and the dependent resources cannot be deleted at the same time as the VM, these dependencies are often left in the resource group after termination. This can cause quota issues (e.g., reaching a limit on public IP addresses or disks) and wastes resources. This PR moves node termination into a pool of threads so that node deletion can be parallelized (since waiting for each node to be deleted takes a long time) and all dependencies can be correctly deleted once their VMs no longer exist.
3. VMs can have the status code ProvisioningState/failed/RetryableError, causing an unpacking error: this line throws an exception when the provisioning state is the string above, resulting in incorrect provisioning/termination of the node. This PR addresses that by slicing the list of status strings and using only the first two.
4. The default quota for public IP addresses in Azure is only 100, which can result in quota limits being hit for larger clusters. This PR adds an option (use_external_head_ip) for provisioning a public IP address only for the head node (instead of all nodes or no nodes). This lets a user still communicate with the head node via a public IP address without running into public-IP quota limits. The option works in tandem with use_internal_ips: if both are set to True, a public IP address is provisioned only for the head node. If use_external_head_ip is omitted, behavior is unchanged (i.e., public IPs are provisioned for all nodes if use_internal_ips is False, otherwise no public IPs are provisioned).
I've tested all of these fixes using ray up/ray dashboard/ray down on Azure clusters of 4-32 nodes to verify that startup/teardown works correctly and that the correct amount of resources is provisioned. Related issue number: node termination times are discussed in #25971. Signed-off-by: Mike Danielczuk <[email protected]> Signed-off-by: Mike Danielczuk <[email protected]> Co-authored-by: Scott Graham <[email protected]>
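The thread-pool termination described in the PR can be sketched roughly as follows. All names here (Node, delete_vm, delete_resource, terminate_node) are hypothetical stand-ins for illustration, not the actual Ray node provider API; the point is that per-node deletion (VM first, then its dependent resources) runs in worker threads so the slow Azure delete operations overlap instead of serializing:

```python
# Sketch: parallelize node termination across a thread pool while keeping
# the per-node ordering constraint (dependencies only after the VM is gone).
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass, field
from typing import List

deleted: List[str] = []  # records deletion order for illustration


@dataclass
class Node:
    name: str
    # Resources (disk, NIC, public IP) that can only be removed once
    # the VM itself no longer exists.
    dependencies: List[str] = field(default_factory=list)


def delete_vm(name: str) -> None:
    # Placeholder for the long-running VM delete; the real provider would
    # poll Azure here until the VM is fully gone.
    deleted.append(name)


def delete_resource(name: str) -> None:
    # Placeholder for deleting a disk, NIC, or public IP address.
    deleted.append(name)


def terminate_node(node: Node) -> str:
    delete_vm(node.name)
    # Dependencies are freed only after the VM delete completes.
    for dep in node.dependencies:
        delete_resource(dep)
    return node.name


def terminate_nodes(nodes: List[Node], max_workers: int = 8) -> List[str]:
    # Nodes are terminated concurrently; within each node, order is preserved.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(terminate_node, nodes))


nodes = [Node(f"vm-{i}", [f"vm-{i}-disk", f"vm-{i}-nic", f"vm-{i}-ip"])
         for i in range(4)]
print(terminate_nodes(nodes))
```

With real Azure delete times of minutes per VM, running these in parallel bounds the total teardown time by the slowest node rather than the sum over all nodes.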
What happened + What you expected to happen
This came up in a Discuss post. The user highlights confusing, undesired behavior where the Azure cluster launcher waits for virtual machines to completely terminate (~10 minutes) when cache_stopped_nodes=False. The expected behavior is that node removal is fast, as it is when cache_stopped_nodes=True. "Fast" here means on the order of idle_timeout_minutes.
The fix is to remove this blocking wait, so that the node provider issues the request to terminate the virtual machine but does not block waiting for termination to complete.
More context:
Versions / Dependencies
I presume the user tested on the latest released Ray, but I'm not sure. The root cause is present in the 1.13 branch.
Reproduction script
Use example-full.yaml with cache_stopped_nodes=True|False and idle_timeout_minutes=1
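A minimal sketch of where those two settings live in the cluster config (illustrative excerpt only, not a complete example-full.yaml; location and resource_group values are placeholders):

```yaml
# Excerpt of a Ray Azure cluster config; required fields such as auth and
# node type definitions from example-full.yaml are omitted here.
cluster_name: azure-termination-repro

# Terminate idle worker nodes after 1 minute so removal triggers quickly.
idle_timeout_minutes: 1

provider:
    type: azure
    location: westus2          # placeholder region
    resource_group: ray-repro  # placeholder resource group
    # Flip between true and false to compare removal times:
    #   true  -> nodes are stopped and cached (fast removal)
    #   false -> nodes are fully terminated (observed ~10 minutes)
    cache_stopped_nodes: false
```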
Issue Severity
Low: It annoys or frustrates me.