
fix: sdrs_advanced_options on r/datastore_cluster #1749

Merged
1 commit merged into hashicorp:main on Sep 8, 2022

Conversation

@zxinyu08 (Contributor) commented Sep 8, 2022

Description

Fixes the error "parsing string as enum type" for sdrs_advanced_options on r/datastore_cluster.

Triage details and fix validation can be found in the linked issue.
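
For reference, a minimal configuration that exercises the fixed attribute is sketched below; the datacenter and resource names are illustrative, and IgnoreAffinityRulesForMaintenance is the same SDRS advanced option used in the validation configuration later in this thread.

data "vsphere_datacenter" "datacenter" {
  name = "dc-01" # illustrative datacenter name
}

resource "vsphere_datastore_cluster" "example" {
  name          = "example-datastore-cluster"
  datacenter_id = data.vsphere_datacenter.datacenter.id
  sdrs_enabled  = true

  # Before this fix, a plain string map here could fail with
  # "error parsing string as enum type".
  sdrs_advanced_options = {
    "IgnoreAffinityRulesForMaintenance" = "1"
  }
}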

Release Note

resource/datastore_cluster: Fixes error parsing string as enum type for sdrs_advanced_options (GH-1749)

References

Closes #1448

@github-actions bot added the provider (Type: Provider) and size/xs (Relative Sizing: Extra-Small) labels Sep 8, 2022
@tenthirtyam changed the title to "fix: error parsing string as enum type for sdrs_advanced_option on r/datastore_cluster" Sep 8, 2022
@tenthirtyam added the bug (Type: Bug) and needs-review (Status: Pull Request Needs Review) labels Sep 8, 2022
@tenthirtyam self-requested a review September 8, 2022 12:27
@tenthirtyam added this to the v2.3.0 milestone Sep 8, 2022
@tenthirtyam added the area/storage (Area: Storage) label Sep 8, 2022
@tenthirtyam (Collaborator) commented:

Pull request submitted by:

Xinyu Zhang
MTS | vSAN Product Engineering @ VMware, Inc.

@tenthirtyam (Collaborator) left a comment


Many thanks @zxinyu08 for all your help triaging and identifying the fix for this issue. LGTM! 🚀

I've performed the testing for this change with 💯 success.

Terraform Configuration:
terraform {
  required_providers {
    vsphere = {
      source  = "local/hashicorp/vsphere"
      version = "2.3.0"
    }
  }
  required_version = ">= 1.2.9"
}

provider "vsphere" {
  vsphere_server       = var.vsphere_server
  user                 = var.vsphere_username
  password             = var.vsphere_password
  allow_unverified_ssl = var.vsphere_insecure
}

data "vsphere_datacenter" "datacenter" {
  name = var.vsphere_datacenter
}

data "vsphere_host" "vsphere_cluster_hosts" {
  count         = length(var.vsphere_cluster_hosts)
  name          = var.vsphere_cluster_hosts[count.index]
  datacenter_id = data.vsphere_datacenter.datacenter.id
}

resource "vsphere_datastore_cluster" "datastore_cluster" {
  datacenter_id                          = data.vsphere_datacenter.datacenter.id
  name                                   = "remote-nfs-dsc"
  sdrs_enabled                           = true
  sdrs_automation_level                  = "automated"
  sdrs_default_intra_vm_affinity         = true
  sdrs_free_space_threshold              = 1
  sdrs_free_space_threshold_mode         = "utilization"
  sdrs_free_space_utilization_difference = 25
  sdrs_io_latency_threshold              = 30
  sdrs_io_load_balance_enabled           = true
  sdrs_io_load_imbalance_threshold       = 50
  sdrs_io_reservable_percent_threshold   = 60
  sdrs_io_reservable_threshold_mode      = "automated"
  sdrs_load_balance_interval             = 480
  sdrs_space_utilization_threshold       = 85
  sdrs_advanced_options                  = {
    "IgnoreAffinityRulesForMaintenance" = "1"
  }
}

resource "vsphere_nas_datastore" "datastore" {
  for_each             = var.vsphere_datastore_nfs
  type                 = var.vsphere_datastore_type
  name                 = each.value["name"]
  remote_hosts         = each.value["remote_hosts"]
  remote_path          = each.value["remote_path"]
  access_mode          = var.vsphere_datastore_nfs_access_mode
  host_system_ids      = data.vsphere_host.vsphere_cluster_hosts[*].id
  datastore_cluster_id = vsphere_datastore_cluster.datastore_cluster.id
}
Results:
terraform apply --auto-approve
data.vsphere_datacenter.datacenter: Reading...
data.vsphere_datacenter.datacenter: Read complete after 0s [id=datacenter-3]
data.vsphere_host.vsphere_cluster_hosts[0]: Reading...
data.vsphere_host.vsphere_cluster_hosts[0]: Read complete after 0s [id=host-10]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # vsphere_datastore_cluster.datastore_cluster will be created
  + resource "vsphere_datastore_cluster" "datastore_cluster" {
      + datacenter_id                          = "datacenter-3"
      + id                                     = (known after apply)
      + name                                   = "remote-nfs-dsc"
      + sdrs_advanced_options                  = {
          + "IgnoreAffinityRulesForMaintenance" = "1"
        }
      + sdrs_automation_level                  = "automated"
      + sdrs_default_intra_vm_affinity         = true
      + sdrs_enabled                           = true
      + sdrs_free_space_threshold              = 1
      + sdrs_free_space_threshold_mode         = "utilization"
      + sdrs_free_space_utilization_difference = 25
      + sdrs_io_latency_threshold              = 30
      + sdrs_io_load_balance_enabled           = true
      + sdrs_io_load_imbalance_threshold       = 50
      + sdrs_io_reservable_percent_threshold   = 60
      + sdrs_io_reservable_threshold_mode      = "automated"
      + sdrs_load_balance_interval             = 480
      + sdrs_space_utilization_threshold       = 85
    }

  # vsphere_nas_datastore.datastore["datastore0"] will be created
  + resource "vsphere_nas_datastore" "datastore" {
      + access_mode          = "readWrite"
      + accessible           = (known after apply)
      + capacity             = (known after apply)
      + datastore_cluster_id = (known after apply)
      + free_space           = (known after apply)
      + host_system_ids      = [
          + "host-10",
        ]
      + id                   = (known after apply)
      + maintenance_mode     = (known after apply)
      + multiple_host_access = (known after apply)
      + name                 = "remote-nfs-01"
      + protocol_endpoint    = (known after apply)
      + remote_hosts         = [
          + "172.16.11.2",
        ]
      + remote_path          = "/volume1/lab-nfs/remote-nfs-01"
      + type                 = "NFS"
      + uncommitted_space    = (known after apply)
      + url                  = (known after apply)
    }

  # vsphere_nas_datastore.datastore["datastore1"] will be created
  + resource "vsphere_nas_datastore" "datastore" {
      + access_mode          = "readWrite"
      + accessible           = (known after apply)
      + capacity             = (known after apply)
      + datastore_cluster_id = (known after apply)
      + free_space           = (known after apply)
      + host_system_ids      = [
          + "host-10",
        ]
      + id                   = (known after apply)
      + maintenance_mode     = (known after apply)
      + multiple_host_access = (known after apply)
      + name                 = "remote-nfs-02"
      + protocol_endpoint    = (known after apply)
      + remote_hosts         = [
          + "172.16.11.2",
        ]
      + remote_path          = "/volume1/lab-nfs/remote-nfs-02"
      + type                 = "NFS"
      + uncommitted_space    = (known after apply)
      + url                  = (known after apply)
    }

  # vsphere_nas_datastore.datastore["datastore2"] will be created
  + resource "vsphere_nas_datastore" "datastore" {
      + access_mode          = "readWrite"
      + accessible           = (known after apply)
      + capacity             = (known after apply)
      + datastore_cluster_id = (known after apply)
      + free_space           = (known after apply)
      + host_system_ids      = [
          + "host-10",
        ]
      + id                   = (known after apply)
      + maintenance_mode     = (known after apply)
      + multiple_host_access = (known after apply)
      + name                 = "remote-nfs-03"
      + protocol_endpoint    = (known after apply)
      + remote_hosts         = [
          + "172.16.11.2",
        ]
      + remote_path          = "/volume1/lab-nfs/remote-nfs-03"
      + type                 = "NFS"
      + uncommitted_space    = (known after apply)
      + url                  = (known after apply)
    }

Plan: 4 to add, 0 to change, 0 to destroy.
vsphere_datastore_cluster.datastore_cluster: Creating...
vsphere_datastore_cluster.datastore_cluster: Creation complete after 0s [id=group-p150008]
vsphere_nas_datastore.datastore["datastore2"]: Creating...
vsphere_nas_datastore.datastore["datastore1"]: Creating...
vsphere_nas_datastore.datastore["datastore0"]: Creating...
vsphere_nas_datastore.datastore["datastore1"]: Creation complete after 0s [id=datastore-150009]
vsphere_nas_datastore.datastore["datastore0"]: Creation complete after 1s [id=datastore-150011]
vsphere_nas_datastore.datastore["datastore2"]: Creation complete after 1s [id=datastore-150010]

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
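
To double-check that the advanced option persisted, a re-plan and a state inspection are quick follow-ups; a sketch, using the resource address from the configuration above:

terraform plan
# A clean round-trip through the vSphere API should report:
# "No changes. Your infrastructure matches the configuration."

terraform state show vsphere_datastore_cluster.datastore_cluster
# The output should include:
#   sdrs_advanced_options = {
#     "IgnoreAffinityRulesForMaintenance" = "1"
#   }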


@tenthirtyam changed the title from "fix: error parsing string as enum type for sdrs_advanced_option on r/datastore_cluster" to "fix: sdrs_advanced_option on r/datastore_cluster" Sep 8, 2022
@tenthirtyam changed the title from "fix: sdrs_advanced_option on r/datastore_cluster" to "fix: sdrs_advanced_options on r/datastore_cluster" Sep 8, 2022
@tenthirtyam merged commit fc0a1c1 into hashicorp:main Sep 8, 2022
@tenthirtyam removed the needs-review (Status: Pull Request Needs Review) label Sep 8, 2022
tenthirtyam added a commit that referenced this pull request Sep 8, 2022
Updates `CHANGELOG.md` to include the fixes provided in #1749.

Signed-off-by: Ryan Johnson <[email protected]>
@github-actions bot commented Oct 9, 2022

I'm going to lock this pull request because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions bot locked as resolved and limited conversation to collaborators Oct 9, 2022
Labels: area/storage (Area: Storage), bug (Type: Bug), provider (Type: Provider), size/xs (Relative Sizing: Extra-Small)
Projects: None yet
Development: Successfully merging this pull request may close the following issue:

Error parsing string as enum type for sdrs_advanced_options on r/datastore_cluster (#1448)

2 participants