
Changing k8 default node pool autoscaling parameters causes TF to lose all information about resources within the cluster #790

Open
LaurisJakobsons opened this issue Feb 14, 2022 · 1 comment

@LaurisJakobsons

Bug Report

Describe the bug

Whenever changes are made to the default node pool's autoscaling parameters within the digitalocean_kubernetes_cluster resource, Terraform loses all information about the resources that were created within the cluster. This breaks the Terraform state to the point where everything needs to be destroyed and re-applied to fix it.

Let's say I've created a digitalocean_kubernetes_cluster resource and, along with it, added several Kubernetes resources to the same cluster using Terraform. If I then change the autoscaling parameters of the default node pool within the digitalocean_kubernetes_cluster resource, Terraform loses all information about the Kubernetes resources that were created in the cluster and tries to apply them again. This results in numerous "already exists" errors, because the resources are still present in the cluster; Terraform has simply lost track of them.

(Note: the autoscaling changes themselves are applied correctly.)

Affected Resource(s)

  • digitalocean_kubernetes_cluster

Expected Behavior

Node pool autoscaling changes should be applied without causing Terraform to lose information about other resources created within the cluster.

Actual Behavior

All Kubernetes resources created within the cluster are dropped from the Terraform state.

Steps to Reproduce

  1. Create a digitalocean_kubernetes_cluster
  2. Add resources to the cluster using the kubernetes provider
  3. Edit the digitalocean_kubernetes_cluster node pool autoscaling parameters and apply the changes

Terraform Configuration Files

resource "digitalocean_kubernetes_cluster" "primary" {
  name     = var.cluster_name
  region   = var.cluster_region
  version  = data.digitalocean_kubernetes_versions.current.latest_version
  vpc_uuid = digitalocean_vpc.cluster_vpc.id

  node_pool {
    name       = "${var.cluster_name}-node-pool"
    size       = var.worker_size
    auto_scale = true
    min_nodes  = 1
    max_nodes  = var.max_worker_count # Issue occurs when this value is changed and re-applied
    tags       = [local.cluster_id_tag]
  }
}
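
The configuration above only shows the cluster itself. As a hypothetical illustration of step 2 (the resource kind and name are assumptions, not taken from this report), a Kubernetes resource managed in the same state might look like:

resource "kubernetes_namespace" "example" {
  metadata {
    name = "example"
  }
}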

Additional context

Using Terraform 1.1.5, the digitalocean provider 2.17.1, and the kubernetes provider 2.8.0.

@mkjmdski (Contributor) commented Mar 3, 2022

This is related to #424: Terraform loses the kubernetes provider's connection data when the node pool is changed, because the change forces the cluster to be destroyed and recreated. At that point the connection data the kubernetes provider reads from the digitalocean_kubernetes_cluster resource is unknown during the plan (it will only be known after apply), and the run fails as a result.
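
To make that coupling concrete, here is a minimal sketch of the common pattern of feeding the kubernetes provider from the cluster resource's computed attributes (this wiring is an assumption for illustration, not the reporter's exact configuration). When a plan replaces the cluster, all of these values are only known after apply, which is what breaks the kubernetes provider:

provider "kubernetes" {
  # These attributes come from the digitalocean_kubernetes_cluster resource,
  # so they become "(known after apply)" whenever the cluster is replaced.
  host  = digitalocean_kubernetes_cluster.primary.endpoint
  token = digitalocean_kubernetes_cluster.primary.kube_config[0].token
  cluster_ca_certificate = base64decode(
    digitalocean_kubernetes_cluster.primary.kube_config[0].cluster_ca_certificate
  )
}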
