
The example code provider configuration, Terraform 0.13+ and depends_on #624

Open
devurandom opened this issue Apr 24, 2021 · 2 comments
@devurandom (Contributor)

This is a late follow-up on the technique of using depends_on to order the invocation of modules in #564.

The depends_on = [var.cluster_id] trick also works when applied to data "digitalocean_kubernetes_cluster" "primary". In fact, it appears sufficient to have it anywhere inside the module. Removing it from the module entirely (e.g. because resource "local_file" "kubeconfig" is not used or wanted) makes the module fail during the plan phase (with terraform apply: before the plan is shown to the user for confirmation):

Error: Unable to find cluster with name: REDACTED
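
For reference, the trick looks roughly like this (a minimal sketch; var.cluster_name and var.cluster_id are assumed to be the module's inputs, following the example code):

```hcl
data "digitalocean_kubernetes_cluster" "primary" {
  name = var.cluster_name

  # Wait for the cluster (passed in as var.cluster_id) to be created
  # before reading this data source; without some depends_on on the
  # cluster inside the module, the lookup runs too early and fails.
  depends_on = [var.cluster_id]
}
```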

Terraform 0.13 supports depends_on between modules: https://www.terraform.io/docs/language/meta-arguments/depends_on.html

I tried using that, but Terraform complains:

Error: Module module.REDACTED contains provider configuration
Providers cannot be configured within modules using count, for_each or depends_on.
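
For illustration, this is roughly what I attempted (a sketch; the module names follow the example code, the cluster_id output is an assumption). Because the called module contains its own provider block, Terraform rejects the depends_on:

```hcl
module "doks-cluster" {
  source = "./doks-cluster"
  # ...
}

module "kubernetes-config" {
  source     = "./kubernetes-config"
  cluster_id = module.doks-cluster.cluster_id

  # Rejected: kubernetes-config configures its own provider, so it
  # cannot be combined with depends_on (or count / for_each).
  depends_on = [module.doks-cluster]
}
```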

The documentation is quite explicit that this does not work:

Provider configurations belong in the root module of a Terraform configuration. (Child modules receive their provider configurations from the root module; [...])
(https://www.terraform.io/docs/language/providers/configuration.html)

The module developer's guide explains in more detail why this does not work (and also gives the reasons behind these design choices -- not quoted here):

Provider configurations, unlike most other concepts in Terraform, are global to an entire Terraform configuration and can be shared across module boundaries. Provider configurations can be defined only in a root Terraform module.
[...]
A module intended to be called by one or more other modules must not contain any provider blocks.
[...]
For convenience in simple configurations, a child module automatically inherits default (un-aliased) provider configurations from its parent. This means that explicit provider blocks appear only in the root module, and downstream modules can simply declare resources for that provider and have them automatically associated with the root provider configurations.
[...]
In Terraform v0.10 and earlier there was no explicit way to use different configurations of a provider in different modules in the same configuration, and so module authors commonly worked around this by writing provider blocks directly inside their modules, making the module have its own separate provider configurations separate from those declared in the root module.
[...]
Terraform v0.11 introduced the mechanisms described in earlier sections to allow passing provider configurations between modules in a structured way, and thus we explicitly recommended against writing a child module with its own provider configuration blocks. However, that legacy pattern continued to work for compatibility purposes -- though with the same drawback -- until Terraform v0.13.
[...]
To retain the backward compatibility as much as possible, Terraform v0.13 continues to support the legacy pattern for module blocks that do not use these new features, but a module with its own provider configurations is not compatible with for_each, count, or depends_on. Terraform will produce an error if you attempt to combine these features.
(https://www.terraform.io/docs/language/modules/develop/providers.html)

One might get the idea to simply move the provider block to the root module, use the data "digitalocean_kubernetes_cluster" "primary" trick with depends_on = [module.doks-cluster], and feed the attributes of the data source into the provider. However, this is also explicitly discouraged by the documentation:

You can use expressions in the values of these configuration arguments, but can only reference values that are known before the configuration is applied. This means you can safely reference input variables, but not attributes exported by resources (with an exception for resource arguments that are specified directly in the configuration).
(https://www.terraform.io/docs/language/providers/configuration.html)
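
For concreteness, the discouraged setup would look something like this (a sketch, using attributes the digitalocean_kubernetes_cluster data source exports):

```hcl
data "digitalocean_kubernetes_cluster" "primary" {
  name       = var.cluster_name
  depends_on = [module.doks-cluster]
}

# Discouraged: these provider arguments reference data source attributes
# that are only known after the cluster exists.
provider "kubernetes" {
  host  = data.digitalocean_kubernetes_cluster.primary.endpoint
  token = data.digitalocean_kubernetes_cluster.primary.kube_config[0].token
  cluster_ca_certificate = base64decode(
    data.digitalocean_kubernetes_cluster.primary.kube_config[0].cluster_ca_certificate
  )
}
```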

The reason this works in the example from #564, even though all providers are global and should be instantiated early in the plan phase, appears to be that the kubernetes provider does not make any HTTP calls to the cluster until it actually tries to create resources. The depends_on ensures that this does not happen too early. The kubernetes-alpha provider, however, calls the cluster very early (AFAIK to learn about the resources it supports), which fails even with the explicit depends_on:

Error: Failed to construct REST client
cannot create REST client: no client config

Hence, to save users trouble and hard-to-debug errors, I suggest splitting the example code into two separate root modules: one that creates the cluster using just the digitalocean provider, and one that uses the digitalocean and kubernetes providers to install something on the cluster. The only information that would have to be exchanged (manually or via a remote state data source) would be the cluster_name, since all other information is already shared via data "digitalocean_kubernetes_cluster" "primary".
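
In outline, the split could look like this (a sketch; file layout and variable names are only suggestions):

```hcl
# Root module 1 (own state): create the cluster, digitalocean provider only.
module "doks-cluster" {
  source       = "./doks-cluster"
  cluster_name = var.cluster_name
  # ...
}

# Root module 2 (separate state): install things on the cluster. Only
# cluster_name crosses the boundary, e.g. as a variable or via a
# terraform_remote_state data source.
data "digitalocean_kubernetes_cluster" "primary" {
  name = var.cluster_name
}

provider "kubernetes" {
  host  = data.digitalocean_kubernetes_cluster.primary.endpoint
  token = data.digitalocean_kubernetes_cluster.primary.kube_config[0].token
  cluster_ca_certificate = base64decode(
    data.digitalocean_kubernetes_cluster.primary.kube_config[0].cluster_ca_certificate
  )
}
```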

@scotchneat (Contributor)

@devurandom Thank you for the thoughtful and thorough suggestion. We'll consider making the suggested changes.

@kriswuollett (Contributor)

Although there is a slight inconvenience to having two state sets, the suggested change above works well for me. Any update on interest in making the change, or in receiving a pull request for one?

The pattern I'm currently developing adds a stateless module for the k8s "test" resources, so I can reuse the module for local development with microk8s too. A ${PROJ}-${ENV}-resources module sets up the cluster and writes a JSON file with the k8s host, token, and cluster_ca_certificate (which keeps it k8s-vendor generic). Next, the ${PROJ}-${ENV}-services module reads that JSON file to configure the kubernetes and helm providers, as well as the ingress controller (if on DigitalOcean), and finally invokes the ${PROJ}-k8s-services module to create things like the deployment, service, and ingress records. Later on, I'd just pass through additional configuration as needed.
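
A rough sketch of that handoff (file name, paths, and resource names are just illustrative):

```hcl
# In ${PROJ}-${ENV}-resources: write the cluster credentials to a JSON file.
resource "local_file" "cluster_credentials" {
  filename = "${path.module}/cluster.json"
  content = jsonencode({
    host                   = digitalocean_kubernetes_cluster.primary.endpoint
    token                  = digitalocean_kubernetes_cluster.primary.kube_config[0].token
    cluster_ca_certificate = digitalocean_kubernetes_cluster.primary.kube_config[0].cluster_ca_certificate
  })
}

# In ${PROJ}-${ENV}-services: read the file back and configure the providers.
locals {
  cluster = jsondecode(file("${path.module}/cluster.json"))
}

provider "kubernetes" {
  host                   = local.cluster.host
  token                  = local.cluster.token
  cluster_ca_certificate = base64decode(local.cluster.cluster_ca_certificate)
}
```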
