
[feature request] Allow dynamic provider aliases inside resources #3656

Closed
mtougeron opened this issue Oct 27, 2015 · 21 comments
Labels: config, duplicate (issue closed because another issue already tracks this problem), enhancement

Comments

@mtougeron
Contributor

I would like to be able to use a dynamic name for the provider alias inside of a resource definition. For example:

provider "openstack" {
  tenant_name = "dev"
  auth_url  = "http://myauthurl.dev:5000/v2.0"
  alias = "internal"  
}

provider "openstack" {
  tenant_name = "my-tenant"
  auth_url  = "http://rackspace:5000/v2.0"
  alias = "rackspace"  
}

provider "openstack" {
  tenant_name = "my-tenant"
  auth_url  = "http://hpcloud:5000/v2.0"
  alias = "hpcloud"  
}

resource "openstack_compute_instance_v2" "server" {
  provider = "openstack.${var.hosting}"
}

I would then invoke Terraform like terraform plan -var hosting=rackspace, and it would use the openstack provider aliased as openstack.rackspace.

This would allow me to easily toggle my single terraform config between multiple environments & providers.
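
For completeness, the provider reference above assumes an input variable declared along these lines (a sketch in current syntax, not shown in the original request):

variable "hosting" {
  description = "Which aliased openstack provider to target: internal, rackspace, or hpcloud."
  type        = string
}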

@apparentlymart
Contributor

This overlaps a bit with the more theoretical discussion in #1819.

@mtougeron
Contributor Author

There is some overlap, but this request is more about using the aliased providers, not about creating them. (Though I like the ideas so far in #1819.)

@DonEstefan

other related issues

@JamesDLD

This would be very useful indeed.
For info, in case it helps (you may already be aware of it): instead of passing variables on the command line, you can use a file containing your variables. I use a variable file in the following Git repository: https://github.com/JamesDLD/terraform
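
For example, a variable definitions file is just a set of assignments passed with -var-file (a hypothetical hosting.tfvars reusing the variable from the original example):

# hosting.tfvars (hypothetical)
hosting = "rackspace"

The command would then be terraform plan -var-file=hosting.tfvars.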

@tamsky
Contributor

tamsky commented Jun 6, 2018

Is this issue fixed by #16379?

@eerkunt

eerkunt commented Jan 16, 2020

I suppose this is the first feature I look for in every Terraform release CHANGELOG. Since 2015, there is still no solution. :(

@qubusp

qubusp commented Feb 20, 2020

Hitting this brick wall in 2020

@brunoscota

also waiting for it.

@thestevenbell

Is there some workaround that the 52 of us who plus-one'd 👍 this issue are missing?

@apparentlymart
Contributor

apparentlymart commented May 1, 2020

Here is how I would achieve what the original comment describes using current Terraform language features:

variable "hosting" {
  type = string
}

locals {
  openstack_settings = tomap({
    internal = {
      tenant_name = "dev"
      auth_url    = "http://myauthurl.dev:5000/v2.0"
    }
    rackspace = {
      tenant_name = "my-tenant"
      auth_url    = "http://rackspace:5000/v2.0"
    }
    hpcloud = {
      tenant_name = "my-tenant"
      auth_url    = "http://hpcloud:5000/v2.0"
    }
  })
}

provider "openstack" {
  tenant_name = local.openstack_settings[var.hosting].tenant_name
  auth_url    = local.openstack_settings[var.hosting].auth_url
}

resource "openstack_compute_instance_v2" "server" {
}
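
An optional refinement of this workaround (a sketch, not part of the comment above; custom validation requires Terraform 0.13 or later): the hosting variable could carry a validation block so that a typo fails early instead of surfacing as an invalid map-key error:

variable "hosting" {
  type = string

  # Restrict the value to the keys defined in local.openstack_settings.
  validation {
    condition     = contains(["internal", "rackspace", "hpcloud"], var.hosting)
    error_message = "The hosting value must be one of: internal, rackspace, hpcloud."
  }
}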

@derekrprice

@apparentlymart, that workaround doesn't work with my use case. I create multiple azure_kubernetes_cluster instances using for_each, then wish to use multiple kubernetes providers instantiated using certificates from the AKS instances to apply resources inside the clusters. A provider supporting for_each and a dynamic alias would do the trick. If module supported for_each, I could create a workaround that way too. Alas, Terraform supports neither solution as of version 0.12.24.

@apparentlymart
Contributor

The key design question that needs to be answered to enable any sort of dynamic use of provider configurations (whether it be via for_each inside the provider block, for_each on a module containing a provider block, or anything else) is how Terraform can deal with the situation where a provider configuration gets removed at the same time as the resource instances it is responsible for managing.

Using the most recent comment's use-case as an example, I think you're imagining something like this:

# This is a hypothetical example. It will not work in current versions of Terraform.

variable "clusters" {
  type = map(object({
    # (some suitable cluster arguments)
  }))
}

resource "azure_kubernetes_cluster" "example" {
  for_each = var.clusters

  # (arguments using each.value)
}

provider "kubernetes" {
  for_each = azure_kubernetes_cluster.example

  # (arguments using each.value from the cluster objects)
}

resource "kubernetes_pod" "example" {
  for_each = azure_kubernetes_cluster.example
  provider = provider.kubernetes[each.key]

  # ...
}

The above presents two significant challenges:

  1. When adding a new element to var.clusters with key "foo", Terraform must configure the provider.kubernetes["foo"] instance in order to plan to create kubernetes_pod.example["foo"], but it can't do so because azure_kubernetes_cluster.example["foo"] isn't created yet. This is the problem that motivated what I proposed in Partial/Progressive Configuration Changes #4149. Today, it'd require using -target='kubernetes_pod.example["foo"]' on the initial create to ensure that the cluster is created first.
  2. When removing element "bar" from var.clusters, Terraform needs to configure the provider.kubernetes["bar"] provider in order to plan and apply the destruction of kubernetes_pod.example["bar"]. However, with the configuration model as it exists today (where for_each works entirely from the configuration and not from the state) this would fail because provider.kubernetes["bar"]'s existence depends on azure_kubernetes_cluster.example["bar"]'s existence, which in turn depends on var.clusters["bar"] existing, and it doesn't anymore.

Both of these things seem solvable in principle, which is why this issue remains open rather than being closed as technically impossible, but at the same time they both involve some quite fundamental changes to how providers work in Terraform that will inevitably affect the behavior of other subsystems.

This issue remains unsolved not because the use-cases are not understood, but rather because there is no technical design for solving it that has enough detail to understand the full scope of changes required to meet those use-cases. The Terraform team can only work on a limited number of large initiatives at a time. I'm sorry that other things have been prioritized over this, but I do stand behind the prioritization decisions that our team has made.


In the meantime, I hope the example above helps some of you who have problems like the one described in the opening comment of this issue where it is the configuration itself that is dynamic, rather than the number of configurations. For those who have more complex systems where the number of provider configurations is what is dynamic, my suggested workaround would be to split your configuration into two parts. Again using the previous comment as an example:

  • The first configuration contains the variable "clusters" block and the single resource "azure_kubernetes_cluster" that uses for_each = var.clusters. This configuration will use only the default workspace, and will create all of the AKS clusters.

  • The second configuration contains a single provider "kubernetes" and a single resource "kubernetes_pod" and uses terraform.workspace as an AKS cluster name, like this:

    data "azurerm_kubernetes_cluster" "example" {
      name = terraform.workspace
      # ...
    }
    
    provider "kubernetes" {
      host = data.azurerm_kubernetes_cluster.example.kube_config[0].host
      # etc...
    }
    
    resource "kubernetes_pod" "example" {
      # ...
    }
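
Filling in the elided arguments of that second configuration, a minimal sketch of how the provider could be wired (assuming the data source also needs a resource group name, and that the kube_config credentials are base64-encoded, as in current azurerm provider releases):

data "azurerm_kubernetes_cluster" "example" {
  name                = terraform.workspace
  resource_group_name = "my-aks-rg" # hypothetical resource group
}

provider "kubernetes" {
  host                   = data.azurerm_kubernetes_cluster.example.kube_config[0].host
  client_certificate     = base64decode(data.azurerm_kubernetes_cluster.example.kube_config[0].client_certificate)
  client_key             = base64decode(data.azurerm_kubernetes_cluster.example.kube_config[0].client_key)
  cluster_ca_certificate = base64decode(data.azurerm_kubernetes_cluster.example.kube_config[0].cluster_ca_certificate)
}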

The workflow for adding a new cluster would then be:

  • Add a new entry to var.clusters in the first configuration and run terraform apply to create the corresponding cluster.
  • In the second configuration, run terraform workspace new CLUSTERNAME to establish a new workspace for the cluster you just created, and then run terraform apply to do the Kubernetes-cluster-level configuration for it.

The workflow to remove an existing cluster would be:

  • In the second configuration, run terraform workspace select CLUSTERNAME to switch to the workspace corresponding to the cluster you want to destroy.
  • Run terraform destroy to deregister all of the Kubernetes objects from that cluster.
  • Delete the now-empty workspace using terraform workspace delete CLUSTERNAME.
  • In the first configuration, remove CLUSTERNAME from var.clusters and run terraform apply to destroy that particular AKS cluster.

I'm not suggesting this with the implication that it is an ideal or convenient solution, but rather as a potential path for those who have a similar problem today and are looking for a pragmatic way to solve it with Terraform's current featureset.

@derekrprice

Thanks, @apparentlymart, for that very clear and detailed explanation. You've hit on exactly the config that I was trying to use. I haven't played with workspaces yet, but the workaround that I had already settled on was moving the kubernetes provider and its dependencies into a child module. This gives me some module invocations that I need to keep in sync with var.clusters, but my new add/delete workflow doesn't seem much more complex than the one that you've proposed. My config looks like this now:

variable "clusters" {
    type = map(object({
        # (some suitable cluster arguments)
    }))
}

resource "azure_kubernetes_cluster" "example" {
    for_each = var.clusters

    # (arguments using each.value)
}

module "k8s-key1" {
    source = "./k8s"
    # (arguments from the key1 cluster object)
}

module "k8s-key2" {
    source = "./k8s"
    # (arguments from the key2 cluster object)
}

Looking at this again, I could have just moved everything into the child module, gotten rid of var.clusters, and maintained this as two module invocations. This makes me suspect that there is more, or maybe less, here than meets the eye:

  1. Terraform can handle one provider dynamically configured from a dynamically created AKS instance, and its dependencies, whether adding or removing that dependency. I've maintained a configuration like this for over a year now, only this week converting from 0.11 to 0.12 and attempting to loop in an extra instance.
  2. Terraform is perfectly happy with adding and removing module configurations in my working example. It does what I would expect, despite having TWO provider configurations dynamically configured based on AKS instances, which may or may not exist.
  3. Editing the clusters variable in the hypothetical configuration should, effectively, have no more effect than adding or removing a module invocation. The clusters variable itself is a hard-coded set of values, and so should be no less deterministic than my two module invocations.

Anyhow, given those factors, it seems to me that allowing providers to use loops and resources to use dynamically named providers shouldn't introduce any more problems than already exist in my multiple module invocation scenario. Maybe I'm missing some edge cases but, again, I think I can duplicate any such cases by invoking modules multiple times with the existing feature set.

@apparentlymart
Contributor

I'm not sure I fully followed what you've been trying, @derekrprice, but if you have a provider "kubernetes" block inside your ./k8s module, then I think that if you remove one of those module blocks after the resource instances described inside it have been created, you will encounter problem number 2 from my previous comment:

  2. When removing element "bar" from var.clusters, Terraform needs to configure the provider.kubernetes["bar"] provider in order to plan and apply the destruction of kubernetes_pod.example["bar"]. However, with the configuration model as it exists today (where for_each works entirely from the configuration and not from the state) this would fail because provider.kubernetes["bar"]'s existence depends on azure_kubernetes_cluster.example["bar"]'s existence, which in turn depends on var.clusters["bar"] existing, and it doesn't anymore.

The addressing syntax will be different in your scenario -- module.k8s-key1.provider.kubernetes instead of provider.kubernetes["bar"], for example -- but the same problem applies: there are instances in your state that belong to that provider configuration, but that provider configuration is no longer present in the configuration.

You don't need to use -target on create here (problem number 1 from my previous comment) because the kubernetes provider in particular contains a special workaround: it detects the incomplete configuration resulting from that situation and skips configuring itself in that case. A couple of other providers do that too, such as mysql and postgresql. This solution doesn't generalize to all providers because it means that the provider is effectively blocked from doing any API access during planning. For mysql and postgresql that is of no consequence, but for Kubernetes in particular I've heard that this workaround is currently blocking the provider from using Kubernetes API features to make dry-run requests in order to produce an accurate plan.

I'm currently focused on an entirely separate project so I can't go any deeper on design discussion for this right now. My intent here was just to answer the earlier question about whether there were any known workarounds; I hope the two workarounds I've offered here will be useful for at least some of you.

@jurgenweber

Could this make it into v0.14?

@kgrvamsi

Late to the race, but I would like to see a solution for this issue soon. Repeating code across multiple subscriptions to perform the same operation is a pain.

@dynnamitt

Go on, this would be easy in Pulumi!

@corticalstack

7 years later.....

@adamsb6

adamsb6 commented Aug 24, 2022

I just hit this while trying to refactor some of our modules that define our AWS Transit Gateways. They had hard-coded IP blocks, but these were already available from the Terraform resources we have that define VPCs and their subnets.

I spent a long time smashing my head against this limitation, trying to make it so that additional regions wouldn't require any code changes; they could just be picked up from newly written Terraform state files. As far as I can tell, it's not possible. There's no way to instantiate or pass a provider based on a data source. You can't for_each these resources; copypasta is required.

One way to change Terraform to accommodate this: allow provider attributes to be overridden by resources. The only reason I need to pass different providers is because I need to target different AWS regions. My problem would be solved if I could set an explicit region on the resource that the provider could use when managing the resource.

This is basically what folks do when managing AWS resources manually. They don't have to create a whole new profile; they can just set the AWS_REGION environment variable or pass the --region argument.
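
For reference, the pattern this comment is pushing back against (one aliased provider configuration per region, with each resource statically bound to one of them) looks roughly like this today, sketched with hypothetical regions and resources:

provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "west"
  region = "us-west-2"
}

# Every additional region requires another provider block like the one above.
resource "aws_vpc" "west" {
  provider   = aws.west
  cidr_block = "10.1.0.0/16"
}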

@apparentlymart
Contributor

Hi all!

While doing some issue gardening today I noticed that this issue is covering a similar topic as #25244. Although this issue is older than that one, it looks like it's attracted more upvotes and has a comment with a more up-to-date overview of the design challenges than we've had in this issue so far. Also, this issue seems to have started out being about dynamic provider configuration assignment, but the later discussion is about dynamic provider configuration definition, which (as I noted in my comment over in the other issue) are two separate concerns from a technical design standpoint.

Because of all this, I'm going to close this one in favor of #25244 just to consolidate the discussion.


I also want to quickly respond to @adamsb6's comment, before I go:

It is true that the AWS provider could in principle support setting the region argument on a per-resource-instance basis, and indeed that's how the Google Cloud Platform provider is designed. In an ideal world I would like to fix it by having providers treat "location-related" settings as per-resource-instance settings which might have defaults in the provider configuration, but that isn't a very pragmatic path given how much the AWS provider (and many other providers) would need to change to make that happen.

Therefore I think in practice we're essentially stuck with the idea that region is a per-provider-configuration setting for the hashicorp/aws provider, and will need to design under that assumption.
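
For comparison, a minimal sketch of the Google provider behavior referenced above, where a location-like setting is a per-resource argument that merely defaults from the provider configuration (hypothetical project and resource names):

provider "google" {
  project = "my-project"   # hypothetical project ID
  region  = "us-central1"  # provider-level default region
}

# The resource-level region overrides the provider default; no second
# provider configuration or alias is needed.
resource "google_compute_subnetwork" "example" {
  name          = "example"
  region        = "europe-west1"
  network       = "default"
  ip_cidr_range = "10.2.0.0/16"
}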

@apparentlymart closed this as not planned (won't fix, can't repro, duplicate, stale) Aug 30, 2022
@crw added the duplicate label (issue closed because another issue already tracks this problem) Aug 31, 2022
@github-actions
Contributor

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions bot locked as resolved and limited conversation to collaborators Sep 30, 2022