Provider allows mutually-exclusive configuration options #1179

Open
dak1n1 opened this issue Feb 25, 2021 · 3 comments · Fixed by #2084

Comments

dak1n1 (Contributor) commented Feb 25, 2021

With the release of Kubernetes Provider 2.0, we began requiring explicit provider configuration, with the intention of preventing the wrong clusters from being targeted during apply. However, in implementing this, we introduced a new bug scenario in which the provider silently chooses an authentication method when more than one is supplied in the provider config. This issue was already present to some extent, but in the past it could be partially mitigated with load_config_file = false. Now that load_config_file is no longer an option, the issue has become more prominent.

We can solve this in the provider code by adding ConflictsWith to mutually-exclusive configuration options, giving users complete control over which options are used to authenticate to their Kubernetes cluster. This will throw an error when two or more conflicting options are used together.

NOTE: This change will result in configuration errors when mutually-exclusive configuration options are specified.

These conflicts have been resolved silently since the 2.0 release, but with this proposed change the error becomes explicit, notifying the user of the conflicting options and refusing to run the provider until a single valid configuration is chosen.
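
For illustration, here are the two mutually-exclusive authentication methods side by side (a sketch reusing the EKS data sources from the repro config below; the aliases are only there to keep the two blocks valid HCL). Once ConflictsWith is in place, mixing attributes from both blocks in one provider configuration would fail with an explicit error rather than being resolved silently:

# Option 1: authenticate via a kubeconfig file.
provider "kubernetes" {
  alias       = "kubeconfig"
  config_path = "~/.kube/config"
}

# Option 2: authenticate via explicit cluster credentials.
provider "kubernetes" {
  alias                  = "credentials"
  host                   = data.aws_eks_cluster.default.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.default.token
}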

Terraform Version, Provider Version and Kubernetes Version

Terraform version: 0.14.4
Kubernetes provider version: 2.0.2

Affected Resource(s)

Provider config block.

Terraform Configuration Files

The provider currently chooses KUBE_CONFIG_PATH over the explicit configuration. This is one of the bug scenarios that will be corrected.

export KUBE_CONFIG_PATH="~/.kube/config"
provider "kubernetes" {
  host                            = data.aws_eks_cluster.default.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
  token                          = data.aws_eks_cluster_auth.default.token

Debug Output

Panic Output

Steps to Reproduce

  1. Run terraform plan using the configuration above (you will also need at least one resource in the configuration to initialize the provider; see the sketch below).
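
A minimal resource such as the following is enough to force provider initialization (a sketch; the resource and namespace names are arbitrary):

resource "kubernetes_namespace" "test" {
  metadata {
    name = "provider-conflict-repro"
  }
}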

Expected Behavior

I think we should throw an error instead, and let the user choose which configuration they would like to use.

Actual Behavior

The provider silently chooses one of the given configurations. Sometimes this leads to the wrong cluster being targeted, which is what we were aiming to fix with issue #909.

Important Factoids

References

Adding more explicit configuration settings (ConflictsWith) should help solve the following issues, which have arisen since the 2.0 release.

#1134
#1167
#1127
#1131
#1175

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
github-actions commented

Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!

alexsomesan (Member) commented

this is still relevant - not stale

github-actions commented

Marking this issue as stale due to inactivity. If this issue receives no comments in the next 30 days it will automatically be closed. If this issue was automatically closed and you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. This helps our maintainers find and focus on the active issues. Maintainers may also remove the stale label at their discretion. Thank you!
