v2.0.X: Revert decision on #909 #1134
Comments
Hi @guilhermeblanco, I'm sorry to hear that you're having trouble with the auth changes in v2. I'd like to dig into this a little deeper with you, if you have time available. Feel free to put something directly on my calendar at your convenience, and if nothing is available that works for you let me know.
Can't believe you guys got rid of support for KUBECONFIG... just wasted 3 hours of my day tracing this down.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉, please reach out to my human friends 👉 [email protected]. Thanks!
Description
As part of the initial bootstrap of a cluster, developers advocating 12-factor apps and diligently adopting IaC may want to not only trigger the creation of the EKS cluster, but also perform operations on top of it in the same run.

Up until the release of v2.0.0, this was achievable by setting the `load_config_file` flag to `false`. With this major release, not only was the flag removed, but the way you connect to your EKS cluster also changed significantly, doing more harm than good. Before, it could be done using tokens pulled by `aws_eks_cluster_auth`, an operation that could easily be achieved from within AWS instances without the need to pull a fresh token locally from a dedicated AWS CLI account configuration. Sure, it seems to be cluster-aware when the operation runs inside EKS containers, but it doesn't work well with standard EC2 instances, and you are forced to configure the AWS CLI with IAM to compensate.

Another impactful configuration change is the removal of the standard default path to the kubeconfig file. The standard location is `~/.kube/config` for the sheer majority of users, but a decision was made to increase the burden on probably 90% of provider users for the benefit of the 10% that work with non-standard configurations. A big failure of convention over configuration.

Lastly, the lack of documentation around how the new setup should look is another major problem. Currently, the only documentation to be found is in the v2.0.0 upgrade guide, which says: "For many users this will simply mean specifying the config_path attribute in the provider block."
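For context, a minimal sketch of the two styles under discussion (the cluster name is illustrative; `aws_eks_cluster_auth` and `config_path` are the features named above):

```hcl
# Pre-v2 style: authenticate with a token pulled by aws_eks_cluster_auth,
# skipping the local kubeconfig entirely via load_config_file = false.
data "aws_eks_cluster" "this" {
  name = "my-cluster" # illustrative cluster name
}

data "aws_eks_cluster_auth" "this" {
  name = "my-cluster"
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
  load_config_file       = false # this flag was removed in v2.0.0
}

# v2 style per the upgrade guide: point at a kubeconfig file explicitly.
provider "kubernetes" {
  config_path = "~/.kube/config"
}
```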
Many other elements are missing from this description, such as the scenario where multiple clusters are defined in the kubeconfig file and how to match the cluster name, user, and authentication with the one provided (is it `config_context`, `config_context_cluster`, or `config_context_auth_info`?). Also, how would this change work in the case described in the first paragraph of this issue report? And if we include the helm provider, how would that work as well?
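To make the multi-cluster question concrete, one plausible reading (assumed here, not spelled out by the upgrade guide) is that `config_context` selects a context from the kubeconfig by name, the two finer-grained attributes override its cluster and user halves, and the helm provider needs the same settings repeated in its own nested `kubernetes` block. Context, cluster, and user names below are illustrative:

```hcl
# Hypothetical setup for a kubeconfig containing multiple clusters.
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "staging-admin" # selects one context from the kubeconfig
  # Or, presumably, override pieces of the chosen context individually:
  # config_context_cluster   = "staging-cluster"
  # config_context_auth_info = "staging-user"
}

# The helm provider does not inherit any of this; it takes its own
# nested kubernetes block, so the configuration must be duplicated.
provider "helm" {
  kubernetes {
    config_path    = "~/.kube/config"
    config_context = "staging-admin"
  }
}
```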
Potential Terraform Configuration
N/A
References
Community Note