Fix eksctl utils write-config breaks in kubectl 1.24 #5287
Conversation
Another PR for the same issue here: #5288
@Himangini Hi, that PR is a great contribution that changes the default API version when […]
Hi! Awesome work, thank you! Please include the output of some manual testing, with and without kubectl installed and with the versions being tested, showing that everything works fine. :) Thanks!
@Skarlso Hi! As requested, here are the logs including output of manual testing for all cases. :)
$ aws --version
aws-cli/2.7.1 Python/3.9.11 Linux/5.4.181-109.354.amzn2int.x86_64 exe/x86_64.amzn.2 prompt/off
$ kubectl
zsh: command not found: kubectl
$ export KUBECONFIG=./kubeconfig-awscli2-without-kubectl
$ ./eksctl create cluster --name awscli2-without-kubectl --region us-east-2
2022-05-20 08:28:38 [ℹ] eksctl version 0.99.0-dev+9beeb3bc.2022-05-20T08:24:06Z
2022-05-20 08:28:38 [ℹ] using region us-east-2
2022-05-20 08:28:38 [ℹ] setting availability zones to [us-east-2a us-east-2c us-east-2b]
2022-05-20 08:28:38 [ℹ] subnets for us-east-2a - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-20 08:28:38 [ℹ] nodegroup "ng-8352604c" will use "" [AmazonLinux2/1.22]
2022-05-20 08:28:38 [ℹ] using Kubernetes version 1.22
2 sequential tasks: { create cluster control plane "awscli2-without-kubectl",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "ng-8352604c",
}
}
2022-05-20 08:28:38 [ℹ] building cluster stack "eksctl-awscli2-without-kubectl-cluster"
2022-05-20 08:28:38 [ℹ] deploying stack "eksctl-awscli2-without-kubectl-cluster"
2022-05-20 08:44:12 [ℹ] waiting for the control plane availability...
W0520 08:44:13.459591 22576 loader.go:221] Config not found: ./kubeconfig-awscli2-without-kubectl
2022-05-20 08:44:13 [✖] kubectl not found, v1.10.0 or newer is required
2022-05-20 08:44:13 [ℹ] cluster should be functional despite missing (or misconfigured) client binaries
2022-05-20 08:44:13 [✔] EKS cluster "awscli2-without-kubectl" in "us-east-2" region is ready
$ cat $KUBECONFIG
apiVersion: v1
clusters:
- cluster:
certificate-authority-data:
name: awscli2-without-kubectl.us-east-2.eksctl.io
contexts:
- context:
cluster: awscli2-without-kubectl.us-east-2.eksctl.io
user:
name:
current-context:
kind: Config
preferences: {}
users:
- name:
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
args:
- eks
- get-token
- --cluster-name
- awscli2-without-kubectl
- --region
- us-east-2
command: aws
env:
- name: AWS_STS_REGIONAL_ENDPOINTS
value: regional
provideClusterInfo: false
$ aws --version
aws-cli/2.7.1 Python/3.9.11 Linux/5.4.181-109.354.amzn2int.x86_64 exe/x86_64.amzn.2 prompt/off
$ kubectl version --client --output=json
{
"clientVersion": {
"major": "1",
"minor": "21+",
"gitVersion": "v1.21.2-13+d2965f0db10712",
"gitCommit": "d2965f0db1071203c6f5bc662c2827c71fc8b20d",
"gitTreeState": "clean",
"buildDate": "2021-06-26T01:02:11Z",
"goVersion": "go1.16.5",
"compiler": "gc",
"platform": "linux/amd64"
}
}
$ export KUBECONFIG=./kubeconfig-awscli2-with-kubectl
$ ./eksctl create cluster --name awscli2-with-kubectl --region us-east-2
2022-05-20 11:16:05 [ℹ] eksctl version 0.99.0-dev+9beeb3bc.2022-05-20T08:24:06Z
2022-05-20 11:16:05 [ℹ] using region us-east-2
2022-05-20 11:16:05 [ℹ] setting availability zones to [us-east-2b us-east-2c us-east-2a]
2022-05-20 11:16:05 [ℹ] subnets for us-east-2b - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-20 11:16:05 [ℹ] nodegroup "ng-08497837" will use "" [AmazonLinux2/1.22]
2022-05-20 11:16:05 [ℹ] using Kubernetes version 1.22
2 sequential tasks: { create cluster control plane "awscli2-with-kubectl",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "ng-08497837",
}
}
2022-05-20 11:16:05 [ℹ] building cluster stack "eksctl-awscli2-with-kubectl-cluster"
2022-05-20 11:32:29 [ℹ] waiting for the control plane availability...
W0520 11:32:30.508884 7934 loader.go:221] Config not found: ./kubeconfig-awscli2-with-kubectl
2022-05-20 11:32:30 [✔] saved kubeconfig as "./kubeconfig-awscli2-with-kubectl"
2022-05-20 11:32:30 [ℹ] no tasks
2022-05-20 11:32:30 [✔] all EKS cluster resources for "awscli2-with-kubectl" have been created
2022-05-20 11:32:30 [ℹ] nodegroup "ng-08497837" has 2 node(s)
2022-05-20 11:32:32 [ℹ] kubectl command should work with "./kubeconfig-awscli2-with-kubectl", try 'kubectl --kubeconfig=./kubeconfig-awscli2-with-kubectl get nodes'
2022-05-20 11:32:32 [✔] EKS cluster "awscli2-with-kubectl" in "us-east-2" region is ready
$ cat $KUBECONFIG
apiVersion: v1
clusters:
- cluster:
certificate-authority-data:
name: awscli2-with-kubectl.us-east-2.eksctl.io
contexts:
- context:
cluster: awscli2-with-kubectl.us-east-2.eksctl.io
user:
name:
current-context:
kind: Config
preferences: {}
users:
- name:
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
args:
- eks
- get-token
- --cluster-name
- awscli2-with-kubectl
- --region
- us-east-2
command: aws
env:
- name: AWS_STS_REGIONAL_ENDPOINTS
value: regional
provideClusterInfo: false
$ aws --version
aws-cli/2.6.2 Python/3.9.11 Linux/5.4.181-109.354.amzn2int.x86_64 exe/x86_64.amzn.2 prompt/off
$ kubectl version --client --output=json
{
"clientVersion": {
"major": "1",
"minor": "21+",
"gitVersion": "v1.21.2-13+d2965f0db10712",
"gitCommit": "d2965f0db1071203c6f5bc662c2827c71fc8b20d",
"gitTreeState": "clean",
"buildDate": "2021-06-26T01:02:11Z",
"goVersion": "go1.16.5",
"compiler": "gc",
"platform": "linux/amd64"
}
}
$ export KUBECONFIG=./kubeconfig-awscli2-old-with-kubectl
$ ./eksctl create cluster --name awscli2-old-with-kubectl --region us-east-2
2022-05-20 13:02:24 [ℹ] eksctl version 0.99.0-dev+9beeb3bc.2022-05-20T08:24:06Z
2022-05-20 13:02:24 [ℹ] using region us-east-2
2022-05-20 13:02:24 [ℹ] setting availability zones to [us-east-2a us-east-2b us-east-2c]
2022-05-20 13:02:24 [ℹ] subnets for us-east-2a - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-20 13:02:24 [ℹ] nodegroup "ng-55b6a848" will use "" [AmazonLinux2/1.22]
2022-05-20 13:02:24 [ℹ] using Kubernetes version 1.22
2022-05-20 13:02:24 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-2 --cluster=awscli2-old-with-kubectl'
2022-05-20 13:02:24 [ℹ]
2 sequential tasks: { create cluster control plane "awscli2-old-with-kubectl",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "ng-55b6a848",
}
}
2022-05-20 13:02:24 [ℹ] building cluster stack "eksctl-awscli2-old-with-kubectl-cluster"
2022-05-20 13:02:24 [ℹ] deploying stack "eksctl-awscli2-old-with-kubectl-cluster"
2022-05-20 13:18:27 [ℹ] waiting for the control plane availability...
W0520 13:18:28.039300 8011 loader.go:221] Config not found: ./kubeconfig-awscli2-old-with-kubectl
2022-05-20 13:18:28 [✔] saved kubeconfig as "./kubeconfig-awscli2-old-with-kubectl"
2022-05-20 13:18:28 [ℹ] no tasks
2022-05-20 13:18:28 [✔] all EKS cluster resources for "awscli2-old-with-kubectl" have been created
2022-05-20 13:18:28 [ℹ] nodegroup "ng-55b6a848" has 2 node(s)
2022-05-20 13:18:28 [ℹ] node "ip-192-168-92-241.us-east-2.compute.internal" is ready
2022-05-20 13:18:29 [ℹ] kubectl command should work with "./kubeconfig-awscli2-old-with-kubectl", try 'kubectl --kubeconfig=./kubeconfig-awscli2-old-with-kubectl get nodes'
2022-05-20 13:18:29 [✔] EKS cluster "awscli2-old-with-kubectl" in "us-east-2" region is ready
$ cat $KUBECONFIG
apiVersion: v1
clusters:
- cluster:
certificate-authority-data:
name: awscli2-old-with-kubectl.us-east-2.eksctl.io
contexts:
- context:
cluster: awscli2-old-with-kubectl.us-east-2.eksctl.io
user:
name:
current-context:
kind: Config
preferences: {}
users:
- name:
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- eks
- get-token
- --cluster-name
- awscli2-old-with-kubectl
- --region
- us-east-2
command: aws
env:
- name: AWS_STS_REGIONAL_ENDPOINTS
value: regional
provideClusterInfo: false
$ aws --version
aws-cli/2.6.2 Python/3.9.11 Linux/5.4.181-109.354.amzn2int.x86_64 exe/x86_64.amzn.2 prompt/off
$ kubectl
zsh: command not found: kubectl
$ export KUBECONFIG=./kubeconfig-awscli2-old-without-kubectl
$ ./eksctl create cluster --name awscli2-old-without-kubectl --region us-east-2
2022-05-20 14:14:11 [ℹ] eksctl version 0.99.0-dev+9beeb3bc.2022-05-20T08:24:06Z
2022-05-20 14:14:11 [ℹ] using region us-east-2
2022-05-20 14:14:11 [ℹ] setting availability zones to [us-east-2b us-east-2a us-east-2c]
2022-05-20 14:14:11 [ℹ] nodegroup "ng-d49bc901" will use "" [AmazonLinux2/1.22]
2022-05-20 14:14:11 [ℹ] using Kubernetes version 1.22
2 sequential tasks: { create cluster control plane "awscli2-old-without-kubectl",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "ng-d49bc901",
}
}
2022-05-20 14:14:11 [ℹ] building cluster stack "eksctl-awscli2-old-without-kubectl-cluster"
2022-05-20 14:14:12 [ℹ] deploying stack "eksctl-awscli2-old-without-kubectl-cluster"
2022-05-20 14:29:20 [ℹ] waiting for the control plane availability...
W0520 14:29:21.574132 5788 loader.go:221] Config not found: ./kubeconfig-awscli2-old-without-kubectl
2022-05-20 14:29:21 [✔] saved kubeconfig as "./kubeconfig-awscli2-old-without-kubectl"
2022-05-20 14:29:21 [ℹ] no tasks
2022-05-20 14:29:21 [✔] all EKS cluster resources for "awscli2-old-without-kubectl" have been created
2022-05-20 14:29:21 [✖] kubectl not found, v1.10.0 or newer is required
2022-05-20 14:29:21 [ℹ] cluster should be functional despite missing (or misconfigured) client binaries
2022-05-20 14:29:21 [✔] EKS cluster "awscli2-old-without-kubectl" in "us-east-2" region is ready
$ cat $KUBECONFIG
apiVersion: v1
clusters:
- cluster:
certificate-authority-data:
name: awscli2-old-without-kubectl.us-east-2.eksctl.io
contexts:
- context:
cluster: awscli2-old-without-kubectl.us-east-2.eksctl.io
user:
name:
current-context:
kind: Config
preferences: {}
users:
- name:
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- eks
- get-token
- --cluster-name
- awscli2-old-without-kubectl
- --region
- us-east-2
command: aws
env:
- name: AWS_STS_REGIONAL_ENDPOINTS
value: regional
provideClusterInfo: false
$ aws --version
aws-cli/1.24.4 Python/3.7.10 Linux/5.4.181-109.354.amzn2int.x86_64 botocore/1.26.4
$ kubectl
zsh: command not found: kubectl
$ export KUBECONFIG=./kubeconfig-awscli-without-kubectl
$ ./eksctl create cluster --name awscli-without-kubectl --region us-east-2
2022-05-20 15:21:08 [ℹ] eksctl version 0.99.0-dev+9beeb3bc.2022-05-20T08:24:06Z
2022-05-20 15:21:08 [ℹ] using region us-east-2
2022-05-20 15:21:08 [ℹ] setting availability zones to [us-east-2b us-east-2a us-east-2c]
2022-05-20 15:21:08 [ℹ] subnets for us-east-2b - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-20 15:21:08 [ℹ] nodegroup "ng-0fd793cd" will use "" [AmazonLinux2/1.22]
2022-05-20 15:21:08 [ℹ] using Kubernetes version 1.22
2022-05-20 15:21:08 [ℹ]
2 sequential tasks: { create cluster control plane "awscli-without-kubectl",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "ng-0fd793cd",
}
}
2022-05-20 15:21:08 [ℹ] building cluster stack "eksctl-awscli-without-kubectl-cluster"
2022-05-20 15:21:08 [ℹ] deploying stack "eksctl-awscli-without-kubectl-cluster"
2022-05-20 15:21:38 [ℹ] waiting for CloudFormation stack "eksctl-awscli-without-kubectl-cluster"
2022-05-20 15:37:49 [✖] kubectl not found, v1.10.0 or newer is required
2022-05-20 15:37:49 [ℹ] cluster should be functional despite missing (or misconfigured) client binaries
2022-05-20 15:37:49 [✔] EKS cluster "awscli-without-kubectl" in "us-east-2" region is ready
$ cat $KUBECONFIG
apiVersion: v1
clusters:
- cluster:
certificate-authority-data:
name: awscli-without-kubectl.us-east-2.eksctl.io
contexts:
- context:
cluster: awscli-without-kubectl.us-east-2.eksctl.io
user:
name:
current-context:
kind: Config
preferences: {}
users:
- name:
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
args:
- eks
- get-token
- --cluster-name
- awscli-without-kubectl
- --region
- us-east-2
command: aws
env:
- name: AWS_STS_REGIONAL_ENDPOINTS
value: regional
provideClusterInfo: false
$ aws --version
aws-cli/1.24.4 Python/3.7.10 Linux/5.4.181-109.354.amzn2int.x86_64 botocore/1.26.4
$ kubectl version --client --output=json
{
"clientVersion": {
"major": "1",
"minor": "21+",
"gitVersion": "v1.21.2-13+d2965f0db10712",
"gitCommit": "d2965f0db1071203c6f5bc662c2827c71fc8b20d",
"gitTreeState": "clean",
"buildDate": "2021-06-26T01:02:11Z",
"goVersion": "go1.16.5",
"compiler": "gc",
"platform": "linux/amd64"
}
}
$ export KUBECONFIG=./kubeconfig-awscli-with-kubectl
$ ./eksctl create cluster --name awscli-with-kubectl --region us-east-2
2022-05-20 16:18:19 [ℹ] eksctl version 0.99.0-dev+9beeb3bc.2022-05-20T08:24:06Z
2022-05-20 16:18:19 [ℹ] using region us-east-2
2022-05-20 16:18:20 [ℹ] setting availability zones to [us-east-2a us-east-2c us-east-2b]
2022-05-20 16:18:20 [ℹ] nodegroup "ng-bb850230" will use "" [AmazonLinux2/1.22]
2022-05-20 16:18:20 [ℹ] using Kubernetes version 1.22
2 sequential tasks: { create cluster control plane "awscli-with-kubectl",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "ng-bb850230",
}
}
2022-05-20 16:18:20 [ℹ] building cluster stack "eksctl-awscli-with-kubectl-cluster"
2022-05-20 16:18:20 [ℹ] deploying stack "eksctl-awscli-with-kubectl-cluster"
2022-05-20 16:35:52 [ℹ] waiting for the control plane availability...
W0520 16:35:52.946800 19029 loader.go:221] Config not found: ./kubeconfig-awscli-with-kubectl
2022-05-20 16:35:52 [✔] saved kubeconfig as "./kubeconfig-awscli-with-kubectl"
2022-05-20 16:35:52 [ℹ] no tasks
2022-05-20 16:35:52 [✔] all EKS cluster resources for "awscli-with-kubectl" have been created
2022-05-20 16:35:53 [ℹ] nodegroup "ng-bb850230" has 2 node(s)
2022-05-20 16:35:53 [ℹ] node "ip-192-168-25-242.us-east-2.compute.internal" is ready
2022-05-20 16:35:53 [ℹ] waiting for at least 2 node(s) to become ready in "ng-bb850230"
2022-05-20 16:35:53 [ℹ] nodegroup "ng-bb850230" has 2 node(s)
2022-05-20 16:35:53 [ℹ] node "ip-192-168-25-242.us-east-2.compute.internal" is ready
2022-05-20 16:35:54 [ℹ] kubectl command should work with "./kubeconfig-awscli-with-kubectl", try 'kubectl --kubeconfig=./kubeconfig-awscli-with-kubectl get nodes'
2022-05-20 16:35:54 [✔] EKS cluster "awscli-with-kubectl" in "us-east-2" region is ready
$ cat $KUBECONFIG
apiVersion: v1
clusters:
- cluster:
certificate-authority-data:
name: awscli-with-kubectl.us-east-2.eksctl.io
contexts:
- context:
cluster: awscli-with-kubectl.us-east-2.eksctl.io
user:
name:
current-context:
kind: Config
preferences: {}
users:
- name:
user:
exec:
apiVersion: client.authentication.k8s.io/v1beta1
args:
- eks
- get-token
- --cluster-name
- awscli-with-kubectl
- --region
- us-east-2
command: aws
env:
- name: AWS_STS_REGIONAL_ENDPOINTS
value: regional
provideClusterInfo: false
$ aws --version
aws-cli/1.16.312 Python/3.7.10 Linux/5.4.181-109.354.amzn2int.x86_64 botocore/1.13.48
$ kubectl version --client --output=json
{
"clientVersion": {
"major": "1",
"minor": "21+",
"gitVersion": "v1.21.2-13+d2965f0db10712",
"gitCommit": "d2965f0db1071203c6f5bc662c2827c71fc8b20d",
"gitTreeState": "clean",
"buildDate": "2021-06-26T01:02:11Z",
"goVersion": "go1.16.5",
"compiler": "gc",
"platform": "linux/amd64"
}
}
$ export KUBECONFIG=./kubeconfig-awscli-old-with-kubectl
$ ./eksctl create cluster --name awscli-old-with-kubectl --region us-east-2
2022-05-20 19:28:26 [ℹ] eksctl version 0.99.0-dev+9beeb3bc.2022-05-20T08:24:06Z
2022-05-20 19:28:26 [ℹ] using region us-east-2
2022-05-20 19:28:27 [ℹ] setting availability zones to [us-east-2c us-east-2a us-east-2b]
2022-05-20 19:28:27 [ℹ] subnets for us-east-2c - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-20 19:28:27 [ℹ] nodegroup "ng-ec041ddd" will use "" [AmazonLinux2/1.22]
2022-05-20 19:28:27 [ℹ] using Kubernetes version 1.22
2022-05-20 19:28:27 [ℹ]
2 sequential tasks: { create cluster control plane "awscli-old-with-kubectl",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "ng-ec041ddd",
}
}
2022-05-20 19:28:27 [ℹ] building cluster stack "eksctl-awscli-old-with-kubectl-cluster"
2022-05-20 19:28:27 [ℹ] deploying stack "eksctl-awscli-old-with-kubectl-cluster"
2022-05-20 19:28:57 [ℹ] waiting for CloudFormation stack "eksctl-awscli-old-with-kubectl-cluster"
2022-05-20 19:41:29 [ℹ] building managed nodegroup stack "eksctl-awscli-old-with-kubectl-nodegroup-ng-ec041ddd"
2022-05-20 19:41:29 [ℹ] deploying stack "eksctl-awscli-old-with-kubectl-nodegroup-ng-ec041ddd"
2022-05-20 19:41:29 [ℹ] waiting for CloudFormation stack "eksctl-awscli-old-with-kubectl-nodegroup-ng-ec041ddd"
2022-05-20 19:44:08 [ℹ] waiting for the control plane availability...
W0520 19:44:09.280846 22786 loader.go:221] Config not found: ./kubeconfig-awscli-old-with-kubectl
2022-05-20 19:44:09 [✔] saved kubeconfig as "./kubeconfig-awscli-old-with-kubectl"
2022-05-20 19:44:09 [ℹ] no tasks
2022-05-20 19:44:09 [✔] all EKS cluster resources for "awscli-old-with-kubectl" have been created
2022-05-20 19:44:09 [ℹ] nodegroup "ng-ec041ddd" has 2 node(s)
2022-05-20 19:44:09 [ℹ] node "ip-192-168-32-246.us-east-2.compute.internal" is ready
2022-05-20 19:44:09 [ℹ] node "ip-192-168-7-140.us-east-2.compute.internal" is ready
2022-05-20 19:44:09 [ℹ] waiting for at least 2 node(s) to become ready in "ng-ec041ddd"
2022-05-20 19:44:09 [ℹ] nodegroup "ng-ec041ddd" has 2 node(s)
2022-05-20 19:44:09 [ℹ] node "ip-192-168-32-246.us-east-2.compute.internal" is ready
2022-05-20 19:44:09 [ℹ] node "ip-192-168-7-140.us-east-2.compute.internal" is ready
2022-05-20 19:44:10 [ℹ] kubectl command should work with "./kubeconfig-awscli-old-with-kubectl", try 'kubectl --kubeconfig=./kubeconfig-awscli-old-with-kubectl get nodes'
2022-05-20 19:44:10 [✔] EKS cluster "awscli-old-with-kubectl" in "us-east-2" region is ready
$ cat $KUBECONFIG
apiVersion: v1
clusters:
- cluster:
certificate-authority-data:
name: awscli-old-with-kubectl.us-east-2.eksctl.io
contexts:
- context:
cluster: awscli-old-with-kubectl.us-east-2.eksctl.io
user:
name:
current-context:
kind: Config
preferences: {}
users:
- name:
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- eks
- get-token
- --cluster-name
- awscli-old-with-kubectl
- --region
- us-east-2
command: aws
env:
- name: AWS_STS_REGIONAL_ENDPOINTS
value: regional
provideClusterInfo: false
$ aws --version
aws-cli/1.16.312 Python/3.7.10 Linux/5.4.181-109.354.amzn2int.x86_64 botocore/1.13.48
$ kubectl version
zsh: command not found: kubectl
$ export KUBECONFIG=./kubeconfig-awscli-old-without-kubectl
$ ./eksctl create cluster --name awscli-old-without-kubectl --region us-east-2
2022-05-20 19:49:55 [ℹ] eksctl version 0.99.0-dev+9beeb3bc.2022-05-20T08:24:06Z
2022-05-20 19:49:55 [ℹ] using region us-east-2
2022-05-20 19:49:55 [ℹ] setting availability zones to [us-east-2a us-east-2b us-east-2c]
2022-05-20 19:49:55 [ℹ] subnets for us-east-2a - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-20 19:49:55 [ℹ] subnets for us-east-2b - public:192.168.32.0/19 private:192.168.128.0/19
2022-05-20 19:49:55 [ℹ] subnets for us-east-2c - public:192.168.64.0/19 private:192.168.160.0/19
2022-05-20 19:49:55 [ℹ] nodegroup "ng-eebc3f00" will use "" [AmazonLinux2/1.22]
2022-05-20 19:49:55 [ℹ] using Kubernetes version 1.22
2022-05-20 19:49:55 [ℹ] creating EKS cluster "awscli-old-without-kubectl" in "us-east-2" region with managed nodes
2022-05-20 19:49:55 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2022-05-20 19:49:55 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --cluster=awscli-old-without-kubectl'
2022-05-20 19:49:55 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "awscli-old-without-kubectl" in "us-east-2"
2022-05-20 19:49:55 [ℹ] CloudWatch logging will not be enabled for cluster "awscli-old-without-kubectl" in "us-east-2"
2022-05-20 19:49:55 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-2 --cluster=awscli-old-without-kubectl'
2022-05-20 19:49:55 [ℹ]
2 sequential tasks: { create cluster control plane "awscli-old-without-kubectl",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "ng-eebc3f00",
}
}
2022-05-20 19:49:55 [ℹ] building cluster stack "eksctl-awscli-old-without-kubectl-cluster"
2022-05-20 19:49:55 [ℹ] deploying stack "eksctl-awscli-old-without-kubectl-cluster"
2022-05-20 19:50:25 [ℹ] waiting for CloudFormation stack "eksctl-awscli-old-without-kubectl-cluster"
2022-05-20 20:00:56 [ℹ] waiting for CloudFormation stack "eksctl-awscli-old-without-kubectl-cluster"
2022-05-20 20:02:57 [ℹ] building managed nodegroup stack "eksctl-awscli-old-without-kubectl-nodegroup-ng-eebc3f00"
2022-05-20 20:02:58 [ℹ] deploying stack "eksctl-awscli-old-without-kubectl-nodegroup-ng-eebc3f00"
2022-05-20 20:02:58 [ℹ] waiting for CloudFormation stack "eksctl-awscli-old-without-kubectl-nodegroup-ng-eebc3f00"
2022-05-20 20:06:24 [ℹ] waiting for the control plane availability...
W0520 20:06:24.960718 8819 loader.go:221] Config not found: ./kubeconfig-awscli-old-without-kubectl
2022-05-20 20:06:24 [✔] saved kubeconfig as "./kubeconfig-awscli-old-without-kubectl"
2022-05-20 20:06:24 [ℹ] no tasks
2022-05-20 20:06:24 [✔] all EKS cluster resources for "awscli-old-without-kubectl" have been created
2022-05-20 20:06:25 [ℹ] nodegroup "ng-eebc3f00" has 2 node(s)
2022-05-20 20:06:25 [ℹ] node "ip-192-168-27-38.us-east-2.compute.internal" is ready
2022-05-20 20:06:25 [ℹ] node "ip-192-168-38-235.us-east-2.compute.internal" is ready
2022-05-20 20:06:25 [ℹ] waiting for at least 2 node(s) to become ready in "ng-eebc3f00"
2022-05-20 20:06:25 [ℹ] nodegroup "ng-eebc3f00" has 2 node(s)
2022-05-20 20:06:25 [ℹ] node "ip-192-168-27-38.us-east-2.compute.internal" is ready
2022-05-20 20:06:25 [ℹ] node "ip-192-168-38-235.us-east-2.compute.internal" is ready
2022-05-20 20:06:25 [✖] kubectl not found, v1.10.0 or newer is required
2022-05-20 20:06:25 [ℹ] cluster should be functional despite missing (or misconfigured) client binaries
2022-05-20 20:06:25 [✔] EKS cluster "awscli-old-without-kubectl" in "us-east-2" region is ready
$ cat $KUBECONFIG
apiVersion: v1
clusters:
- cluster:
certificate-authority-data:
server: https://7B95D4ABBE7D33AA5524388B89C189D4.gr7.us-east-2.eks.amazonaws.com
name: awscli-old-without-kubectl.us-east-2.eksctl.io
contexts:
- context:
cluster: awscli-old-without-kubectl.us-east-2.eksctl.io
user:
name:
current-context:
kind: Config
preferences: {}
users:
- name:
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- eks
- get-token
- --cluster-name
- awscli-old-without-kubectl
- --region
- us-east-2
command: aws
env:
- name: AWS_STS_REGIONAL_ENDPOINTS
value: regional
provideClusterInfo: false
Thanks! Well done! What about existing clusters? They won't break, or do you just have to regenerate the kubeconfig?
So not just create, but please also try utils write-kubeconfig. 🙏
@Skarlso As requested! Thanks for taking a look.
$ aws --version
aws-cli/1.16.312 Python/3.7.10 Linux/5.4.181-109.354.amzn2int.x86_64 botocore/1.13.48
$ export KUBECONFIG=./kubeconfig-aws-migration-old
$ ./eksctl create cluster --name kubeconfig-aws-migration-old --region us-east-2
2022-05-23 15:12:28 [ℹ] eksctl version 0.100.0-dev+ca0193069.2022-05-23T15:07:56Z
2022-05-23 15:12:28 [ℹ] using region us-east-2
2022-05-23 15:12:28 [ℹ] setting availability zones to [us-east-2b us-east-2c us-east-2a]
2022-05-23 15:12:28 [ℹ] subnets for us-east-2b - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-23 15:12:28 [ℹ] subnets for us-east-2c - public:192.168.32.0/19 private:192.168.128.0/19
2022-05-23 15:12:28 [ℹ] subnets for us-east-2a - public:192.168.64.0/19 private:192.168.160.0/19
2022-05-23 15:12:28 [ℹ] nodegroup "ng-870757ef" will use "" [AmazonLinux2/1.22]
2022-05-23 15:12:28 [ℹ] using Kubernetes version 1.22
2022-05-23 15:12:28 [ℹ] creating EKS cluster "kubeconfig-aws-migration-old" in "us-east-2" region with managed nodes
2022-05-23 15:12:28 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2022-05-23 15:12:28 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --cluster=kubeconfig-aws-migration-old'
2022-05-23 15:12:28 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "kubeconfig-aws-migration-old" in "us-east-2"
2022-05-23 15:12:28 [ℹ] CloudWatch logging will not be enabled for cluster "kubeconfig-aws-migration-old" in "us-east-2"
2022-05-23 15:12:28 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-2 --cluster=kubeconfig-aws-migration-old'
2022-05-23 15:12:28 [ℹ]
2 sequential tasks: { create cluster control plane "kubeconfig-aws-migration-old",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "ng-870757ef",
}
}
2022-05-23 15:12:28 [ℹ] building cluster stack "eksctl-kubeconfig-aws-migration-old-cluster"
2022-05-23 15:12:28 [ℹ] deploying stack "eksctl-kubeconfig-aws-migration-old-cluster"
2022-05-23 15:12:58 [ℹ] waiting for CloudFormation stack "eksctl-kubeconfig-aws-migration-old-cluster"
2022-05-23 15:24:30 [ℹ] building managed nodegroup stack "eksctl-kubeconfig-aws-migration-old-nodegroup-ng-870757ef"
2022-05-23 15:24:31 [ℹ] deploying stack "eksctl-kubeconfig-aws-migration-old-nodegroup-ng-870757ef"
2022-05-23 15:28:20 [ℹ] waiting for the control plane availability...
W0523 15:28:20.381395 3708 loader.go:221] Config not found: ./kubeconfig-aws-migration-old
2022-05-23 15:28:20 [✔] saved kubeconfig as "./kubeconfig-aws-migration-old"
2022-05-23 15:28:20 [ℹ] no tasks
2022-05-23 15:28:20 [✔] all EKS cluster resources for "kubeconfig-aws-migration-old" have been created
2022-05-23 15:28:20 [ℹ] nodegroup "ng-870757ef" has 2 node(s)
2022-05-23 15:28:20 [ℹ] waiting for at least 2 node(s) to become ready in "ng-870757ef"
2022-05-23 15:28:20 [ℹ] nodegroup "ng-870757ef" has 2 node(s)
2022-05-23 15:28:21 [ℹ] kubectl command should work with "./kubeconfig-aws-migration-old", try 'kubectl --kubeconfig=./kubeconfig-aws-migration-old get nodes'
2022-05-23 15:28:21 [✔] EKS cluster "kubeconfig-aws-migration-old" in "us-east-2" region is ready
$ cat $KUBECONFIG
apiVersion: v1
clusters:
- cluster:
certificate-authority-data:
server:
name: kubeconfig-aws-migration-old.us-east-2.eksctl.io
contexts:
- context:
cluster: kubeconfig-aws-migration-old.us-east-2.eksctl.io
user:
name:
current-context:
kind: Config
preferences: {}
users:
- name:
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- eks
- get-token
- --cluster-name
- kubeconfig-aws-migration-old
- --region
- us-east-2
command: aws
env:
- name: AWS_STS_REGIONAL_ENDPOINTS
value: regional
provideClusterInfo: false
// Upgrade
$ aws --version
aws-cli/1.24.5 Python/3.7.10 Linux/5.4.181-109.354.amzn2int.x86_64 botocore/1.26.5
$ kubectl version --client --output=json
{
"clientVersion": {
"major": "1",
"minor": "24",
"gitVersion": "v1.24.0",
"gitCommit": "4ce5a8954017644c5420bae81d72b09b735c21f0",
"gitTreeState": "clean",
"buildDate": "2022-05-03T13:46:05Z",
"goVersion": "go1.18.1",
"compiler": "gc",
"platform": "linux/amd64"
},
"kustomizeVersion": "v4.5.4"
}
$ kubectl get nodes
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
$ eksctl utils write-kubeconfig --cluster kubeconfig-aws-migration-old --region us-east-2
2022-05-23 16:20:40 [✔] saved kubeconfig as "./kubeconfig-aws-migration-old"
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-59-125.us-east-2.compute.internal Ready <none> 54m v1.22.6-eks-7d68063
ip-192-168-65-174.us-east-2.compute.internal Ready <none> 54m v1.22.6-eks-7d68063
$ aws --version
aws-cli/2.0.30 Python/3.7.3 Linux/5.4.181-109.354.amzn2int.x86_64 botocore/2.0.0dev34
$ export KUBECONFIG=./kubeconfig-aws-migration-new
$ ./eksctl create cluster --name kubeconfig-aws-migration-new --region us-east-2
2022-05-23 21:59:31 [ℹ] eksctl version 0.100.0-dev+ca0193069.2022-05-23T15:07:56Z
2022-05-23 21:59:31 [ℹ] using region us-east-2
2022-05-23 21:59:31 [ℹ] setting availability zones to [us-east-2a us-east-2c us-east-2b]
2022-05-23 21:59:31 [ℹ] subnets for us-east-2a - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-23 21:59:31 [ℹ] subnets for us-east-2c - public:192.168.32.0/19 private:192.168.128.0/19
2022-05-23 21:59:31 [ℹ] subnets for us-east-2b - public:192.168.64.0/19 private:192.168.160.0/19
2022-05-23 21:59:31 [ℹ] nodegroup "ng-6aefed23" will use "" [AmazonLinux2/1.22]
2022-05-23 21:59:31 [ℹ] using Kubernetes version 1.22
2022-05-23 21:59:31 [ℹ] creating EKS cluster "kubeconfig-aws-migration-new" in "us-east-2" region with managed nodes
2022-05-23 21:59:31 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2022-05-23 21:59:31 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --cluster=kubeconfig-aws-migration-new'
2022-05-23 21:59:31 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "kubeconfig-aws-migration-new" in "us-east-2"
2022-05-23 21:59:31 [ℹ] CloudWatch logging will not be enabled for cluster "kubeconfig-aws-migration-new" in "us-east-2"
2022-05-23 21:59:31 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-2 --cluster=kubeconfig-aws-migration-new'
2022-05-23 21:59:31 [ℹ]
2 sequential tasks: { create cluster control plane "kubeconfig-aws-migration-new",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "ng-6aefed23",
}
}
2022-05-23 21:59:31 [ℹ] building cluster stack "eksctl-kubeconfig-aws-migration-new-cluster"
2022-05-23 21:59:31 [ℹ] deploying stack "eksctl-kubeconfig-aws-migration-new-cluster"
2022-05-23 22:00:01 [ℹ] waiting for CloudFormation stack "eksctl-kubeconfig-aws-migration-new-cluster"
2022-05-23 22:15:58 [ℹ] waiting for the control plane availability...
2022-05-23 22:15:58 [✔] saved kubeconfig as "./kubeconfig-aws-migration-new"
2022-05-23 22:15:58 [ℹ] no tasks
2022-05-23 22:15:58 [✔] all EKS cluster resources for "kubeconfig-aws-migration-new" have been created
2022-05-23 22:15:58 [ℹ] nodegroup "ng-6aefed23" has 2 node(s)
2022-05-23 22:15:58 [ℹ] waiting for at least 2 node(s) to become ready in "ng-6aefed23"
2022-05-23 22:15:58 [ℹ] nodegroup "ng-6aefed23" has 2 node(s)
2022-05-23 22:15:59 [ℹ] kubectl command should work with "./kubeconfig-aws-migration-new", try 'kubectl --kubeconfig=./kubeconfig-aws-migration-new get nodes'
2022-05-23 22:15:59 [✔] EKS cluster "kubeconfig-aws-migration-new" in "us-east-2" region is ready
$ cat $KUBECONFIG
apiVersion: v1
clusters:
- cluster:
certificate-authority-data:
server:
name: kubeconfig-aws-migration-new.us-east-2.eksctl.io
contexts:
- context:
cluster: kubeconfig-aws-migration-new.us-east-2.eksctl.io
user:
name:
current-context:
kind: Config
preferences: {}
users:
- name:
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- eks
- get-token
- --cluster-name
- kubeconfig-aws-migration-new
- --region
- us-east-2
command: aws
env:
- name: AWS_STS_REGIONAL_ENDPOINTS
value: regional
provideClusterInfo: false
// Upgrade
$ aws --version
aws-cli/2.7.2 Python/3.9.11 Linux/5.4.181-109.354.amzn2int.x86_64 exe/x86_64.amzn.2 prompt/off
$ kubectl version --client --output=json
{
"clientVersion": {
"major": "1",
"minor": "24",
"gitVersion": "v1.24.0",
"gitCommit": "4ce5a8954017644c5420bae81d72b09b735c21f0",
"gitTreeState": "clean",
"buildDate": "2022-05-03T13:46:05Z",
"goVersion": "go1.18.1",
"compiler": "gc",
"platform": "linux/amd64"
},
"kustomizeVersion": "v4.5.4"
}
$ kubectl get nodes
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
$ eksctl utils write-kubeconfig --cluster kubeconfig-aws-migration-new --region us-east-2
2022-05-23 22:34:59 [✔] saved kubeconfig as "./kubeconfig-aws-migration-new"
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-192-168-47-217.us-east-2.compute.internal Ready <none> 20m v1.22.6-eks-7d68063
ip-192-168-71-187.us-east-2.compute.internal Ready <none> 20m v1.22.6-eks-7d68063
Signed-off-by: Eddie Torres <[email protected]>
Thanks! Lovely fix. :)
👍🏻 thanks for your contribution ✨
eksctl version before v0.100.0 generated kubeconfig with apiVersion "v1alpha1", needed to upgrade it to 0.100.0 or later to have "v1beta1" generated. for more info, please check for the links below: - https://github.com/weaveworks/eksctl/releases - https://github.com/weaveworks/eksctl/releases/tag/v0.100.0 - eksctl-io/eksctl#5288 - eksctl-io/eksctl#5287
* Add clusterName as debug info for troubleshooting * Bump eksctl to v0.100.0 for fixing apiVersion changes eksctl version before v0.100.0 generated kubeconfig with apiVersion "v1alpha1", needed to upgrade it to 0.100.0 or later to have "v1beta1" generated. for more info, please check for the links below: - https://github.com/weaveworks/eksctl/releases - https://github.com/weaveworks/eksctl/releases/tag/v0.100.0 - eksctl-io/eksctl#5288 - eksctl-io/eksctl#5287
Signed-off-by: Eddie Torres [email protected]
Description
Fixes: eksctl utils write-config breaks in kubectl 1.24 when aws-iam-authenticator absent (#5257).

Moves the authenticatorIsBetaVersion check as seen here: https://github.com/weaveworks/eksctl/blob/637d454ef59305b4af6633ff939674eff1f408c2/pkg/utils/kubeconfig/kubeconfig.go#L161-L166 to case AWSEKSAuthenticator: https://github.com/weaveworks/eksctl/blob/637d454ef59305b4af6633ff939674eff1f408c2/pkg/utils/kubeconfig/kubeconfig.go#L185
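To make the shape of the change easier to read, here is a rough, hypothetical sketch of the decision the kubeconfig writer makes when picking the exec-plugin apiVersion. This is not eksctl's actual code: the constant values, the function execAPIVersion, and the two probe callbacks are assumptions made for illustration only; the real change is in the kubeconfig.go lines linked above.

```go
// Hypothetical sketch only; not eksctl's actual implementation.
package kubeconfig

const (
	alphaAPIVersion = "client.authentication.k8s.io/v1alpha1"
	betaAPIVersion  = "client.authentication.k8s.io/v1beta1"

	// Authenticator commands that can be written into the kubeconfig.
	awsIAMAuthenticator = "aws-iam-authenticator"
	awsEKSAuthenticator = "aws" // i.e. "aws eks get-token"
)

// execAPIVersion chooses the exec-plugin apiVersion for the generated
// kubeconfig. Each probe reports whether the binary that will actually
// be invoked already supports v1beta1; probing only that binary is the
// point of the fix, so a missing aws-iam-authenticator no longer forces
// v1alpha1 on users of the AWS CLI.
func execAPIVersion(cmd string, awsCLISupportsBeta, authenticatorIsBeta func() bool) string {
	switch cmd {
	case awsEKSAuthenticator:
		// Decide based on the installed AWS CLI, never on kubectl or
		// aws-iam-authenticator, which may not be installed at all.
		if awsCLISupportsBeta() {
			return betaAPIVersion
		}
		return alphaAPIVersion
	case awsIAMAuthenticator:
		// Only here does the authenticator's own version matter.
		if authenticatorIsBeta() {
			return betaAPIVersion
		}
		return alphaAPIVersion
	default:
		return alphaAPIVersion
	}
}
```

Under this shape the v1alpha1 fallback survives only for genuinely old client tooling, which matches the kubeconfigs captured in the testing logs above: aws-cli 2.7.x and 1.24.x produce v1beta1, while 2.6.x and 1.16.x still produce v1alpha1.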
Checklist
- Added/modified documentation as required (such as the README.md, or the userdocs directory)
- Added labels for change area (e.g. area/nodegroup) and kind (e.g. kind/improvement)