
Fix eksctl utils write-config breaks in kubectl 1.24 #5287

Merged

merged 3 commits into eksctl-io:main on May 26, 2022

Conversation

@torredil (Contributor)

Signed-off-by: Eddie Torres [email protected]

Description

Checklist

  • Added tests that cover your change (if possible)
  • Added/modified documentation as required (such as the README.md, or the userdocs directory)
  • Manually tested
  • Made sure the title of the PR is a good description that can go into the release notes
  • (Core team) Added labels for change area (e.g. area/nodegroup) and kind (e.g. kind/improvement)

@Himangini (Contributor)

Another PR for the same issue: #5288

@netlify

netlify bot commented May 19, 2022

👷 Deploy request for eksctl pending review. Latest commit: 05abe26

@torredil (Contributor, Author)

@Himangini Hi, that PR is a great contribution that changes the default API version when kubectl 1.24.0 or above is detected. However, it does not account for other cases, such as when eksctl is run without kubectl present, so this PR is still needed to fix the underlying issue.
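
In other words, the selection keys off the installed aws CLI rather than off kubectl. Below is a minimal Go sketch of that idea; the version cutoffs are inferred from the test matrix later in this thread, and the helper names are hypothetical, not the actual eksctl code.

// Illustrative sketch only: choose the exec-plugin apiVersion from the
// installed aws CLI version instead of kubectl. Cutoffs are inferred from
// the test matrix below and are NOT copied from the eksctl source.
package main

import (
	"fmt"
	"os/exec"
	"regexp"
	"strconv"
)

const (
	alphaAPIVersion = "client.authentication.k8s.io/v1alpha1"
	betaAPIVersion  = "client.authentication.k8s.io/v1beta1"
)

var awsVersionRe = regexp.MustCompile(`aws-cli/(\d+)\.(\d+)\.`)

// execAPIVersion inspects `aws --version` and returns the apiVersion to
// write into the kubeconfig, falling back to the old default on failure.
func execAPIVersion() string {
	out, err := exec.Command("aws", "--version").CombinedOutput()
	if err != nil {
		return alphaAPIVersion // aws CLI not found: keep the old default
	}
	m := awsVersionRe.FindStringSubmatch(string(out))
	if m == nil {
		return alphaAPIVersion
	}
	major, _ := strconv.Atoi(m[1])
	minor, _ := strconv.Atoi(m[2])
	// In the transcripts below, aws-cli 1.24.x and 2.7.x produce v1beta1
	// kubeconfigs, while 1.16.x and 2.6.x still produce v1alpha1.
	if (major == 1 && minor >= 24) || (major == 2 && minor >= 7) || major > 2 {
		return betaAPIVersion
	}
	return alphaAPIVersion
}

func main() {
	fmt.Println(execAPIVersion())
}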

@Skarlso (Contributor) left a comment

Hi! Awesome work, thank you! Please include the output of some manual testing, with and without kubectl installed (noting the versions tested), that shows everything is working fine. :) Thanks!

@torredil (Contributor, Author)

@Skarlso Hi! As requested, here are the logs including output of manual testing for all cases. :)
Thanks for the quick turnaround, really appreciate it!

  • aws-cli v2 (NEW) without kubectl

$ aws --version                                                                                           
aws-cli/2.7.1 Python/3.9.11 Linux/5.4.181-109.354.amzn2int.x86_64 exe/x86_64.amzn.2 prompt/off

$ kubectl                                                                                               
zsh: command not found: kubectl   

$ export KUBECONFIG=./kubeconfig-awscli2-without-kubectl                                                      

$ ./eksctl create cluster --name awscli2-without-kubectl --region us-east-2                                  
2022-05-20 08:28:38 [ℹ]  eksctl version 0.99.0-dev+9beeb3bc.2022-05-20T08:24:06Z
2022-05-20 08:28:38 [ℹ]  using region us-east-2
2022-05-20 08:28:38 [ℹ]  setting availability zones to [us-east-2a us-east-2c us-east-2b]
2022-05-20 08:28:38 [ℹ]  subnets for us-east-2a - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-20 08:28:38 [ℹ]  nodegroup "ng-8352604c" will use "" [AmazonLinux2/1.22]
2022-05-20 08:28:38 [ℹ]  using Kubernetes version 1.22
2 sequential tasks: { create cluster control plane "awscli2-without-kubectl",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        create managed nodegroup "ng-8352604c",
    }
}
2022-05-20 08:28:38 [ℹ]  building cluster stack "eksctl-awscli2-without-kubectl-cluster"
2022-05-20 08:28:38 [ℹ]  deploying stack "eksctl-awscli2-without-kubectl-cluster"
2022-05-20 08:44:12 [ℹ]  waiting for the control plane availability...
W0520 08:44:13.459591   22576 loader.go:221] Config not found: ./kubeconfig-awscli2-without-kubectl
2022-05-20 08:44:13 [✖]  kubectl not found, v1.10.0 or newer is required
2022-05-20 08:44:13 [ℹ]  cluster should be functional despite missing (or misconfigured) client binaries
2022-05-20 08:44:13 [✔]  EKS cluster "awscli2-without-kubectl" in "us-east-2" region is ready

$ cat $KUBECONFIG                                                                            
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data:
  name: awscli2-without-kubectl.us-east-2.eksctl.io
contexts:
- context:
    cluster: awscli2-without-kubectl.us-east-2.eksctl.io
    user:
  name:
current-context: 
kind: Config
preferences: {}
users:
- name:
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - eks
      - get-token
      - --cluster-name
      - awscli2-without-kubectl
      - --region
      - us-east-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      provideClusterInfo: false
  • aws-cli v2 (NEW) with kubectl

$ aws --version                                                                                           
aws-cli/2.7.1 Python/3.9.11 Linux/5.4.181-109.354.amzn2int.x86_64 exe/x86_64.amzn.2 prompt/off

$ kubectl version --client --output=json                                                                      
{
  "clientVersion": {
    "major": "1",
    "minor": "21+",
    "gitVersion": "v1.21.2-13+d2965f0db10712",
    "gitCommit": "d2965f0db1071203c6f5bc662c2827c71fc8b20d",
    "gitTreeState": "clean",
    "buildDate": "2021-06-26T01:02:11Z",
    "goVersion": "go1.16.5",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}

$ export KUBECONFIG=./kubeconfig-awscli2-with-kubectl  

$ ./eksctl create cluster --name awscli2-with-kubectl --region us-east-2                                     
2022-05-20 11:16:05 [ℹ]  eksctl version 0.99.0-dev+9beeb3bc.2022-05-20T08:24:06Z
2022-05-20 11:16:05 [ℹ]  using region us-east-2
2022-05-20 11:16:05 [ℹ]  setting availability zones to [us-east-2b us-east-2c us-east-2a]
2022-05-20 11:16:05 [ℹ]  subnets for us-east-2b - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-20 11:16:05 [ℹ]  nodegroup "ng-08497837" will use "" [AmazonLinux2/1.22]
2022-05-20 11:16:05 [ℹ]  using Kubernetes version 1.22
2 sequential tasks: { create cluster control plane "awscli2-with-kubectl",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        create managed nodegroup "ng-08497837",
    }
}
2022-05-20 11:16:05 [ℹ]  building cluster stack "eksctl-awscli2-with-kubectl-cluster"
2022-05-20 11:32:29 [ℹ]  waiting for the control plane availability...
W0520 11:32:30.508884    7934 loader.go:221] Config not found: ./kubeconfig-awscli2-with-kubectl
2022-05-20 11:32:30 [✔]  saved kubeconfig as "./kubeconfig-awscli2-with-kubectl"
2022-05-20 11:32:30 [ℹ]  no tasks
2022-05-20 11:32:30 [✔]  all EKS cluster resources for "awscli2-with-kubectl" have been created
2022-05-20 11:32:30 [ℹ]  nodegroup "ng-08497837" has 2 node(s)
2022-05-20 11:32:32 [ℹ]  kubectl command should work with "./kubeconfig-awscli2-with-kubectl", try 'kubectl --kubeconfig=./kubeconfig-awscli2-with-kubectl get nodes'
2022-05-20 11:32:32 [✔]  EKS cluster "awscli2-with-kubectl" in "us-east-2" region is ready                                    

$ cat $KUBECONFIG                                                                                            
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data:
  name: awscli2-with-kubectl.us-east-2.eksctl.io
contexts:
- context:
    cluster: awscli2-with-kubectl.us-east-2.eksctl.io
    user: 
  name: 
current-context:
kind: Config
preferences: {}
users:
- name: 
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - eks
      - get-token
      - --cluster-name
      - awscli2-with-kubectl
      - --region
      - us-east-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      provideClusterInfo: false
  • aws-cli v2 (OLD) with kubectl

$ aws --version
aws-cli/2.6.2 Python/3.9.11 Linux/5.4.181-109.354.amzn2int.x86_64 exe/x86_64.amzn.2 prompt/off

$ kubectl version --client --output=json                                                                      
{
  "clientVersion": {
    "major": "1",
    "minor": "21+",
    "gitVersion": "v1.21.2-13+d2965f0db10712",
    "gitCommit": "d2965f0db1071203c6f5bc662c2827c71fc8b20d",
    "gitTreeState": "clean",
    "buildDate": "2021-06-26T01:02:11Z",
    "goVersion": "go1.16.5",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}

$ export KUBECONFIG=./kubeconfig-awscli2-old-with-kubectl  

$ ./eksctl create cluster --name awscli2-old-with-kubectl --region us-east-2                                                                          
2022-05-20 13:02:24 [ℹ]  eksctl version 0.99.0-dev+9beeb3bc.2022-05-20T08:24:06Z
2022-05-20 13:02:24 [ℹ]  using region us-east-2
2022-05-20 13:02:24 [ℹ]  setting availability zones to [us-east-2a us-east-2b us-east-2c]
2022-05-20 13:02:24 [ℹ]  subnets for us-east-2a - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-20 13:02:24 [ℹ]  nodegroup "ng-55b6a848" will use "" [AmazonLinux2/1.22]
2022-05-20 13:02:24 [ℹ]  using Kubernetes version 1.22
2022-05-20 13:02:24 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-2 --cluster=awscli2-old-with-kubectl'
2022-05-20 13:02:24 [ℹ]
2 sequential tasks: { create cluster control plane "awscli2-old-with-kubectl",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        create managed nodegroup "ng-55b6a848",
    }
}
2022-05-20 13:02:24 [ℹ]  building cluster stack "eksctl-awscli2-old-with-kubectl-cluster"
2022-05-20 13:02:24 [ℹ]  deploying stack "eksctl-awscli2-old-with-kubectl-cluster"
2022-05-20 13:18:27 [ℹ]  waiting for the control plane availability...
W0520 13:18:28.039300    8011 loader.go:221] Config not found: ./kubeconfig-awscli2-old-with-kubectl
2022-05-20 13:18:28 [✔]  saved kubeconfig as "./kubeconfig-awscli2-old-with-kubectl"
2022-05-20 13:18:28 [ℹ]  no tasks
2022-05-20 13:18:28 [✔]  all EKS cluster resources for "awscli2-old-with-kubectl" have been created
2022-05-20 13:18:28 [ℹ]  nodegroup "ng-55b6a848" has 2 node(s)
2022-05-20 13:18:28 [ℹ]  node "ip-192-168-92-241.us-east-2.compute.internal" is ready
2022-05-20 13:18:29 [ℹ]  kubectl command should work with "./kubeconfig-awscli2-old-with-kubectl", try 'kubectl --kubeconfig=./kubeconfig-awscli2-old-with-kubectl get nodes'
2022-05-20 13:18:29 [✔]  EKS cluster "awscli2-old-with-kubectl" in "us-east-2" region is ready

$ cat $KUBECONFIG
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data:
  name: awscli2-old-with-kubectl.us-east-2.eksctl.io
contexts:
- context:
    cluster: awscli2-old-with-kubectl.us-east-2.eksctl.io
    user:
  name: 
current-context: 
kind: Config
preferences: {}
users:
- name: 
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - awscli2-old-with-kubectl
      - --region
      - us-east-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      provideClusterInfo: false
  • aws-cli v2 (OLD) without kubectl

$ aws --version
aws-cli/2.6.2 Python/3.9.11 Linux/5.4.181-109.354.amzn2int.x86_64 exe/x86_64.amzn.2 prompt/off

$ kubectl                                                                           
zsh: command not found: kubectl 

$ export KUBECONFIG=./kubeconfig-awscli2-old-without-kubectl  

$ ./eksctl create cluster --name awscli2-old-without-kubectl --region us-east-2                            
2022-05-20 14:14:11 [ℹ]  eksctl version 0.99.0-dev+9beeb3bc.2022-05-20T08:24:06Z
2022-05-20 14:14:11 [ℹ]  using region us-east-2
2022-05-20 14:14:11 [ℹ]  setting availability zones to [us-east-2b us-east-2a us-east-2c]
2022-05-20 14:14:11 [ℹ]  nodegroup "ng-d49bc901" will use "" [AmazonLinux2/1.22]
2022-05-20 14:14:11 [ℹ]  using Kubernetes version 1.22
2 sequential tasks: { create cluster control plane "awscli2-old-without-kubectl",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        create managed nodegroup "ng-d49bc901",
    }
}
2022-05-20 14:14:11 [ℹ]  building cluster stack "eksctl-awscli2-old-without-kubectl-cluster"
2022-05-20 14:14:12 [ℹ]  deploying stack "eksctl-awscli2-old-without-kubectl-cluster"
2022-05-20 14:29:20 [ℹ]  waiting for the control plane availability...
W0520 14:29:21.574132    5788 loader.go:221] Config not found: ./kubeconfig-awscli2-old-without-kubectl
2022-05-20 14:29:21 [✔]  saved kubeconfig as "./kubeconfig-awscli2-old-without-kubectl"
2022-05-20 14:29:21 [ℹ]  no tasks
2022-05-20 14:29:21 [✔]  all EKS cluster resources for "awscli2-old-without-kubectl" have been created
2022-05-20 14:29:21 [✖]  kubectl not found, v1.10.0 or newer is required
2022-05-20 14:29:21 [ℹ]  cluster should be functional despite missing (or misconfigured) client binaries
2022-05-20 14:29:21 [✔]  EKS cluster "awscli2-old-without-kubectl" in "us-east-2" region is ready

$ cat $KUBECONFIG
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: 
  name: awscli2-old-without-kubectl.us-east-2.eksctl.io
contexts:
- context:
    cluster: awscli2-old-without-kubectl.us-east-2.eksctl.io
    user: 
  name:
current-context:
kind: Config
preferences: {}
users:
- name: 
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - awscli2-old-without-kubectl
      - --region
      - us-east-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      provideClusterInfo: false
  • aws-cli v1 (NEW) without kubectl

$ aws --version                                                                               
aws-cli/1.24.4 Python/3.7.10 Linux/5.4.181-109.354.amzn2int.x86_64 botocore/1.26.4

$ kubectl                                                                                                    
zsh: command not found: kubectl

$ export KUBECONFIG=./kubeconfig-awscli-without-kubectl                                                   

$ ./eksctl create cluster --name awscli-without-kubectl --region us-east-2                                   
2022-05-20 15:21:08 [ℹ]  eksctl version 0.99.0-dev+9beeb3bc.2022-05-20T08:24:06Z
2022-05-20 15:21:08 [ℹ]  using region us-east-2
2022-05-20 15:21:08 [ℹ]  setting availability zones to [us-east-2b us-east-2a us-east-2c]
2022-05-20 15:21:08 [ℹ]  subnets for us-east-2b - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-20 15:21:08 [ℹ]  nodegroup "ng-0fd793cd" will use "" [AmazonLinux2/1.22]
2022-05-20 15:21:08 [ℹ]  using Kubernetes version 1.22
2022-05-20 15:21:08 [ℹ]
2 sequential tasks: { create cluster control plane "awscli-without-kubectl",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        create managed nodegroup "ng-0fd793cd",
    }
}
2022-05-20 15:21:08 [ℹ]  building cluster stack "eksctl-awscli-without-kubectl-cluster"
2022-05-20 15:21:08 [ℹ]  deploying stack "eksctl-awscli-without-kubectl-cluster"
2022-05-20 15:21:38 [ℹ]  waiting for CloudFormation stack "eksctl-awscli-without-kubectl-cluster"
2022-05-20 15:37:49 [✖]  kubectl not found, v1.10.0 or newer is required
2022-05-20 15:37:49 [ℹ]  cluster should be functional despite missing (or misconfigured) client binaries
2022-05-20 15:37:49 [✔]  EKS cluster "awscli-without-kubectl" in "us-east-2" region is ready

$ cat $KUBECONFIG                                                                    
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: 
  name: awscli-without-kubectl.us-east-2.eksctl.io
contexts:
- context:
    cluster: awscli-without-kubectl.us-east-2.eksctl.io
    user:
  name: 
current-context: 
kind: Config
preferences: {}
users:
- name: 
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - eks
      - get-token
      - --cluster-name
      - awscli-without-kubectl
      - --region
      - us-east-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      provideClusterInfo: false
  • aws-cli v1 (NEW) with kubectl

$ aws --version                                                                                              
aws-cli/1.24.4 Python/3.7.10 Linux/5.4.181-109.354.amzn2int.x86_64 botocore/1.26.4

$ kubectl version --client --output=json                                                                      
{
  "clientVersion": {
    "major": "1",
    "minor": "21+",
    "gitVersion": "v1.21.2-13+d2965f0db10712",
    "gitCommit": "d2965f0db1071203c6f5bc662c2827c71fc8b20d",
    "gitTreeState": "clean",
    "buildDate": "2021-06-26T01:02:11Z",
    "goVersion": "go1.16.5",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}

$ export KUBECONFIG=./kubeconfig-awscli-with-kubectl

$ ./eksctl create cluster --name awscli-with-kubectl --region us-east-2                                     
2022-05-20 16:18:19 [ℹ]  eksctl version 0.99.0-dev+9beeb3bc.2022-05-20T08:24:06Z
2022-05-20 16:18:19 [ℹ]  using region us-east-2
2022-05-20 16:18:20 [ℹ]  setting availability zones to [us-east-2a us-east-2c us-east-2b]
2022-05-20 16:18:20 [ℹ]  nodegroup "ng-bb850230" will use "" [AmazonLinux2/1.22]
2022-05-20 16:18:20 [ℹ]  using Kubernetes version 1.22
2 sequential tasks: { create cluster control plane "awscli-with-kubectl",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        create managed nodegroup "ng-bb850230",
    }
}
2022-05-20 16:18:20 [ℹ]  building cluster stack "eksctl-awscli-with-kubectl-cluster"
2022-05-20 16:18:20 [ℹ]  deploying stack "eksctl-awscli-with-kubectl-cluster"
2022-05-20 16:35:52 [ℹ]  waiting for the control plane availability...
W0520 16:35:52.946800   19029 loader.go:221] Config not found: ./kubeconfig-awscli-with-kubectl
2022-05-20 16:35:52 [✔]  saved kubeconfig as "./kubeconfig-awscli-with-kubectl"
2022-05-20 16:35:52 [ℹ]  no tasks
2022-05-20 16:35:52 [✔]  all EKS cluster resources for "awscli-with-kubectl" have been created
2022-05-20 16:35:53 [ℹ]  nodegroup "ng-bb850230" has 2 node(s)
2022-05-20 16:35:53 [ℹ]  node "ip-192-168-25-242.us-east-2.compute.internal" is ready
2022-05-20 16:35:53 [ℹ]  waiting for at least 2 node(s) to become ready in "ng-bb850230"
2022-05-20 16:35:53 [ℹ]  nodegroup "ng-bb850230" has 2 node(s)
2022-05-20 16:35:53 [ℹ]  node "ip-192-168-25-242.us-east-2.compute.internal" is ready
2022-05-20 16:35:54 [ℹ]  kubectl command should work with "./kubeconfig-awscli-with-kubectl", try 'kubectl --kubeconfig=./kubeconfig-awscli-with-kubectl get nodes'
2022-05-20 16:35:54 [✔]  EKS cluster "awscli-with-kubectl" in "us-east-2" region is ready

$ cat $KUBECONFIG                                                                            
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: 
  name: awscli-with-kubectl.us-east-2.eksctl.io
contexts:
- context:
    cluster: awscli-with-kubectl.us-east-2.eksctl.io
    user: 
  name:
current-context: 
kind: Config
preferences: {}
users:
- name: 
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - eks
      - get-token
      - --cluster-name
      - awscli-with-kubectl
      - --region
      - us-east-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      provideClusterInfo: false
  • aws-cli v1 (OLD) with kubectl

$ aws --version                                                                                              
aws-cli/1.16.312 Python/3.7.10 Linux/5.4.181-109.354.amzn2int.x86_64 botocore/1.13.48

$ kubectl version --client --output=json                                                                      
{
  "clientVersion": {
    "major": "1",
    "minor": "21+",
    "gitVersion": "v1.21.2-13+d2965f0db10712",
    "gitCommit": "d2965f0db1071203c6f5bc662c2827c71fc8b20d",
    "gitTreeState": "clean",
    "buildDate": "2021-06-26T01:02:11Z",
    "goVersion": "go1.16.5",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}

$ export KUBECONFIG=./kubeconfig-awscli-old-with-kubectl

$ ./eksctl create cluster --name awscli-old-with-kubectl --region us-east-2                                     
2022-05-20 19:28:26 [ℹ]  eksctl version 0.99.0-dev+9beeb3bc.2022-05-20T08:24:06Z
2022-05-20 19:28:26 [ℹ]  using region us-east-2
2022-05-20 19:28:27 [ℹ]  setting availability zones to [us-east-2c us-east-2a us-east-2b]
2022-05-20 19:28:27 [ℹ]  subnets for us-east-2c - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-20 19:28:27 [ℹ]  nodegroup "ng-ec041ddd" will use "" [AmazonLinux2/1.22]
2022-05-20 19:28:27 [ℹ]  using Kubernetes version 1.22
2022-05-20 19:28:27 [ℹ]
2 sequential tasks: { create cluster control plane "awscli-old-with-kubectl",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        create managed nodegroup "ng-ec041ddd",
    }
}
2022-05-20 19:28:27 [ℹ]  building cluster stack "eksctl-awscli-old-with-kubectl-cluster"
2022-05-20 19:28:27 [ℹ]  deploying stack "eksctl-awscli-old-with-kubectl-cluster"
2022-05-20 19:28:57 [ℹ]  waiting for CloudFormation stack "eksctl-awscli-old-with-kubectl-cluster"
2022-05-20 19:41:29 [ℹ]  building managed nodegroup stack "eksctl-awscli-old-with-kubectl-nodegroup-ng-ec041ddd"
2022-05-20 19:41:29 [ℹ]  deploying stack "eksctl-awscli-old-with-kubectl-nodegroup-ng-ec041ddd"
2022-05-20 19:41:29 [ℹ]  waiting for CloudFormation stack "eksctl-awscli-old-with-kubectl-nodegroup-ng-ec041ddd"
2022-05-20 19:44:08 [ℹ]  waiting for the control plane availability...
W0520 19:44:09.280846   22786 loader.go:221] Config not found: ./kubeconfig-awscli-old-with-kubectl
2022-05-20 19:44:09 [✔]  saved kubeconfig as "./kubeconfig-awscli-old-with-kubectl"
2022-05-20 19:44:09 [ℹ]  no tasks
2022-05-20 19:44:09 [✔]  all EKS cluster resources for "awscli-old-with-kubectl" have been created
2022-05-20 19:44:09 [ℹ]  nodegroup "ng-ec041ddd" has 2 node(s)
2022-05-20 19:44:09 [ℹ]  node "ip-192-168-32-246.us-east-2.compute.internal" is ready
2022-05-20 19:44:09 [ℹ]  node "ip-192-168-7-140.us-east-2.compute.internal" is ready
2022-05-20 19:44:09 [ℹ]  waiting for at least 2 node(s) to become ready in "ng-ec041ddd"
2022-05-20 19:44:09 [ℹ]  nodegroup "ng-ec041ddd" has 2 node(s)
2022-05-20 19:44:09 [ℹ]  node "ip-192-168-32-246.us-east-2.compute.internal" is ready
2022-05-20 19:44:09 [ℹ]  node "ip-192-168-7-140.us-east-2.compute.internal" is ready
2022-05-20 19:44:10 [ℹ]  kubectl command should work with "./kubeconfig-awscli-old-with-kubectl", try 'kubectl --kubeconfig=./kubeconfig-awscli-old-with-kubectl get nodes'
2022-05-20 19:44:10 [✔]  EKS cluster "awscli-old-with-kubectl" in "us-east-2" region is ready


$ cat $KUBECONFIG 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data:
  name: awscli-old-with-kubectl.us-east-2.eksctl.io
contexts:
- context:
    cluster: awscli-old-with-kubectl.us-east-2.eksctl.io
    user:
  name:
current-context:
kind: Config
preferences: {}
users:
- name:
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - awscli-old-with-kubectl
      - --region
      - us-east-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      provideClusterInfo: false
  • aws-cli v1 (OLD) without kubectl

$ aws --version                                                                                              
aws-cli/1.16.312 Python/3.7.10 Linux/5.4.181-109.354.amzn2int.x86_64 botocore/1.13.48

$ kubectl version                                                                                         
zsh: command not found: kubectl

$ export KUBECONFIG=./kubeconfig-awscli-old-without-kubectl

$ ./eksctl create cluster --name awscli-old-without-kubectl --region us-east-2

2022-05-20 19:49:55 [ℹ]  eksctl version 0.99.0-dev+9beeb3bc.2022-05-20T08:24:06Z
2022-05-20 19:49:55 [ℹ]  using region us-east-2
2022-05-20 19:49:55 [ℹ]  setting availability zones to [us-east-2a us-east-2b us-east-2c]
2022-05-20 19:49:55 [ℹ]  subnets for us-east-2a - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-20 19:49:55 [ℹ]  subnets for us-east-2b - public:192.168.32.0/19 private:192.168.128.0/19
2022-05-20 19:49:55 [ℹ]  subnets for us-east-2c - public:192.168.64.0/19 private:192.168.160.0/19
2022-05-20 19:49:55 [ℹ]  nodegroup "ng-eebc3f00" will use "" [AmazonLinux2/1.22]
2022-05-20 19:49:55 [ℹ]  using Kubernetes version 1.22
2022-05-20 19:49:55 [ℹ]  creating EKS cluster "awscli-old-without-kubectl" in "us-east-2" region with managed nodes
2022-05-20 19:49:55 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2022-05-20 19:49:55 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --cluster=awscli-old-without-kubectl'
2022-05-20 19:49:55 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "awscli-old-without-kubectl" in "us-east-2"
2022-05-20 19:49:55 [ℹ]  CloudWatch logging will not be enabled for cluster "awscli-old-without-kubectl" in "us-east-2"
2022-05-20 19:49:55 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-2 --cluster=awscli-old-without-kubectl'
2022-05-20 19:49:55 [ℹ]
2 sequential tasks: { create cluster control plane "awscli-old-without-kubectl",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        create managed nodegroup "ng-eebc3f00",
    }
}
2022-05-20 19:49:55 [ℹ]  building cluster stack "eksctl-awscli-old-without-kubectl-cluster"
2022-05-20 19:49:55 [ℹ]  deploying stack "eksctl-awscli-old-without-kubectl-cluster"
2022-05-20 19:50:25 [ℹ]  waiting for CloudFormation stack "eksctl-awscli-old-without-kubectl-cluster"
2022-05-20 20:00:56 [ℹ]  waiting for CloudFormation stack "eksctl-awscli-old-without-kubectl-cluster"
2022-05-20 20:02:57 [ℹ]  building managed nodegroup stack "eksctl-awscli-old-without-kubectl-nodegroup-ng-eebc3f00"
2022-05-20 20:02:58 [ℹ]  deploying stack "eksctl-awscli-old-without-kubectl-nodegroup-ng-eebc3f00"
2022-05-20 20:02:58 [ℹ]  waiting for CloudFormation stack "eksctl-awscli-old-without-kubectl-nodegroup-ng-eebc3f00"
2022-05-20 20:06:24 [ℹ]  waiting for the control plane availability...
W0520 20:06:24.960718    8819 loader.go:221] Config not found: ./kubeconfig-awscli-old-without-kubectl
2022-05-20 20:06:24 [✔]  saved kubeconfig as "./kubeconfig-awscli-old-without-kubectl"
2022-05-20 20:06:24 [ℹ]  no tasks
2022-05-20 20:06:24 [✔]  all EKS cluster resources for "awscli-old-without-kubectl" have been created
2022-05-20 20:06:25 [ℹ]  nodegroup "ng-eebc3f00" has 2 node(s)
2022-05-20 20:06:25 [ℹ]  node "ip-192-168-27-38.us-east-2.compute.internal" is ready
2022-05-20 20:06:25 [ℹ]  node "ip-192-168-38-235.us-east-2.compute.internal" is ready
2022-05-20 20:06:25 [ℹ]  waiting for at least 2 node(s) to become ready in "ng-eebc3f00"
2022-05-20 20:06:25 [ℹ]  nodegroup "ng-eebc3f00" has 2 node(s)
2022-05-20 20:06:25 [ℹ]  node "ip-192-168-27-38.us-east-2.compute.internal" is ready
2022-05-20 20:06:25 [ℹ]  node "ip-192-168-38-235.us-east-2.compute.internal" is ready
2022-05-20 20:06:25 [✖]  kubectl not found, v1.10.0 or newer is required
2022-05-20 20:06:25 [ℹ]  cluster should be functional despite missing (or misconfigured) client binaries
2022-05-20 20:06:25 [✔]  EKS cluster "awscli-old-without-kubectl" in "us-east-2" region is ready

$ cat $KUBECONFIG                                                                  
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: 
    server: https://7B95D4ABBE7D33AA5524388B89C189D4.gr7.us-east-2.eks.amazonaws.com
  name: awscli-old-without-kubectl.us-east-2.eksctl.io
contexts:
- context:
    cluster: awscli-old-without-kubectl.us-east-2.eksctl.io
    user:
  name: 
current-context: 
kind: Config
preferences: {}
users:
- name: 
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - awscli-old-without-kubectl
      - --region
      - us-east-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      provideClusterInfo: false

@Skarlso (Contributor) commented May 21, 2022

Thanks! Well done! What about existing clusters? Will they break, or do you just have to regenerate the kubeconfig?

@Skarlso (Contributor) commented May 21, 2022

So not just create: please also try utils write-kubeconfig. 🙏☺️

@torredil (Contributor, Author)

@Skarlso As requested! Thanks for taking a look.

  • Existing clusters using the v1alpha1 API version will break when the user upgrades to kubectl 1.24.0 or higher, irrespective of this PR: the alpha API was removed in that release. Users who hit this issue (whether they upgrade to kubectl 1.24.0 early, or once EKS starts supporting 1.24) can regenerate a working config with eksctl utils write-kubeconfig. Here are examples of that behavior with AWS CLI v1 and v2 (a small verification sketch follows the transcripts).

  • aws-cli v1

$ aws --version                                                                                           
aws-cli/1.16.312 Python/3.7.10 Linux/5.4.181-109.354.amzn2int.x86_64 botocore/1.13.48

$ export KUBECONFIG=./kubeconfig-aws-migration-old                                               

$ ./eksctl create cluster --name kubeconfig-aws-migration-old --region us-east-2
2022-05-23 15:12:28 [ℹ]  eksctl version 0.100.0-dev+ca0193069.2022-05-23T15:07:56Z
2022-05-23 15:12:28 [ℹ]  using region us-east-2
2022-05-23 15:12:28 [ℹ]  setting availability zones to [us-east-2b us-east-2c us-east-2a]
2022-05-23 15:12:28 [ℹ]  subnets for us-east-2b - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-23 15:12:28 [ℹ]  subnets for us-east-2c - public:192.168.32.0/19 private:192.168.128.0/19
2022-05-23 15:12:28 [ℹ]  subnets for us-east-2a - public:192.168.64.0/19 private:192.168.160.0/19
2022-05-23 15:12:28 [ℹ]  nodegroup "ng-870757ef" will use "" [AmazonLinux2/1.22]
2022-05-23 15:12:28 [ℹ]  using Kubernetes version 1.22
2022-05-23 15:12:28 [ℹ]  creating EKS cluster "kubeconfig-aws-migration-old" in "us-east-2" region with managed nodes
2022-05-23 15:12:28 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2022-05-23 15:12:28 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --cluster=kubeconfig-aws-migration-old'
2022-05-23 15:12:28 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "kubeconfig-aws-migration-old" in "us-east-2"
2022-05-23 15:12:28 [ℹ]  CloudWatch logging will not be enabled for cluster "kubeconfig-aws-migration-old" in "us-east-2"
2022-05-23 15:12:28 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-2 --cluster=kubeconfig-aws-migration-old'
2022-05-23 15:12:28 [ℹ]
2 sequential tasks: { create cluster control plane "kubeconfig-aws-migration-old",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        create managed nodegroup "ng-870757ef",
    }
}
2022-05-23 15:12:28 [ℹ]  building cluster stack "eksctl-kubeconfig-aws-migration-old-cluster"
2022-05-23 15:12:28 [ℹ]  deploying stack "eksctl-kubeconfig-aws-migration-old-cluster"
2022-05-23 15:12:58 [ℹ]  waiting for CloudFormation stack "eksctl-kubeconfig-aws-migration-old-cluster"
2022-05-23 15:24:30 [ℹ]  building managed nodegroup stack "eksctl-kubeconfig-aws-migration-old-nodegroup-ng-870757ef"
2022-05-23 15:24:31 [ℹ]  deploying stack "eksctl-kubeconfig-aws-migration-old-nodegroup-ng-870757ef"
2022-05-23 15:28:20 [ℹ]  waiting for the control plane availability...
W0523 15:28:20.381395    3708 loader.go:221] Config not found: ./kubeconfig-aws-migration-old
2022-05-23 15:28:20 [✔]  saved kubeconfig as "./kubeconfig-aws-migration-old"
2022-05-23 15:28:20 [ℹ]  no tasks
2022-05-23 15:28:20 [✔]  all EKS cluster resources for "kubeconfig-aws-migration-old" have been created
2022-05-23 15:28:20 [ℹ]  nodegroup "ng-870757ef" has 2 node(s)
2022-05-23 15:28:20 [ℹ]  waiting for at least 2 node(s) to become ready in "ng-870757ef"
2022-05-23 15:28:20 [ℹ]  nodegroup "ng-870757ef" has 2 node(s)
2022-05-23 15:28:21 [ℹ]  kubectl command should work with "./kubeconfig-aws-migration-old", try 'kubectl --kubeconfig=./kubeconfig-aws-migration-old get nodes'
2022-05-23 15:28:21 [✔]  EKS cluster "kubeconfig-aws-migration-old" in "us-east-2" region is ready

$ cat $KUBECONFIG                                                                            
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data:
    server:
  name: kubeconfig-aws-migration-old.us-east-2.eksctl.io
contexts:
- context:
    cluster: kubeconfig-aws-migration-old.us-east-2.eksctl.io
    user:
  name:
current-context:
kind: Config
preferences: {}
users:
- name: 
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - kubeconfig-aws-migration-old
      - --region
      - us-east-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      provideClusterInfo: false

// Upgrade
$ aws --version                                                                                                 
aws-cli/1.24.5 Python/3.7.10 Linux/5.4.181-109.354.amzn2int.x86_64 botocore/1.26.5

$ kubectl version --client --output=json                                                                                     
{
  "clientVersion": {
    "major": "1",
    "minor": "24",
    "gitVersion": "v1.24.0",
    "gitCommit": "4ce5a8954017644c5420bae81d72b09b735c21f0",
    "gitTreeState": "clean",
    "buildDate": "2022-05-03T13:46:05Z",
    "goVersion": "go1.18.1",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "kustomizeVersion": "v4.5.4"
}

$ kubectl get nodes                                                    
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

$ eksctl utils write-kubeconfig --cluster kubeconfig-aws-migration-old --region us-east-2  
2022-05-23 16:20:40 [✔]  saved kubeconfig as "./kubeconfig-aws-migration-old"

$ kubectl get nodes                                                    
NAME                                           STATUS   ROLES    AGE   VERSION
ip-192-168-59-125.us-east-2.compute.internal   Ready    <none>   54m   v1.22.6-eks-7d68063
ip-192-168-65-174.us-east-2.compute.internal   Ready    <none>   54m   v1.22.6-eks-7d68063
  • aws-cli v2

$ aws --version                                       
aws-cli/2.0.30 Python/3.7.3 Linux/5.4.181-109.354.amzn2int.x86_64 botocore/2.0.0dev34

$ export KUBECONFIG=./kubeconfig-aws-migration-new                                      

$ ./eksctl create cluster --name kubeconfig-aws-migration-new --region us-east-2      
2022-05-23 21:59:31 [ℹ]  eksctl version 0.100.0-dev+ca0193069.2022-05-23T15:07:56Z
2022-05-23 21:59:31 [ℹ]  using region us-east-2
2022-05-23 21:59:31 [ℹ]  setting availability zones to [us-east-2a us-east-2c us-east-2b]
2022-05-23 21:59:31 [ℹ]  subnets for us-east-2a - public:192.168.0.0/19 private:192.168.96.0/19
2022-05-23 21:59:31 [ℹ]  subnets for us-east-2c - public:192.168.32.0/19 private:192.168.128.0/19
2022-05-23 21:59:31 [ℹ]  subnets for us-east-2b - public:192.168.64.0/19 private:192.168.160.0/19
2022-05-23 21:59:31 [ℹ]  nodegroup "ng-6aefed23" will use "" [AmazonLinux2/1.22]
2022-05-23 21:59:31 [ℹ]  using Kubernetes version 1.22
2022-05-23 21:59:31 [ℹ]  creating EKS cluster "kubeconfig-aws-migration-new" in "us-east-2" region with managed nodes
2022-05-23 21:59:31 [ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2022-05-23 21:59:31 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-2 --cluster=kubeconfig-aws-migration-new'
2022-05-23 21:59:31 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "kubeconfig-aws-migration-new" in "us-east-2"
2022-05-23 21:59:31 [ℹ]  CloudWatch logging will not be enabled for cluster "kubeconfig-aws-migration-new" in "us-east-2"
2022-05-23 21:59:31 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-2 --cluster=kubeconfig-aws-migration-new'
2022-05-23 21:59:31 [ℹ]
2 sequential tasks: { create cluster control plane "kubeconfig-aws-migration-new",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        create managed nodegroup "ng-6aefed23",
    }
}
2022-05-23 21:59:31 [ℹ]  building cluster stack "eksctl-kubeconfig-aws-migration-new-cluster"
2022-05-23 21:59:31 [ℹ]  deploying stack "eksctl-kubeconfig-aws-migration-new-cluster"
2022-05-23 22:00:01 [ℹ]  waiting for CloudFormation stack "eksctl-kubeconfig-aws-migration-new-cluster"
2022-05-23 22:15:58 [ℹ]  waiting for the control plane availability...
2022-05-23 22:15:58 [✔]  saved kubeconfig as "./kubeconfig-aws-migration-new"
2022-05-23 22:15:58 [ℹ]  no tasks
2022-05-23 22:15:58 [✔]  all EKS cluster resources for "kubeconfig-aws-migration-new" have been created
2022-05-23 22:15:58 [ℹ]  nodegroup "ng-6aefed23" has 2 node(s)
2022-05-23 22:15:58 [ℹ]  waiting for at least 2 node(s) to become ready in "ng-6aefed23"
2022-05-23 22:15:58 [ℹ]  nodegroup "ng-6aefed23" has 2 node(s)
2022-05-23 22:15:59 [ℹ]  kubectl command should work with "./kubeconfig-aws-migration-new", try 'kubectl --kubeconfig=./kubeconfig-aws-migration-new get nodes'
2022-05-23 22:15:59 [✔]  EKS cluster "kubeconfig-aws-migration-new" in "us-east-2" region is ready

$ cat $KUBECONFIG                                               
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: 
    server:
  name: kubeconfig-aws-migration-new.us-east-2.eksctl.io
contexts:
- context:
    cluster: kubeconfig-aws-migration-new.us-east-2.eksctl.io
    user: 
  name:
current-context:
kind: Config
preferences: {}
users:
- name: 
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - eks
      - get-token
      - --cluster-name
      - kubeconfig-aws-migration-new
      - --region
      - us-east-2
      command: aws
      env:
      - name: AWS_STS_REGIONAL_ENDPOINTS
        value: regional
      provideClusterInfo: false

// Upgrade
$ aws --version                                                                                                 
aws-cli/2.7.2 Python/3.9.11 Linux/5.4.181-109.354.amzn2int.x86_64 exe/x86_64.amzn.2 prompt/off

$ kubectl version --client --output=json                                                                                     
{
  "clientVersion": {
    "major": "1",
    "minor": "24",
    "gitVersion": "v1.24.0",
    "gitCommit": "4ce5a8954017644c5420bae81d72b09b735c21f0",
    "gitTreeState": "clean",
    "buildDate": "2022-05-03T13:46:05Z",
    "goVersion": "go1.18.1",
    "compiler": "gc",
    "platform": "linux/amd64"
  },
  "kustomizeVersion": "v4.5.4"
}

$ kubectl get nodes                                                   
error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

$ eksctl utils write-kubeconfig --cluster kubeconfig-aws-migration-new --region us-east-2       
2022-05-23 22:34:59 [✔]  saved kubeconfig as "./kubeconfig-aws-migration-new"

$ kubectl get nodes     
NAME                                           STATUS   ROLES    AGE   VERSION
ip-192-168-47-217.us-east-2.compute.internal   Ready    <none>   20m   v1.22.6-eks-7d68063
ip-192-168-71-187.us-east-2.compute.internal   Ready    <none>   20m   v1.22.6-eks-7d68063
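
For anyone checking an existing cluster, here is a small stdlib-only Go sketch (an illustration, not part of this PR) that prints the exec-plugin apiVersion(s) found in the kubeconfig at $KUBECONFIG; v1alpha1 means kubectl 1.24+ will reject it until the file is regenerated with eksctl utils write-kubeconfig. As a simplifying assumption it reads only $KUBECONFIG, not the full multi-file kubeconfig search path.

// Sketch: print every exec-plugin apiVersion found in $KUBECONFIG so you
// can tell whether regeneration is needed. Simplification: reads only
// $KUBECONFIG, not the full kubeconfig search path kubectl uses.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := os.Getenv("KUBECONFIG")
	if path == "" {
		fmt.Fprintln(os.Stderr, "KUBECONFIG is not set")
		os.Exit(1)
	}
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read kubeconfig:", err)
		os.Exit(1)
	}
	// v1alpha1 is rejected by kubectl >= 1.24; v1beta1 keeps working.
	re := regexp.MustCompile(`client\.authentication\.k8s\.io/v1(alpha1|beta1)?`)
	for _, match := range re.FindAllString(string(data), -1) {
		fmt.Println(match)
	}
}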

@torredil reopened this May 24, 2022
@Skarlso (Contributor) left a comment
Thanks! Lovely fix. :)

@Himangini (Contributor) left a comment

👍🏻 thanks for your contribution ✨

@Skarlso enabled auto-merge (squash) May 26, 2022 10:26
@Skarlso merged commit fe536ce into eksctl-io:main May 26, 2022
guessi added a commit to guessi/aws-load-balancer-controller that referenced this pull request Jun 17, 2022
eksctl versions before v0.100.0 generated kubeconfigs with apiVersion "v1alpha1"; upgrade to v0.100.0 or later to have "v1beta1" generated.

For more info, please check the links below:

- https://github.com/weaveworks/eksctl/releases
- https://github.com/weaveworks/eksctl/releases/tag/v0.100.0
- eksctl-io/eksctl#5288
- eksctl-io/eksctl#5287
k8s-ci-robot pushed a commit to kubernetes-sigs/aws-load-balancer-controller that referenced this pull request Jun 24, 2022
* Add clusterName as debug info for troubleshooting
* Bump eksctl to v0.100.0 for fixing apiVersion changes
Timothy-Dougherty pushed a commit to adammw/aws-load-balancer-controller that referenced this pull request Nov 9, 2023
* Add clusterName as debug info for troubleshooting
* Bump eksctl to v0.100.0 for fixing apiVersion changes
Successfully merging this pull request may close these issues:

[Bug] eksctl utils write-config breaks in kubectl 1.24 when aws-iam-authenticator absent