[Bug] propagateASGTags option not working #5420

Closed
YDKK opened this issue Jun 14, 2022 · 29 comments · Fixed by #5574
Labels
kind/bug · needs-investigation · priority/important-longterm

Comments

@YDKK

YDKK commented Jun 14, 2022

What were you trying to accomplish?

Attempted to automatically apply label and taint tags to ASGs using the propagateASGTags option.
I expected the following tags to be added to the ASGs automatically:

  • k8s.io/cluster-autoscaler/node-template/label/my-cool-label: pizza
  • k8s.io/cluster-autoscaler/node-template/taint/feaster: "true:NoExecute"

What happened?

The created ASGs did not receive the label and taint tags.
(screenshot: the created ASG's tags, missing the expected label/taint tags)

How to reproduce it?

Create a cluster using the following configuration and the eksctl create cluster -f test.yaml -v 4 command. (A way to inspect the resulting ASG tags is sketched after the config.)

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: test-cluster2
  region: ap-northeast-1
  version: "1.22"

managedNodeGroups:
  - name: test-group
    amiFamily: Bottlerocket
    instanceType: t3.small
    desiredCapacity: 0
    minSize: 0
    maxSize: 1
    iam:
      withAddonPolicies:
        autoScaler: true
    labels:
      my-cool-label: pizza
    taints:
      - key: feaster
        value: "true"
        effect: NoExecute
    propagateASGTags: true
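
For reference, one way to inspect which tags actually landed on the nodegroup's ASG (a sketch using the AWS CLI, assuming it is configured for the same account; <asg-name> is a placeholder for the name printed by the first command):

# find the ASG behind the managed nodegroup
aws eks describe-nodegroup --region ap-northeast-1 \
  --cluster-name test-cluster2 --nodegroup-name test-group \
  --query "nodegroup.resources.autoScalingGroups[0].name" \
  --output text

# list the tags currently on that ASG
aws autoscaling describe-tags --region ap-northeast-1 \
  --filters "Name=auto-scaling-group-name,Values=<asg-name>"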

Logs

2022-06-14 16:04:57 [▶]  role ARN for the current session is "arn:aws:iam::***:user/***"
2022-06-14 16:04:57 [ℹ]  eksctl version 0.101.0
2022-06-14 16:04:57 [ℹ]  using region ap-northeast-1
2022-06-14 16:04:57 [▶]  determining availability zones
2022-06-14 16:04:57 [ℹ]  setting availability zones to [ap-northeast-1d ap-northeast-1c ap-northeast-1a]
2022-06-14 16:04:57 [▶]  VPC CIDR (192.168.0.0/16) was divided into 8 subnets [192.168.0.0/19 192.168.32.0/19 192.168.64.0/19 192.168.96.0/19 192.168.128.0/19 192.168.160.0/19 192.168.192.0/19 192.168.224.0/19]
2022-06-14 16:04:57 [ℹ]  subnets for ap-northeast-1d - public:192.168.0.0/19 private:192.168.96.0/19
2022-06-14 16:04:57 [ℹ]  subnets for ap-northeast-1c - public:192.168.32.0/19 private:192.168.128.0/19
2022-06-14 16:04:57 [ℹ]  subnets for ap-northeast-1a - public:192.168.64.0/19 private:192.168.160.0/19
2022-06-14 16:04:57 [ℹ]  nodegroup "test-group" will use "" [Bottlerocket/1.22]
2022-06-14 16:04:57 [ℹ]  using Kubernetes version 1.22
2022-06-14 16:04:57 [ℹ]  creating EKS cluster "test-cluster2" in "ap-northeast-1" region with managed nodes
2022-06-14 16:04:57 [▶]  cfg.json = \
{
    "kind": "ClusterConfig",
    "apiVersion": "eksctl.io/v1alpha5",
    "metadata": {
        "name": "test-cluster2",
        "region": "ap-northeast-1",
        "version": "1.22"
    },
    "iam": {
        "withOIDC": false,
        "vpcResourceControllerPolicy": true
    },
    "vpc": {
        "cidr": "192.168.0.0/16",
        "subnets": {
            "private": {
                "ap-northeast-1a": {
                    "az": "ap-northeast-1a",
                    "cidr": "192.168.160.0/19"
                },
                "ap-northeast-1c": {
                    "az": "ap-northeast-1c",
                    "cidr": "192.168.128.0/19"
                },
                "ap-northeast-1d": {
                    "az": "ap-northeast-1d",
                    "cidr": "192.168.96.0/19"
                }
            },
            "public": {
                "ap-northeast-1a": {
                    "az": "ap-northeast-1a",
                    "cidr": "192.168.64.0/19"
                },
                "ap-northeast-1c": {
                    "az": "ap-northeast-1c",
                    "cidr": "192.168.32.0/19"
                },
                "ap-northeast-1d": {
                    "az": "ap-northeast-1d",
                    "cidr": "192.168.0.0/19"
                }
            }
        },
        "manageSharedNodeSecurityGroupRules": true,
        "autoAllocateIPv6": false,
        "nat": {
            "gateway": "Single"
        },
        "clusterEndpoints": {
            "privateAccess": false,
            "publicAccess": true
        }
    },
    "privateCluster": {
        "enabled": false,
        "skipEndpointCreation": false
    },
    "managedNodeGroups": [
        {
            "name": "test-group",
            "amiFamily": "Bottlerocket",
            "instanceType": "t3.small",
            "desiredCapacity": 0,
            "minSize": 0,
            "maxSize": 1,
            "volumeSize": 80,
            "ssh": {
                "allow": false
            },
            "labels": {
                "alpha.eksctl.io/cluster-name": "test-cluster2",
                "alpha.eksctl.io/nodegroup-name": "test-group",
                "my-cool-label": "pizza"
            },
            "privateNetworking": false,
            "tags": {
                "alpha.eksctl.io/nodegroup-name": "test-group",
                "alpha.eksctl.io/nodegroup-type": "managed"
            },
            "iam": {
                "withAddonPolicies": {
                    "imageBuilder": false,
                    "autoScaler": true,
                    "externalDNS": false,
                    "certManager": false,
                    "appMesh": null,
                    "appMeshPreview": null,
                    "ebs": false,
                    "fsx": false,
                    "efs": false,
                    "awsLoadBalancerController": false,
                    "albIngress": false,
                    "xRay": false,
                    "cloudWatch": false
                }
            },
            "securityGroups": {
                "withShared": null,
                "withLocal": null
            },
            "volumeType": "gp3",
            "volumeName": "/dev/xvdb",
            "volumeIOPS": 3000,
            "volumeThroughput": 125,
            "propagateASGTags": true,
            "disableIMDSv1": false,
            "disablePodIMDS": false,
            "instanceSelector": {},
            "bottlerocket": {
                "settings": {}
            },
            "taints": [
                {
                    "key": "feaster",
                    "value": "true",
                    "effect": "NoExecute"
                }
            ],
            "releaseVersion": ""
        }
    ],
    "availabilityZones": [
        "ap-northeast-1d",
        "ap-northeast-1c",
        "ap-northeast-1a"
    ]
}

2022-06-14 16:04:57 [ℹ]  1 nodegroup (test-group) was included (based on the include/exclude rules)
2022-06-14 16:04:57 [ℹ]  will create a CloudFormation stack for cluster itself and 0 nodegroup stack(s)
2022-06-14 16:04:57 [ℹ]  will create a CloudFormation stack for cluster itself and 1 managed nodegroup stack(s)
2022-06-14 16:04:57 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=ap-northeast-1 --cluster=test-cluster2'
2022-06-14 16:04:57 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "test-cluster2" in "ap-northeast-1"
2022-06-14 16:04:57 [ℹ]  CloudWatch logging will not be enabled for cluster "test-cluster2" in "ap-northeast-1"
2022-06-14 16:04:57 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=ap-northeast-1 --cluster=test-cluster2'
2022-06-14 16:04:57 [ℹ]
2 sequential tasks: { create cluster control plane "test-cluster2",
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        2 sequential sub-tasks: {
            create managed nodegroup "test-group",
            propagate tags to ASG for managed nodegroup "test-group",
        },
    }
}
2022-06-14 16:04:57 [▶]  started task: create cluster control plane "test-cluster2"
2022-06-14 16:04:57 [ℹ]  building cluster stack "eksctl-test-cluster2-cluster"
2022-06-14 16:04:57 [▶]  CreateStackInput = &cloudformation.CreateStackInput{StackName:(*string)(0xc000849a30), Capabilities:[]types.Capability{"CAPABILITY_IAM"}, ClientRequestToken:(*string)(nil), DisableRollback:(*bool)(0xc000bf6b50), EnableTerminationProtection:(*bool)(nil), NotificationARNs:[]string(nil), OnFailure:"", Parameters:[]types.Parameter(nil), ResourceTypes:[]string(nil), RoleARN:(*string)(nil), RollbackConfiguration:(*types.RollbackConfiguration)(nil), StackPolicyBody:(*string)(nil), StackPolicyURL:(*string)(nil), Tags:[]types.Tag{types.Tag{Key:(*string)(0xc000122e10), Value:(*string)(0xc000122e20), noSmithyDocumentSerde:document.NoSerde{}}, types.Tag{Key:(*string)(0xc000122e40), Value:(*string)(0xc000122e50), noSmithyDocumentSerde:document.NoSerde{}}, types.Tag{Key:(*string)(0xc000122e70), Value:(*string)(0xc000122e80), noSmithyDocumentSerde:document.NoSerde{}}}, TemplateBody:(*string)(0xc00088f390), TemplateURL:(*string)(nil), TimeoutInMinutes:(*int32)(nil), noSmithyDocumentSerde:document.NoSerde{}}
2022-06-14 16:04:58 [ℹ]  deploying stack "eksctl-test-cluster2-cluster"
2022-06-14 16:05:28 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster2-cluster"
2022-06-14 16:05:58 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster2-cluster"
2022-06-14 16:06:58 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster2-cluster"
2022-06-14 16:07:58 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster2-cluster"
2022-06-14 16:08:58 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster2-cluster"
2022-06-14 16:09:58 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster2-cluster"
2022-06-14 16:10:58 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster2-cluster"
2022-06-14 16:11:58 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster2-cluster"
2022-06-14 16:12:59 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster2-cluster"
2022-06-14 16:13:59 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster2-cluster"
2022-06-14 16:14:59 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster2-cluster"
2022-06-14 16:15:59 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster2-cluster"
2022-06-14 16:15:59 [▶]  processing stack outputs
2022-06-14 16:15:59 [▶]  completed task: create cluster control plane "test-cluster2"
2022-06-14 16:15:59 [▶]  started task:
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        2 sequential sub-tasks: {
            create managed nodegroup "test-group",
            propagate tags to ASG for managed nodegroup "test-group",
        },
    }

2022-06-14 16:15:59 [▶]  started task: wait for control plane to become ready
2022-06-14 16:15:59 [▶]  started task: wait for control plane to become ready
2022-06-14 16:18:00 [▶]  cluster = &types.Cluster{Arn:(*string)(0xc000abbc40), CertificateAuthority:(*types.Certificate)(0xc000abbc20), ClientRequestToken:(*string)(nil), ConnectorConfig:(*types.ConnectorConfigResponse)(nil), CreatedAt:time.Date(2022, time.June, 14, 7, 5, 43, 440000000, time.UTC), EncryptionConfig:[]types.EncryptionConfig(nil), Endpoint:(*string)(0xc000abbc60), Identity:(*types.Identity)(0xc000abbbf0), KubernetesNetworkConfig:(*types.KubernetesNetworkConfigResponse)(0xc000b30600), Logging:(*types.Logging)(0xc000ace7a0), Name:(*string)(0xc000abbc80), PlatformVersion:(*string)(0xc000abbcc0), ResourcesVpcConfig:(*types.VpcConfigResponse)(0xc000a30700), RoleArn:(*string)(0xc000abbca0), Status:"ACTIVE", Tags:map[string]string{"Name":"eksctl-test-cluster2-cluster/ControlPlane", "alpha.eksctl.io/cluster-name":"test-cluster2", "alpha.eksctl.io/eksctl-version":"0.101.0", "aws:cloudformation:logical-id":"ControlPlane", "aws:cloudformation:stack-id":"arn:aws:cloudformation:ap-northeast-1:***:stack/eksctl-test-cluster2-cluster/4c5a23b0-ebb0-11ec-ad66-06bf774ae855", "aws:cloudformation:stack-name":"eksctl-test-cluster2-cluster", "eksctl.cluster.k8s.io/v1alpha1/cluster-name":"test-cluster2"}, Version:(*string)(0xc000abbc50), noSmithyDocumentSerde:document.NoSerde{}}
2022-06-14 16:18:00 [▶]  completed task: wait for control plane to become ready
2022-06-14 16:18:00 [▶]  completed task: wait for control plane to become ready
2022-06-14 16:18:00 [▶]  started task:
    2 sequential sub-tasks: {
        create managed nodegroup "test-group",
        propagate tags to ASG for managed nodegroup "test-group",
    }
2022-06-14 16:18:00 [▶]  waiting for 1 parallel tasks to complete
2022-06-14 16:18:00 [▶]  started task:
    2 sequential sub-tasks: {
        create managed nodegroup "test-group",
        propagate tags to ASG for managed nodegroup "test-group",
    }
2022-06-14 16:18:00 [▶]  waiting for 1 parallel tasks to complete
2022-06-14 16:18:00 [▶]  started task:
    2 sequential sub-tasks: {
        create managed nodegroup "test-group",
        propagate tags to ASG for managed nodegroup "test-group",
    }

2022-06-14 16:18:00 [▶]  started task: create managed nodegroup "test-group"
2022-06-14 16:18:00 [ℹ]  building managed nodegroup stack "eksctl-test-cluster2-nodegroup-test-group"
2022-06-14 16:18:00 [▶]  CreateStackInput = &cloudformation.CreateStackInput{StackName:(*string)(0xc0009d4560), Capabilities:[]types.Capability{"CAPABILITY_IAM"}, ClientRequestToken:(*string)(nil), DisableRollback:(*bool)(0xc0006ec1a8), EnableTerminationProtection:(*bool)(nil), NotificationARNs:[]string(nil), OnFailure:"", Parameters:[]types.Parameter(nil), ResourceTypes:[]string(nil), RoleARN:(*string)(nil), RollbackConfiguration:(*types.RollbackConfiguration)(nil), StackPolicyBody:(*string)(nil), StackPolicyURL:(*string)(nil), Tags:[]types.Tag{types.Tag{Key:(*string)(0xc000122e10), Value:(*string)(0xc000122e20), noSmithyDocumentSerde:document.NoSerde{}}, types.Tag{Key:(*string)(0xc000122e40), Value:(*string)(0xc000122e50), noSmithyDocumentSerde:document.NoSerde{}}, types.Tag{Key:(*string)(0xc000122e70), Value:(*string)(0xc000122e80), noSmithyDocumentSerde:document.NoSerde{}}, types.Tag{Key:(*string)(0xc000a79180), Value:(*string)(0xc000a791b0), noSmithyDocumentSerde:document.NoSerde{}}, types.Tag{Key:(*string)(0xc000a79270), Value:(*string)(0xc000a792b0), noSmithyDocumentSerde:document.NoSerde{}}}, TemplateBody:(*string)(0xc000a792c0), TemplateURL:(*string)(nil), TimeoutInMinutes:(*int32)(nil), noSmithyDocumentSerde:document.NoSerde{}}
2022-06-14 16:18:00 [ℹ]  deploying stack "eksctl-test-cluster2-nodegroup-test-group"
2022-06-14 16:18:00 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster2-nodegroup-test-group"
2022-06-14 16:18:31 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster2-nodegroup-test-group"
2022-06-14 16:19:15 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster2-nodegroup-test-group"
2022-06-14 16:19:55 [ℹ]  waiting for CloudFormation stack "eksctl-test-cluster2-nodegroup-test-group"
2022-06-14 16:19:55 [▶]  processing stack outputs
2022-06-14 16:19:55 [▶]  completed task: create managed nodegroup "test-group"
2022-06-14 16:19:55 [▶]  started task: propagate tags to ASG for managed nodegroup "test-group"
2022-06-14 16:19:55 [▶]  completed task: propagate tags to ASG for managed nodegroup "test-group"
2022-06-14 16:19:55 [▶]  completed task:
    2 sequential sub-tasks: {
        create managed nodegroup "test-group",
        propagate tags to ASG for managed nodegroup "test-group",
    }

2022-06-14 16:19:55 [▶]  completed task:
    2 sequential sub-tasks: {
        create managed nodegroup "test-group",
        propagate tags to ASG for managed nodegroup "test-group",
    }
2022-06-14 16:19:55 [▶]  completed task:
    2 sequential sub-tasks: {
        create managed nodegroup "test-group",
        propagate tags to ASG for managed nodegroup "test-group",
    }
2022-06-14 16:19:55 [▶]  completed task:
    2 sequential sub-tasks: {
        wait for control plane to become ready,
        2 sequential sub-tasks: {
            create managed nodegroup "test-group",
            propagate tags to ASG for managed nodegroup "test-group",
        },
    }

2022-06-14 16:19:55 [ℹ]  waiting for the control plane availability...
2022-06-14 16:19:56 [▶]  merging kubeconfig files
2022-06-14 16:19:56 [▶]  setting current-context to ***@test-cluster2.ap-northeast-1.eksctl.io
2022-06-14 16:19:56 [✔]  saved kubeconfig as "C:\\Users\\***\\.kube\\config"
2022-06-14 16:19:56 [ℹ]  no tasks
2022-06-14 16:19:56 [▶]  no actual tasks
2022-06-14 16:19:56 [✔]  all EKS cluster resources for "test-cluster2" have been created
2022-06-14 16:19:56 [▶]  kubectl: "C:\\Program Files\\Docker\\Docker\\resources\\bin\\kubectl.exe"
2022-06-14 16:19:57 [▶]  kubectl version: v1.24.0
2022-06-14 16:19:57 [▶]  found authenticator: aws
2022-06-14 16:19:59 [ℹ]  kubectl command should work with "C:\\Users\\***\\.kube\\config", try 'kubectl get nodes'
2022-06-14 16:19:59 [✔]  EKS cluster "test-cluster2" in "ap-northeast-1" region is ready
2022-06-14 16:19:59 [▶]  cfg.json = \
{
    "kind": "ClusterConfig",
    "apiVersion": "eksctl.io/v1alpha5",
    "metadata": {
        "name": "test-cluster2",
        "region": "ap-northeast-1",
        "version": "1.22"
    },
    "iam": {
        "serviceRoleARN": "arn:aws:iam::***:role/eksctl-test-cluster2-cluster-ServiceRole-FCY3X3FAG8P5",
        "withOIDC": false,
        "vpcResourceControllerPolicy": true
    },
    "vpc": {
        "id": "vpc-0079f4c110b29aefb",
        "cidr": "192.168.0.0/16",
        "securityGroup": "sg-0ffb0a8612d4c3625",
        "subnets": {
            "private": {
                "ap-northeast-1a": {
                    "id": "subnet-0f748e1dae8099529",
                    "az": "ap-northeast-1a",
                    "cidr": "192.168.160.0/19"
                },
                "ap-northeast-1c": {
                    "id": "subnet-04513eab92f87cbe8",
                    "az": "ap-northeast-1c",
                    "cidr": "192.168.128.0/19"
                },
                "ap-northeast-1d": {
                    "id": "subnet-0299b6adadd4eb5ec",
                    "az": "ap-northeast-1d",
                    "cidr": "192.168.96.0/19"
                }
            },
            "public": {
                "ap-northeast-1a": {
                    "id": "subnet-00e12906014cb7f63",
                    "az": "ap-northeast-1a",
                    "cidr": "192.168.64.0/19"
                },
                "ap-northeast-1c": {
                    "id": "subnet-02a97fb688448ddb8",
                    "az": "ap-northeast-1c",
                    "cidr": "192.168.32.0/19"
                },
                "ap-northeast-1d": {
                    "id": "subnet-016923d7d216c2449",
                    "az": "ap-northeast-1d",
                    "cidr": "192.168.0.0/19"
                }
            }
        },
        "sharedNodeSecurityGroup": "sg-0fa8b3fe6164d3312",
        "manageSharedNodeSecurityGroupRules": true,
        "autoAllocateIPv6": false,
        "nat": {
            "gateway": "Disable"
        },
        "clusterEndpoints": {
            "privateAccess": false,
            "publicAccess": true
        }
    },
    "privateCluster": {
        "enabled": false,
        "skipEndpointCreation": false
    },
    "managedNodeGroups": [
        {
            "name": "test-group",
            "amiFamily": "Bottlerocket",
            "instanceType": "t3.small",
            "desiredCapacity": 0,
            "minSize": 0,
            "maxSize": 1,
            "volumeSize": 80,
            "ssh": {
                "allow": false
            },
            "labels": {
                "alpha.eksctl.io/cluster-name": "test-cluster2",
                "alpha.eksctl.io/nodegroup-name": "test-group",
                "my-cool-label": "pizza"
            },
            "privateNetworking": false,
            "tags": {
                "alpha.eksctl.io/nodegroup-name": "test-group",
                "alpha.eksctl.io/nodegroup-type": "managed"
            },
            "iam": {
                "withAddonPolicies": {
                    "imageBuilder": false,
                    "autoScaler": true,
                    "externalDNS": false,
                    "certManager": false,
                    "appMesh": null,
                    "appMeshPreview": null,
                    "ebs": false,
                    "fsx": false,
                    "efs": false,
                    "awsLoadBalancerController": false,
                    "albIngress": false,
                    "xRay": false,
                    "cloudWatch": false
                }
            },
            "securityGroups": {
                "withShared": null,
                "withLocal": null
            },
            "volumeType": "gp3",
            "volumeName": "/dev/xvdb",
            "volumeIOPS": 3000,
            "volumeThroughput": 125,
            "propagateASGTags": true,
            "disableIMDSv1": false,
            "disablePodIMDS": false,
            "instanceSelector": {},
            "bottlerocket": {
                "settings": {
                    "kubernetes": {}
                }
            },
            "taints": [
                {
                    "key": "feaster",
                    "value": "true",
                    "effect": "NoExecute"
                }
            ],
            "releaseVersion": ""
        }
    ],
    "availabilityZones": [
        "ap-northeast-1d",
        "ap-northeast-1c",
        "ap-northeast-1a"
    ]
}

Anything else we need to know?

Windows 11 Pro 21H2 Build 22000.675
Used the scoop command to install eksctl.
Default AWS profile.

Versions

$ eksctl info
eksctl version: 0.101.0
kubectl version: v1.24.0
OS: windows
@YDKK added the kind/bug label Jun 14, 2022
@github-actions
Contributor

Hello YDKK 👋 Thank you for opening an issue in the eksctl project. The team will review the issue and aim to respond within 1-3 business days. Meanwhile, please read about the Contribution and Code of Conduct guidelines here. You can find more information about eksctl on our website

@youwalther65

I faced the same issue
$ eksctl info
eksctl version: 0.101.0
kubectl version: v1.23.5
OS: linux

on AWS EKS 1.22

@Skarlso self-assigned this Jun 16, 2022
@Skarlso
Contributor

Skarlso commented Jun 16, 2022

Thanks for the issue. Will verify.

@matti
Contributor

matti commented Jun 16, 2022

same here with:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: 'test-1'
  region: 'eu-north-1'

managedNodeGroups:
  - name: 'test-16-32-6-2022-06-16-14-12-45'
    labels:
      test: 'yes'
    volumeType: 'gp3'
    volumeSize: 6
    volumeIOPS: 3000
    volumeThroughput: 125
    minSize: 0
    maxSize: 450
    spot: true
    tags:
      'k8s.io/cluster-autoscaler/node-template/label/test': 'yes'
    iam:
      withAddonPolicies:
        autoScaler: true
        imageBuilder: true
        ebs: true
        albIngress: true
    instanceTypes:
      - c5.4xlarge
      - c5a.4xlarge
    ami: 'ami-0078ce758e0c53482'

I don't really understand the codebase, but it looks like https://github.com/weaveworks/eksctl/pull/5286/files doesn't address the propagation, which is now broken/missing

@matti
Contributor

matti commented Jun 16, 2022

so, a workaround like this (a taint variant is sketched after the script):

#!/usr/bin/env bash
# usage: <script>.sh <nodegroup>  (REGION and CLUSTER_NAME must be set in the environment)
nodegroup=$1

# fetch the nodegroup's ASG
asg=$(
  aws eks describe-nodegroup --region $REGION --cluster-name $CLUSTER_NAME \
    --nodegroup-name $nodegroup \
    --query "nodegroup.resources.autoScalingGroups[0].name" \
    --output text
)

# fetch all labels from nodegroup
labels=$(
  aws eks describe-nodegroup --region $REGION --cluster-name $CLUSTER_NAME \
    --nodegroup-name $nodegroup \
    --query "nodegroup.labels" \
    --output=json | jq -r 'to_entries[] | .key + "=" + .value'
)


# copy every label except those which are `alpha.eksctl.io...`
for label in $labels; do
  label_key=${label%=*}
  label_value=${label#*=}

  case $label_key in
    alpha.eksctl.io*)
      echo "skip: $label"
      continue
    ;;
  esac

  echo "tag: $label"

  aws autoscaling create-or-update-tags --region $REGION \
    --tags "ResourceId=${asg},ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/node-template/label/${label_key},Value=${label_value},PropagateAtLaunch=true"
done
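
Not in the original script: a similar loop could cover the taints as well. A sketch, untested, assuming the same variables and that jq is available; note that the EKS API reports taint effects as NO_SCHEDULE/NO_EXECUTE/PREFER_NO_SCHEDULE, while cluster-autoscaler expects the Kubernetes spellings (NoSchedule etc.):

# fetch all taints from the nodegroup (may be empty)
taints=$(
  aws eks describe-nodegroup --region $REGION --cluster-name $CLUSTER_NAME \
    --nodegroup-name $nodegroup \
    --query "nodegroup.taints" \
    --output json | jq -r '.[]? | .key + "=" + (.value // "") + "=" + .effect'
)

for taint in $taints; do
  IFS='=' read -r key value effect <<< "$taint"

  # map EKS API effect names to the spelling cluster-autoscaler expects
  case $effect in
    NO_SCHEDULE)        effect=NoSchedule ;;
    NO_EXECUTE)         effect=NoExecute ;;
    PREFER_NO_SCHEDULE) effect=PreferNoSchedule ;;
  esac

  echo "tag taint: $key=$value:$effect"

  aws autoscaling create-or-update-tags --region $REGION \
    --tags "ResourceId=${asg},ResourceType=auto-scaling-group,Key=k8s.io/cluster-autoscaler/node-template/taint/${key},Value=${value}:${effect},PropagateAtLaunch=true"
done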

@Skarlso
Contributor

Skarlso commented Jun 17, 2022

The code propagates TAGS, not labels or taints. So if you want tags to be propagated to the ASG, you need to define tags.

  tags:
    k8s.io/cluster-autoscaler/node-template/label/my-cool-label: pizza

@Skarlso
Contributor

Skarlso commented Jun 17, 2022

(screenshot, taken from the ASG, showing the tag applied)

Please close if this answers your question. Thanks!

@Skarlso closed this as completed Jun 17, 2022
@Skarlso reopened this Jun 17, 2022
@matti
Contributor

matti commented Jun 17, 2022

the problem is that it doesn't propagate these tags to the underlying auto scaling group - they only get added to the EKS nodegroup's tags listing, which is what the screenshot above shows.

So the tags are kinda worthless, because they are not pushed to the auto scaling group

this was already asked in #4965, which was closed; please see the comment #4965 (comment)

which says "So yeah, this ticket isn't a duplicate, it's what #3793 (comment) was actually requesting."

So it was also requested in #3793, but closed.

so to reiterate: EKS nodegroup tags need to be propagated all the way to the ASG; right now they are only set on the EKS nodegroups, not on the underlying ASGs. Please see my script above for how I work around this.

@Skarlso
Contributor

Skarlso commented Jun 17, 2022

That screenshot is from the ASG.

@matti
Contributor

matti commented Jun 17, 2022

okay, I'll try to make my own screenshots soon

@youwalther65

ok, so this extension of eksctl doesn't deliver what we hoped, right?

But I ran into another thing when trying to follow the docs; not sure what I am doing wrong:

...
managedNodeGroups:
  - name: ng-spot
    amiFamily: AmazonLinux2
    instanceSelector:
      vCPUs: 2
      memory: "4" # 4 GiB, unit defaults to GiB
      cpuArchitecture: x86_64 # default value
    minSize: 1
    desiredCapacity: 2
    maxSize: 2
    volumeSize: 100
    volumeType: gp3
    volumeEncrypted: true
    privateNetworking: true
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
        - arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore
    spot: true
    labels:
      my-cool-label: pizza
      owner: waltju
    taints:
      feaster: "true:NoSchedule"
    tags:
      k8s.io/cluster-autoscaler/node-template/label/owner: "waltju"
      k8s.io/cluster-autoscaler/node-template/taint/feaster: "true:NoSchedule"
      auto-delete: no
    propagateASGTags: true

When I run:
$ eksctl create nodegroup -f eksctl-tags.yaml
Error: couldn't create node group filter from command line options: loading config file "eksctl-tags.yaml": error unmarshaling JSON: while decoding JSON: json: cannot unmarshal bool into Go struct field ManagedNodeGroup.managedNodeGroups.tags of type string

@Skarlso
Contributor

Skarlso commented Jun 17, 2022

ok, so this extension of eksctl doesn't deliver what we hoped, right?

What do you mean? It's exactly what it's supposed to do. :)

Propagate tags to the ASG, as the name suggests: propagateASGTags.

The thing you are looking for only exists for unmanaged nodegroups. Managed nodegroups are just tagged with the tags you set.

Read more here: #1571

@Skarlso
Contributor

Skarlso commented Jun 17, 2022

But I ran into another thing when trying to follow the docs; not sure what I am doing wrong:

Please open a separate ticket.

@matti
Contributor

matti commented Jun 17, 2022

@youwalther65 you have the YAML wrong

auto-delete: no

^-- YAML turns no and yes into booleans unless they are quoted

so

auto-delete: "no"

@youwalther65

youwalther65 commented Jun 17, 2022

Now I am using:...

managedNodeGroups:
    tags:
      k8s.io/cluster-autoscaler/node-template/label/owner: "waltju"
      k8s.io/cluster-autoscaler/node-template/taint/feaster: "true:NoSchedule"
      auto-delete: "no"
    propagateASGTags: true

and the tags are attached to the ASG, which is great, but with "Tag new instances = No":
(screenshot: the ASG tags, with "Tag new instances" = No)

I expected "Tag new instances = Yes" so I can use it for cost related tagging at the EC2 instance level as well! Is there another option or should I raise a feature request for this?

@Skarlso
Contributor

Skarlso commented Jun 17, 2022

This is disabled on purpose. We didn't want random tags to be attached to the EC2 instances. If you have a case for it to happen, please open a separate issue and we can triage it. Thanks. :)

@YDKK
Author

YDKK commented Jun 17, 2022

The code propagates TAGS, not labels or taints. So if you want tags to be propagated to the ASG, you need to define tags.

Ok, I understand that automatic tagging for labels and taints works only with unmanaged nodegroups.

However, the documentation for this feature refers to both managed and unmanaged nodegroups, which is very misleading and needs to be corrected.

https://github.com/weaveworks/eksctl/blob/4c1b1421b73e80a6bbe4572602e206f5690ece6e/userdocs/src/usage/autoscaling.md?plain=1#L47-L58

@Skarlso
Contributor

Skarlso commented Jun 17, 2022

@YDKK That's true. Thanks for bringing that up. We'll fix that oversight.

@youwalther65

youwalther65 commented Jun 17, 2022

This is disabled on purpose. We didn't want random tags to be attached to the EC2 instances. If you have a case for it to happen, please open a separate issue and we can triage it. Thanks. :)

I just opened this feature request:
#5443

@Himangini added the area/docs and priority/important-longterm labels Jul 5, 2022
@Tolsto
Contributor

Tolsto commented Jul 12, 2022

I also just noticed that tags set in the configuration file only get applied to the autoscaling group if you set propagateASGTags to true. However, the schema description states for tags: Applied to the Autoscaling Group, the EKS Nodegroup resource and to the EC2 instances. Therefore, I'd expect tags to get propagated to the ASG even if propagateASGTags is set to false.

@pkit

pkit commented Jul 14, 2022

@Skarlso it's not propagating.
See below:
ASG: (screenshot: the node-template tag is absent)
Launch Template: (screenshot: the k8s.io/cluster-autoscaler/node-template/label/runtime tag is present)
As you can see, the tag from the launch template, k8s.io/cluster-autoscaler/node-template/label/runtime, is not propagated.
cluster.yaml:

  - name: ng-2
    instanceType: m6i.12xlarge
    desiredCapacity: 0
    minSize: 0
    maxSize: 4
    availabilityZones: ["us-east-1b"]
    tags:
      k8s.io/cluster-autoscaler/node-template/label/runtime: caspiandb-containerd
    overrideBootstrapCommand: |
      #!/bin/bash
      set -e
      /etc/eks/bootstrap.sh dev --container-runtime containerd --kubelet-extra-args '--node-labels=runtime=caspiandb-containerd'

In our case, it means that AWS cluster-autoscaler is totally broken.
As we can see here, cluster-autoscaler relies on the ASG only: https://github.com/kubernetes/autoscaler/blob/46d7964132ba7c252663b23283a9e17a15dccc09/cluster-autoscaler/cloudprovider/aws/aws_manager.go#L481
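
A quick way to see exactly what cluster-autoscaler will find on the ASG (a sketch; <asg-name> is a placeholder):

aws autoscaling describe-tags --region us-east-1 \
  --filters "Name=auto-scaling-group-name,Values=<asg-name>" \
  --query "Tags[?starts_with(Key, 'k8s.io/cluster-autoscaler/')]"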

@Skarlso
Contributor

Skarlso commented Jul 14, 2022

The above config doesn't have propagateASGTags set to true?

@pkit

pkit commented Jul 14, 2022

It was the default...

@Skarlso
Contributor

Skarlso commented Jul 14, 2022

Sorry, not following. propagateASGTags is false by default.

@pkit

pkit commented Jul 14, 2022

It was true by default not long ago.

@Skarlso
Contributor

Skarlso commented Jul 14, 2022

I don't recall. :) You might be right, I worked on a lot of other things. However, right now, it's false by default. :) I can't see any changes to the doc saying otherwise. But I might be wrong. :))

@Himangini
Collaborator

We will investigate this issue and also check the default setting for propagateASGTags, and if there are documentation changes required, we will add those accordingly.

@nikimanoledaki self-assigned this Jul 26, 2022
@nikimanoledaki
Contributor

nikimanoledaki commented Aug 4, 2022

I just tested all these scenarios with eksctl 0.106.0. Here are the results:

Unmanaged Nodegroups

  1. tags set, propagateASGTags not set
    The tags were propagated as ASG tags.

  2. tags and propagateASGTags set
    The tags were propagated as ASG tags.

  3. labels and taints set, tags not set, and propagateASGTags set
    The labels and taints were propagated as ASG tags.

Managed Nodegroups

  1. tags set, propagateASGTags not set
    The tags were NOT propagated as ASG tags.

  2. tags and propagateASGTags set
    The tags were propagated as ASG tags.

  3. labels and taints set, tags not set, and propagateASGTags set
    The labels and taints were NOT propagated as ASG tags.

Conclusions

  • propagateASGTags propagates tags, labels, and taints for unmanaged nodegroups, and only tags for managed nodegroups.
  • tags alone propagate to the ASG by default for unmanaged nodegroups, NOT for managed nodegroups.

Apologies for any confusion. We'll be updating the documentation to reflect the current code as clearly as possible. I'm looking into why we have not added support for this for managed nodegroups as well, given the previous requests.

@nikimanoledaki
Contributor

Added a PR to update the docs to reflect the current state of eksctl. Hopefully this makes things clearer regarding the difference between managed vs unmanaged nodegroups. It also clears up the behaviour of tags, which are propagated by default for unmanaged nodegroups but NOT for managed nodegroups. Thanks @YDKK for pointing this out and everyone for your feedback!

Lastly, I opened an issue so that propagateASGTags can propagate labels+taints as ASG tags for managed nodegroups as well once and for all 😄 🔜
