ExistingEKSCluster: unable to deploy awsnative-observability-pattern due to error: serviceaccounts "aws-load-balancer-controller" already exists #172

Closed
JonVDB opened this issue Jul 22, 2024 · 6 comments

Comments

JonVDB commented Jul 22, 2024

Describe the bug
When trying to deploy existing-eks-awsnative-observability-pattern using the steps in EKS Blueprints for CDK Workshop: AWS Native Observability Pattern for an EKS Blueprints stack that has already deployed the AwsLoadBalancerControllerAddOn(), the deployment returns the error: error when creating "/tmp/manifest.yaml": serviceaccounts "aws-load-balancer-controller" already exists.

I don't understand why it tries to deploy a second copy of the Load Balancer Controller.

To Reproduce
Steps to reproduce the behavior:

  1. Follow the steps in EKS Blueprints for CDK Workshop: AWS Native Observability Pattern.
  2. Deploy an EKS Blueprints for CDK cluster that includes AwsLoadBalancerControllerAddOn() without any special configurations.
    (This stack also includes the nodeRole, mngProps and clusterProvider as instructed in EKS Blueprints for CDK Workshop: AWS Native Observability Pattern.)
  3. The deployment of the main stack succeeds.
  4. Deploy the existing-eks-awsnative-observability-pattern stack, but with XrayAddOn() replaced by CloudWatchLogsAddon() and enableControlPlaneLogging() added, as instructed in EKS Blueprints for CDK Workshop: AWS Native Observability Pattern. (A sketch of this stack follows the list.)
  5. The deployment fails with the error: error when creating "/tmp/manifest.yaml": serviceaccounts "aws-load-balancer-controller" already exists.
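
For reference, the step-4 stack looks roughly like the sketch below. This is reconstructed from the workshop steps rather than copied from the pattern source; the existing-cluster lookup, the buildObservability helper name, and the CloudWatchLogsAddon props are my assumptions.

// Sketch of the step-4 stack (not the exact pattern source).
import { Construct } from 'constructs';
import * as blueprints from '@aws-quickstart/eks-blueprints';

// importClusterProvider stands in for the pattern's lookup of the
// already-deployed cluster; how it is constructed is elided here.
export function buildObservability(scope: Construct, id: string,
    importClusterProvider: blueprints.ClusterProvider) {
  return blueprints.ObservabilityBuilder.builder()
    .clusterProvider(importClusterProvider)
    // XrayAddOn() replaced with CloudWatchLogsAddon(), per step 4.
    .addOns(new blueprints.addons.CloudWatchLogsAddon({
      logGroupPrefix: '/aws/eks/dev-cluster', // assumed value
    }))
    // Per this thread, this also pulls in the AWS Load Balancer Controller,
    // which is what collides with the copy the base blueprint installed.
    .enableControlPlaneLogging()
    .build(scope, id);
}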

Expected behavior
The ObservabilityBuilder can deploy the existing-eks-awsnative-observability-pattern without issue.

Screenshots
No screenshots, but I will provide code snippets & error logs below.

My main EKS stack in lib:

// lib/my-eks-blueprints-stack.ts
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import {
  EksBlueprint,
  ClusterAddOn,
  AwsLoadBalancerControllerAddOn,
  CreateRoleProvider,
  MngClusterProvider,
  MngClusterProviderProps,
  getNamedResource,
} from '@aws-quickstart/eks-blueprints';

export default class ClusterConstruct extends Construct {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id);

    const account = props?.env?.account!;
    const region = props?.env?.region!;

    const nodeRole = new CreateRoleProvider("blueprint-node-role", new cdk.aws_iam.ServicePrincipal("ec2.amazonaws.com"),
    [
        cdk.aws_iam.ManagedPolicy.fromAwsManagedPolicyName("AmazonEKSWorkerNodePolicy"),
        cdk.aws_iam.ManagedPolicy.fromAwsManagedPolicyName("AmazonEC2ContainerRegistryReadOnly"),
        cdk.aws_iam.ManagedPolicy.fromAwsManagedPolicyName("AmazonSSMManagedInstanceCore"),
        cdk.aws_iam.ManagedPolicy.fromAwsManagedPolicyName("AmazonEKS_CNI_Policy"),
        cdk.aws_iam.ManagedPolicy.fromAwsManagedPolicyName("CloudWatchAgentServerPolicy"),
    ]);

    const mngProps: MngClusterProviderProps = {
      version: cdk.aws_eks.KubernetesVersion.of('auto'),
      instanceTypes: [new cdk.aws_ec2.InstanceType("m5.xlarge")],
      amiType: cdk.aws_eks.NodegroupAmiType.AL2_X86_64,
      nodeRole: getNamedResource("node-role") as cdk.aws_iam.Role,
      desiredSize: 1,
      maxSize: 2, 
    };

    const addOns: Array<ClusterAddOn> = [
      new AwsLoadBalancerControllerAddOn(),
    ];

    // Define, initialize and build the EKS Blueprints cluster with all addOns, teams, etc...
    const blueprint = EksBlueprint.builder()
      .version('auto')
      .account(account)
      .region(region)
      .clusterProvider(new MngClusterProvider(mngProps))
      .resourceProvider("node-role", nodeRole)
      .addOns(...addOns)
      .teams()
      .build(scope, id + '-eks-cdk-blueprint');
  }
}

My main EKS stack in bin:

// bin/my-eks-blueprints.ts
import * as cdk from 'aws-cdk-lib';
import ClusterConstruct from '../lib/my-eks-blueprints-stack';
import ExistingEksAwsNativeObservabilityPattern from '../lib/existing-eks-awsnative-observability-pattern';
import { errorHandler } from '../lib/common/construct-utils';

const app = new cdk.App();
const account = process.env.CDK_DEFAULT_ACCOUNT!;
const region = process.env.CDK_DEFAULT_REGION;
const env = { account, region };

new ClusterConstruct(app, 'dev-', { env });

new ExistingEksAwsNativeObservabilityPattern().buildAsync(app, 'dev--eks-e-').catch((error) => {
    errorHandler(app, "Existing Cluster Pattern is missing information of existing cluster: " + error);
});

Error:
[screenshot: error when creating "/tmp/manifest.yaml": serviceaccounts "aws-load-balancer-controller" already exists]

Additional context
N/A

CDK version used
2.133.0 (build dcc1e75)

EKS Blueprints Version
1.15.1

Node.js Version
v20.11.0

Environment details (OS name and version, etc.)
Windows 11 Pro 22H2

JonVDB commented Jul 22, 2024

I have also confirmed that all of the above deploys fine when AwsLoadBalancerControllerAddOn() is not added to my own EKS cluster.

elamaran11 (Contributor) commented:

@JonVDB First question: how did you create your EKS cluster before following the existing-cluster pattern? Was it created via Blueprints? Either way, if AwsLoadBalancerControllerAddOn was already added to your cluster before you used the existing-cluster pattern, you are expected to run into this issue, because you are deploying the same add-on twice. .enableControlPlaneLogging() will take care of adding AwsLoadBalancerControllerAddOn, so you basically don't need to add that add-on yourself.
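
Concretely, against the stack posted above, the implied change is to drop the duplicate add-on from the base blueprint. A sketch of the relevant lines:

// lib/my-eks-blueprints-stack.ts (sketch of the implied change)
const addOns: Array<ClusterAddOn> = [
  // new AwsLoadBalancerControllerAddOn(), // removed: the observability
  // pattern installs this controller itself, so declaring it here creates
  // the duplicate "aws-load-balancer-controller" ServiceAccount.
];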

JonVDB commented Jul 23, 2024

@elamaran11 I created my EKS cluster using EKS Blueprints for CDK; the specific code is in the snippets above, and it does include the AwsLoadBalancerControllerAddOn. That Blueprints add-on was already deployed to my cluster as well. After that, I tried deploying the existing-cluster pattern from this repo.

I wasn't aware that enableControlPlaneLogging() also deploys the AwsLoadBalancerControllerAddOn. This conflict is slightly inconvenient, because my Blueprints EKS cluster stack also has an ArgoCD bootstrap repo that bootstraps a few apps onto my cluster, and those depend on the AwsLoadBalancerControllerAddOn being there for my k8s Ingress resources. However, I assume the solution is to use the new-cluster pattern instead and fold it into the rest of my CDK stack, so everything gets deployed together using ObservabilityBuilder.builder() from the Observability Accelerator instead of EksBlueprint.builder()?

Thank you for your swift response by the way, I do appreciate it!

elamaran11 (Contributor) commented:

Hey @JonVDB, yes. In your case, either remove the ALB and Argo add-ons from the previous cluster blueprint and add them to the CDK stack that applies observability to the existing cluster via the ObservabilityBuilder, or create a new cluster. The idea here is to demonstrate to users how to set up observability on top of existing clusters. Thank you for trying these out for your implementation.
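
A rough sketch of the first option, under the same assumptions as the earlier sketch in this thread (the ArgoCD bootstrapRepo values and the existing-cluster lookup are placeholders, not code from this repo):

// Sketch: the observability stack for the existing cluster also owns the
// Argo CD add-on, moved out of the base blueprint, so each add-on is
// deployed exactly once.
import { Construct } from 'constructs';
import * as blueprints from '@aws-quickstart/eks-blueprints';

export function buildObservabilityWithArgo(scope: Construct, id: string,
    importClusterProvider: blueprints.ClusterProvider) {
  return blueprints.ObservabilityBuilder.builder()
    .clusterProvider(importClusterProvider)
    .addOns(
      // Moved from the base blueprint; bootstrapRepo values are placeholders.
      new blueprints.addons.ArgoCDAddOn({
        bootstrapRepo: {
          repoUrl: 'https://example.com/my-bootstrap-repo.git', // placeholder
          path: 'envs/dev',                                     // placeholder
        },
      }),
      new blueprints.addons.CloudWatchLogsAddon({
        logGroupPrefix: '/aws/eks/dev-cluster', // assumed value
      }),
    )
    // AwsLoadBalancerControllerAddOn is deliberately not listed here: per
    // the comment above, enableControlPlaneLogging() already adds it.
    .enableControlPlaneLogging()
    .build(scope, id);
}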

JonVDB commented Jul 23, 2024

@elamaran11 Perfect, thank you for explaining!

elamaran11 (Contributor) commented:

@JonVDB Closing the issue based on our interaction above. Please reach out with any questions in the future.
