feat(eks): managed nodegroup support #6759

Merged
merged 36 commits into from
Mar 25, 2020
Changes from 3 commits
Commits
1686081
initial nodegroup support
pahud Mar 17, 2020
d28b856
- update README
pahud Mar 17, 2020
0fd3398
update integ test
pahud Mar 17, 2020
17a34ac
Update packages/@aws-cdk/aws-eks/lib/cluster.ts
pahud Mar 17, 2020
7662429
Update packages/@aws-cdk/aws-eks/lib/managed-nodegroup.ts
pahud Mar 17, 2020
3075be0
Update packages/@aws-cdk/aws-eks/lib/managed-nodegroup.ts
pahud Mar 17, 2020
055a918
Update packages/@aws-cdk/aws-eks/lib/managed-nodegroup.ts
pahud Mar 17, 2020
29d5907
Update packages/@aws-cdk/aws-eks/lib/managed-nodegroup.ts
pahud Mar 17, 2020
f060878
Update packages/@aws-cdk/aws-eks/lib/managed-nodegroup.ts
pahud Mar 17, 2020
c5e77b6
Update packages/@aws-cdk/aws-eks/lib/managed-nodegroup.ts
pahud Mar 17, 2020
f52094d
Merge branch 'master' into managed-ng
pahud Mar 17, 2020
433ce36
minor fix
pahud Mar 18, 2020
df44c52
remove duplicate
pahud Mar 18, 2020
a098055
add unit tests
pahud Mar 18, 2020
77e7871
update integ json
pahud Mar 18, 2020
bd6ae41
Merge branch 'master' into managed-ng
pahud Mar 18, 2020
1f11ccc
- create a nodegroup as the defaultCapacity instead of an autoscaling…
pahud Mar 19, 2020
694613c
Merge branch 'managed-ng' of github.com:pahud/aws-cdk into managed-ng
pahud Mar 19, 2020
4d9ac49
Merge branch 'master' into managed-ng
pahud Mar 19, 2020
cbbc499
- the cluster default capacity is nodegroup now
pahud Mar 21, 2020
22e0619
fix unit tests error
pahud Mar 22, 2020
edc50ee
Merge branch 'master' into managed-ng
pahud Mar 22, 2020
a982289
fix trailing space
pahud Mar 22, 2020
75aefae
Merge branch 'managed-ng' of github.com:pahud/aws-cdk into managed-ng
pahud Mar 22, 2020
36a3eaa
support multiple instance types
pahud Mar 22, 2020
bff7265
minor fix on the multiple instance types support
pahud Mar 22, 2020
b3abaac
Revert "support multiple instance types"
pahud Mar 23, 2020
b1a3b5f
Revert "minor fix on the multiple instance types support"
pahud Mar 23, 2020
36eae5d
revert
pahud Mar 23, 2020
257b028
update README with additional notes of NodeGroup and default capacity
pahud Mar 23, 2020
a3ba7ca
minor fix
pahud Mar 23, 2020
c267cb1
Merge branch 'master' into managed-ng
pahud Mar 25, 2020
70471fd
Add `defaultCapacityType` switch
pahud Mar 25, 2020
5c6a74b
merge branch 'managed-ng' of github.com:pahud/aws-cdk into managed-ng
pahud Mar 25, 2020
e13286d
add defaultNodegroup support and update the test and README
pahud Mar 25, 2020
9f7df9a
Merge branch 'master' into managed-ng
mergify[bot] Mar 25, 2020
19 changes: 19 additions & 0 deletions packages/@aws-cdk/aws-eks/README.md
@@ -93,6 +93,25 @@ cluster.addCapacity('frontend-nodes', {
});
```

### Managed Node Groups

Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances)
for Amazon EKS Kubernetes clusters. By default, `eks.Nodegroup` creates a nodegroup with two `t3.medium` instances.

```ts
new eks.Nodegroup(stack, 'nodegroup', { cluster });
```

You can add a customized node group through `cluster.addNodegroup()`:

```ts
cluster.addNodegroup('nodegroup', {
  instanceType: new ec2.InstanceType('m5.large'),
minSize: 4,
});
```
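
For a fuller illustration, a nodegroup can combine scaling bounds, the AMI type and SSH access. This is a sketch based on the `NodegroupOps` options defined later in this PR; the EC2 key pair name is hypothetical:

```ts
cluster.addNodegroup('custom-nodegroup', {
  instanceType: new ec2.InstanceType('m5.large'),
  minSize: 2,
  desiredSize: 3,
  maxSize: 6,
  amiType: eks.AmiType.AL2_X86_64,
  remoteAccess: {
    ec2SshKey: 'my-key-pair', // hypothetical EC2 key pair name
  },
});
```

The construct validates that `desiredSize` stays within `minSize` and `maxSize` and throws an error otherwise.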


### Fargate

AWS Fargate is a technology that provides on-demand, right-sized compute
17 changes: 17 additions & 0 deletions packages/@aws-cdk/aws-eks/lib/cluster.ts
@@ -10,6 +10,7 @@ import { FargateProfile, FargateProfileOptions } from './fargate-profile';
import { HelmChart, HelmChartOptions } from './helm-chart';
import { KubernetesPatch } from './k8s-patch';
import { KubernetesResource } from './k8s-resource';
import { Nodegroup, NodegroupOps } from './managed-nodegroup';
import { spotInterruptHandler } from './spot-interrupt-handler';
import { renderUserData } from './user-data';

@@ -470,6 +471,22 @@ export class Cluster extends Resource implements ICluster {
return asg;
}

/**
* Add a managed nodegroup to this Amazon EKS cluster
*
* This method will create a new managed nodegroup and add it to the cluster's capacity.
*
* @see https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html
* @param id The ID of the nodegroup
* @param options options for creating a new nodegroup
*/
public addNodegroup(id: string, options?: NodegroupOps): Nodegroup {
return new Nodegroup(this, id, {
cluster: this,
...options,
});
}

/**
* Add compute capacity to this EKS cluster in the form of an AutoScalingGroup
*
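As a usage sketch of the new method (not part of this diff; `stack` is assumed to be an existing `Stack`, `cluster` an existing `eks.Cluster`, and `CfnOutput` comes from `@aws-cdk/core`), the returned `Nodegroup` exposes its name and ARN as attributes:

```ts
const ng = cluster.addNodegroup('app-nodes', {
  instanceType: new ec2.InstanceType('c5.large'),
  minSize: 1,
  maxSize: 5,
});

// The returned construct exposes the nodegroup name and ARN.
new CfnOutput(stack, 'NodegroupName', { value: ng.nodegroupName });
new CfnOutput(stack, 'NodegroupArn', { value: ng.nodegroupArn });
```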
3 changes: 2 additions & 1 deletion packages/@aws-cdk/aws-eks/lib/index.ts
@@ -5,4 +5,5 @@ export * from './eks.generated';
export * from './fargate-profile';
export * from './helm-chart';
export * from './k8s-resource';
export * from './fargate-cluster';
export * from './fargate-cluster';
export * from './managed-nodegroup';
294 changes: 294 additions & 0 deletions packages/@aws-cdk/aws-eks/lib/managed-nodegroup.ts
@@ -0,0 +1,294 @@
import { InstanceType, SubnetSelection } from '@aws-cdk/aws-ec2';
import { IRole, ManagedPolicy, Role, ServicePrincipal } from '@aws-cdk/aws-iam';
import { CfnOutput, Construct, IResource, Resource } from '@aws-cdk/core';
import { Cluster } from './cluster';
import { CfnNodegroup } from './eks.generated';

/**
* NodeGroup interface
*/
export interface INodegroup extends IResource {
/**
* Name of the nodegroup
* @attribute
*/
readonly nodegroupName: string;
}

/**
* Nodegroup attributes used for the import
*/
export interface NodegroupAttributes {
/**
* Name of the nodegroup
*/
readonly nodegroupName: string;
}

/**
* The AMI type for your node group. GPU instance types should use the `AL2_x86_64_GPU` AMI type, which uses the Amazon EKS-optimized
* Linux AMI with GPU support. Non-GPU instances should use the `AL2_x86_64` AMI type, which uses the Amazon EKS-optimized Linux AMI.
*/
export enum AmiType {
/**
* Amazon EKS-optimized Linux AMI
*/
AL2_X86_64 = 'AL2_x86_64',
/**
* Amazon EKS-optimized Linux AMI with GPU support
*/
AL2_X86_64_GPU = 'AL2_x86_64_GPU'
}

/**
* The remote access (SSH) configuration to use with your node group.
*
* @see https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-eks-nodegroup-remoteaccess.html
*/
export interface RemoteAccess {
/**
* The Amazon EC2 SSH key that provides access for SSH communication with the worker nodes in the managed node group.
*/
readonly ec2SshKey: string,
/**
* The security groups that are allowed SSH access (port 22) to the worker nodes. If you specify an Amazon EC2 SSH key but do not
* specify a source security group when you create a managed node group, then port 22 on the worker nodes is opened to the internet (0.0.0.0/0).
*
* @default - port 22 on the worker nodes is opened to the internet (0.0.0.0/0)
*/
readonly sourceSecurityGroups?: string[],
}

/**
* The Nodegroup options for the `cluster.addNodegroup()` method
*/
export interface NodegroupOps {
/**
* Name of the Nodegroup
*
* @default - resource ID
*/
readonly nodegroupName?: string;
/**
* The subnets to use for the Auto Scaling group that is created for your node group. These subnets must have the tag key
* `kubernetes.io/cluster/CLUSTER_NAME` with a value of `shared`, where `CLUSTER_NAME` is replaced with the name of your cluster.
*
* @default - private subnets
*/
readonly subnets?: SubnetSelection;
/**
* The AMI type for your node group. GPU instance types should use the `AL2_x86_64_GPU` AMI type, which uses the Amazon EKS-optimized
* Linux AMI with GPU support. Non-GPU instances should use the `AL2_x86_64` AMI type, which uses the Amazon EKS-optimized Linux AMI.
*
* @default - AL2_x86_64
*/
readonly amiType?: AmiType;
/**
* The root device disk size (in GiB) for your node group instances.
*
* @default - 20
*/
readonly diskSize?: number;
/**
* The current number of worker nodes that the managed node group should maintain. If not specified, the nodegroup will initially
* create `minSize` instances.
*
* @default - 2
*/
readonly desiredSize?: number,
/**
* The maximum number of worker nodes that the managed node group can scale out to. Managed node groups can support up to 100 nodes by default.
*
* @default - desiredSize
*/
readonly maxSize?: number,
/**
* The minimum number of worker nodes that the managed node group can scale in to. This number must be greater than zero.
*
* @default - 1
*/
readonly minSize?: number
/**
* Force the update if the existing node group's pods are unable to be drained due to a pod disruption budget issue. If an update fails
* because pods could not be drained, you can force the update after it fails to terminate the old node whether or not any pods are
* running on the node.
*
* @default - true
*/
readonly forceUpdateEnabled?: boolean;
/**
* The instance type to use for your node group. Currently, you can specify a single instance type for a node group. The default value
* for this parameter is `t3.medium`. If you choose a GPU instance type, be sure to specify `AL2_x86_64_GPU` with the `amiType` parameter.
*
* @default - t3.medium
*/
readonly instanceType?: InstanceType;
/**
* The Kubernetes labels to be applied to the nodes in the node group when they are created.
*
* @default - None
*/
readonly labels?: any;
/**
* The IAM role to associate with your node group. The Amazon EKS worker node kubelet daemon
* makes calls to AWS APIs on your behalf. Worker nodes receive permissions for these API calls through an IAM instance profile
* and associated policies. Before you can launch worker nodes and register them into a cluster, you must create an IAM role for
* those worker nodes to use when they are launched.
*
* @default - None. Auto-generated if not specified.
*/
readonly nodeRole?: IRole;
/**
* The AMI version of the Amazon EKS-optimized AMI to use with your node group (for example, `1.14.7-YYYYMMDD`).
*
* @default - The latest available AMI version for the node group's current Kubernetes version is used.
*/
readonly releaseVersion?: string;
/**
* The remote access (SSH) configuration to use with your node group. Disabled by default, however, if you specify an Amazon EC2
* SSH key but do not specify a source security group when you create a managed node group, then port 22 on the worker nodes is
* opened to the internet (0.0.0.0/0)
*
* @default - disabled
*/
readonly remoteAccess?: RemoteAccess;
/**
* The metadata to apply to the node group to assist with categorization and organization. Each tag consists of a key and an optional
* value, both of which you define. Node group tags do not propagate to any other resources associated with the node group, such as the
* Amazon EC2 instances or subnets.
*
* @default - None
*/
readonly tags?: any;
}

/**
* NodeGroup properties interface
*/
export interface NodegroupProps extends NodegroupOps {
/**
* Cluster resource
* [disable-awslint:ref-via-interface]
*/
readonly cluster: Cluster;
}

/**
* The Nodegroup resource class
*/
export class Nodegroup extends Resource implements INodegroup {
/**
* Import the Nodegroup from attributes
*/
public static fromNodegroupAttributes(scope: Construct, id: string, attrs: NodegroupAttributes): INodegroup {
class Import extends Resource implements INodegroup {
public readonly nodegroupName = attrs.nodegroupName;
}
return new Import(scope, id);
}
/**
* ARN of the nodegroup
*
* @attribute
*/
public readonly nodegroupArn: string;
/**
* Nodegroup name
*
* @attribute
*/
public readonly nodegroupName: string;
/**
* the Amazon EKS cluster resource
*
* @attribute ClusterName
*/
public readonly cluster: Cluster;
/**
* IAM role of the instance profile for the nodegroup
*/
public readonly role: IRole;

private readonly desiredSize: number;
private readonly maxSize: number;
private readonly minSize: number;

constructor(scope: Construct, id: string, props: NodegroupProps) {
super(scope, id);

this.cluster = props.cluster;

this.desiredSize = props.desiredSize ?? props.minSize ?? 2;
this.maxSize = props.maxSize ?? this.desiredSize;
this.minSize = props.minSize ?? 1;

if (this.desiredSize > this.maxSize) {
throw new Error(`desired capacity ${this.desiredSize} can't be greater than max size ${this.maxSize}`);
}
if (this.desiredSize < this.minSize) {
throw new Error(`Minimum capacity ${this.minSize} can't be greater than desired size ${this.desiredSize}`);
}

const ngRole = new Role(this, 'NodeGroupRole', {
assumedBy: new ServicePrincipal('ec2.amazonaws.com')
});

ngRole.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName('AmazonEKSWorkerNodePolicy'));
ngRole.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName('AmazonEKS_CNI_Policy'));
ngRole.addManagedPolicy(ManagedPolicy.fromAwsManagedPolicyName('AmazonEC2ContainerRegistryReadOnly'));

const resource = new CfnNodegroup(this, id, {
clusterName: this.cluster.clusterName,
nodegroupName: props.nodegroupName,
nodeRole: ngRole.roleArn,
subnets: this.cluster.vpc.selectSubnets(props.subnets).subnetIds,
amiType: props.amiType,
diskSize: props.diskSize,
forceUpdateEnabled: props.forceUpdateEnabled ?? true,
instanceTypes: props.instanceType ? [props.instanceType.toString()] : undefined,
labels: props.labels,
releaseVersion: props.releaseVersion,
remoteAccess: props.remoteAccess,
scalingConfig: {
desiredSize: this.desiredSize,
maxSize: this.maxSize,
minSize: this.minSize
},
tags: props.tags
});

// do not attempt to map the instance role to RBAC if `kubectl` is not
// enabled for this cluster
if (this.cluster.kubectlEnabled) {
// see https://docs.aws.amazon.com/en_us/eks/latest/userguide/add-user-role.html
this.cluster.awsAuth.addRoleMapping(ngRole, {
username: 'system:node:{{EC2PrivateDNSName}}',
groups: [
'system:bootstrappers',
'system:nodes'
]
});
} else {
// since we are not mapping the instance role to RBAC, synthesize an
// output so it can be pasted into `aws-auth-cm.yaml`
new CfnOutput(this, 'InstanceRoleARN', {
value: ngRole.roleArn
});
}


this.nodegroupArn = resource.attrArn;
this.nodegroupName = resource.ref;
this.role = ngRole;
}

}
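
For completeness, an existing node group can also be referenced by name through the static import method added above. This is a sketch; the node group name below is hypothetical:

```ts
const imported = eks.Nodegroup.fromNodegroupAttributes(stack, 'ImportedNodegroup', {
  nodegroupName: 'my-existing-nodegroup', // hypothetical existing node group name
});
```

The imported construct only carries the `nodegroupName` attribute, matching the `NodegroupAttributes` interface above.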