
Schedule and binpack #1561

Merged: tzneal merged 8 commits into aws:main from the schedule-and-binpack branch on Mar 28, 2022

Conversation

tzneal
Contributor

@tzneal tzneal commented Mar 23, 2022

1. Issue, if available:
N/A

2. Description of changes:

Merge bin-packing and scheduling to prepare for pod affinity work.

3. How was this change tested?

Unit test & live on EKS

4. Does this change impact docs?

  • Yes, PR includes docs updates
  • Yes, issue opened: link to issue
  • No

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

Prices below are on-demand prices for us-west-2, from https://aws.amazon.com/ec2/pricing/on-demand/.

Inflate = 20 replicas

Karpenter (combined scheduling + bin-packing) - m5zn.6xlarge ($1.982/hr, 24 vCPU, 96 GiB, EBS only, 50 Gigabit)

This first tried to allocate a c5a.8xlarge and a c5a.12xlarge, which failed due to availability, before creating an m5zn.6xlarge.

c5a.8xlarge ($1.232/hr, 32 vCPU, 64 GiB, EBS only, 10 Gigabit)
c5a.12xlarge ($1.848/hr, 48 vCPU, 96 GiB, EBS only, 12 Gigabit)

karpenter-68d9d9b44f-9vjpk controller 2022-03-23T15:05:26.623Z	INFO	controller.provisioning	Batched 20 pods in 1.489207963s	{"commit": "754893a", "provisioner": "default"}
karpenter-68d9d9b44f-9vjpk controller 2022-03-23T15:05:26.769Z	INFO	controller.provisioning	scheduled 20 pods onto 1 nodes in 143.719252ms	{"commit": "754893a", "provisioner": "default"}
karpenter-68d9d9b44f-9vjpk controller 2022-03-23T15:05:26.780Z	INFO	controller.provisioning	creating 1 node(s) with instance type option(s) [c3.8xlarge inf1.6xlarge cc2.8xlarge c4.8xlarge c6a.8xlarge c6i.8xlarge c5ad.8xlarge c5a.8xlarge c5d.9xlarge c5.9xlarge g2.8xlarge m5zn.6xlarge c5n.9xlarge c5d.12xlarge c5ad.12xlarge c5.12xlarge c5a.12xlarge c6a.12xlarge c6i.12xlarge m5d.8xlarge m5n.8xlarge m6i.8xlarge m6a.8xlarge m5dn.8xlarge m5ad.8xlarge m5a.8xlarge m5.8xlarge g4dn.8xlarge g5.8xlarge g4ad.8xlarge c5a.16xlarge c6i.16xlarge c6a.16xlarge c5ad.16xlarge m4.10xlarge c5d.18xlarge i3en.6xlarge c5.18xlarge m6i.12xlarge m5ad.12xlarge m6a.12xlarge m5zn.12xlarge m5d.12xlarge m5dn.12xlarge m5.12xlarge m5a.12xlarge m5n.12xlarge g5.12xlarge g4dn.12xlarge c5n.18xlarge r3.8xlarge i3.8xlarge r4.8xlarge g3.8xlarge c5.24xlarge r6i.8xlarge c5ad.24xlarge r5a.8xlarge c5d.24xlarge r5d.8xlarge r5ad.8xlarge c6i.24xlarge r5b.8xlarge r5n.8xlarge c6a.24xlarge r5.8xlarge c5a.24xlarge r5dn.8xlarge p3.8xlarge m4.16xlarge m5a.16xlarge m5.16xlarge m6a.16xlarge m5n.16xlarge m5d.16xlarge m6i.16xlarge m5ad.16xlarge m5dn.16xlarge g5.16xlarge g4dn.16xlarge g4ad.16xlarge inf1.24xlarge c6i.32xlarge c6a.32xlarge r6i.12xlarge r5.12xlarge r5d.12xlarge r5ad.12xlarge r5n.12xlarge r5a.12xlarge r5b.12xlarge i3en.12xlarge r5dn.12xlarge m5n.24xlarge m5dn.24xlarge m5d.24xlarge m5.24xlarge m5a.24xlarge m6a.24xlarge m5ad.24xlarge m6i.24xlarge g5.24xlarge i3.16xlarge r4.16xlarge p2.8xlarge g3.16xlarge r6i.16xlarge r5a.16xlarge r5dn.16xlarge r5.16xlarge c6a.48xlarge r5d.16xlarge r5ad.16xlarge r5b.16xlarge r5n.16xlarge p3.16xlarge m6a.32xlarge m6i.32xlarge r5.24xlarge r5dn.24xlarge r5a.24xlarge i3en.24xlarge r5b.24xlarge r5ad.24xlarge r5n.24xlarge r6i.24xlarge r5d.24xlarge p2.16xlarge p3dn.24xlarge m6a.48xlarge g5.48xlarge r6i.32xlarge p4d.24xlarge]	{"commit": "754893a", "provisioner": "default"}

Karpenter v0.7.0 - m5zn.6xlarge ($1.982/hr, 24 vCPU, 96 GiB, EBS only, 50 Gigabit)

karpenter-6799b465c5-xtncf controller 2022-03-23T15:01:52.464Z	INFO	controller.provisioning	Batched 20 pods in 1.398172152s	{"commit": "f78fa16", "provisioner": "default"}
karpenter-6799b465c5-xtncf controller 2022-03-23T15:01:52.471Z	DEBUG	controller.provisioning	Excluding instance type t3.nano because there are not enough resources for kubelet and system overhead	{"commit": "f78fa16", "provisioner": "default"}
karpenter-6799b465c5-xtncf controller 2022-03-23T15:01:52.476Z	DEBUG	controller.provisioning	Excluding instance type t3a.nano because there are not enough resources for kubelet and system overhead	{"commit": "f78fa16", "provisioner": "default"}
karpenter-6799b465c5-xtncf controller 2022-03-23T15:01:52.485Z	INFO	controller.provisioning	Computed packing of 1 node(s) for 20 pod(s) with instance type option(s) [m5zn.6xlarge i3en.6xlarge m6i.8xlarge m5.8xlarge m5ad.8xlarge m5dn.8xlarge m5a.8xlarge m6a.8xlarge m5d.8xlarge m5n.8xlarge i3.8xlarge r4.8xlarge r3.8xlarge r5n.8xlarge]	{"commit": "f78fa16", "provisioner": "default"}

Inflate = 8 replicas

Karpenter (combined scheduling + bin-packing) - c6a.4xlarge ($0.612/hr, 16 vCPU, 32 GiB, EBS only, up to 12,500 Megabit)

karpenter-68d9d9b44f-8hrbc controller 2022-03-23T14:40:44.550Z	INFO	controller.provisioning	Batched 8 pods in 1.133935324s	{"commit": "754893a", "provisioner": "default"}
karpenter-68d9d9b44f-8hrbc controller 2022-03-23T14:40:44.573Z	INFO	controller.provisioning	scheduled 8 pods onto 1 nodes in 20.439671ms	{"commit": "754893a", "provisioner": "default"}
karpenter-68d9d9b44f-8hrbc controller 2022-03-23T14:40:44.590Z	INFO	controller.provisioning	creating 1 node(s) with instance type option(s) [c4.4xlarge c3.4xlarge c6i.4xlarge c5ad.4xlarge c5.4xlarge c6a.4xlarge c5d.4xlarge c5a.4xlarge c5n.4xlarge m5zn.3xlarge m6a.4xlarge m6i.4xlarge m4.4xlarge m5a.4xlarge m5ad.4xlarge m5d.4xlarge m5n.4xlarge m5.4xlarge m5dn.4xlarge g5.4xlarge g4ad.4xlarge g4dn.4xlarge inf1.6xlarge c3.8xlarge cc2.8xlarge c6i.8xlarge c5ad.8xlarge c5a.8xlarge c6a.8xlarge c4.8xlarge c5d.9xlarge i3en.3xlarge c5.9xlarge g2.8xlarge m5zn.6xlarge c5n.9xlarge i3.4xlarge r3.4xlarge r4.4xlarge g3.4xlarge r5d.4xlarge c5.12xlarge r5a.4xlarge r5dn.4xlarge r5n.4xlarge c5a.12xlarge r5.4xlarge c5d.12xlarge c5ad.12xlarge r6i.4xlarge c6i.12xlarge c6a.12xlarge r5b.4xlarge r5ad.4xlarge m6i.8xlarge m6a.8xlarge m5dn.8xlarge m5d.8xlarge m5n.8xlarge m5ad.8xlarge m5.8xlarge m5a.8xlarge g4dn.8xlarge g5.8xlarge g4ad.8xlarge c6a.16xlarge c6i.16xlarge c5ad.16xlarge c5a.16xlarge m4.10xlarge i3en.6xlarge c5d.18xlarge c5.18xlarge m6a.12xlarge m6i.12xlarge m5d.12xlarge m5ad.12xlarge m5zn.12xlarge m5a.12xlarge m5dn.12xlarge m5.12xlarge m5n.12xlarge g5.12xlarge g4dn.12xlarge c5n.18xlarge r4.8xlarge i3.8xlarge r3.8xlarge g3.8xlarge r5ad.8xlarge r5d.8xlarge r5b.8xlarge c5d.24xlarge r6i.8xlarge c5a.24xlarge c5.24xlarge c6a.24xlarge c5ad.24xlarge r5dn.8xlarge r5a.8xlarge c6i.24xlarge r5n.8xlarge r5.8xlarge p3.8xlarge m4.16xlarge m5n.16xlarge m5d.16xlarge m5ad.16xlarge m5.16xlarge m5dn.16xlarge m5a.16xlarge m6a.16xlarge m6i.16xlarge g4dn.16xlarge g5.16xlarge g4ad.16xlarge inf1.24xlarge c6a.32xlarge c6i.32xlarge r5d.12xlarge r5.12xlarge r5ad.12xlarge i3en.12xlarge r5n.12xlarge r6i.12xlarge r5a.12xlarge r5b.12xlarge r5dn.12xlarge m5dn.24xlarge m5ad.24xlarge m5n.24xlarge m6a.24xlarge m6i.24xlarge m5.24xlarge m5d.24xlarge m5a.24xlarge g5.24xlarge r4.16xlarge i3.16xlarge p2.8xlarge g3.16xlarge r5.16xlarge r5b.16xlarge r5a.16xlarge r5ad.16xlarge r6i.16xlarge r5dn.16xlarge c6a.48xlarge r5d.16xlarge r5n.16xlarge p3.16xlarge m6i.32xlarge m6a.32xlarge r5d.24xlarge r5a.24xlarge r6i.24xlarge i3en.24xlarge r5b.24xlarge r5ad.24xlarge r5dn.24xlarge r5.24xlarge r5n.24xlarge p2.16xlarge p3dn.24xlarge m6a.48xlarge g5.48xlarge r6i.32xlarge p4d.24xlarge]	{"commit": "754893a", "provisioner": "default"}

Karpenter v0.7.0 - m5a.4xlarge ($0.688/hr, 16 vCPU, 64 GiB, EBS only, up to 10 Gigabit)

karpenter-6799b465c5-xtncf controller 2022-03-23T14:47:08.241Z	INFO	controller.provisioning	Batched 8 pods in 1.191242569s	{"commit": "f78fa16", "provisioner": "default"}
karpenter-6799b465c5-xtncf controller 2022-03-23T14:47:08.349Z	DEBUG	controller.provisioning	Excluding instance type t3.nano because there are not enough resources for kubelet and system overhead	{"commit": "f78fa16", "provisioner": "default"}
karpenter-6799b465c5-xtncf controller 2022-03-23T14:47:08.350Z	DEBUG	controller.provisioning	Excluding instance type t3a.nano because there are not enough resources for kubelet and system overhead	{"commit": "f78fa16", "provisioner": "default"}
karpenter-6799b465c5-xtncf controller 2022-03-23T14:47:08.357Z	INFO	controller.provisioning	Computed packing of 1 node(s) for 8 pod(s) with instance type option(s) [m5zn.3xlarge m6i.4xlarge m4.4xlarge m5dn.4xlarge m5ad.4xlarge m5d.4xlarge m5n.4xlarge m6a.4xlarge m5.4xlarge m5a.4xlarge]	{"commit": "f78fa16", "provisioner": "default"}

Inflate = 4 replicas

Karpenter (combined scheduling + bin-packing) - c5d.2xlarge ($0.384/hr, 8 vCPU, 16 GiB, 1 x 200 GB NVMe SSD, up to 10 Gigabit)

This points out a flaw in our current pricing calculation: we thought this would be cheaper than the t3a.2xlarge because it has less memory, even though its actual on-demand price is higher ($0.384/hr vs $0.3008/hr).

karpenter-68d9d9b44f-9vjpk controller 2022-03-23T15:11:40.172Z	INFO	controller.provisioning	Batched 4 pods in 1.077198989s	{"commit": "754893a", "provisioner": "default"}
karpenter-68d9d9b44f-9vjpk controller 2022-03-23T15:11:40.188Z	INFO	controller.provisioning	scheduled 4 pods onto 1 nodes in 12.09605ms	{"commit": "754893a", "provisioner": "default"}
karpenter-68d9d9b44f-9vjpk controller 2022-03-23T15:11:40.204Z	INFO	controller.provisioning	creating 1 node(s) with instance type option(s) [c1.xlarge c4.2xlarge c3.2xlarge c5a.2xlarge c6i.2xlarge c5d.2xlarge c5.2xlarge c6a.2xlarge c5ad.2xlarge g2.2xlarge c5n.2xlarge inf1.2xlarge m3.2xlarge m5zn.2xlarge m5n.2xlarge m6a.2xlarge m4.2xlarge m5ad.2xlarge m5.2xlarge m5a.2xlarge m5dn.2xlarge m6i.2xlarge t3.2xlarge t3a.2xlarge m5d.2xlarge g4ad.2xlarge g4dn.2xlarge g5.2xlarge c4.4xlarge c3.4xlarge c5a.4xlarge c6a.4xlarge c6i.4xlarge c5.4xlarge c5d.4xlarge c5ad.4xlarge c5n.4xlarge m5zn.3xlarge i3.2xlarge r4.2xlarge r3.2xlarge r5ad.2xlarge r5.2xlarge r5dn.2xlarge r6i.2xlarge r5n.2xlarge i3en.2xlarge r5b.2xlarge r5d.2xlarge r5a.2xlarge p3.2xlarge m2.4xlarge m5a.4xlarge m4.4xlarge m5.4xlarge m6a.4xlarge m5d.4xlarge m5ad.4xlarge m6i.4xlarge m5n.4xlarge m5dn.4xlarge g4ad.4xlarge g5.4xlarge g4dn.4xlarge inf1.6xlarge c3.8xlarge cc2.8xlarge c6i.8xlarge c5a.8xlarge c6a.8xlarge c5ad.8xlarge c4.8xlarge i3en.3xlarge c5d.9xlarge c5.9xlarge g2.8xlarge m5zn.6xlarge c5n.9xlarge r3.4xlarge i3.4xlarge r4.4xlarge g3.4xlarge c6a.12xlarge r5d.4xlarge r5b.4xlarge c5ad.12xlarge r5.4xlarge c5a.12xlarge c5.12xlarge c6i.12xlarge r5dn.4xlarge r5n.4xlarge r6i.4xlarge r5a.4xlarge r5ad.4xlarge c5d.12xlarge m5n.8xlarge m5dn.8xlarge m5d.8xlarge m5ad.8xlarge m6i.8xlarge m5.8xlarge m6a.8xlarge m5a.8xlarge g4dn.8xlarge g5.8xlarge g4ad.8xlarge c5a.16xlarge c6i.16xlarge c5ad.16xlarge c6a.16xlarge m4.10xlarge c5d.18xlarge i3en.6xlarge c5.18xlarge m5.12xlarge m5ad.12xlarge m5a.12xlarge m5d.12xlarge m5dn.12xlarge m5n.12xlarge m5zn.12xlarge m6i.12xlarge m6a.12xlarge g5.12xlarge g4dn.12xlarge c5n.18xlarge r3.8xlarge i3.8xlarge r4.8xlarge g3.8xlarge r5dn.8xlarge c5d.24xlarge c6a.24xlarge r5n.8xlarge r5ad.8xlarge r5.8xlarge c5ad.24xlarge r6i.8xlarge c6i.24xlarge r5d.8xlarge c5a.24xlarge r5a.8xlarge r5b.8xlarge c5.24xlarge p3.8xlarge m4.16xlarge m5ad.16xlarge m5n.16xlarge m5a.16xlarge m6a.16xlarge m5d.16xlarge m5.16xlarge m5dn.16xlarge m6i.16xlarge g4dn.16xlarge g5.16xlarge g4ad.16xlarge inf1.24xlarge c6a.32xlarge c6i.32xlarge r5d.12xlarge i3en.12xlarge r5dn.12xlarge r5ad.12xlarge r5n.12xlarge r5a.12xlarge r5b.12xlarge r6i.12xlarge r5.12xlarge m5a.24xlarge m5dn.24xlarge m6a.24xlarge m5n.24xlarge m5ad.24xlarge m5.24xlarge m6i.24xlarge m5d.24xlarge g5.24xlarge i3.16xlarge r4.16xlarge p2.8xlarge g3.16xlarge r5dn.16xlarge r5.16xlarge r5b.16xlarge c6a.48xlarge r5n.16xlarge r5ad.16xlarge r5a.16xlarge r6i.16xlarge r5d.16xlarge p3.16xlarge m6i.32xlarge m6a.32xlarge r5a.24xlarge r5n.24xlarge r5ad.24xlarge i3en.24xlarge r5dn.24xlarge r5.24xlarge r6i.24xlarge r5d.24xlarge r5b.24xlarge p2.16xlarge p3dn.24xlarge m6a.48xlarge g5.48xlarge r6i.32xlarge p4d.24xlarge]	{"commit": "754893a", "provisioner": "default"}

Karpenter v0.7.0 - t3a.2xlarge ($0.3008/hr, 8 vCPU, 32 GiB, EBS only, up to 5 Gigabit)

karpenter-6799b465c5-xtncf controller 2022-03-23T14:56:28.577Z	INFO	controller.provisioning	Batched 4 pods in 1.037288597s	{"commit": "f78fa16", "provisioner": "default"}
karpenter-6799b465c5-xtncf controller 2022-03-23T14:56:28.583Z	DEBUG	controller.provisioning	Excluding instance type t3a.nano because there are not enough resources for kubelet and system overhead	{"commit": "f78fa16", "provisioner": "default"}
karpenter-6799b465c5-xtncf controller 2022-03-23T14:56:28.583Z	DEBUG	controller.provisioning	Excluding instance type t3.nano because there are not enough resources for kubelet and system overhead	{"commit": "f78fa16", "provisioner": "default"}
karpenter-6799b465c5-xtncf controller 2022-03-23T14:56:28.591Z	INFO	controller.provisioning	Computed packing of 1 node(s) for 4 pod(s) with instance type option(s) [c1.xlarge c4.2xlarge c3.2xlarge c5a.2xlarge c6i.2xlarge c5.2xlarge c5d.2xlarge c5ad.2xlarge c6a.2xlarge c5n.2xlarge m3.2xlarge m5d.2xlarge m5dn.2xlarge m4.2xlarge m5ad.2xlarge m6i.2xlarge m5a.2xlarge t3.2xlarge m5.2xlarge t3a.2xlarge]	{"commit": "f78fa16", "provisioner": "default"}

Inflate = 1 replica

Karpenter (combined scheduling + bin-packing) - t3a.micro ($0.0094/hr, 2 vCPU, 1 GiB, EBS only, up to 5 Gigabit)

karpenter-68d9d9b44f-9vjpk controller 2022-03-23T15:15:10.429Z	INFO	controller.provisioning	Batched 1 pods in 1.001060771s	{"commit": "754893a", "provisioner": "default"}
karpenter-68d9d9b44f-9vjpk controller 2022-03-23T15:15:10.439Z	INFO	controller.provisioning	scheduled 1 pods onto 1 nodes in 5.455513ms	{"commit": "754893a", "provisioner": "default"}
karpenter-68d9d9b44f-9vjpk controller 2022-03-23T15:15:10.462Z	INFO	controller.provisioning	creating 1 node(s) with instance type option(s) [t3.micro t3a.micro c1.medium t3a.small t3.small c4.large c3.large c6a.large t3a.medium c5d.large c6i.large c5ad.large t3.medium c5a.large c5.large c5n.large m1.large m3.large m5d.large m6a.large m5zn.large t3.large m5dn.large m5ad.large m5n.large t3a.large m5a.large m6i.large m4.large m5.large c4.xlarge c3.xlarge c6i.xlarge c5ad.xlarge c5.xlarge c5d.xlarge c6a.xlarge c5a.xlarge c5n.xlarge c1.xlarge r3.large inf1.xlarge i3.large r4.large i3en.large r5d.large r5a.large r5.large r5n.large r6i.large r5b.large r5ad.large r5dn.large m3.xlarge m1.xlarge m2.xlarge m5zn.xlarge m6a.xlarge t3.xlarge m5dn.xlarge m5n.xlarge m6i.xlarge t3a.xlarge m5a.xlarge m5ad.xlarge m5.xlarge m5d.xlarge m4.xlarge c4.2xlarge c3.2xlarge c6i.2xlarge c5a.2xlarge c6a.2xlarge c5.2xlarge c5d.2xlarge c5ad.2xlarge g5.xlarge g4dn.xlarge g4ad.xlarge g2.2xlarge c5n.2xlarge inf1.2xlarge r4.xlarge i3.xlarge r3.xlarge r5dn.xlarge r5a.xlarge r5n.xlarge r5.xlarge i3en.xlarge r5b.xlarge r6i.xlarge r5d.xlarge r5ad.xlarge m3.2xlarge m2.2xlarge g3s.xlarge m4.2xlarge m5.2xlarge m5ad.2xlarge m6i.2xlarge m5dn.2xlarge m5d.2xlarge m5n.2xlarge t3.2xlarge m5zn.2xlarge m5a.2xlarge t3a.2xlarge m6a.2xlarge g4dn.2xlarge g4ad.2xlarge g5.2xlarge c4.4xlarge c3.4xlarge c5a.4xlarge c5.4xlarge c5d.4xlarge c5ad.4xlarge c6i.4xlarge c6a.4xlarge c5n.4xlarge m5zn.3xlarge r4.2xlarge i3.2xlarge r3.2xlarge p2.xlarge r5ad.2xlarge i3en.2xlarge r5b.2xlarge r6i.2xlarge r5n.2xlarge r5a.2xlarge r5.2xlarge r5d.2xlarge r5dn.2xlarge p3.2xlarge m2.4xlarge m5d.4xlarge m6a.4xlarge m5ad.4xlarge m5dn.4xlarge m5n.4xlarge m6i.4xlarge m5.4xlarge m4.4xlarge m5a.4xlarge g4dn.4xlarge g5.4xlarge g4ad.4xlarge inf1.6xlarge c3.8xlarge cc2.8xlarge c6a.8xlarge c6i.8xlarge c5a.8xlarge c5ad.8xlarge c4.8xlarge i3en.3xlarge c5d.9xlarge c5.9xlarge g2.8xlarge m5zn.6xlarge c5n.9xlarge r3.4xlarge i3.4xlarge r4.4xlarge g3.4xlarge r5ad.4xlarge c5ad.12xlarge r5d.4xlarge c6i.12xlarge c5d.12xlarge c5.12xlarge r5dn.4xlarge c6a.12xlarge r5n.4xlarge c5a.12xlarge r5.4xlarge r5a.4xlarge r6i.4xlarge r5b.4xlarge m5d.8xlarge m5dn.8xlarge m5.8xlarge m6i.8xlarge m6a.8xlarge m5ad.8xlarge m5a.8xlarge m5n.8xlarge g4dn.8xlarge g5.8xlarge g4ad.8xlarge c5a.16xlarge c6a.16xlarge c6i.16xlarge c5ad.16xlarge m4.10xlarge c5.18xlarge c5d.18xlarge i3en.6xlarge m6a.12xlarge m6i.12xlarge m5.12xlarge m5n.12xlarge m5ad.12xlarge m5zn.12xlarge m5d.12xlarge m5a.12xlarge m5dn.12xlarge g5.12xlarge g4dn.12xlarge c5n.18xlarge i3.8xlarge r3.8xlarge r4.8xlarge g3.8xlarge c5d.24xlarge c6a.24xlarge r5n.8xlarge c5a.24xlarge c5.24xlarge c6i.24xlarge c5ad.24xlarge r5a.8xlarge r5ad.8xlarge r5.8xlarge r5d.8xlarge r5b.8xlarge r6i.8xlarge r5dn.8xlarge p3.8xlarge m5.16xlarge m5n.16xlarge m4.16xlarge m5dn.16xlarge m5ad.16xlarge m5d.16xlarge m5a.16xlarge m6i.16xlarge m6a.16xlarge g4dn.16xlarge g5.16xlarge g4ad.16xlarge inf1.24xlarge c6a.32xlarge c6i.32xlarge r5a.12xlarge r5b.12xlarge r5d.12xlarge r5dn.12xlarge r5.12xlarge r6i.12xlarge i3en.12xlarge r5ad.12xlarge r5n.12xlarge m5n.24xlarge m5a.24xlarge m5.24xlarge m6i.24xlarge m6a.24xlarge m5dn.24xlarge m5d.24xlarge m5ad.24xlarge g5.24xlarge r4.16xlarge i3.16xlarge p2.8xlarge g3.16xlarge r5b.16xlarge r5.16xlarge r5ad.16xlarge r5n.16xlarge c6a.48xlarge r5a.16xlarge r5d.16xlarge r6i.16xlarge r5dn.16xlarge p3.16xlarge m6a.32xlarge m6i.32xlarge r6i.24xlarge r5a.24xlarge r5d.24xlarge i3en.24xlarge r5b.24xlarge r5ad.24xlarge r5.24xlarge r5n.24xlarge r5dn.24xlarge 
p2.16xlarge p3dn.24xlarge m6a.48xlarge g5.48xlarge r6i.32xlarge p4d.24xlarge]	{"commit": "754893a", "provisioner": "default"}

Karpenter v0.7.0 - t3a.micro ($0.0094/hr, 2 vCPU, 1 GiB, EBS only, up to 5 Gigabit)

karpenter-6799b465c5-xtncf controller 2022-03-23T14:58:35.534Z	INFO	controller.provisioning	Batched 1 pods in 1.000482332s	{"commit": "f78fa16", "provisioner": "default"}
karpenter-6799b465c5-xtncf controller 2022-03-23T14:58:35.539Z	DEBUG	controller.provisioning	Excluding instance type t3a.nano because there are not enough resources for kubelet and system overhead	{"commit": "f78fa16", "provisioner": "default"}
karpenter-6799b465c5-xtncf controller 2022-03-23T14:58:35.541Z	DEBUG	controller.provisioning	Excluding instance type t3.nano because there are not enough resources for kubelet and system overhead	{"commit": "f78fa16", "provisioner": "default"}
karpenter-6799b465c5-xtncf controller 2022-03-23T14:58:35.548Z	INFO	controller.provisioning	Computed packing of 1 node(s) for 1 pod(s) with instance type option(s) [t3.micro t3a.micro c1.medium t3a.small t3.small c4.large c3.large c6a.large c5.large c5d.large c6i.large c5a.large t3.medium t3a.medium c5ad.large c5n.large m1.large m3.large m4.large t3.large]	{"commit": "f78fa16", "provisioner": "default"}

@tzneal
Contributor Author

tzneal commented Mar 23, 2022

This is a larger PR, but 500+ of the new lines are unit tests; counting non-test code only, it's 15 files changed, 353 insertions(+), 405 deletions(-).

@netlify

netlify bot commented Mar 23, 2022

Deploy Preview for karpenter-docs-prod canceled.

🔨 Latest commit: 40e8a5d
🔍 Latest deploy log: https://app.netlify.com/sites/karpenter-docs-prod/deploys/62420353f535090009472fbe

@tzneal tzneal force-pushed the schedule-and-binpack branch from 754893a to 91a8512 Compare March 23, 2022 17:10
@tzneal tzneal marked this pull request as draft March 23, 2022 17:44
@tzneal tzneal force-pushed the schedule-and-binpack branch 2 times, most recently from 5c31be9 to ca2fc1e Compare March 25, 2022 15:28
Resolved review threads (outdated):
  • pkg/apis/provisioning/v1alpha5/requirements.go (2 threads)
  • pkg/controllers/provisioning/controller.go (1 thread)
  • pkg/controllers/provisioning/provisioner.go (3 threads)
@tzneal tzneal marked this pull request as ready for review March 25, 2022 16:25
@tzneal tzneal force-pushed the schedule-and-binpack branch from b827465 to 705ae2a Compare March 25, 2022 20:34
	logging.FromContext(ctx).Errorf("Could not pack pods, %s", err)
	return
}
if err := p.launch(ctx, nodes[i]); err != nil {
	logging.FromContext(ctx).Errorf("Could not launch node, %s", err)
Contributor

nit:

  1. I think err prints poorly with the %s directive
  2. I'm a fan of thinking of the log level as part of the message, e.g. "ERROR Launching node, for reasons"
Suggested change
logging.FromContext(ctx).Errorf("Could not launch node, %s", err)
logging.FromContext(ctx).Errorf("Launching node, %s", err.Error())

Contributor Author

  1. This is ok as far as I know, see https://pkg.go.dev/fmt, specifically:

If the format (which is implicitly %v for Println etc.) is valid for a string (%s %q %v %x %X), the following two rules apply:

  1. If an operand implements the error interface, the Error method will be invoked to convert the object to a string, which will then be formatted as required by the verb (if any).

  2. If an operand implements method String() string, that method will be invoked to convert the object to a string, which will then be formatted as required by the verb (if any).

The code is here: https://github.com/golang/go/blob/80a7504a13a5dccb60757d1fc66d71bcba359799/src/fmt/print.go#L611; %s and %v with an error as the only argument are the same as calling err.Error() (see the sketch after this list).

  2. SGTM
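
For illustration (not part of the PR), a minimal standalone Go program demonstrating the point above: %s and %v on a bare error print exactly what err.Error() returns.

package main

import (
	"errors"
	"fmt"
)

func main() {
	err := errors.New("could not launch node")

	// Because the operand implements the error interface, fmt invokes
	// err.Error() for both %s and %v, so all three lines print the same text.
	fmt.Printf("%s\n", err)
	fmt.Printf("%v\n", err)
	fmt.Println(err.Error())
}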

@tzneal
Contributor Author

tzneal commented Mar 26, 2022

In this rework I updated the log message for attempting to launch a node; it looks like this now:

karpenter-65794dfdbc-hbgnp controller 2022-03-26T01:12:58.616Z	INFO	controller.provisioning	Batched 15 pods in 1.356854661s	{"commit": "105e698", "provisioner": "default"}
karpenter-65794dfdbc-hbgnp controller 2022-03-26T01:12:58.847Z	INFO	controller.provisioning	scheduled 15 pods onto 1 nodes in 228.026726ms	{"commit": "105e698", "provisioner": "default"}
karpenter-65794dfdbc-hbgnp controller 2022-03-26T01:12:58.847Z	INFO	controller.provisioning	launching node for 15 pods using resources cpu: 15335m memory: 3029Mi ephemeral-storage: 15Gi  from types [c4.4xlarge c3.4xlarge c5d.4xlarge c6a.4xlarge c5.4xlarge c6i.4xlarge c5ad.4xlarge c5a.4xlarge c5n.4xlarge m6i.4xlarge m5d.4xlarge m5.4xlarge m5a.4xlarge m5ad.4xlarge m4.4xlarge m6a.4xlarge m5n.4xlarge m5dn.4xlarge g4dn.4xlarge g5.4xlarge g4ad.4xlarge inf1.6xlarge c3.8xlarge cc2.8xlarge c6a.8xlarge c5a.8xlarge c6i.8xlarge c4.8xlarge c5ad.8xlarge c5.9xlarge c5d.9xlarge g2.8xlarge m5zn.6xlarge c5n.9xlarge r3.4xlarge i3.4xlarge r4.4xlarge g3.4xlarge r6i.4xlarge r5n.4xlarge c5.12xlarge c5d.12xlarge r5a.4xlarge r5d.4xlarge r5b.4xlarge c5ad.12xlarge r5dn.4xlarge r5.4xlarge c5a.12xlarge c6i.12xlarge r5ad.4xlarge c6a.12xlarge m5ad.8xlarge m5n.8xlarge m5.8xlarge m5dn.8xlarge m6i.8xlarge m5d.8xlarge m5a.8xlarge m6a.8xlarge g5.8xlarge g4dn.8xlarge g4ad.8xlarge c6i.16xlarge c5ad.16xlarge c6a.16xlarge c5a.16xlarge m4.10xlarge c5d.18xlarge i3en.6xlarge c5.18xlarge m5ad.12xlarge m5zn.12xlarge m5dn.12xlarge m6a.12xlarge m5n.12xlarge m5d.12xlarge m5.12xlarge m5a.12xlarge m6i.12xlarge g5.12xlarge g4dn.12xlarge c5n.18xlarge r3.8xlarge r4.8xlarge i3.8xlarge g3.8xlarge c6i.24xlarge c5ad.24xlarge r5n.8xlarge r5dn.8xlarge r5a.8xlarge r5d.8xlarge r6i.8xlarge c5.24xlarge c6a.24xlarge r5.8xlarge c5a.24xlarge r5ad.8xlarge r5b.8xlarge c5d.24xlarge p3.8xlarge m6i.16xlarge m5n.16xlarge m6a.16xlarge m4.16xlarge m5.16xlarge m5d.16xlarge m5ad.16xlarge m5dn.16xlarge m5a.16xlarge g4dn.16xlarge g5.16xlarge g4ad.16xlarge inf1.24xlarge c6i.32xlarge c6a.32xlarge r6i.12xlarge r5ad.12xlarge i3en.12xlarge r5n.12xlarge r5a.12xlarge r5.12xlarge r5d.12xlarge r5b.12xlarge r5dn.12xlarge m5a.24xlarge m5dn.24xlarge m6i.24xlarge m5d.24xlarge m5.24xlarge m5ad.24xlarge m5n.24xlarge m6a.24xlarge g5.24xlarge r4.16xlarge i3.16xlarge p2.8xlarge g3.16xlarge r5n.16xlarge r6i.16xlarge r5a.16xlarge r5.16xlarge r5d.16xlarge r5dn.16xlarge c6a.48xlarge r5ad.16xlarge r5b.16xlarge p3.16xlarge m6a.32xlarge m6i.32xlarge r5ad.24xlarge r5.24xlarge r5a.24xlarge r5d.24xlarge r5dn.24xlarge r6i.24xlarge r5n.24xlarge r5b.24xlarge i3en.24xlarge p2.16xlarge p3dn.24xlarge m6a.48xlarge g5.48xlarge r6i.32xlarge p4d.24xlarge]	{"commit": "105e698", "provisioner": "default"}

@tzneal tzneal force-pushed the schedule-and-binpack branch from 23e63e2 to 9434fab Compare March 26, 2022 01:23
return true
}

func (n *Node) RequiredResources() v1.ResourceList {
Contributor

If this is only used by logging, WDYT about making node a Stringer and just log launching node %s

Contributor Author

Modified Node to have a String() method and limited the instance types shown in the log to 5:

karpenter-69b966c4df-2jdh8 controller 2022-03-27T02:18:36.443Z	INFO	controller.provisioning	Launched instance: i-0db3804e44367a4ed, hostname: ip-192-168-135-166.us-west-2.compute.internal, type: t3a.2xlarge, zone: us-west-2c, capacityType: on-demand	{"commit": "c456cf5", "provisioner": "default"}
karpenter-69b966c4df-2jdh8 controller 2022-03-27T02:18:36.443Z	INFO	controller.provisioning	Launched with 5 pods using resources cpu: 5315m memory: 1093Mi ephemeral-storage: 5Gi  from types c1.xlarge, c3.2xlarge, c4.2xlarge, c5ad.2xlarge, c6i.2xlarge and 205 others	{"commit": "c456cf5", "provisioner": "default"}
karpenter-69b966c4df-2jdh8 controller 2022-03-27T02:18:36.536Z	INFO	controller.provisioning	Bound 5 pod(s) to node ip-192-168-135-166.us-west-2.compute.internal	{"commit": "c456cf5", "provisioner": "default"}
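
A rough sketch of what such a Stringer could look like (illustrative only; the field names and exact format are assumptions, not the PR's actual code), mirroring the "and N others" style in the log above:

package scheduling

import (
	"fmt"
	"strings"
)

// Node is a stand-in for the scheduling node; only the fields needed for
// this sketch are shown.
type Node struct {
	PodCount            int
	InstanceTypeOptions []string
}

// String prints at most five instance type options, summarizing the rest.
func (n *Node) String() string {
	const max = 5
	types := n.InstanceTypeOptions
	suffix := ""
	if len(types) > max {
		suffix = fmt.Sprintf(" and %d others", len(types)-max)
		types = types[:max]
	}
	return fmt.Sprintf("%d pods from types %s%s", n.PodCount, strings.Join(types, ", "), suffix)
}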

@@ -102,3 +170,15 @@ func (n Node) hasCompatibleResources(resourceList v1.ResourceList, it cloudprovi
}
return true
}

func (n *Node) recalculateDaemonResources() {
Contributor

My understanding is that this code intends to remove daemons from the overhead calculation once constraints are tightened. I'm not sure we want to do this, for a few reasons:

  1. I'm concerned about the performance impact of recomputing requirements on every loop
  2. It's rare in practice for daemons to have large resource requests or complicated scheduling requirements
  3. Even with all of this additional math, we can't get it right in all cases. Consider the case where a daemonset only runs on arm64, but pods are compatible with either arm64 or amd64. We don't know the architecture until after the node is launched, at which point we can't change our binpacking decision.
  4. In the worst case scenario, we're creating nodes with additional room if it turns out the daemon can't schedule.

Instead, does it make sense to simply check the requirements once during NewNode? This will apply provisioner-level requirements to the daemons, which is an easy-to-explain contract for customers. Further, if an edge case is causing undesirable behavior, the user can align the constraints of their provisioner and their daemonsets.

Contributor Author

SGTM. I implemented it because the old code had this behavior: it didn't bin-pack until it had accumulated a set of compatible pods, so it only did this once. It's a separate commit on this PR, as it requires removing an old test (from the old hash-based scheduling + bin-packing) that verified this behavior.

I went one step further and just computed daemonset resources given the provisioner constraints, since that way we only need to do it once.
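
A minimal sketch of the "compute daemon resources once, under provisioner constraints" idea (illustrative; the Requirements type and its Compatible check are assumptions standing in for Karpenter's real types):

package scheduling

import (
	v1 "k8s.io/api/core/v1"
)

// Requirements stands in for the provisioner-level scheduling requirements;
// only a compatibility check matters for this sketch.
type Requirements interface {
	Compatible(pod *v1.Pod) bool
}

// daemonOverhead sums the requests of daemonset pods that the provisioner's
// requirements could actually schedule. Computing this once up front avoids
// recalculating daemon resources every time node constraints tighten.
func daemonOverhead(daemons []*v1.Pod, reqs Requirements) v1.ResourceList {
	total := v1.ResourceList{}
	for _, d := range daemons {
		if !reqs.Compatible(d) {
			continue
		}
		for _, c := range d.Spec.Containers {
			for name, q := range c.Resources.Requests {
				sum := total[name]
				sum.Add(q)
				total[name] = sum
			}
		}
	}
	return total
}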

@@ -45,11 +47,28 @@ func NewComplementSet(values ...string) Set {
}
}

// Hash provides a hash function so we can generate a good hash for Set which has no public fields.
func (s Set) Hash() (uint64, error) {
Contributor

Is this still used?

Contributor Author
@tzneal tzneal Mar 27, 2022

No, but I left it in thinking we might hash something containing a Set later. If we did and this method were missing, the hash library would just ignore the Set type (it has no public fields) and not issue any errors.

}
return hashstructure.Hash(key, hashstructure.FormatV2, nil)
}

// DeepCopy creates a deep copy of the set object
// It is required by the Kubernetes CRDs code generation
func (s Set) DeepCopy() Set {
Contributor

Remind me where we're deep copying the set?

Contributor Author

The generated DeepCopy method for Requirements calls Set's DeepCopy method.
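
For context, a simplified illustration of why that method is needed (names are invented for the example; the real types differ): the generated deep-copy code copies each field through its own DeepCopy method.

package example

// Set is a toy stand-in for a type with no public fields.
type Set struct {
	values map[string]struct{}
}

// DeepCopy returns an independent copy of the Set.
func (s Set) DeepCopy() Set {
	out := Set{values: make(map[string]struct{}, len(s.values))}
	for k := range s.values {
		out.values[k] = struct{}{}
	}
	return out
}

// Requirements is a stand-in for a CRD type that contains a Set.
type Requirements struct {
	Values Set
}

// DeepCopyInto mirrors what zz_generated.deepcopy.go produces: every field
// with its own DeepCopy method is copied through that method.
func (in *Requirements) DeepCopyInto(out *Requirements) {
	out.Values = in.Values.DeepCopy()
}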


cpuCmp := resources.Cmp(lhs[v1.ResourceCPU], rhs[v1.ResourceCPU])
if cpuCmp < 0 {
// LHS has less CPU, so it should be sorted after
Contributor

Why comment here and not on memory?

Contributor Author

The memory check follows the same pattern as the CPU check (the lesser value sorts after the other), so I figured documenting it once was enough.

"github.com/aws/karpenter/pkg/apis/provisioning/v1alpha5"
)

type NodeSet struct {
Contributor

I'm not tracking what this abstraction gets us. Can't we just call getDaemons in the scheduler and then maintain a []Node?

Contributor Author

I planned on hanging the topology information on the nodeset when implementing pod affinity/anti-affinity.

Contributor

I'm a bit nervous about predicting the future too much due to #975. Squinting at it, I'm not sure that this abstraction will survive review, and would generally opt to not try to predict abstractions without a use case. Given that you have a stack of PRs, I'm happy to move forward with it to keep us moving, but I remain to be convinced.

@@ -59,34 +65,67 @@ func NewScheduler(kubeClient client.Client) *Scheduler {
func (s *Scheduler) Solve(ctx context.Context, provisioner *v1alpha5.Provisioner, instanceTypes []cloudprovider.InstanceType, pods []*v1.Pod) ([]*Node, error) {
defer metrics.Measure(schedulingDuration.WithLabelValues(injection.GetNamespacedName(ctx).Name))()
constraints := provisioner.Spec.Constraints.DeepCopy()
start := time.Now()
Contributor

This time is already being measured by line 65.

Contributor Author

That data isn't readily available without running Prometheus though, right? It's measured, but I wanted to know how long scheduling was taking, and I normally just tail the controller logs.

Contributor

I'd prefer to maintain the separation of concerns between metrics and logging. I'd hate to be in a habit where our logs contain a bunch of metrics data, which might encourage users to parse it rather than consume the metrics directly. I'd love to get to a point where we have canonical karpenter dashboards where we can view the breakdown of the entire algorithm.
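
As an aside, a hedged sketch of the metrics-only approach using the standard Prometheus client (illustrative; Karpenter's own metrics helpers differ):

package metrics

import (
	"github.com/prometheus/client_golang/prometheus"
)

var schedulingDuration = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Namespace: "karpenter",
		Name:      "scheduling_duration_seconds",
		Help:      "Duration of the scheduling loop, broken down by provisioner.",
	},
	[]string{"provisioner"},
)

func init() {
	prometheus.MustRegister(schedulingDuration)
}

// timeScheduling wraps a scheduling pass and records its duration in the
// histogram, so dashboards can read it from the metrics endpoint and the
// value never needs to appear in the logs.
func timeScheduling(provisioner string, solve func()) {
	timer := prometheus.NewTimer(schedulingDuration.WithLabelValues(provisioner))
	defer timer.ObserveDuration()
	solve()
}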

nodes = append(nodes, NewNode(constraints, instanceTypes, pod))
n, err := NewNode(constraints, nodeSet.daemons, instanceTypes, pod)
if err != nil {
logging.FromContext(ctx).With("pod", client.ObjectKeyFromObject(pod)).Errorf("scheduling pod, %s", err)
Contributor
@ellistarn ellistarn Mar 26, 2022

Thoughts on putting this on line 89 so any logs can benefit?

ctx = logging.WithLogger(ctx, logging.FromContext(ctx).With("pod", client.ObjectKeyFromObject(pod)))

Contributor Author

The only log is on line 98 and the ctx isn't passed anywhere that can use it.

tzneal added 2 commits March 26, 2022 21:59
This solves some issues with topologies and prepares for pod affinity, where a separate bin-packing pass would break apart pods that the scheduler intended to be packed together. By combining bin-packing with scheduling, the 'scheduling nodes' created correspond to the actual K8s nodes that will be created via the cloud provider, eliminating any errors in counting hostname topologies.
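
At a high level, the combined pass can be pictured as a simplified first-fit sketch (not the PR's actual implementation; Node, Add, and newNode are stand-ins):

package scheduling

import v1 "k8s.io/api/core/v1"

// Node stands in for an in-flight scheduling node; Add is assumed to return
// an error when the pod is incompatible or no remaining instance type fits.
type Node interface {
	Add(pod *v1.Pod) error
}

// scheduleAndPack places each pod on an existing in-flight node when
// possible and otherwise creates a new node, so every scheduling node
// corresponds one-to-one with a real node to be launched.
func scheduleAndPack(pods []*v1.Pod, newNode func(*v1.Pod) (Node, error)) ([]Node, error) {
	var nodes []Node
	for _, pod := range pods {
		placed := false
		for _, n := range nodes {
			if err := n.Add(pod); err == nil {
				placed = true
				break
			}
		}
		if placed {
			continue
		}
		n, err := newNode(pod)
		if err != nil {
			return nil, err
		}
		nodes = append(nodes, n)
	}
	return nodes, nil
}
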
@tzneal tzneal force-pushed the schedule-and-binpack branch 2 times, most recently from 2a4ab18 to 318c70c Compare March 27, 2022 03:12
@tzneal tzneal force-pushed the schedule-and-binpack branch from 318c70c to 3bd976e Compare March 27, 2022 03:15
Contributor
@ellistarn ellistarn left a comment

Apologies for so many comments. I noticed a bunch of things that seem odd, duplicated, or unnecessarily complex. Happy to discuss on a call.

Resolved review threads (outdated):
  • pkg/controllers/provisioning/provisioner.go (1 thread)
  • pkg/controllers/provisioning/scheduling/node.go (1 thread)
// will be compatible with this node
for _, p := range pods {
n.Add(p)
}
return n
}

func (n Node) Compatible(pod *v1.Pod) error {
Contributor

I can't remember if I've mentioned this already (and can't find it), but it strikes me that Compatible and Add may be able to be collapsed into a single method to avoid some duplicated logic. Apologies if we've already gone over this.

Contributor Author

We want Compatible to be separate for the next PR so we can implement preferred pod affinity and be a bit smarter about spreading pods across the nodes we already have to create anyway due to hostname topology spreads and pod anti-affinity (e.g. create three m5.8xlarge instead of an m5.24xlarge and 2x m5.large).

"github.com/aws/karpenter/pkg/apis/provisioning/v1alpha5"
)

type NodeSet struct {
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I'm a bit nervous about predicting the future too much due to #975. Squinting at it, I'm not sure that this abstraction will survive review, and would generally opt to not try to predict abstractions without a use case. Given that you have a stack of PRs, I'm happy to move forward with it to keep us moving, but I remain to be convinced.

@@ -59,34 +65,67 @@ func NewScheduler(kubeClient client.Client) *Scheduler {
func (s *Scheduler) Solve(ctx context.Context, provisioner *v1alpha5.Provisioner, instanceTypes []cloudprovider.InstanceType, pods []*v1.Pod) ([]*Node, error) {
defer metrics.Measure(schedulingDuration.WithLabelValues(injection.GetNamespacedName(ctx).Name))()
constraints := provisioner.Spec.Constraints.DeepCopy()
start := time.Now()
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I'd prefer to maintain the separation of concerns between metrics and logging. I'd hate to be in a habit where our logs contain a bunch of metrics data, which might encourage users to parse it rather than consume the metrics directly. I'd love to get to a point where we have canonical karpenter dashboards where we can view the breakdown of the entire algorithm.

Resolved review thread (outdated): pkg/controllers/provisioning/scheduling/node.go
return nil
}
}
return errors.New("no matching instance type found")
}

func (n Node) reservedResources(it cloudprovider.InstanceType) v1.ResourceList {
Contributor

This is another helper that could be deduplicated if we combine Add/Compatible

Contributor Author

See above regarding intentional separation.

Resolved review threads (outdated): pkg/controllers/provisioning/scheduling/node.go (2 threads)
func (n *Node) Add(pod *v1.Pod) error {
n.Requirements = n.Requirements.Add(v1alpha5.NewPodRequirements(pod).Requirements...)

podRequests := resources.RequestsForPods(pod)
Contributor

Does it simplify our max pods check if we include a resource request for 1 pod in podRequest? Then we can just treat it like all other requests.

Contributor Author

I thought about that, but we can't mutate the pod itself. I added it in resources.RequestsForPods/LimitsForPods.
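
A minimal sketch of that idea (illustrative; the real helper in pkg/utils/resources differs): aggregate container requests and then add a synthetic request of one "pods" resource per pod, without touching the pod objects.

package resources

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// requestsForPods sums container requests across the given pods and adds a
// "pods" quantity, so the kubelet max-pods limit can be bin-packed like any
// other resource. The pod objects themselves are never mutated.
func requestsForPods(pods ...*v1.Pod) v1.ResourceList {
	total := v1.ResourceList{}
	for _, pod := range pods {
		for _, c := range pod.Spec.Containers {
			for name, q := range c.Resources.Requests {
				sum := total[name]
				sum.Add(q)
				total[name] = sum
			}
		}
	}
	total[v1.ResourcePods] = *resource.NewQuantity(int64(len(pods)), resource.DecimalSI)
	return total
}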

@tzneal tzneal force-pushed the schedule-and-binpack branch from aa8c3fd to c18f206 Compare March 28, 2022 13:49
@tzneal tzneal force-pushed the schedule-and-binpack branch from c18f206 to 0a44b01 Compare March 28, 2022 14:04
@@ -65,3 +74,24 @@ func Quantity(value string) *resource.Quantity {
func IsZero(r resource.Quantity) bool {
return r.IsZero()
}

func Cmp(lhs resource.Quantity, rhs resource.Quantity) int {
Contributor

I realized that this helper is now only used in Fits, so we could potentially remove this.

Contributor Author

I checked, and the scheduler uses it too:

func byCPUAndMemoryDescending(pods []*v1.Pod) func(i int, j int) bool {
	return func(i, j int) bool {
		lhs := resources.RequestsForPods(pods[i])
		rhs := resources.RequestsForPods(pods[j])

		cpuCmp := resources.Cmp(lhs[v1.ResourceCPU], rhs[v1.ResourceCPU])
		if cpuCmp < 0 {
			// LHS has less CPU, so it should be sorted after
			return false
		} else if cpuCmp > 0 {
			return true
		}
		memCmp := resources.Cmp(lhs[v1.ResourceMemory], rhs[v1.ResourceMemory])

		if memCmp < 0 {
			return false
		} else if memCmp > 0 {
			return true
		}
		return false
	}
}

@tzneal tzneal merged commit ed1061b into aws:main Mar 28, 2022
@tzneal tzneal deleted the schedule-and-binpack branch March 28, 2022 20:03