
[cluster-autoscaler] Extreme overprovisioning when using pod interdependency constraints #4099

Closed
fullykubed opened this issue May 24, 2021 · 17 comments
Labels
area/cluster-autoscaler, kind/bug

Comments

@fullykubed

fullykubed commented May 24, 2021

Which component are you using?:

cluster-autoscaler

What version of the component are you using?:

1.19.1, 1.20.0, 1.21.0

What k8s version are you using (kubectl version)?:

1.20.5

kubectl version Output
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.6", GitCommit:"8a62859e515889f07e3e3be6a1080413f17cf2c3", GitTreeState:"clean", BuildDate:"2021-04-15T03:28:42Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"54684493f8139456e5d2f963b23cb5003c4d8055", GitTreeState:"clean", BuildDate:"2021-03-22T23:02:59Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

What environment is this in?:

Azure w/ VMSS scaling configured via auto-discovery, but this will affect all environments

What did you expect to happen?:

When using pod affinities set with requiredDuringSchedulingIgnoredDuringExecution or topologySpreadConstraints with whenUnsatisfiable: DoNotSchedule, I'd expect the cluster to add only nodes where the pods can actually be scheduled (and no more). That appears consistent with @MaciekPytel's explanation here: #3721.
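For reference, the same kind of hard interdependency expressed as a required pod anti-affinity looks like the snippet below (an illustrative variant placed under spec.template.spec, not the exact manifest used for the numbers in this report):

      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                scaletest: "true"
            topologyKey: kubernetes.io/hostname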

What happened instead?:

I am combining the steps to reproduce with the error description.

I have validated using many permutations of the different pod interdependency configurations, and the behavior appears consistent in all cases.

Consider a scenario with three (3) node groups, each carrying a different topology.kubernetes.io/zone label. Each node has 4 vCPUs, and there is initially a single node in each zone. The cluster runs a single Deployment that spreads its pods across nodes via topologySpreadConstraints:

Example Manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: scale-test  # inferred from the pod names in the logs below; metadata.name is required for the manifest to apply
  namespace: autoscalers
  labels:
    scaletest: "true"
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  selector:
    matchLabels:
      scaletest: "true"
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        scaletest: "true"
      namespace: autoscalers
    spec:
      containers:
      - image: paulbouwer/hello-kubernetes:1
        name: test
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 500m
      topologySpreadConstraints:
      - labelSelector:
          matchLabels:
            scaletest: "true"
        maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule

To start out, there are three (3) replicas for the Deployment. Each replica is assigned to a different node due to the topologySpreadConstraints.

Execute a scale-up to a large number; in my example, I scaled to 30 replicas. Based on the manifest and node sizes, only 3 additional nodes are needed, one in each zone (a total of 2 per zone). Each node would then have between 3 and 7 pods on it, balanced across the zones.
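A rough capacity check supports that expectation, ignoring DaemonSets and system reservations (assuming the Deployment is named scale-test, as the pod names in the logs suggest, the scale-up is simply kubectl scale deployment scale-test -n autoscalers --replicas=30):

30 replicas x 500m CPU requested = 15 vCPU
6 nodes x 4 vCPU = 24 vCPU available (2 nodes per zone)

So three additional nodes, one per zone, comfortably fit the workload while leaving the maxSkew: 1 constraint satisfiable (an even spread would be 5 pods per node).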

While the cluster will eventually reach this equilibrium, the route is circuitous and expensive.

Instead of growing one node group by a single node at a time, the autoscaler exhibits two significant bugs:

(1) The binpacking estimator estimates each plausible node group on a one-pod-per-node basis. In my example, this adds about 10-11 extra nodes to the returned estimate, resulting in dramatic over-provisioning for that node group.

(2) Even worse, when the next scale-up loop runs, those extra nodes don't actually provide a place for many of the pods to run due to the scheduling constraints. In other words, the pods remain in the unschedulable pod list even after running filterOutSchedulablePodListProc. Thus the autoscaler repeats the over-provisioning from step (1) for EACH plausible node group.

I have included some logs from one of my tests. While they contain slightly different numbers than the example above, you'll notice the serial scaling of each node group to its configured maximum (e.g. "Estimated 21 nodes needed in aks-system-22409415-vmss", then "Estimated 20 nodes needed in aks-sc1-22409415-vmss" one loop later), as well as the least-waste expander reporting the resource waste.

Logs

.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:05.135459 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:05.135478 1 azure_cache.go:152] FindForInstance: found asg aks-system-22409415-vmss in cache
I0523 00:56:05.135556 1 filter_out_schedulable.go:65] Filtering out schedulables
I0523 00:56:05.135573 1 filter_out_schedulable.go:132] Filtered out 0 pods using hints
I0523 00:56:05.136859 1 filter_out_schedulable.go:170] 20 pods were kept as unschedulable based on caching
I0523 00:56:05.136875 1 filter_out_schedulable.go:171] 0 pods marked as unschedulable can be scheduled.
I0523 00:56:05.136888 1 filter_out_schedulable.go:82] No schedulable pods
I0523 00:56:05.136906 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-nbxhg is unschedulable
I0523 00:56:05.136918 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-sr22x is unschedulable
I0523 00:56:05.136923 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-fzfrj is unschedulable
I0523 00:56:05.136928 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-vmpkp is unschedulable
I0523 00:56:05.136933 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-bzt7q is unschedulable
I0523 00:56:05.136938 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-gx8qc is unschedulable
I0523 00:56:05.136943 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-vvjhq is unschedulable
I0523 00:56:05.136948 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-n6qtx is unschedulable
I0523 00:56:05.136953 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-hb5km is unschedulable
I0523 00:56:05.136957 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-9mftf is unschedulable
I0523 00:56:05.136963 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-kbrqr is unschedulable
I0523 00:56:05.136975 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-zfqt9 is unschedulable
I0523 00:56:05.136981 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-4x9vq is unschedulable
I0523 00:56:05.136985 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-l4hst is unschedulable
I0523 00:56:05.136991 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-vqzzq is unschedulable
I0523 00:56:05.136996 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-9lqz8 is unschedulable
I0523 00:56:05.137009 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-z7fwq is unschedulable
I0523 00:56:05.137013 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-xwfnk is unschedulable
I0523 00:56:05.137019 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-zl2xd is unschedulable
I0523 00:56:05.137024 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-gzgkj is unschedulable
I0523 00:56:05.137035 1 klogx.go:86] 1 other pods are also unschedulable
I0523 00:56:05.137049 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:05.137084 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:05.137109 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:05.137120 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:05.137155 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:05.137181 1 azure_cache.go:152] FindForInstance: found asg aks-system-22409415-vmss in cache
I0523 00:56:05.137193 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:05.137233 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:05.137257 1 azure_cache.go:152] FindForInstance: found asg aks-sc3-22409415-vmss in cache
I0523 00:56:05.137268 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:05.137302 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:05.137326 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:05.137363 1 scale_up.go:364] Upcoming 0 nodes
I0523 00:56:05.146115 1 waste.go:57] Expanding Node Group aks-sc1-22409415-vmss would waste 75.00% CPU, 100.00% Memory, 87.50% Blended
I0523 00:56:05.146137 1 waste.go:57] Expanding Node Group aks-sc2-22409415-vmss would waste 75.00% CPU, 100.00% Memory, 87.50% Blended
I0523 00:56:05.146147 1 waste.go:57] Expanding Node Group aks-sc3-22409415-vmss would waste 75.00% CPU, 100.00% Memory, 87.50% Blended
I0523 00:56:05.146153 1 waste.go:57] Expanding Node Group aks-system-22409415-vmss would waste 50.00% CPU, 100.00% Memory, 75.00% Blended
I0523 00:56:05.146164 1 scale_up.go:456] Best option to resize: aks-system-22409415-vmss
I0523 00:56:05.146179 1 scale_up.go:460] Estimated 21 nodes needed in aks-system-22409415-vmss
I0523 00:56:05.146199 1 balancing_processor.go:111] Requested scale-up (21) exceeds node group set capacity, capping to 2
I0523 00:56:05.146217 1 scale_up.go:574] Final scale-up plan: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:05.146236 1 scale_up.go:663] Scale-up: setting group aks-system-22409415-vmss size to 3
I0523 00:56:05.146262 1 azure_scale_set.go:309] Waiting for virtualMachineScaleSetsClient.CreateOrUpdateAsync(aks-system-22409415-vmss)
I0523 00:56:05.146533 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"autoscalers", Name:"cluster-autoscaler-status", UID:"18612b36-d2c4-4cc9-ad18-a97dfcc5c0a3", APIVersion:"v1", ResourceVersion:"117170", FieldPath:""}): type: 'Normal' reason: 'ScaledUpGroup' Scale-up: setting group aks-system-22409415-vmss size to 3
I0523 00:56:05.150844 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"autoscalers", Name:"cluster-autoscaler-status", UID:"18612b36-d2c4-4cc9-ad18-a97dfcc5c0a3", APIVersion:"v1", ResourceVersion:"117170", FieldPath:""}): type: 'Normal' reason: 'ScaledUpGroup' Scale-up: setting group aks-system-22409415-vmss size to 3
I0523 00:56:05.854836 1 eventing_scale_up_processor.go:47] Skipping event processing for unschedulable pods since there is a ScaleUp attempt this loop
I0523 00:56:05.854890 1 azure_scale_set.go:272] Calling virtualMachineScaleSetsClient.WaitForAsyncOperationResult - updateVMSSCapacity(aks-system-22409415-vmss)
I0523 00:56:05.855226 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-sr22x", UID:"e52cb859-4cb9-4139-acdb-235870bff202", APIVersion:"v1", ResourceVersion:"117113", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:05.860966 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-nbxhg", UID:"0188317d-8371-434f-ad77-8893ae0bb889", APIVersion:"v1", ResourceVersion:"117155", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:05.864955 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:05.865008 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:05.865038 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:05.865054 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:05.865101 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:05.865128 1 azure_cache.go:152] FindForInstance: found asg aks-sc3-22409415-vmss in cache
I0523 00:56:05.865137 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:05.865182 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:05.865210 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:05.865740 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-fzfrj", UID:"52b348f8-7a7f-4ff7-8fac-d1d6a9e80a6c", APIVersion:"v1", ResourceVersion:"117116", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:05.869989 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-vmpkp", UID:"6104b59e-07e6-47e6-b6bd-371c97820da6", APIVersion:"v1", ResourceVersion:"117121", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:05.880788 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-bzt7q", UID:"edfabe7a-2c7d-4d25-9353-d6474d6fe10d", APIVersion:"v1", ResourceVersion:"117124", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:05.885703 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-gx8qc", UID:"1db64b74-cdf4-4708-bc5e-bffde4ae17e0", APIVersion:"v1", ResourceVersion:"117137", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:05.889979 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-vvjhq", UID:"34ad8a15-95de-433b-83e7-ea37aa63661b", APIVersion:"v1", ResourceVersion:"117151", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:05.895670 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-n6qtx", UID:"0a7cc9cf-1b0a-4a83-b041-4287c995f589", APIVersion:"v1", ResourceVersion:"117095", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:05.900682 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-kbrqr", UID:"058332f6-9576-4a32-a76a-8bd50d58e8a0", APIVersion:"v1", ResourceVersion:"117117", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:05.904964 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-hb5km", UID:"650e04a0-dffb-4783-817a-d8c673c87f51", APIVersion:"v1", ResourceVersion:"117157", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:05.911947 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-9mftf", UID:"c57ebbdf-9186-4b7a-92d4-dfdda0ffca15", APIVersion:"v1", ResourceVersion:"117159", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:06.055510 1 request.go:591] Throttling request took 143.497593ms, request: POST:https://10.10.0.1:443/api/v1/namespaces/autoscalers/events
I0523 00:56:06.062797 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-zfqt9", UID:"cdf41e5c-6c45-48c2-b9f6-23513bb9180f", APIVersion:"v1", ResourceVersion:"117140", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:06.255523 1 request.go:591] Throttling request took 192.633776ms, request: POST:https://10.10.0.1:443/api/v1/namespaces/autoscalers/events
I0523 00:56:06.262198 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-4x9vq", UID:"cb58fc75-a8a6-417c-864d-15c94b685449", APIVersion:"v1", ResourceVersion:"117144", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:06.455579 1 request.go:591] Throttling request took 193.269485ms, request: POST:https://10.10.0.1:443/api/v1/namespaces/autoscalers/events
I0523 00:56:06.461621 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-vqzzq", UID:"9f301ca0-6138-45f2-8b3c-e9629723f5d0", APIVersion:"v1", ResourceVersion:"117163", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:06.655520 1 request.go:591] Throttling request took 193.791792ms, request: POST:https://10.10.0.1:443/api/v1/namespaces/autoscalers/events
I0523 00:56:06.662092 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-l4hst", UID:"c7e4289a-8538-4d78-9ef8-03a1ccaa9812", APIVersion:"v1", ResourceVersion:"117161", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:06.855470 1 request.go:591] Throttling request took 193.277485ms, request: POST:https://10.10.0.1:443/api/v1/namespaces/autoscalers/events
I0523 00:56:06.861564 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-9lqz8", UID:"ef79d642-a872-4e0a-8304-6d2f56df3475", APIVersion:"v1", ResourceVersion:"117101", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:07.055519 1 request.go:591] Throttling request took 193.862493ms, request: POST:https://10.10.0.1:443/api/v1/namespaces/autoscalers/events
I0523 00:56:07.061989 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-z7fwq", UID:"d1953dff-8c0e-4ed0-92c9-bd18e01a7bc4", APIVersion:"v1", ResourceVersion:"117106", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:07.255445 1 request.go:591] Throttling request took 193.370286ms, request: POST:https://10.10.0.1:443/api/v1/namespaces/autoscalers/events
I0523 00:56:07.260539 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-zl2xd", UID:"4d53f577-4954-4046-948f-16b2280935b3", APIVersion:"v1", ResourceVersion:"117149", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:07.455518 1 request.go:591] Throttling request took 194.888807ms, request: POST:https://10.10.0.1:443/api/v1/namespaces/autoscalers/events
I0523 00:56:07.461756 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-xwfnk", UID:"befd0d85-8f37-44d7-945e-35f70d69c309", APIVersion:"v1", ResourceVersion:"117147", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:07.655424 1 request.go:591] Throttling request took 193.571088ms, request: POST:https://10.10.0.1:443/api/v1/namespaces/autoscalers/events
I0523 00:56:07.661830 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-gzgkj", UID:"c34de0df-607b-4732-b772-d457cdafa098", APIVersion:"v1", ResourceVersion:"117153", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:07.855504 1 request.go:591] Throttling request took 193.583189ms, request: POST:https://10.10.0.1:443/api/v1/namespaces/autoscalers/events
I0523 00:56:07.862462 1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"autoscalers", Name:"scale-test-6cc69f4cf-9wkx5", UID:"0aeef587-df1e-4b98-8235-78aa5202f028", APIVersion:"v1", ResourceVersion:"117166", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{aks-system-22409415-vmss 1->3 (max: 3)}]
I0523 00:56:08.055544 1 request.go:591] Throttling request took 192.987381ms, request: POST:https://10.10.0.1:443/api/v1/namespaces/autoscalers/events
I0523 00:56:10.178295 1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.Node total 9 items received
I0523 00:56:15.865362 1 static_autoscaler.go:229] Starting main loop
I0523 00:56:15.865788 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:15.865841 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:15.865875 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:15.866123 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:15.866174 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:15.866198 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:15.866413 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:15.866457 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:15.866479 1 azure_cache.go:152] FindForInstance: found asg aks-system-22409415-vmss in cache
I0523 00:56:15.866536 1 taints.go:71] Removing rescheduler taint when creating template
I0523 00:56:15.866703 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:15.866742 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:15.866766 1 azure_cache.go:152] FindForInstance: found asg aks-sc3-22409415-vmss in cache
I0523 00:56:15.867002 1 azure_scale_set.go:548] Nodes: starts, scaleSet.Name: aks-system-22409415-vmss
I0523 00:56:15.867013 1 azure_scale_set.go:564] Nodes: starts to get VMSS VMs
I0523 00:56:15.867039 1 azure_scale_set.go:355] GetScaleSetVms: starts
I0523 00:56:15.932067 1 azure_scale_set.go:361] GetScaleSetVms: scaleSet.Name: aks-system-22409415-vmss, vmList: [{{} 0xc001559270 0xc0013bdf20 0xc000b31d80 0xc0013bd980 0xc0013bd940 0xc0015590f0 0xc0015592b0 0xc0015592c0 0xc001559100 map[cluster-autoscaler-enabled:0xc001559110 cluster-name:0xc001559130 createOperationID:0xc001559150 creationSource:0xc001559170 k8s.io_cluster-autoscaler_node-template_label_class:0xc001559190 k8s.io_cluster-autoscaler_node-template_taint_CriticalAddonsOnly:0xc0015591b0 max:0xc0015591d0 min:0xc0015591f0 orchestrator:0xc001559210 poolName:0xc001559230 resourceNameSuffix:0xc001559250]} {{} 0xc001708800 0xc000dbdd00 0xc000b31f80 0xc000dbdd60 0xc000dbdd20 0xc001708a10 0xc001708a00 0xc001708a20 0xc001559380 map[cluster-autoscaler-enabled:0xc001708a30 cluster-name:0xc001708a50 createOperationID:0xc001708a70 creationSource:0xc001708a90 k8s.io_cluster-autoscaler_node-template_label_class:0xc001708ab0 k8s.io_cluster-autoscaler_node-template_taint_CriticalAddonsOnly:0xc001708ad0 max:0xc001708af0 min:0xc001708b10 orchestrator:0xc001708b30 poolName:0xc001708b50 resourceNameSuffix:0xc001708b70]} {{} 0xc0017640d0 0xc000945480 0xc000970680 0xc000945a60 0xc0009459e0 0xc001764110 0xc0017640b0 0xc001764120 0xc0017640c0 map[cluster-autoscaler-enabled:0xc001764130 cluster-name:0xc001764150 createOperationID:0xc001764170 creationSource:0xc001764190 k8s.io_cluster-autoscaler_node-template_label_class:0xc0017641b0 k8s.io_cluster-autoscaler_node-template_taint_CriticalAddonsOnly:0xc0017641d0 max:0xc0017641f0 min:0xc001764210 orchestrator:0xc001764230 poolName:0xc001764250 resourceNameSuffix:0xc001764270]}]
I0523 00:56:15.932189 1 azure_scale_set.go:581] Nodes: returns
I0523 00:56:15.932221 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:15.932255 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:15.932289 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:15.932314 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:15.932347 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:15.932377 1 azure_cache.go:152] FindForInstance: found asg aks-sc3-22409415-vmss in cache
I0523 00:56:15.932386 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:15.932417 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:15.932438 1 azure_cache.go:152] FindForInstance: found asg aks-system-22409415-vmss in cache
I0523 00:56:15.932451 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:15.932491 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:15.932514 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:15.932529 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/7
I0523 00:56:15.932564 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/7
I0523 00:56:15.932587 1 azure_cache.go:155] FindForInstance: Couldn't find NodeGroup of instance {"azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/7"}
I0523 00:56:15.932611 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/6
I0523 00:56:15.932643 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/6
I0523 00:56:15.932667 1 azure_cache.go:155] FindForInstance: Couldn't find NodeGroup of instance {"azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/6"}
I0523 00:56:15.932689 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/8
I0523 00:56:15.932722 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/8
I0523 00:56:15.932747 1 azure_cache.go:155] FindForInstance: Couldn't find NodeGroup of instance {"azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/8"}
W0523 00:56:15.932771 1 clusterstate.go:594] Nodegroup is nil for azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/8
I0523 00:56:15.932778 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/9
I0523 00:56:15.932818 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/9
I0523 00:56:15.932841 1 azure_cache.go:155] FindForInstance: Couldn't find NodeGroup of instance {"azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/9"}
W0523 00:56:15.932860 1 clusterstate.go:594] Nodegroup is nil for azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/9
I0523 00:56:15.932914 1 static_autoscaler.go:320] 2 unregistered nodes present
I0523 00:56:15.933168 1 filter_out_schedulable.go:65] Filtering out schedulables
I0523 00:56:15.933192 1 filter_out_schedulable.go:132] Filtered out 0 pods using hints
I0523 00:56:15.933343 1 filter_out_schedulable.go:157] Pod autoscalers.scale-test-6cc69f4cf-9lqz8 marked as unschedulable can be scheduled on node template-node-for-aks-system-22409415-vmss-2847257467566504885-1. Ignoring in scale up.
I0523 00:56:15.934430 1 filter_out_schedulable.go:170] 19 pods were kept as unschedulable based on caching
I0523 00:56:15.934444 1 filter_out_schedulable.go:171] 1 pods marked as unschedulable can be scheduled.
I0523 00:56:15.934456 1 filter_out_schedulable.go:79] Schedulable pods present
I0523 00:56:15.934474 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-z7fwq is unschedulable
I0523 00:56:15.934487 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-xwfnk is unschedulable
I0523 00:56:15.934499 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-zl2xd is unschedulable
I0523 00:56:15.934506 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-gzgkj is unschedulable
I0523 00:56:15.934513 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-9wkx5 is unschedulable
I0523 00:56:15.934520 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-nbxhg is unschedulable
I0523 00:56:15.934534 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-sr22x is unschedulable
I0523 00:56:15.934540 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-fzfrj is unschedulable
I0523 00:56:15.934546 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-vmpkp is unschedulable
I0523 00:56:15.934553 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-bzt7q is unschedulable
I0523 00:56:15.934561 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-gx8qc is unschedulable
I0523 00:56:15.934574 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-vvjhq is unschedulable
I0523 00:56:15.934580 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-n6qtx is unschedulable
I0523 00:56:15.934586 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-hb5km is unschedulable
I0523 00:56:15.934601 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-9mftf is unschedulable
I0523 00:56:15.934605 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-kbrqr is unschedulable
I0523 00:56:15.934610 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-zfqt9 is unschedulable
I0523 00:56:15.934616 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-4x9vq is unschedulable
I0523 00:56:15.934622 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-l4hst is unschedulable
I0523 00:56:15.934629 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-vqzzq is unschedulable
I0523 00:56:15.934638 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:15.934680 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:15.934709 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:15.934725 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:15.934765 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:15.934794 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:15.934805 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:15.934848 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:15.934878 1 azure_cache.go:152] FindForInstance: found asg aks-system-22409415-vmss in cache
I0523 00:56:15.934887 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:15.934936 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:15.934980 1 azure_cache.go:152] FindForInstance: found asg aks-sc3-22409415-vmss in cache
I0523 00:56:15.935023 1 scale_up.go:364] Upcoming 2 nodes
I0523 00:56:15.940727 1 scale_up.go:288] Pod scale-test-6cc69f4cf-z7fwq can't be scheduled on aks-sc3-22409415-vmss, predicate checking error: node(s) didn't match pod topology spread constraints; predicateName=PodTopologySpread; reasons: node(s) didn't match pod topology spread constraints; debugInfo=
I0523 00:56:15.940741 1 scale_up.go:290] 19 other pods similar to scale-test-6cc69f4cf-z7fwq can't be scheduled on aks-sc3-22409415-vmss
I0523 00:56:15.940764 1 scale_up.go:437] No pod can fit to aks-sc3-22409415-vmss
I0523 00:56:15.940781 1 scale_up.go:400] Skipping node group aks-system-22409415-vmss - max size reached
I0523 00:56:15.940795 1 waste.go:57] Expanding Node Group aks-sc1-22409415-vmss would waste 75.00% CPU, 100.00% Memory, 87.50% Blended
I0523 00:56:15.940808 1 waste.go:57] Expanding Node Group aks-sc2-22409415-vmss would waste 75.00% CPU, 100.00% Memory, 87.50% Blended
I0523 00:56:15.940817 1 scale_up.go:456] Best option to resize: aks-sc1-22409415-vmss
I0523 00:56:15.940831 1 scale_up.go:460] Estimated 20 nodes needed in aks-sc1-22409415-vmss
I0523 00:56:15.940845 1 balancing_processor.go:111] Requested scale-up (20) exceeds node group set capacity, capping to 9
I0523 00:56:15.940856 1 scale_up.go:574] Final scale-up plan: [{aks-sc1-22409415-vmss 1->10 (max: 10)}]
I0523 00:56:15.940874 1 scale_up.go:663] Scale-up: setting group aks-sc1-22409415-vmss size to 10
I0523 00:56:15.940904 1 azure_scale_set.go:309] Waiting for virtualMachineScaleSetsClient.CreateOrUpdateAsync(aks-sc1-22409415-vmss)
I0523 00:56:16.937453 1 eventing_scale_up_processor.go:47] Skipping event processing for unschedulable pods since there is a ScaleUp attempt this loop
I0523 00:56:16.938216 1 azure_scale_set.go:272] Calling virtualMachineScaleSetsClient.WaitForAsyncOperationResult - updateVMSSCapacity(aks-sc1-22409415-vmss)
I0523 00:56:16.950379 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:16.950415 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:16.950433 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:16.950445 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:16.950474 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:16.950498 1 azure_cache.go:152] FindForInstance: found asg aks-sc3-22409415-vmss in cache
I0523 00:56:16.950505 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:16.950544 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:16.950566 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:18.176282 1 reflector.go:530] k8s.io/client-go/informers/factory.go:134: Watch close - *v1.PersistentVolumeClaim total 0 items received
I0523 00:56:26.950765 1 static_autoscaler.go:229] Starting main loop
I0523 00:56:26.951166 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:26.951223 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:26.951254 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:26.951472 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:26.951515 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:26.951541 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:26.951732 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:26.951783 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:26.951823 1 azure_cache.go:152] FindForInstance: found asg aks-system-22409415-vmss in cache
I0523 00:56:26.951887 1 taints.go:71] Removing rescheduler taint when creating template
I0523 00:56:26.952083 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:26.952156 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:26.952181 1 azure_cache.go:152] FindForInstance: found asg aks-sc3-22409415-vmss in cache
I0523 00:56:27.035525 1 azure_scale_set.go:548] Nodes: starts, scaleSet.Name: aks-system-22409415-vmss
I0523 00:56:27.035554 1 azure_scale_set.go:560] Nodes: returns with curSize 3
I0523 00:56:27.035534 1 azure_scale_set.go:548] Nodes: starts, scaleSet.Name: aks-sc1-22409415-vmss
I0523 00:56:27.035566 1 azure_scale_set.go:564] Nodes: starts to get VMSS VMs
I0523 00:56:27.035584 1 azure_scale_set.go:355] GetScaleSetVms: starts
I0523 00:56:27.122916 1 azure_scale_set.go:361] GetScaleSetVms: scaleSet.Name: aks-sc1-22409415-vmss, vmList: [{{} 0xc000ed6b10 0xc000c52960 0xc000d57b80 0xc000c52280 0xc000c528c0 0xc000ed68c0 0xc000ed6b00 0xc000ed68d0 0xc000ed68f0 map[app:0xc000ed6900 cluster-autoscaler-enabled:0xc000ed6920 cluster-name:0xc000ed6940 createOperationID:0xc000ed6960 creationSource:0xc000ed6980 k8s.io_cluster-autoscaler_node-template_label_class:0xc000ed69a0 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc000ed69c0 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc000ed69e0 max:0xc000ed6a00 min:0xc000ed6a20 module:0xc000ed6a40 orchestrator:0xc000ed6a60 poolName:0xc000ed6a80 resourceNameSuffix:0xc000ed6aa0 terraform:0xc000ed6ac0 version:0xc000ed6ae0]} {{} 0xc000bd6aa0 0xc000c52ea0 0xc000d57d00 0xc000c19ea0 0xc000b627c0 0xc000bd6860 0xc000bd6ab0 0xc000bd6ac0 0xc000bd6870 map[app:0xc000bd6880 cluster-autoscaler-enabled:0xc000bd68b0 cluster-name:0xc000bd68e0 createOperationID:0xc000bd6900 creationSource:0xc000bd6920 k8s.io_cluster-autoscaler_node-template_label_class:0xc000bd6940 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc000bd6960 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc000bd6980 max:0xc000bd69a0 min:0xc000bd69c0 module:0xc000bd69e0 orchestrator:0xc000bd6a00 poolName:0xc000bd6a20 resourceNameSuffix:0xc000bd6a40 terraform:0xc000bd6a60 version:0xc000bd6a80]} {{} 0xc000bd70c0 0xc000b62b80 0xc000d57e00 0xc000b62c40 0xc000b62bc0 0xc000bd70b0 0xc000bd70a0 0xc000bd6d80 0xc000bd6d90 map[app:0xc000bd6da0 cluster-autoscaler-enabled:0xc000bd6dc0 cluster-name:0xc000bd6e20 createOperationID:0xc000bd6e50 creationSource:0xc000bd6e70 k8s.io_cluster-autoscaler_node-template_label_class:0xc000bd6eb0 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc000bd6ef0 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc000bd6f20 max:0xc000bd6f60 min:0xc000bd6fb0 module:0xc000bd6fd0 orchestrator:0xc000bd6ff0 poolName:0xc000bd7010 resourceNameSuffix:0xc000bd7030 terraform:0xc000bd7050 version:0xc000bd7080]} {{} 0xc0008fe420 0xc0010e5b00 0xc000d57f00 0xc00120a100 0xc0010e5ac0 0xc0008fe460 0xc000f0ab50 0xc000f0ab60 0xc0008fe470 map[app:0xc0008fe480 cluster-autoscaler-enabled:0xc0008fe4a0 cluster-name:0xc0008fe4c0 createOperationID:0xc0008fe4e0 creationSource:0xc0008fe500 k8s.io_cluster-autoscaler_node-template_label_class:0xc0008fe530 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc0008fe550 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc0008fe580 max:0xc0008fe5b0 min:0xc0008fe5e0 module:0xc0008fe630 orchestrator:0xc0008fe650 poolName:0xc0008fe670 resourceNameSuffix:0xc0008fe6a0 terraform:0xc0008fe6c0 version:0xc0008fe6e0]} {{} 0xc0008fece0 0xc0015ca080 0xc000fd0000 0xc0015ca140 0xc0015ca0a0 0xc000864820 0xc0008feab0 0xc000864830 0xc0008feac0 map[app:0xc0008fead0 cluster-autoscaler-enabled:0xc0008feaf0 cluster-name:0xc0008feb10 createOperationID:0xc0008feb30 creationSource:0xc0008feb50 k8s.io_cluster-autoscaler_node-template_label_class:0xc0008feb70 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc0008feb90 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc0008febb0 max:0xc0008febd0 min:0xc0008febf0 module:0xc0008fec10 orchestrator:0xc0008fec30 poolName:0xc0008fec60 resourceNameSuffix:0xc0008fec80 terraform:0xc0008feca0 version:0xc0008fecc0]} {{} 
0xc0010e27e0 0xc000dd7900 0xc000fd0200 0xc000dd7960 0xc000dd7920 0xc0010e27b0 0xc0010e2a90 0xc0008648f0 0xc0010e27d0 map[app:0xc000864900 cluster-autoscaler-enabled:0xc000864920 cluster-name:0xc000864940 createOperationID:0xc000864980 creationSource:0xc0008649a0 k8s.io_cluster-autoscaler_node-template_label_class:0xc0008649c0 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc0008649e0 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc000864a00 max:0xc000864a20 min:0xc000864a40 module:0xc000864a60 orchestrator:0xc000864a80 poolName:0xc000864aa0 resourceNameSuffix:0xc000864ac0 terraform:0xc000864af0 version:0xc000864b10]} {{} 0xc0010e30c0 0xc0005beb40 0xc000fd0300 0xc0005b71a0 0xc0005b7160 0xc0010e2e30 0xc0010e2e20 0xc0010e2b90 0xc0010e2ba0 map[app:0xc0010e2e40 cluster-autoscaler-enabled:0xc0010e2e60 cluster-name:0xc0010e2ea0 createOperationID:0xc0010e2ed0 creationSource:0xc0010e2ef0 k8s.io_cluster-autoscaler_node-template_label_class:0xc0010e2f30 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc0010e2f50 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc0010e2f80 max:0xc0010e2fa0 min:0xc0010e2fc0 module:0xc0010e2fe0 orchestrator:0xc0010e3000 poolName:0xc0010e3020 resourceNameSuffix:0xc0010e3040 terraform:0xc0010e3060 version:0xc0010e30a0]} {{} 0xc000bbeff0 0xc0014f0160 0xc000fd0400 0xc00095fd40 0xc001003f20 0xc000bbebe0 0xc000bbefe0 0xc000bbebf0 0xc000bbec00 map[app:0xc000bbec10 cluster-autoscaler-enabled:0xc000bbec30 cluster-name:0xc000bbec50 createOperationID:0xc000bbec70 creationSource:0xc000bbec90 k8s.io_cluster-autoscaler_node-template_label_class:0xc000bbecb0 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc000bbecd0 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc000bbecf0 max:0xc000bbed10 min:0xc000bbed30 module:0xc000bbed50 orchestrator:0xc000bbed70 poolName:0xc000bbed90 resourceNameSuffix:0xc000bbedb0 terraform:0xc000bbedd0 version:0xc000bbedf0]} {{} 0xc000b1cd90 0xc0013daa40 0xc000fd0700 0xc0013da420 0xc0013da320 0xc0011947a0 0xc000b1ca70 0xc0011947b0 0xc000b1ca80 map[app:0xc000b1ca90 cluster-autoscaler-enabled:0xc000b1cab0 cluster-name:0xc000b1cae0 createOperationID:0xc000b1cb00 creationSource:0xc000b1cb40 k8s.io_cluster-autoscaler_node-template_label_class:0xc000b1cb70 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc000b1cba0 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc000b1cbe0 max:0xc000b1cc10 min:0xc000b1cc40 module:0xc000b1cc60 orchestrator:0xc000b1cc90 poolName:0xc000b1ccd0 resourceNameSuffix:0xc000b1cd00 terraform:0xc000b1cd40 version:0xc000b1cd70]} {{} 0xc001194d90 0xc000daf160 0xc000fd0900 0xc0008a18e0 0xc0008a18a0 0xc000cfc820 0xc000cfc810 0xc000cfca30 0xc000cfca40 map[app:0xc0011948d0 cluster-autoscaler-enabled:0xc0011948f0 cluster-name:0xc001194910 createOperationID:0xc001194930 creationSource:0xc001194950 k8s.io_cluster-autoscaler_node-template_label_class:0xc001194970 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc001194990 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc0011949b0 max:0xc0011949e0 min:0xc001194a50 module:0xc001194ae0 orchestrator:0xc001194b10 poolName:0xc001194b50 resourceNameSuffix:0xc001194b90 terraform:0xc001194d30 version:0xc001194d60]}]
I0523 00:56:27.123307 1 azure_scale_set.go:581] Nodes: returns
I0523 00:56:27.123352 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/6
I0523 00:56:27.123390 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/6
I0523 00:56:27.123416 1 azure_cache.go:155] FindForInstance: Couldn't find NodeGroup of instance {"azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/6"}
I0523 00:56:27.123440 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:27.123468 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:27.123486 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:27.123494 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:27.123527 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:27.123543 1 azure_cache.go:152] FindForInstance: found asg aks-sc3-22409415-vmss in cache
I0523 00:56:27.123550 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:27.123578 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:27.123603 1 azure_cache.go:152] FindForInstance: found asg aks-system-22409415-vmss in cache
I0523 00:56:27.123615 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:27.123641 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:27.123659 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:27.123666 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/7
I0523 00:56:27.123700 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/7
I0523 00:56:27.123726 1 azure_cache.go:155] FindForInstance: Couldn't find NodeGroup of instance {"azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/7"}
I0523 00:56:27.123744 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/19
I0523 00:56:27.123775 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/19
I0523 00:56:27.123794 1 azure_cache.go:155] FindForInstance: Couldn't find NodeGroup of instance {"azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/19"}
W0523 00:56:27.123812 1 clusterstate.go:594] Nodegroup is nil for azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/19
I0523 00:56:27.123820 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/8
I0523 00:56:27.123854 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/8
I0523 00:56:27.123882 1 azure_cache.go:155] FindForInstance: Couldn't find NodeGroup of instance {"azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/8"}
W0523 00:56:27.123904 1 clusterstate.go:594] Nodegroup is nil for azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/8
I0523 00:56:27.123911 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/13
I0523 00:56:27.123954 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/13
I0523 00:56:27.123975 1 azure_cache.go:155] FindForInstance: Couldn't find NodeGroup of instance {"azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/13"}
W0523 00:56:27.123994 1 clusterstate.go:594] Nodegroup is nil for azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/13
I0523 00:56:27.124001 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/12
I0523 00:56:27.124026 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/12
I0523 00:56:27.124050 1 azure_cache.go:155] FindForInstance: Couldn't find NodeGroup of instance {"azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/12"}
W0523 00:56:27.124070 1 clusterstate.go:594] Nodegroup is nil for azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/12
I0523 00:56:27.124076 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/14
I0523 00:56:27.124122 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/14
I0523 00:56:27.124145 1 azure_cache.go:155] FindForInstance: Couldn't find NodeGroup of instance {"azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/14"}
W0523 00:56:27.124160 1 clusterstate.go:594] Nodegroup is nil for azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/14
I0523 00:56:27.124167 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/15
I0523 00:56:27.124202 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/15
I0523 00:56:27.124226 1 azure_cache.go:155] FindForInstance: Couldn't find NodeGroup of instance {"azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/15"}
W0523 00:56:27.124239 1 clusterstate.go:594] Nodegroup is nil for azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/15
I0523 00:56:27.124251 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/16
I0523 00:56:27.124283 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/16
I0523 00:56:27.124299 1 azure_cache.go:155] FindForInstance: Couldn't find NodeGroup of instance {"azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/16"}
W0523 00:56:27.124318 1 clusterstate.go:594] Nodegroup is nil for azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/16
I0523 00:56:27.124325 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/17
I0523 00:56:27.124354 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/17
I0523 00:56:27.124375 1 azure_cache.go:155] FindForInstance: Couldn't find NodeGroup of instance {"azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/17"}
W0523 00:56:27.124390 1 clusterstate.go:594] Nodegroup is nil for azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/17
I0523 00:56:27.124402 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/18
I0523 00:56:27.124426 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/18
I0523 00:56:27.124446 1 azure_cache.go:155] FindForInstance: Couldn't find NodeGroup of instance {"azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/18"}
W0523 00:56:27.124460 1 clusterstate.go:594] Nodegroup is nil for azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/18
I0523 00:56:27.124466 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/9
I0523 00:56:27.124500 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/9
I0523 00:56:27.124525 1 azure_cache.go:155] FindForInstance: Couldn't find NodeGroup of instance {"azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/9"}
W0523 00:56:27.124542 1 clusterstate.go:594] Nodegroup is nil for azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/9
I0523 00:56:27.124554 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/11
I0523 00:56:27.124586 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/11
I0523 00:56:27.124610 1 azure_cache.go:155] FindForInstance: Couldn't find NodeGroup of instance {"azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/11"}
W0523 00:56:27.124631 1 clusterstate.go:594] Nodegroup is nil for azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/11
I0523 00:56:27.124682 1 static_autoscaler.go:320] 11 unregistered nodes present
I0523 00:56:27.125658 1 filter_out_schedulable.go:65] Filtering out schedulables
I0523 00:56:27.125685 1 filter_out_schedulable.go:132] Filtered out 0 pods using hints
I0523 00:56:27.125805 1 filter_out_schedulable.go:157] Pod autoscalers.scale-test-6cc69f4cf-9mftf marked as unschedulable can be scheduled on node template-node-for-aks-sc1-22409415-vmss-128522484412670169-3. Ignoring in scale up.
I0523 00:56:27.125918 1 filter_out_schedulable.go:157] Pod autoscalers.scale-test-6cc69f4cf-n6qtx marked as unschedulable can be scheduled on node template-node-for-aks-system-22409415-vmss-4338128316479634658-1. Ignoring in scale up.
I0523 00:56:27.127108 1 filter_out_schedulable.go:170] 18 pods were kept as unschedulable based on caching
I0523 00:56:27.127118 1 filter_out_schedulable.go:171] 2 pods marked as unschedulable can be scheduled.
I0523 00:56:27.127128 1 filter_out_schedulable.go:79] Schedulable pods present
I0523 00:56:27.127144 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-hb5km is unschedulable
I0523 00:56:27.127148 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-4x9vq is unschedulable
I0523 00:56:27.127152 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-l4hst is unschedulable
I0523 00:56:27.127157 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-vqzzq is unschedulable
I0523 00:56:27.127161 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-kbrqr is unschedulable
I0523 00:56:27.127165 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-zfqt9 is unschedulable
I0523 00:56:27.127168 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-xwfnk is unschedulable
I0523 00:56:27.127172 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-zl2xd is unschedulable
I0523 00:56:27.127176 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-gzgkj is unschedulable
I0523 00:56:27.127179 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-9wkx5 is unschedulable
I0523 00:56:27.127184 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-9lqz8 is unschedulable
I0523 00:56:27.127187 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-z7fwq is unschedulable
I0523 00:56:27.127191 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-vmpkp is unschedulable
I0523 00:56:27.127194 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-bzt7q is unschedulable
I0523 00:56:27.127199 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-gx8qc is unschedulable
I0523 00:56:27.127202 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-vvjhq is unschedulable
I0523 00:56:27.127206 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-nbxhg is unschedulable
I0523 00:56:27.127210 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-sr22x is unschedulable
I0523 00:56:27.127214 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-fzfrj is unschedulable
I0523 00:56:27.127221 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:27.127248 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:27.127264 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:27.127269 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:27.127293 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:27.127307 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:27.127312 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:27.127335 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:27.127349 1 azure_cache.go:152] FindForInstance: found asg aks-system-22409415-vmss in cache
I0523 00:56:27.127357 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:27.127380 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:27.127394 1 azure_cache.go:152] FindForInstance: found asg aks-sc3-22409415-vmss in cache
I0523 00:56:27.127420 1 scale_up.go:364] Upcoming 11 nodes
I0523 00:56:27.128271 1 scale_up.go:400] Skipping node group aks-sc1-22409415-vmss - max size reached
I0523 00:56:27.130123 1 scale_up.go:288] Pod scale-test-6cc69f4cf-hb5km can't be scheduled on aks-sc3-22409415-vmss, predicate checking error: node(s) didn't match pod topology spread constraints; predicateName=PodTopologySpread; reasons: node(s) didn't match pod topology spread constraints; debugInfo=
I0523 00:56:27.130143 1 scale_up.go:290] 18 other pods similar to scale-test-6cc69f4cf-hb5km can't be scheduled on aks-sc3-22409415-vmss
I0523 00:56:27.130169 1 scale_up.go:437] No pod can fit to aks-sc3-22409415-vmss
I0523 00:56:27.130196 1 scale_up.go:400] Skipping node group aks-system-22409415-vmss - max size reached
I0523 00:56:27.130207 1 waste.go:57] Expanding Node Group aks-sc2-22409415-vmss would waste 73.61% CPU, 100.00% Memory, 86.81% Blended
I0523 00:56:27.130227 1 scale_up.go:456] Best option to resize: aks-sc2-22409415-vmss
I0523 00:56:27.130236 1 scale_up.go:460] Estimated 18 nodes needed in aks-sc2-22409415-vmss
I0523 00:56:27.130251 1 balancing_processor.go:111] Requested scale-up (18) exceeds node group set capacity, capping to 9
I0523 00:56:27.130268 1 scale_up.go:574] Final scale-up plan: [{aks-sc2-22409415-vmss 1->10 (max: 10)}]
I0523 00:56:27.130282 1 scale_up.go:663] Scale-up: setting group aks-sc2-22409415-vmss size to 10
I0523 00:56:27.130306 1 azure_scale_set.go:309] Waiting for virtualMachineScaleSetsClient.CreateOrUpdateAsync(aks-sc2-22409415-vmss)
I0523 00:56:28.813733 1 eventing_scale_up_processor.go:47] Skipping event processing for unschedulable pods since there is a ScaleUp attempt this loop
I0523 00:56:28.813808 1 azure_scale_set.go:272] Calling virtualMachineScaleSetsClient.WaitForAsyncOperationResult - updateVMSSCapacity(aks-sc2-22409415-vmss)
I0523 00:56:28.824080 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:28.824140 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:28.824167 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:28.824175 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:28.824207 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:28.824231 1 azure_cache.go:152] FindForInstance: found asg aks-sc3-22409415-vmss in cache
I0523 00:56:28.824242 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:28.824280 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:28.824300 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:38.824502 1 static_autoscaler.go:229] Starting main loop
I0523 00:56:38.897413 1 azure_scale_set.go:548] Nodes: starts, scaleSet.Name: aks-sc1-22409415-vmss
I0523 00:56:38.897433 1 azure_scale_set.go:560] Nodes: returns with curSize 10
I0523 00:56:38.897442 1 azure_scale_set.go:548] Nodes: starts, scaleSet.Name: aks-sc2-22409415-vmss
I0523 00:56:38.897445 1 azure_scale_set.go:564] Nodes: starts to get VMSS VMs
I0523 00:56:38.897462 1 azure_scale_set.go:355] GetScaleSetVms: starts
I0523 00:56:38.987241 1 azure_scale_set.go:361] GetScaleSetVms: scaleSet.Name: aks-sc2-22409415-vmss, vmList: [{{} 0xc0013fa3b0 0xc000ca1680 0xc000982980 0xc000ca0e00 0xc000ca16c0 0xc000b9fcf0 0xc000b9ff40 0xc0013fa3a0 0xc000b9fd00 map[app:0xc000b9fd10 cluster-autoscaler-enabled:0xc000b9fd30 cluster-name:0xc000b9fd50 createOperationID:0xc000b9fd70 creationSource:0xc000b9fd90 k8s.io_cluster-autoscaler_node-template_label_class:0xc000b9fdb0 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc000b9fdd0 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc000b9fdf0 max:0xc000b9fe10 min:0xc000b9fe30 module:0xc000b9fe50 orchestrator:0xc000b9fe70 poolName:0xc000b9fe90 resourceNameSuffix:0xc000b9feb0 terraform:0xc000b9fed0 version:0xc000b9fef0]} {{} 0xc000fed940 0xc00095fb20 0xc0007e6200 0xc00095f600 0xc00095fde0 0xc000fed990 0xc000fed980 0xc000fed9a0 0xc000fed9b0 map[app:0xc000fed9c0 cluster-autoscaler-enabled:0xc000fed9e0 cluster-name:0xc000feda10 createOperationID:0xc000feda30 creationSource:0xc000feda50 k8s.io_cluster-autoscaler_node-template_label_class:0xc000feda70 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc000feda90 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc000fedab0 max:0xc000fedad0 min:0xc000fedaf0 module:0xc000fedb10 orchestrator:0xc000fedb30 poolName:0xc000fedb50 resourceNameSuffix:0xc000fedb70 terraform:0xc000fedb90 version:0xc000fedbb0]} {{} 0xc000fedea0 0xc000866f20 0xc0007e6480 0xc000866c00 0xc000866ee0 0xc000f1f980 0xc000f1f970 0xc000fedc80 0xc000f1f990 map[app:0xc000fedc90 cluster-autoscaler-enabled:0xc000fedcb0 cluster-name:0xc000fedce0 createOperationID:0xc000fedd00 creationSource:0xc000fedd20 k8s.io_cluster-autoscaler_node-template_label_class:0xc000fedd40 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc000fedd60 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc000fedd80 max:0xc000fedda0 min:0xc000feddc0 module:0xc000fedde0 orchestrator:0xc000fede00 poolName:0xc000fede20 resourceNameSuffix:0xc000fede40 terraform:0xc000fede60 version:0xc000fede80]} {{} 0xc000f1fa90 0xc000dee480 0xc0007e6880 0xc0008673a0 0xc000dee220 0xc000f1fa80 0xc000b517b0 0xc000b517c0 0xc000b51800 map[app:0xc000b51810 cluster-autoscaler-enabled:0xc000b51840 cluster-name:0xc000b51860 createOperationID:0xc000b51880 creationSource:0xc000b518a0 k8s.io_cluster-autoscaler_node-template_label_class:0xc000b518c0 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc000b518e0 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc000b51900 max:0xc000b51970 min:0xc000b519b0 module:0xc000b519d0 orchestrator:0xc000b51a00 poolName:0xc000b51a20 resourceNameSuffix:0xc000b51a40 terraform:0xc000b51a60 version:0xc000b51a80]} {{} 0xc000b51ea0 0xc000dee9a0 0xc0007e6b80 0xc000b86c80 0xc000b86c40 0xc000b51c20 0xc000f7a7f0 0xc000f7a890 0xc000f7a8a0 map[app:0xc000b51c30 cluster-autoscaler-enabled:0xc000b51c50 cluster-name:0xc000b51c70 createOperationID:0xc000b51c90 creationSource:0xc000b51cb0 k8s.io_cluster-autoscaler_node-template_label_class:0xc000b51cf0 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc000b51d50 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc000b51d80 max:0xc000b51da0 min:0xc000b51dc0 module:0xc000b51de0 orchestrator:0xc000b51e00 poolName:0xc000b51e20 resourceNameSuffix:0xc000b51e40 terraform:0xc000b51e60 version:0xc000b51e80]} {{} 
0xc000f7abd0 0xc000b87360 0xc0007e7900 0xc0008a1940 0xc000b87320 0xc000f7abb0 0xc000be0a10 0xc000be0a20 0xc000be0a40 map[app:0xc000be0a50 cluster-autoscaler-enabled:0xc000be0a80 cluster-name:0xc000be0ab0 createOperationID:0xc000be0ad0 creationSource:0xc000be0af0 k8s.io_cluster-autoscaler_node-template_label_class:0xc000be0b10 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc000be0b30 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc000be0b50 max:0xc000be0b70 min:0xc000be0bb0 module:0xc000be0bd0 orchestrator:0xc000be0bf0 poolName:0xc000be0c10 resourceNameSuffix:0xc000be0c30 terraform:0xc000be0c50 version:0xc000be0c70]} {{} 0xc000b6b140 0xc000c341a0 0xc0007e7c80 0xc0007777a0 0xc000777c00 0xc000b6aee0 0xc000be0dd0 0xc000b6aef0 0xc000be0de0 map[app:0xc000b6af00 cluster-autoscaler-enabled:0xc000b6af20 cluster-name:0xc000b6af40 createOperationID:0xc000b6af60 creationSource:0xc000b6af80 k8s.io_cluster-autoscaler_node-template_label_class:0xc000b6afa0 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc000b6afd0 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc000b6b000 max:0xc000b6b030 min:0xc000b6b060 module:0xc000b6b080 orchestrator:0xc000b6b0a0 poolName:0xc000b6b0c0 resourceNameSuffix:0xc000b6b0e0 terraform:0xc000b6b100 version:0xc000b6b120]} {{} 0xc000b6b230 0xc000c34d40 0xc000a7a200 0xc0015caf40 0xc0015cae60 0xc000c98e50 0xc000c98e40 0xc000c98e60 0xc000c98e70 map[app:0xc000c98e80 cluster-autoscaler-enabled:0xc000c98ea0 cluster-name:0xc000c98ec0 createOperationID:0xc000c98ee0 creationSource:0xc000c98f00 k8s.io_cluster-autoscaler_node-template_label_class:0xc000c98f20 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc000c98f40 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc000c98f60 max:0xc000c98f80 min:0xc000c98fa0 module:0xc000c98fc0 orchestrator:0xc000c98fe0 poolName:0xc000c99000 resourceNameSuffix:0xc000c99020 terraform:0xc000c99040 version:0xc000c99060]} {{} 0xc000c572a0 0xc00120aa00 0xc000a7b100 0xc00120a140 0xc00120a100 0xc000c57290 0xc000c57010 0xc000c57020 0xc000c57030 map[app:0xc000c57040 cluster-autoscaler-enabled:0xc000c57060 cluster-name:0xc000c57080 createOperationID:0xc000c570a0 creationSource:0xc000c570c0 k8s.io_cluster-autoscaler_node-template_label_class:0xc000c570e0 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc000c57100 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc000c57120 max:0xc000c57150 min:0xc000c57170 module:0xc000c571a0 orchestrator:0xc000c571e0 poolName:0xc000c57210 resourceNameSuffix:0xc000c57230 terraform:0xc000c57250 version:0xc000c57270]} {{} 0xc000c57a00 0xc00120b1a0 0xc000a7b400 0xc00120b220 0xc00120b1c0 0xc000c579a0 0xc000c57980 0xc000c579c0 0xc000c579e0 map[app:0xc000c57440 cluster-autoscaler-enabled:0xc000c57470 cluster-name:0xc000c574a0 createOperationID:0xc000c574e0 creationSource:0xc000c57510 k8s.io_cluster-autoscaler_node-template_label_class:0xc000c57530 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_region:0xc000c57550 k8s.io_cluster-autoscaler_node-template_label_topology.kubernetes.io_zone:0xc000c57570 max:0xc000c57590 min:0xc000c575b0 module:0xc000c575d0 orchestrator:0xc000c575f0 poolName:0xc000c57620 resourceNameSuffix:0xc000c57640 terraform:0xc000c57660 version:0xc000c57680]}]
I0523 00:56:38.987565 1 azure_scale_set.go:581] Nodes: returns
I0523 00:56:38.987583 1 azure_scale_set.go:548] Nodes: starts, scaleSet.Name: aks-sc3-22409415-vmss
I0523 00:56:38.987589 1 azure_scale_set.go:560] Nodes: returns with curSize 1
I0523 00:56:38.987595 1 azure_scale_set.go:548] Nodes: starts, scaleSet.Name: aks-system-22409415-vmss
I0523 00:56:38.987618 1 azure_scale_set.go:560] Nodes: returns with curSize 3
I0523 00:56:38.987626 1 azure_cache.go:82] Invalidating unowned instance cache
I0523 00:56:38.987639 1 azure_manager.go:173] Refreshed ASG list, next refresh after 2021-05-23 00:57:38.987633713 +0000 UTC m=+1211.330830441
I0523 00:56:38.987995 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:38.988036 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:38.988071 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:38.988273 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:38.988312 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:38.988347 1 azure_cache.go:152] FindForInstance: found asg aks-system-22409415-vmss in cache
I0523 00:56:38.988398 1 taints.go:71] Removing rescheduler taint when creating template
I0523 00:56:38.988517 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:38.988555 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:38.988588 1 azure_cache.go:152] FindForInstance: found asg aks-sc3-22409415-vmss in cache
I0523 00:56:38.988752 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:38.988787 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:38.988836 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:38.989087 1 azure_scale_set.go:548] Nodes: starts, scaleSet.Name: aks-system-22409415-vmss
I0523 00:56:38.989103 1 azure_scale_set.go:560] Nodes: returns with curSize 3
I0523 00:56:38.989120 1 azure_scale_set.go:548] Nodes: starts, scaleSet.Name: aks-sc1-22409415-vmss
I0523 00:56:38.989131 1 azure_scale_set.go:548] Nodes: starts, scaleSet.Name: aks-sc2-22409415-vmss
I0523 00:56:38.989137 1 azure_scale_set.go:560] Nodes: returns with curSize 10
I0523 00:56:38.989145 1 azure_scale_set.go:560] Nodes: returns with curSize 10
I0523 00:56:38.989210 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:38.989248 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:38.989289 1 azure_cache.go:152] FindForInstance: found asg aks-system-22409415-vmss in cache
I0523 00:56:38.989304 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:38.989336 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:38.989370 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:38.989381 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/7
I0523 00:56:38.989413 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/7
I0523 00:56:38.989449 1 azure_cache.go:155] FindForInstance: Couldn't find NodeGroup of instance {"azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/7"}
I0523 00:56:38.989468 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/6
I0523 00:56:38.989500 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/6
I0523 00:56:38.989532 1 azure_cache.go:155] FindForInstance: Couldn't find NodeGroup of instance {"azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/6"}
I0523 00:56:38.989546 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:38.989576 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:38.989606 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:38.989614 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:38.989643 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:38.989674 1 azure_cache.go:152] FindForInstance: found asg aks-sc3-22409415-vmss in cache
I0523 00:56:38.989684 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/13
I0523 00:56:38.989716 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/13
I0523 00:56:38.989738 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:38.989745 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/15
I0523 00:56:38.989781 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/15
I0523 00:56:38.989797 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:38.989802 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/16
I0523 00:56:38.989834 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/16
I0523 00:56:38.989860 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:38.989868 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/18
I0523 00:56:38.989900 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/18
I0523 00:56:38.989931 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:38.989938 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/8
I0523 00:56:38.989969 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/8
I0523 00:56:38.989989 1 azure_cache.go:152] FindForInstance: found asg aks-system-22409415-vmss in cache
I0523 00:56:38.989995 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/11
I0523 00:56:38.990026 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/11
I0523 00:56:38.990042 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:38.990054 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/12
I0523 00:56:38.990086 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/12
I0523 00:56:38.990108 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:38.990114 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/9
I0523 00:56:38.990145 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/9
I0523 00:56:38.990171 1 azure_cache.go:152] FindForInstance: found asg aks-system-22409415-vmss in cache
I0523 00:56:38.990178 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/14
I0523 00:56:38.990208 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/14
I0523 00:56:38.990231 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:38.990240 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/17
I0523 00:56:38.990258 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/17
I0523 00:56:38.990283 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:38.990294 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/10
I0523 00:56:38.990324 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/10
I0523 00:56:38.990354 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:38.990360 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/14
I0523 00:56:38.990392 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/14
I0523 00:56:38.990407 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:38.990411 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/17
I0523 00:56:38.990436 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/17
I0523 00:56:38.990461 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:38.990467 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/19
I0523 00:56:38.990496 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/19
I0523 00:56:38.990533 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:38.990540 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/15
I0523 00:56:38.990566 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/15
I0523 00:56:38.990582 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:38.990589 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/16
I0523 00:56:38.990616 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/16
I0523 00:56:38.990643 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:38.990655 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/18
I0523 00:56:38.990683 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/18
I0523 00:56:38.990714 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:38.990726 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/11
I0523 00:56:38.990757 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/11
I0523 00:56:38.990781 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:38.990787 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/12
I0523 00:56:38.990818 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/12
I0523 00:56:38.990839 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:38.990845 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/13
I0523 00:56:38.990876 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/13
I0523 00:56:38.990891 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:38.990944 1 static_autoscaler.go:320] 20 unregistered nodes present
I0523 00:56:38.992242 1 filter_out_schedulable.go:65] Filtering out schedulables
I0523 00:56:38.992270 1 filter_out_schedulable.go:132] Filtered out 0 pods using hints
I0523 00:56:38.992406 1 filter_out_schedulable.go:157] Pod autoscalers.scale-test-6cc69f4cf-n6qtx marked as unschedulable can be scheduled on node template-node-for-aks-sc2-22409415-vmss-381114526762976423-8. Ignoring in scale up.
I0523 00:56:38.992571 1 filter_out_schedulable.go:157] Pod autoscalers.scale-test-6cc69f4cf-hb5km marked as unschedulable can be scheduled on node template-node-for-aks-sc1-22409415-vmss-4973045475441444873-1. Ignoring in scale up.
I0523 00:56:38.992727 1 filter_out_schedulable.go:157] Pod autoscalers.scale-test-6cc69f4cf-9mftf marked as unschedulable can be scheduled on node template-node-for-aks-system-22409415-vmss-6330885697262780955-0. Ignoring in scale up.
I0523 00:56:38.992811 1 filter_out_schedulable.go:157] Pod autoscalers.scale-test-6cc69f4cf-kbrqr marked as unschedulable can be scheduled on node template-node-for-aks-system-22409415-vmss-6330885697262780955-1. Ignoring in scale up.
I0523 00:56:38.992909 1 filter_out_schedulable.go:157] Pod autoscalers.scale-test-6cc69f4cf-zfqt9 marked as unschedulable can be scheduled on node template-node-for-aks-sc2-22409415-vmss-381114526762976423-2. Ignoring in scale up.
I0523 00:56:38.992985 1 filter_out_schedulable.go:157] Pod autoscalers.scale-test-6cc69f4cf-4x9vq marked as unschedulable can be scheduled on node template-node-for-aks-sc1-22409415-vmss-4973045475441444873-5. Ignoring in scale up.
I0523 00:56:38.993044 1 filter_out_schedulable.go:157] Pod autoscalers.scale-test-6cc69f4cf-l4hst marked as unschedulable can be scheduled on node template-node-for-aks-sc2-22409415-vmss-381114526762976423-4. Ignoring in scale up.
I0523 00:56:38.993122 1 filter_out_schedulable.go:157] Pod autoscalers.scale-test-6cc69f4cf-vqzzq marked as unschedulable can be scheduled on node template-node-for-aks-sc1-22409415-vmss-4973045475441444873-6. Ignoring in scale up.
I0523 00:56:38.993976 1 filter_out_schedulable.go:170] 12 pods were kept as unschedulable based on caching
I0523 00:56:38.993990 1 filter_out_schedulable.go:171] 8 pods marked as unschedulable can be scheduled.
I0523 00:56:38.994000 1 filter_out_schedulable.go:79] Schedulable pods present
I0523 00:56:38.994029 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-9lqz8 is unschedulable
I0523 00:56:38.994038 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-z7fwq is unschedulable
I0523 00:56:38.994043 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-xwfnk is unschedulable
I0523 00:56:38.994048 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-zl2xd is unschedulable
I0523 00:56:38.994053 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-gzgkj is unschedulable
I0523 00:56:38.994060 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-9wkx5 is unschedulable
I0523 00:56:38.994070 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-sr22x is unschedulable
I0523 00:56:38.994076 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-fzfrj is unschedulable
I0523 00:56:38.994085 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-vmpkp is unschedulable
I0523 00:56:38.994091 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-bzt7q is unschedulable
I0523 00:56:38.994096 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-gx8qc is unschedulable
I0523 00:56:38.994100 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-vvjhq is unschedulable
I0523 00:56:38.994106 1 klogx.go:86] Pod autoscalers/scale-test-6cc69f4cf-nbxhg is unschedulable
I0523 00:56:38.994115 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:38.994150 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc1-22409415-vmss/virtualMachines/10
I0523 00:56:38.994178 1 azure_cache.go:152] FindForInstance: found asg aks-sc1-22409415-vmss in cache
I0523 00:56:38.994186 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:38.994218 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-system-22409415-vmss/virtualMachines/4
I0523 00:56:38.994249 1 azure_cache.go:152] FindForInstance: found asg aks-system-22409415-vmss in cache
I0523 00:56:38.994254 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:38.994281 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc3-22409415-vmss/virtualMachines/11
I0523 00:56:38.994313 1 azure_cache.go:152] FindForInstance: found asg aks-sc3-22409415-vmss in cache
I0523 00:56:38.994319 1 azure_cache.go:117] FindForInstance: starts, ref: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:38.994348 1 azure_cache.go:119] FindForInstance: resourceID: azure:///subscriptions/11077a2c-19d7-4cfa-bdb9-491da501c62c/resourceGroups/polaris-infra-nodes/providers/Microsoft.Compute/virtualMachineScaleSets/aks-sc2-22409415-vmss/virtualMachines/0
I0523 00:56:38.994373 1 azure_cache.go:152] FindForInstance: found asg aks-sc2-22409415-vmss in cache
I0523 00:56:38.994410 1 scale_up.go:364] Upcoming 20 nodes
I0523 00:56:38.995052 1 scale_up.go:400] Skipping node group aks-sc1-22409415-vmss - max size reached
I0523 00:56:38.995080 1 scale_up.go:400] Skipping node group aks-sc2-22409415-vmss - max size reached
I0523 00:56:38.996584 1 scale_up.go:400] Skipping node group aks-system-22409415-vmss - max size reached
I0523 00:56:38.996603 1 waste.go:57] Expanding Node Group aks-sc3-22409415-vmss would waste 72.92% CPU, 100.00% Memory, 86.46% Blended
I0523 00:56:38.996617 1 scale_up.go:456] Best option to resize: aks-sc3-22409415-vmss
I0523 00:56:38.996627 1 scale_up.go:460] Estimated 12 nodes needed in aks-sc3-22409415-vmss
I0523 00:56:38.996647 1 balancing_processor.go:111] Requested scale-up (12) exceeds node group set capacity, capping to 9
I0523 00:56:38.996667 1 scale_up.go:574] Final scale-up plan: [{aks-sc3-22409415-vmss 1->10 (max: 10)}]
I0523 00:56:38.996686 1 scale_up.go:663] Scale-up: setting group aks-sc3-22409415-vmss size to 10
I0523 00:56:38.996707 1 azure_scale_set.go:309] Waiting for virtualMachineScaleSetsClient.CreateOrUpdateAsync(aks-sc3-22409415-vmss)
I0523 00:56:40.230606 1 eventing_scale_up_processor.go:47] Skipping event processing for unschedulable pods since there is a ScaleUp attempt this loop

Ultimately, this results in dozens of unneeded nodes being spun up. Depending on your scale-down configuration, it can take a LONG time to spin those nodes back down and reach the correct allocation, adding a lot of disruption in the process.

Anything else we need to know?:

I understand that these pod interdependency constraints are not ideal for the cluster-autoscaler, whose core scale-up algorithm focuses on a single node group at a time. However, I expect the cluster-autoscaler to be conservative in its scaling assumptions (e.g., one node at a time) rather than jumping straight to (# of unschedulable pods) x (# of node groups) new nodes.

I think this is a "bug" for the following reasons:

(1) While the expected behavior for these situations isn't formally documented, it is called out here (#3721 (comment)) and matches my assumptions.

(2) As it stands, this is a massive footgun that renders any required pod interdependencies a non-starter for clusters with highly variable load. Without going through the source code, I don't know how anyone would expect the current behavior.

(3) The more conservative approach is a better default, especially since overprovisioning is easily attainable if you need something more immediate.

(4) Most of the scaled-up nodes are unnecessary and will cause serious pod disruption during scale-down (not to mention the cost). For this reason, I can't see why anyone would be depending on the current behavior.

Fortunately, I think I've identified the root cause, and I'd like to propose a couple of fixes for feedback from those with more experience in the codebase.

Most importantly, this is really an issue with the BinpackingNodeEstimator, which makes an incorrect assumption in this loop:

for _, podInfo := range podInfos {

Specifically, if the predicateChecker.FitsAnyNodeMatching call fails, the binpacker assumes that adding a new node will automatically make the pod schedulable. This assumption is incorrect in the context of required pod interdependencies.
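Roughly, the loop in question looks like this (my paraphrase, heavily simplified; isNewSimulatedNode stands in for the matching closure and is not an upstream identifier):

for _, podInfo := range podInfos {
    // Try to fit the pod on one of the nodes added earlier in this simulation.
    nodeName, err := estimator.predicateChecker.FitsAnyNodeMatching(
        estimator.clusterSnapshot, podInfo.pod, isNewSimulatedNode)
    if err != nil {
        // No fit: add one more template node and *assume* the pod lands on it.
        newNodeName, err := estimator.addNewNodeToSnapshot(nodeTemplate, newNodeNameIndex)
        if err != nil {
            break
        }
        newNodeNameIndex++
        nodeName = newNodeName // never verified with CheckPredicates
    }
    // The pod is committed to the snapshot on a node it may not actually fit.
    if err := estimator.clusterSnapshot.AddPod(podInfo.pod, nodeName); err != nil {
        break
    }
    newNodeNames[nodeName] = true
}

For a pod with a required zonal spread constraint, the FitsAnyNodeMatching failure is often caused by the constraint itself rather than by a lack of capacity, so the freshly added node does not actually help.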

Proposal 1

After adding a new node to the snapshot

newNodeName, err := estimator.addNewNodeToSnapshot(nodeTemplate, newNodeNameIndex)

the estimator should run predicateChecker.CheckPredicates to ensure that the pod can actually be scheduled on this new node.

If no, you could do one of two things:

  • break the loop to return len(newNodeNames) (before adding the unnecessary node to the map)
  • revert the cluster snapshot to its state before the node was added and skip this pod; other pods later in the list may still be schedulable

I think this fix would solve the problems I observed, because the binpacker would no longer count new nodes that the pods cannot actually be scheduled on.
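For concreteness, here is a rough sketch of the change (pseudocode against my reading of the current estimator; error handling and the exact placement of the check would need maintainer review):

newNodeName, err := estimator.addNewNodeToSnapshot(nodeTemplate, newNodeNameIndex)
if err != nil {
    return len(newNodeNames)
}
// Proposed addition: verify the pod actually fits the brand-new empty node.
if err := estimator.predicateChecker.CheckPredicates(
    estimator.clusterSnapshot, podInfo.pod, newNodeName); err != nil {
    // Even an empty node from this group cannot host the pod (e.g. a zonal
    // topology spread is already at maxSkew), so adding more nodes to this
    // group cannot help. Option A: stop estimating and return what we have.
    return len(newNodeNames)
    // Option B: instead of returning, revert the snapshot to drop the node
    // and continue with the next pod.
}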

Proposal 2

The predicateChecker.FitsAnyNodeMatching call should be replaced with predicateChecker.FitsAnyNode. Once pod interdependency constraints are taken into account, there is no "correctness" reason to avoid trying to schedule across the whole cluster, and doing so is in fact closer to the behavior that will be observed after scale-up.

Now, I can definitely see a "performance" reason to short-circuit there. However, I haven't done any timings of this computation loop, though I imagine it has improved since the switch to the scheduler framework for predicate checking in v1.18.

If there is a performance constraint for large clusters, I'd like to propose adding flags to configure this behavior, similar to the --scale-down-candidates-pool-ratio option. This would greatly help in finding efficient fits for pod interdependency constraints (it obviously won't be perfect in every scenario, but at least it will be much better than it is now).


If these are accepted, I'm happy to whip up PRs, but I look forward to your feedback first.

@fullykubed fullykubed added the kind/bug Categorizes issue or PR as related to a bug. label May 24, 2021
@MaciekPytel
Contributor

Hi, can you confirm this was tested in a version that contains #3883? Your description seems to match that issue pretty well. In that issue the symptoms were huge over-scale-up when using antiaffinity or topology spreading with hostname topology (which is what your example uses).

Replying to the issues mentioned:

(1) The binpacking estimator estimates each plausible node group on a one-pod-per-node basis. In my example, this results in about 10-11 extra nodes in the returned estimates, resulting in dramatic over-provisioning for that node group.

The workload you linked uses podTopologySpreading with hostname topology, meaning only one such pod can schedule on a node. So the one-pod-per-node result of binpacking seems correct to me.

(2) Even worse, when the next scale up loop runs, those extra nodes don't actually provide a place for many pods to run due to the scheduling constraints. In other words, they remain in the unschedulable pod list even after running filterOutSchedulablePodListProc. Thus the scheduler repeats the overprovisioning from step (1) for EACH plausible node group.

This is exactly the issue that was fixed in #3883.

Regarding the proposals:

  1. We check that each pod would fit an empty node belonging to a given nodegroup before running binpacking (here). It's possible this is not enough when using zonal antiaffinity or topology spread, but I'd like concrete proof that this is indeed the case before we make changes. Performance-wise, binpacking is the most sensitive part of CA, and I'd like to make sure any change is well understood and well tested.
  2. I assume you mean using FitsAnyNode here? If so, I don't see how this would help. Each binpacking simulation is for one nodegroup only (and we assume a nodegroup to be zonal), and we reset ClusterSnapshot back to its original state after the simulation is done. We already know the pods can't fit on any nodes that were in ClusterSnapshot before we started binpacking (those pods would have been removed from the unschedulable list in the filterOutSchedulable step), and we only add nodes from one nodegroup during binpacking. I don't see any scenario in which a pod could fit a node from outside the nodegroup being binpacked. And the performance impact of this change would be crippling - in fact, we started with FitsAnyNode and had to switch away from it because of scalability issues.

@fullykubed
Author

fullykubed commented May 24, 2021

Mea culpa. I was running so many tests that I posted the wrong manifest. You are absolutely correct that builds including #3883 work according to expectations, but only for kubernetes.io/hostname topology constraints.

While I am glad that this common use case is now covered, all other topology keys still exhibit the behavior described above. In fact, the logs I posted were from a run testing topology.kubernetes.io/zone spread. Here is an updated manifest that I have confirmed is still broken even on 1.21.0.

Correctly Broken Manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: autoscalers
  labels:
    scaletest: "true"
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  selector:
    matchLabels:
      scaletest: "true"
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        scaletest: "true"
      namespace: autoscalers
    spec:
      containers:
      - image: paulbouwer/hello-kubernetes:1
        name: test
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 500m
      topologySpreadConstraints:
      - labelSelector:
          matchLabels:
            scaletest: "true"
        maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: DoNotSchedule

Let's focus on proposal 1 first, since that embodies what I would consider the "correctness" problem, while proposal 2 is focused more on efficient binpacking (from a pod placement perspective).

Here is my understanding of how the algorithm works and how it is broken:

(1) A sample pod from each pod equivalence group is checked to see if it can be placed on an example node template from each node group. If yes, that node group becomes a plausible option.

if err := context.PredicateChecker.CheckPredicates(context.ClusterSnapshot, samplePod, nodeInfo.Node().Name); err == nil {

(2) Plausible node groups get an estimate of the # of nodes that should be added to fit the pods.

option.NodeCount = estimator.Estimate(option.Pods, nodeInfo)

(3) Inside the binpacking estimator, the estimator runs a scheduling simulation with the cluster snapshot.

In the simulation, if the pod fits on any of the nodes that were added during this simulation, it is placed there.

nodeName, err := estimator.predicateChecker.FitsAnyNodeMatching(estimator.clusterSnapshot, podInfo.pod, func(nodeInfo *schedulerframework.NodeInfo) bool {

If not, a new node is added and, without any predicate checking, the pod is simply assumed to fit on it.

newNodeName, err := estimator.addNewNodeToSnapshot(nodeTemplate, newNodeNameIndex)


The real problem is that the new-node assumption from (3) is incorrect and results in pods being placed, in the simulation, on nodes where the scheduler could never actually place them.

I think the clearest example is a pod with a required topology.kubernetes.io/zone constraint (expressed either through pod anti-affinity or topology spread). Consider a well-balanced cluster across three (3) zones whose resource requirements call for ten (10) new nodes, running pods with the constraints in the manifest above. Following the source code, the estimator will conclude that all ten (10) required nodes should immediately be placed in a single node group, even though the majority of the pods will remain unschedulable. This snowballs into ten (10) new nodes for each node group, because those ten (10) new nodes in a single zone don't actually allow many pods to be scheduled after the initial scale-up run.
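To make the skew arithmetic concrete, here is a small, self-contained illustration (hypothetical numbers, not autoscaler code): starting from one pod per zone with maxSkew: 1 and whenUnsatisfiable: DoNotSchedule, only one additional pod can land in a zone until the other zones catch up, no matter how many nodes are added to that zone.

package main

import "fmt"

func main() {
	// One pod already running in each zone; many replicas still pending.
	counts := map[string]int{"sc1": 1, "sc2": 1, "sc3": 1}
	maxSkew := 1
	placed := 0
	for i := 0; i < 30; i++ { // arbitrary number of pending pods
		min := counts["sc1"]
		for _, c := range counts {
			if c < min {
				min = c
			}
		}
		// DoNotSchedule: a pod may land in sc3 only if the resulting skew
		// stays within maxSkew. Node capacity is not the limit here.
		if counts["sc3"]+1-min > maxSkew {
			break
		}
		counts["sc3"]++
		placed++
	}
	fmt.Printf("pods schedulable in sc3 alone: %d\n", placed) // prints 1
}

So even if the binpacker adds ten nodes to a single zone's node group, the scheduler will place exactly one more pod there before the constraint blocks the rest.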

This results in the behavior described in the initial post (and the logs are an example of this scenario, not kubernetes.io/hostname as I mistakenly said). I confirmed that this behavior exists in 1.21.0 by running it on my cluster and following the source code.

Based on your previous comments, I think we both agree that the expected behavior is for the autoscaler to generally try to add nodes that it is reasonably sure (via the predicate checker) pods can be scheduled on, even if that means the scale-up must take place over many cluster-autoscaler scale-up loops.

Regarding performance, my proposal would definitely increase the computation, but it only checks the single new node, so it scales with the number of pods being scheduled rather than with the size of the cluster. After testing multiple huge scaling events, adding just a single additional predicate check per pod per node group (in the worst case) appears to be within acceptable performance timings (see the timings in the logs from the original post). Even better, if the pod is unschedulable, breaking out of the estimator loop at this point actually saves work.

Moreover, it avoids the long and tedious scale-down process, which I'd argue is a bigger performance concern because the scale-down computation does scale with the size of the cluster. Not to mention that it simply takes a long time with the out-of-the-box configuration.

Thanks for the quick response @MaciekPytel. Looking forward to your thoughts!

@alexcristi

Hi @MaciekPytel! We've been experiencing the same behavior as @jclangst described when using the topology.kubernetes.io/zone spread constraint. Have you considered addressing this issue, too?

@fullykubed
Author

@MaciekPytel Just wanted to follow up to see whether you are the right person for us to be corresponding with.

@fullykubed
Author

@MaciekPytel Apologies for pinging you again. I was really hoping that the fix could be integrated in the 1.22 release, but it looks like the initial one was already cut last week. This issue is definitely affecting several teams and the impact makes the autoscaler quasi-unusable on clusters with many node groups and fairly common scheduling constraints, adding dozens of unnecessary nodes even on small scaling events. I'd really like the next release to contain the proposed fix so that we can remove our custom patches. I am willing to do any required legwork including working with you to ensure correctness and performance. Can you please advise on next steps?

@fullykubed
Author

fullykubed commented Aug 25, 2021

Never mind. It appears the conversation has moved here: #4129. Another user has gone ahead and implemented the patch from proposal 1 almost exactly as we did, and you all discussed it at the last meeting. Looking forward to getting this integrated!

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 14, 2021
@pierluigilenoci
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 14, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 14, 2022
@pierluigilenoci
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 14, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 12, 2022
@pierluigilenoci
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 13, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 11, 2022
@pierluigilenoci
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 12, 2022
@fullykubed
Author

I believe this was resolved by #4970

@pierluigilenoci
Contributor

@jclangst if you confirm that the problem is solved, this issue can be closed.

@fullykubed
Author

I can confirm there are now options that at least give those running into this issue the ability to choose a suitable solution.
