Summary
API deprecation errors are shown in the autoscaler logs due to usage of the old beta APIs. They include v1beta1.PodDisruptionBudget and v1beta1.CSIStorageCapacity.
Edit: SOLVED! See #21 (comment)
Issue Type
Bug Report
Terraform Version
Steps to Reproduce
To test that the autoscaler works, I launched a 300-replica nginx deployment using the following YAML:
cat > nginx-example-autoscale.yml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 300
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF
# DEPLOY
kubectl apply -f nginx-example-autoscale.yml
# CHECK IF SCALING UP WORKS; IT DOES
watch -n1 kubectl top node
# REMOVE DEPLOY AND WAIT A COUPLE MINUTES
kubectl delete -f nginx-example-autoscale.yml
# CHECK IF SCALING DOWN WORKS; IT DOES
watch -n1 kubectl top node
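In addition to kubectl top node, watching the node list itself also makes the scale-up and scale-down visible (an extra check, not part of the original steps):
# WATCH THE NODE COUNT CHANGE
watch -n1 kubectl get nodes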
On an EKS cluster with Kubernetes version 1.28, if you tail the logs of the autoscaler pod, you will notice the errors, which I have listed in the Actual Results section of this report. There is an open PR on the Kubernetes repo with a workaround.
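For reference, one way to tail those logs is shown below; the namespace and label selector are assumptions that depend on how the chart was installed, so adjust them to your release.
# TAIL THE AUTOSCALER LOGS (namespace/label are assumptions, adjust to your install)
kubectl -n kube-system logs -f -l app.kubernetes.io/name=aws-cluster-autoscaler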
Expected Results
According to Kubernetes, the policy/v1beta1 API is deprecated and has not been served since 1.25.
The policy/v1 API should be used instead, which involves an if-else block in the Helm template (sketched below).
There should be no API deprecation error messages.
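For illustration only, the kind of conditional meant above might look roughly like this. It is a minimal sketch rather than the chart's actual template: the resource name and label are placeholders, and checking a full group/version/kind with .Capabilities.APIVersions.Has requires Helm 3.2+.
{{- if .Capabilities.APIVersions.Has "policy/v1/PodDisruptionBudget" }}
apiVersion: policy/v1
{{- else }}
apiVersion: policy/v1beta1
{{- end }}
kind: PodDisruptionBudget
metadata:
  name: cluster-autoscaler        # placeholder name
spec:
  maxUnavailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: cluster-autoscaler   # placeholder label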
Actual Results
# API deprecation errors are shown for PodDisruptionBudget and CSIStorageCapacity:
1 reflector.go:138] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: the server could not find the requested resource
# More verbose logs below
I1115 15:28:21.807496 1 static_autoscaler.go:230] Starting main loop
I1115 15:28:21.808073 1 filter_out_schedulable.go:65] Filtering out schedulables
I1115 15:28:21.808088 1 filter_out_schedulable.go:132] Filtered out 0 pods using hints
I1115 15:28:21.808093 1 filter_out_schedulable.go:170] 0 pods were kept as unschedulable based on caching
I1115 15:28:21.808096 1 filter_out_schedulable.go:171] 0 pods marked as unschedulable can be scheduled.
I1115 15:28:21.808101 1 filter_out_schedulable.go:82] No schedulable pods
I1115 15:28:21.808110 1 static_autoscaler.go:419] No unschedulable pods
I1115 15:28:21.808122 1 static_autoscaler.go:466] Calculating unneeded nodes
I1115 15:28:21.808133 1 pre_filtering_processor.go:66] Skipping ip-10-10-3-226.ec2.internal - node group min size reached
I1115 15:28:21.808148 1 scale_down.go:509] Scale-down calculation: ignoring 2 nodes unremovable in the last 5m0s
I1115 15:28:21.808176 1 static_autoscaler.go:520] Scale down status: unneededOnly=false lastScaleUpTime=2023-11-14 19:43:36.461349334 +0000 UTC m=+404.222749662 lastScaleDownDeleteTime=2023-11-14 19:50:18.631108429 +0000 UTC m=+806.392508757 lastScaleDownFailTime=2023-11-14 18:37:14.739858175 +0000 UTC m=-3577.498741491 scaleDownForbidden=false isDeleteInProgress=false scaleDownInCooldown=false
I1115 15:28:21.808204 1 static_autoscaler.go:533] Starting scale down
I1115 15:28:21.808238 1 scale_down.go:918] No candidates for scale down
I1115 15:28:26.825479 1 reflector.go:255] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309
W1115 15:28:26.843868 1 reflector.go:324] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
E1115 15:28:26.843889 1 reflector.go:138] k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:309: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: the server could not find the requested resource
I1115 15:28:31.822565 1 static_autoscaler.go:230] Starting main loop
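As a quick cross-check (not part of the original logs), you can ask the API server which policy versions it actually serves; on a 1.28 cluster only policy/v1 should be listed, which is why the v1beta1 list/watch calls above fail.
# LIST SERVED policy API VERSIONS; policy/v1beta1 is absent on 1.25+
kubectl api-versions | grep '^policy/'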
Hello @dudeitssm, did you try using an underlying Helm chart version compatible with EKS 1.28? You can supply this using the helm_chart_version variable and set it to 9.34.1. Please let me know if this works for you.
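For illustration, assuming the module exposes helm_chart_version as an input (the module name and source below are placeholders, not the actual module), pinning the chart could look roughly like this:
module "cluster_autoscaler" {
  source = "<module-source>"      # placeholder for the module this issue was filed against

  helm_chart_version = "9.34.1"   # chart version compatible with EKS 1.28, per the comment above
  # ...other inputs unchanged...
}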
On May 28, 2024, dudeitssm changed the title from "bug: deprecated API errors in autoscaler logs when module is used with AWS EKS v1.28" to "[SOLVED] deprecated API errors in autoscaler logs when module is used with AWS EKS v1.28".