
OutOfSync for HPA due to unsorted YAML keys #7846

Open
knusperkrone opened this issue Dec 3, 2021 · 9 comments
Labels
bug Something isn't working more-information-needed Further information is requested

Comments

@knusperkrone

Describe the bug

When adding an HPA with memory and CPU limits, the Argo CD application appears OutOfSync.

This is because the live manifest and the desired manifest are not sorted consistently.

To Reproduce

Applying an HPA template like:

metrics:
  {{ if $root.Values.resources.hpa.targetCPUUtilizationPercentage }}
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: {{ $root.Values.resources.hpa.targetCPUUtilizationPercentage }}
  {{ end }}
  {{ if $root.Values.resources.hpa.targetMemoryUtilizationPercentage }}
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: {{ $root.Values.resources.hpa.targetMemoryUtilizationPercentage }}
  {{ end }}

Expected behavior

Argo CD shouldn't mark such applications OutOfSync.

Screenshots

Screenshot from 2021-12-03 10-57-26

Version

Argo CD
v2.1.7+a408e29

@knusperkrone knusperkrone added the bug Something isn't working label Dec 3, 2021
@mikebryant
Contributor

Related upstream issue: kubernetes/kubernetes#74099

@fernandesnikhil

Ran into the same issue. The workaround mentioned in #1079 (comment) under item 1 does not work for us: while we have some control over the ordering of cpu and memory in the metrics list, we allow any number of custom metrics to be appended to the metrics list in the HPA spec via helm values.

@dmitry-mightydevops

I got the same issue with Karpenter.

@ricardojdsilva87

ricardojdsilva87 commented Jun 22, 2023

Hello everyone,
I have the exact same problem in different HPAs, for example on the nginx controller:
kubernetes/ingress-nginx#10038

It seems that somehow Argo CD is ordering the metrics alphabetically in the generated manifest, and that causes the differences, because the official helm chart is written in a different order:
https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-hpa.yaml
On the cluster, the HPA manifest is applied as it is in the helm chart.

This means that Argo CD is ordering the metrics alphabetically for comparison when it should follow the helm chart manifest.
I think this should be fixed in the Argo CD code, because we might have hundreds of external helm charts, each using whatever ordering it wants.

Thank you!

@mikebryant
Contributor

mikebryant commented Jun 22, 2023

This isn't Argo CD doing it - Argo CD is comparing the order in the chart to the one the cluster reports, and seeing a diff. When it changes the cluster to match the chart, something in the cluster (the HPA controller, the apiserver, etc.) is reordering them.

The component changing the ordering is the one that needs to be fixed - see kubernetes/kubernetes#74099

@mikebryant
Contributor

To demonstrate, without using ArgoCD at all:

[sbx|default] ➜  ~ cat /tmp/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 50
[sbx|default] ➜  ~ k apply -f /tmp/hpa.yaml
horizontalpodautoscaler.autoscaling/php-apache created
[sbx|default] ➜  ~ k get hpa php-apache -o yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"autoscaling/v2","kind":"HorizontalPodAutoscaler","metadata":{"annotations":{},"name":"php-apache","namespace":"default"},"spec":{"maxReplicas":10,"metrics":[{"resource":{"name":"cpu","target":{"averageUtilization":50,"type":"Utilization"}},"type":"Resource"},{"resource":{"name":"memory","target":{"averageUtilization":50,"type":"Utilization"}},"type":"Resource"}],"minReplicas":1,"scaleTargetRef":{"apiVersion":"apps/v1","kind":"Deployment","name":"php-apache"}}}
  creationTimestamp: "2023-06-22T14:36:54Z"
  name: php-apache
  namespace: default
  resourceVersion: "802102959"
  uid: 6412a1ef-b454-468f-ac3e-8bc9f460c67b
spec:
  maxReplicas: 10
  metrics:
  - resource:
      name: memory
      target:
        averageUtilization: 50
        type: Utilization
    type: Resource
  - resource:
      name: cpu
      target:
        averageUtilization: 50
        type: Utilization
    type: Resource
  minReplicas: 1
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
status:
  currentMetrics: null
  desiredReplicas: 0

When we check it after apply, the metrics come back in a different order from the one I asked for.

@ricardojdsilva87

Thanks for the explanation @mikebryant.
It seems that for now we would have to ignore these fields in Argo CD until Kubernetes fixes it on their side.
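For reference, field-level ignoring is done with an `ignoreDifferences` entry on the Argo CD Application. A minimal sketch (note the caveat: this masks all diffs under `spec.metrics`, not just reordering, so genuine metric changes would also stop showing as OutOfSync):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app        # example name
spec:
  # ... source/destination omitted ...
  ignoreDifferences:
  - group: autoscaling
    kind: HorizontalPodAutoscaler
    jsonPointers:
    - /spec/metrics
```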

@mikebryant
Contributor

@ricardojdsilva87 What version of Kubernetes are you on?
Per kubernetes/kubernetes#74099 (comment), I think this may be fixed from 1.27.0 onwards.

@andrii-korotkov-verkada
Contributor

Is this still an issue in newer versions?
If so, we can add a normalizer similar to https://github.com/argoproj/gitops-engine/blob/88c35a9acf6991acc67d6da5111de8c628b4a0be/pkg/diff/diff.go#L883-L917
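A normalizer along those lines would sort the `spec.metrics` list into a canonical order on both the live and desired manifests before diffing, so that reordering by the apiserver no longer produces a diff. A minimal sketch in Python (the sort key and helper names are illustrative, not the gitops-engine implementation; field access follows the `autoscaling/v2` schema):

```python
# Sketch of an order-insensitive normalizer for HPA metrics.
# Not the gitops-engine code; it only illustrates the idea of
# sorting spec.metrics canonically before comparing manifests.

def metric_sort_key(metric: dict) -> tuple:
    """Order metrics by type, then by resource name (if present)."""
    return (
        metric.get("type", ""),
        metric.get("resource", {}).get("name", ""),
    )

def normalize_hpa(manifest: dict) -> dict:
    """Return a copy of an HPA manifest with spec.metrics sorted."""
    out = {**manifest, "spec": {**manifest.get("spec", {})}}
    metrics = out["spec"].get("metrics")
    if isinstance(metrics, list):
        out["spec"]["metrics"] = sorted(metrics, key=metric_sort_key)
    return out

# The desired manifest lists cpu before memory; the cluster
# reports them reordered. After normalization they compare equal.
desired = {"spec": {"metrics": [
    {"type": "Resource", "resource": {"name": "cpu"}},
    {"type": "Resource", "resource": {"name": "memory"}},
]}}
live = {"spec": {"metrics": [
    {"type": "Resource", "resource": {"name": "memory"}},
    {"type": "Resource", "resource": {"name": "cpu"}},
]}}

print(normalize_hpa(desired) == normalize_hpa(live))  # True
```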

@andrii-korotkov-verkada andrii-korotkov-verkada added the more-information-needed Further information is requested label Nov 25, 2024