add custom metric to hpa template #5704
Conversation
Thanks for your pull request. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). 📝 Please follow instructions at https://git.k8s.io/community/CLA.md#the-contributor-license-agreement to sign the CLA. It may take a couple minutes for the CLA signature to be fully registered; after that, please reply here with a new comment and we'll verify. Thanks.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Hi @alexandrsemak. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I signed it
charts/ingress-nginx/values.yaml
@@ -577,4 +588,4 @@ tcp: {}
# Ref: https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/udp
##
udp: {}
# 53: "kube-system/kube-dns:53"
# 53: "kube-system/kube-dns:53"
please don't remove the newline
/ok-to-test
charts/ingress-nginx/values.yaml
@@ -226,10 +226,10 @@ controller:
resources:
# limits:
# cpu: 100m
# memory: 90Mi
# memory: 200Mi
why are we changing these values?
@ChiefAlexander The low memory value triggered the HPA even without any load, so if you install the chart with 90Mi memory, the HPA will scale up to maxReplicas.
I see. As we don't default the HPA to enabled I'd rather we leave this alone and instead make a note of that in our Charts Readme. What do you think?
I don't mind. I will revert this value.
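For context on why the 90Mi value caused runaway scaling: the HPA computes memory utilization as current usage divided by the pod's resource request, so a request close to the controller's idle footprint keeps utilization above any reasonable target. A rough values.yaml sketch, assuming the chart's `controller.autoscaling` keys; the target percentage and replica counts here are illustrative, not the chart's defaults:

```yaml
controller:
  resources:
    requests:
      memory: 90Mi   # the controller's idle usage is already close to this
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 11
    # Illustrative target. HPA utilization = usage / request, so e.g.
    # 80Mi usage / 90Mi request ≈ 89% > 50% even with zero traffic,
    # and the HPA keeps adding replicas until it reaches maxReplicas.
    targetMemoryUtilizationPercentage: 50
```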
/assign @ChiefAlexander
This is looking good. Please squash your commits. https://github.com/kubernetes/community/blob/master/contributors/guide/pull-requests.md#6-squashing
charts/ingress-nginx/Chart.yaml
@@ -1,6 +1,6 @@
apiVersion: v1
name: ingress-nginx
version: 2.4.0
version: 2.5.0
Suggested change:
- version: 2.5.0
+ version: 2.6.0
Another change got in before yours. Please bump the chart version and squash the commit. After that this lgtm
done
Sorry about this. Yet another change was merged in before yours again. Gonna need to bump the minor version and rebase your changes.
do you mean chart version 2.6.1?
gotcha, done
/retest
@ChiefAlexander could you please resolve the conflict?
Your PR needs a rebase against the changes in origin.
@alexandrsemak: The following tests failed, say /retest to rerun all failed tests.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@ChiefAlexander done
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: alexandrsemak, ChiefAlexander. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
yeah, no problem, this is much better than supporting my own chart
# - type: Pods
#   pods:
#     metric:
#       name: nginx_ingress_controller_nginx_process_requests_total
@alexandrsemak nginx_ingress_controller_nginx_process_requests_total is a counter, which IMO makes it an inappropriate metric for the HPA: besides being reset whenever the container is restarted and the pod cycled, its rate may drop when additional replicas are added. Maybe nginx_ingress_controller_nginx_process_connections would be more appropriate. WDYT?
@sslavic sorry for the confusion; that metric was provided just as an example. The main goal was to add the ability to use any custom metric. BTW, Kubernetes uses timestamps to calculate averageValue, so even with a counter metric it would work correctly. But you're right, we can change it to nginx_ingress_controller_nginx_process_connections, or wrap nginx_ingress_controller_nginx_process_requests_total in rate().
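For reference, a Pods-type custom metric like the one under discussion renders into an autoscaling/v2beta2 HorizontalPodAutoscaler entry along these lines. This is a sketch only: the metric name follows the suggestion above, the averageValue target is an illustrative assumption, and serving per-pod Prometheus metrics through the custom metrics API requires an adapter such as prometheus-adapter:

```yaml
spec:
  metrics:
    - type: Pods
      pods:
        metric:
          name: nginx_ingress_controller_nginx_process_connections
        target:
          type: AverageValue
          # Illustrative target: scale out when the average number of
          # connections per controller pod exceeds 1000.
          averageValue: "1000"
```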
What this PR does / why we need it:
Types of changes
Which issue/s this PR fixes
Low resources request triggered HPA scaling even without any load
How Has This Been Tested?
Tested on K8s 1.16.10
Checklist: