Speed up admission hook by eliminating deep copy of Ingresses in CheckIngress #7298
Conversation
Hi @cgorbit. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test` on its own line. Once the patch is verified, the new status will be reflected by the `ok-to-test` label.

I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
thanks!
/lgtm
/assign @rikatz
Hey @cgorbit and @tao12345666333, reflecting a bit here. Trying to get some context: I've seen in the past that parts of the k8s codebase also moved to the approach of deep copying a structure before modifying it. I'm not sure if this is due to some safety concern (multiple threads holding pointers to the same object, for example), but I'll need to reflect better on this. I want some thoughts on the safety implications of removing the deep copy in favor of accessing this pointer directly :)
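For context, the thread-safety worry can be illustrated with a minimal sketch. The `Ingress` struct and `DeepCopy` helper below are illustrative stand-ins, not the real k8s API types: deep-copying before mutation guarantees a goroutine never mutates an object that other readers still hold a pointer to.

```go
package main

import (
	"fmt"
	"sync"
)

// Ingress is a hypothetical stand-in for an API object;
// only one field is shown for brevity.
type Ingress struct {
	PathType string
}

// DeepCopy mirrors the spirit of the generated DeepCopy()
// helpers on k8s API types: return an independent copy.
func (i *Ingress) DeepCopy() *Ingress {
	c := *i
	return &c
}

func main() {
	shared := &Ingress{PathType: "Prefix"}

	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		// Mutate a private copy, so the mutation cannot be
		// observed by (or race with) readers of `shared`.
		local := shared.DeepCopy()
		local.PathType = "Exact"
	}()
	wg.Wait()

	fmt.Println(shared.PathType) // original is untouched: "Prefix"
}
```

Dropping the deep copy is only safe if the caller never mutates the object, or no other goroutine can see it, which is exactly the question being raised here.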
Let's hold it first, and then cancel the hold once we reach a consensus.
/hold
So, I don't see why we can't create more locations in the same way.
@cgorbit I took some time to properly read the code; here's my concern.

In the loop at https://github.com/kubernetes/ingress-nginx/blob/master/internal/ingress/controller/location.go#L47, all location verifications are "skipped" once some condition is met (the loop continues to the next item), but exactly here something different happens: because this is a pointer, you might end up adding the same pointer twice, and when you then try to change only the pathType information you can end up turning it into something invalid.

In the end, what is going to happen is: you have a location that you set as pathPrefix and it's skipped here to the next set of instructions; then this same location is added here and duplicated as pathTypeExact here. With your patch, instead of being duplicated, that pathTypePrefix just turns into a pathTypeExact, as you are changing the object behind the pointer :)

Maybe some tests about the behavior of a mixed Exact/Prefix location would be good to confirm that this won't break anything :)
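The aliasing concern described above can be sketched in plain Go. The `Location` struct is a simplified hypothetical stand-in for the controller's location type, not the real one:

```go
package main

import "fmt"

// Location is a hypothetical, simplified stand-in; only
// PathType matters for this sketch.
type Location struct {
	Path     string
	PathType string
}

func main() {
	loc := &Location{Path: "/foo", PathType: "Prefix"}

	// Appending the same pointer twice aliases both slice entries.
	locations := []*Location{loc, loc}

	// "Changing only the pathType" through one entry...
	locations[1].PathType = "Exact"

	// ...silently changes the other entry too, because both
	// entries point at the same struct.
	fmt.Println(locations[0].PathType) // prints "Exact", not "Prefix"
}
```

This is the failure mode being asked about: with shared pointers, a mutation intended for one location can rewrite another.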
I don't change the pointer. I don't add the same pointer twice to First I do a shallow copy of
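The shallow-copy-before-mutation pattern being described can be sketched as follows (again with a hypothetical `Location` struct, not the controller's real type):

```go
package main

import "fmt"

// Location is a hypothetical, simplified stand-in.
type Location struct {
	Path     string
	PathType string
}

func main() {
	orig := &Location{Path: "/foo", PathType: "Prefix"}

	// Shallow copy: copy the struct value, then take the address
	// of the copy. `orig` and `&copied` are now distinct objects.
	copied := *orig
	copied.PathType = "Exact"

	// Two distinct pointers, so mutating one cannot affect the other.
	locations := []*Location{orig, &copied}
	fmt.Println(locations[0].PathType, locations[1].PathType) // Prefix Exact
}
```

A shallow copy is enough here as long as the mutated fields are values (like a string pathType) rather than shared maps or slices.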
@cgorbit tested here, thanks for the patience :)
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: cgorbit, rikatz, tao12345666333

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/hold cancel
…kIngress (#7298) (#7333) Co-authored-by: Kirill Trofimenkov <[email protected]>
On big k8s clusters with a lot of Ingresses, the admission webhook may be too slow.
For example, in a real development k8s cluster at work we have an NGINX ingress controller whose generated configuration contains 120 server sections (hosts) and 5271 location sections (endpoints), about 19 MB of nginx text config.
With a config that big, the admission hook for each Ingress update becomes very slow: it currently takes about 7 seconds on the setup described above (k8s on AWS EC2).
Sometimes, when somebody installs a Helm chart containing many Ingresses, the admission hook must be evaluated many times in a row, and we get an error from the API server about the admission hook timing out (after 30s).
This PR decreases the timing on the setup described above by about 3 seconds.
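The cost being removed grows with the number of locations per server. A rough, hypothetical micro-benchmark sketch (these are not the real ingress-nginx types; the field set and copy logic are assumptions for illustration) of what per-admission-call deep copying looks like:

```go
package main

import (
	"fmt"
	"time"
)

// Location is a hypothetical stand-in; real location structs
// carry many more fields, making copies correspondingly costlier.
type Location struct {
	Path, PathType string
	Annotations    map[string]string
}

// deepCopy clones the struct and its map, the way generated
// DeepCopy helpers recursively clone reference fields.
func deepCopy(l *Location) *Location {
	c := *l
	c.Annotations = make(map[string]string, len(l.Annotations))
	for k, v := range l.Annotations {
		c.Annotations[k] = v
	}
	return &c
}

func main() {
	// 5271 matches the location count from the cluster described above.
	locs := make([]*Location, 5271)
	for i := range locs {
		locs[i] = &Location{
			Path:        "/p",
			PathType:    "Prefix",
			Annotations: map[string]string{"a": "b", "c": "d"},
		}
	}

	start := time.Now()
	for _, l := range locs {
		_ = deepCopy(l) // work avoided by the patch
	}
	fmt.Println("deep copy of all locations took", time.Since(start))
}
```

With the real, much larger structs this per-call copying is part of the multi-second latency the PR reports shaving off.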
Types of changes
Which issue/s this PR fixes
fixes #7297
How Has This Been Tested?
e2e test: https://gist.github.com/cgorbit/431086b0e78f1a8b75cd497235ae8d51
Also tested in runtime of developer's k8s cluster at work.
Checklist: