chore(vpa): update dependencies #7551
base: master
Conversation
Hi @davidspek. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Force-pushed from 8d10f0a to 00a106e
/remove-kind feature Thanks for the PR. Can you also bump the VPA's dependencies? This only updates the e2e tests' dependencies.
@adrianmoisey There are too many changes for GitHub to display them in the PR, but if you look at https://github.com/davidspek/autoscaler/tree/feat/update-vpa-k8s/vertical-pod-autoscaler you can see the dependencies were updated for the regular VPA as well.
Wow! GitHub breaks in such wonderful ways when the commit is large.
So far this looks good to me. I am getting some failures on the e2e tests locally, but I think those aren't related. I'll spend some more time looking into them to make sure.
Great, thanks!
There are two small linting issues:
(If you run this locally you may see more linting issues, but a PR was merged into master to take care of those; the ones I pasted above are the output after I merged master into your branch locally.)
/lgtm e2e tests passed on my laptop. Something to note: not all the dependencies have been upgraded. I don't mind doing those in a later PR, unless you want to do them now.
I can do them here. Which ones did I miss?
I see I missed
Oh, I thought I'd scope this to just the k8s dependencies. Should I change it to all?
/lgtm
@@ -236,7 +236,7 @@ func setupSuite() {
	// #41007. To avoid those pods preventing the whole test runs (and just
	// wasting the whole run), we allow for some not-ready pods (with the
	// number equal to the number of allowed not-ready nodes).
	if err := e2epod.WaitForPodsRunningReady(context.TODO(), c, metav1.NamespaceSystem, int32(framework.TestContext.MinStartupPods), int32(framework.TestContext.AllowedNotReadyNodes), podStartupTimeout); err != nil {
Looks like we are losing framework.TestContext.AllowedNotReadyNodes here. I haven't looked yet, but do we know why? Is it OK?
Found the original PR that removed this: kubernetes/kubernetes#124205
So this looks generally harmless? I would love to validate that framework.TestContext.AllowedNotReadyNodes was always 0 in our case as well.
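To make the behaviour change concrete: the parameter being dropped let the wait succeed even when a bounded number of system pods were not ready. A minimal, self-contained sketch of that tolerance check (podsReadyEnough is a hypothetical helper, not the framework's actual implementation):

```go
package main

import "fmt"

// podsReadyEnough sketches the tolerance the old WaitForPodsRunningReady
// signature expressed: the wait is satisfied as long as no more than
// allowedNotReady pods are still not running/ready.
func podsReadyEnough(total, ready, allowedNotReady int) bool {
	return total-ready <= allowedNotReady
}

func main() {
	// Old behaviour: one flaky system pod could be tolerated.
	fmt.Println(podsReadyEnough(10, 9, 1)) // true
	// With the parameter gone (effectively allowedNotReady = 0),
	// the same cluster state now blocks the wait.
	fmt.Println(podsReadyEnough(10, 9, 0)) // false
}
```

If AllowedNotReadyNodes was always 0 in these e2e runs, as suggested above, the two branches coincide and the removal is a no-op for us.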
For what it's worth, I've run the e2e tests locally and everything passes
/lgtm The delta in changes here is crazy! Over 500k fewer lines of code! Two super quick things:
My understanding is that this partially affects things: it means our tests with this VPA version will likely only work on later Kubernetes versions. We'll have to be more careful about updating the compatibility table when we make the same changes in the main VPA code.
GitHub breaks on this change; it's both the VPA's dependencies and the e2e tests' dependencies.
I've been testing it locally using Kubernetes 1.26.3 and it works. What I don't know is what promises client-go makes regarding backwards compatibility, and how we determine when to update the compatibility table in the VPA's docs.
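On the versioning question: client-go's published convention is that k8s.io/client-go v0.X.y corresponds to Kubernetes 1.X.y, and the k8s.io/api, k8s.io/apimachinery, and k8s.io/client-go modules are bumped in lockstep. A hypothetical go.mod fragment illustrating that lockstep pinning (the version numbers are placeholders, not the ones in this PR):

```
module k8s.io/autoscaler/vertical-pod-autoscaler

require (
	k8s.io/api v0.31.0          // tracks Kubernetes 1.31.x
	k8s.io/apimachinery v0.31.0 // must match the other k8s.io modules
	k8s.io/client-go v0.31.0
)
```

The documented compatibility table answer would then follow from which server versions that client-go minor is tested against, rather than from the dependency bump itself.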
@raywainman This PR also updates the dependencies for the main VPA, not just the e2e tests. In terms of general compatibility, the cluster-autoscaler is already on
With regard to all the comments that the diff is huge: we should remove the vendor directory.
What about:
It may make review much easier.
I created an issue for the vendor dir removal: #7570
Signed-off-by: David van der Spek <[email protected]>
Force-pushed from 5a9cc12 to e5fcd0b
New changes were detected; the LGTM label has been removed.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: davidspek. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Is it worth removing this and scoping the PR to dependencies only?
Sure
Force-pushed from 79d273e to e5fcd0b
Signed-off-by: David van der Spek <[email protected]>
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
This PR updates the dependency package versions for the vertical pod autoscaler.
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: