Don't treat lack of queried node in cache as error (cherry-pick to 1.16) #2373

Merged

k8s-ci-robot merged 1 commit into kubernetes:cluster-autoscaler-release-1.16 from losipiuk:lo/cherry-pick-2357-1-16 on Sep 23, 2019
Conversation
Revert "Return error if aws fail to find nodegroup for node" This reverts commit f65e47a. The cluster autoscaler periodically looks for associations between Kubernetes node objects and cloud provider-known records for instances in node groups managed by the autoscaler. When the autoscaler considers a Kubernetes node that is not a member of any of these managed node groups, the cloud provider has no such record in its cache. When some callers assumed that these queries would yield either a node or an error, an earlier fix reported this lack of a cached record as an error. However, callers responding to such an error caused the autoscaler to forgo most of its usual pod maintenance responsibilities. At this point, these callers now expect that the query can fail, can yield a record, or can succeed but without yielding any record. It's that last case that arises when querying a Kubernetes node that's not meant to be managed by the autoscaler. Restoring this three-outcome behavior to the (*awsCloudProvider).NodeGroupForNode function allows the autoscaler to ignore such unmanaged nodes and carry on with the rest of its intended behavior.
k8s-ci-robot added the cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA.) and size/S (Denotes a PR that changes 10-29 lines, ignoring generated files.) labels on Sep 23, 2019
/assign @krzysztof-jastrzebski
losipiuk changed the title from "Don't treat lack of queried node in cache as error" to "Don't treat lack of queried node in cache as error (cherry-pick to 1.16)" on Sep 23, 2019
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: losipiuk

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
k8s-ci-robot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) on Sep 23, 2019
/lgtm
k8s-ci-robot added the lgtm label ("Looks good to me", indicates that a PR is ready to be merged.) on Sep 23, 2019
/lgtm
Revert "Return error if aws fail to find nodegroup for node"
This reverts commit f65e47a.
The cluster autoscaler periodically looks for associations between
Kubernetes node objects and cloud provider-known records for instances
in node groups managed by the autoscaler. When the autoscaler
considers a Kubernetes node that is not a member of any of these
managed node groups, the cloud provider has no such record in its
cache. When some callers assumed that these queries would yield either
a node or an error, an earlier fix reported this lack of a cached
record as an error. However, callers responding to such an error
caused the autoscaler to forgo most of its usual pod maintenance
responsibilities.
At this point, these callers now expect that the query can fail, can
yield a record, or can succeed but without yielding any record. It's
that last case that arises when querying a Kubernetes node that's not
meant to be managed by the autoscaler. Restoring this three-outcome
behavior to the (*awsCloudProvider).NodeGroupForNode function allows
the autoscaler to ignore such unmanaged nodes and carry on with the
rest of its intended behavior.