
Don't treat lack of queried node in cache as error (cherry-pick to 1.16) #2373

Conversation

losipiuk
Contributor

Revert "Return error if aws fail to find nodegroup for node"
This reverts commit f65e47a.

The cluster autoscaler periodically looks for associations between
Kubernetes node objects and cloud provider-known records for instances
in node groups managed by the autoscaler. When the autoscaler
considers a Kubernetes node that is not a member of any of these
managed node groups, the cloud provider has no such record in its
cache. Because some callers assumed that these queries would yield either
a node group or an error, an earlier fix reported this lack of a cached
record as an error. However, callers responding to such an error
caused the autoscaler to forgo most of its usual pod maintenance
responsibilities.

At this point, these callers now expect that the query can fail, can
yield a record, or can succeed but without yielding any record. It's
that last case that arises when querying a Kubernetes node that's not
meant to be managed by the autoscaler. Restoring this three-outcome
behavior to the (*awsCloudProvider).NodeGroupForNode function allows
the autoscaler to ignore such unmanaged nodes and carry on with the
rest of its intended behavior.
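The three-outcome contract described above can be sketched in Go. This is a hypothetical, simplified illustration of the idea, not the actual autoscaler code: the `NodeGroup` type, the cache map, and the instance IDs are all invented for the example. The key point is that a cache miss returns `(nil, nil)` rather than an error, so callers can distinguish "unmanaged node" from a genuine failure.

```go
package main

import "fmt"

// NodeGroup is a hypothetical stand-in for a cloud provider node group.
type NodeGroup struct{ Name string }

// nodeGroupForNode sketches the restored contract: it may return an error,
// a node group, or (nil, nil) when the instance is not in any managed
// node group. The cache map and instance IDs here are illustrative only.
func nodeGroupForNode(cache map[string]*NodeGroup, instanceID string) (*NodeGroup, error) {
	if instanceID == "" {
		return nil, fmt.Errorf("empty instance ID")
	}
	ng, ok := cache[instanceID]
	if !ok {
		// Not an error: the node simply isn't managed by the autoscaler.
		return nil, nil
	}
	return ng, nil
}

func main() {
	cache := map[string]*NodeGroup{"i-abc123": {Name: "managed-asg"}}

	// Managed node: yields a node group.
	ng, err := nodeGroupForNode(cache, "i-abc123")
	fmt.Println(ng.Name, err)

	// Unmanaged node: no group and no error; the caller skips it.
	ng, err = nodeGroupForNode(cache, "i-unmanaged")
	fmt.Println(ng, err)
}
```

Callers then treat `ng == nil && err == nil` as "ignore this node and continue", which is what lets the autoscaler carry on with its other responsibilities instead of aborting the loop.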

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Sep 23, 2019
@losipiuk
Contributor Author

/assign @krzysztof-jastrzebski

@losipiuk
Contributor Author

cc: @Jeffwan @seh

@losipiuk losipiuk changed the title Don't treat lack of queried node in cache as error Don't treat lack of queried node in cache as error (cherry-pick to 1.16) Sep 23, 2019
@losipiuk
Contributor Author

/approve

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: losipiuk

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 23, 2019
@krzysztof-jastrzebski
Contributor

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Sep 23, 2019
@k8s-ci-robot k8s-ci-robot merged commit f44388a into kubernetes:cluster-autoscaler-release-1.16 Sep 23, 2019