
CSI: ensure initial unpublish state is checkpointed #14675

Merged
tgross merged 1 commit into main from csi-flaky on Sep 27, 2022
Conversation

tgross (Member) commented Sep 23, 2022

A test flake revealed a bug in the CSI unpublish workflow, where an unpublish
that comes from a client that's successfully done the node-unpublish step will
not have the claim checkpointed if the controller-unpublish step fails. This
will result in a delay in releasing the volume claim until the next GC.

This changeset also ensures we're using a new snapshot after each write to raft,
and fixes two timing issues in the test where either the volume watcher can
unpublish before the unpublish RPC is sent or we don't wait long enough in
resource-restricted environments like GHA.


Two notes for reviewers:

  • This is somewhat related to #14484 (CSI: failed allocation should not block its own controller unpublish) but is really a distinct bug, so I've given it its own changelog entry and it'll need backports.
  • This doesn't fix the longstanding issue of Nomad thumbing its nose at the concurrency requirements of the CSI spec, inasmuch as the volumewatcher can kick off after we've checkpointed (which doesn't actually result in bad behavior so long as plugins are mostly spec-compliant themselves, but...). I've got some thoughts about fixing that, but it's not going to land in 1.4.0.
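
To make the ordering concrete, here is a minimal Go sketch of the fix as described above: checkpoint the claim as soon as the node-unpublish step has succeeded, then take a fresh state snapshot after that raft write before attempting controller-unpublish. Every name in it (`StateStore`, `ControllerClient`, `unpublish`, the claim-state constants) is a hypothetical stand-in, not Nomad's actual API.

```go
// Hypothetical sketch of the unpublish ordering; not Nomad's real code.
package csi

import "fmt"

type ClaimState int

const (
	ClaimStateNodeDetached ClaimState = iota
	ClaimStateReleased
)

type Claim struct {
	VolumeID string
	AllocID  string
	State    ClaimState
}

type Volume struct {
	ID                 string
	ControllerRequired bool
}

// Snapshot is a read-only view of server state at a point in time.
type Snapshot interface {
	VolumeByID(id string) (*Volume, error)
}

// StateStore stands in for the raft-backed state store.
type StateStore interface {
	CheckpointClaim(c *Claim) error // write the claim through raft
	Snapshot() (Snapshot, error)    // take a fresh read snapshot
}

// ControllerClient stands in for the CSI controller plugin RPC client.
type ControllerClient interface {
	ControllerUnpublish(volID, allocID string) error
}

// unpublish assumes the client has already completed node-unpublish.
func unpublish(store StateStore, ctrl ControllerClient, claim *Claim) error {
	// Checkpoint the node-detached state first. If the controller step
	// below fails, this progress is already durable and a later pass can
	// resume from here instead of waiting for the next GC.
	claim.State = ClaimStateNodeDetached
	if err := store.CheckpointClaim(claim); err != nil {
		return fmt.Errorf("checkpointing node-detached claim: %w", err)
	}

	// Take a new snapshot after the raft write rather than reusing one
	// captured earlier, so the controller step reads current claim data.
	snap, err := store.Snapshot()
	if err != nil {
		return err
	}
	vol, err := snap.VolumeByID(claim.VolumeID)
	if err != nil {
		return err
	}

	if vol != nil && vol.ControllerRequired {
		if err := ctrl.ControllerUnpublish(claim.VolumeID, claim.AllocID); err != nil {
			return fmt.Errorf("controller unpublish: %w", err)
		}
	}

	// Only after both steps succeed is the claim fully released.
	claim.State = ClaimStateReleased
	return store.CheckpointClaim(claim)
}
```

Checkpointing the intermediate state up front is what avoids the failure mode described above, where a controller-unpublish error would otherwise leave the node-side progress unrecorded and delay releasing the claim until the next GC.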

@tgross tgross changed the title CSI: ensure we're using new snapshots after checkpoint CSI: fix test flake in TestCSIVolumeEndpoint_Unpublish Sep 23, 2022
@tgross tgross changed the title CSI: fix test flake in TestCSIVolumeEndpoint_Unpublish CSI: ensure initial unpublish state is checkpointed Sep 26, 2022
@tgross tgross added this to the 1.4.x milestone Sep 26, 2022
A test flake revealed a bug in the CSI unpublish workflow, where an unpublish
that comes from a client that's successfully done the node-unpublish step will
not have the claim checkpointed if the controller-unpublish step fails. This
will result in a delay in releasing the volume claim until the next GC.

This changeset also ensures we're using a new snapshot after each write to raft,
and fixes two timing issues in the test where either the volume watcher can
unpublish before the unpublish RPC is sent or we don't wait long enough in
resource-restricted environments like GHA.
@tgross tgross added the backport/1.1.x, backport/1.2.x, and backport/1.3.x labels (backport to the 1.1.x, 1.2.x, and 1.3.x release lines) Sep 27, 2022
@tgross tgross modified the milestones: 1.4.x, 1.4.0 Sep 27, 2022
@tgross tgross marked this pull request as ready for review September 27, 2022 11:46
DerekStrickland (Contributor) left a comment

LGTM

@tgross tgross merged commit b4cd9af into main Sep 27, 2022
@tgross tgross deleted the csi-flaky branch September 27, 2022 12:43
tgross added a commit that referenced this pull request Sep 27, 2022
tgross added a commit that referenced this pull request Sep 27, 2022
tgross added a commit that referenced this pull request Sep 27, 2022
tgross added a commit that referenced this pull request Sep 27, 2022
@github-actions

I'm going to lock this pull request because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active contributions.
If you have found a problem that seems related to this change, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Jan 26, 2023
Labels
backport/1.1.x backport to 1.1.x release line · backport/1.2.x backport to 1.2.x release line · backport/1.3.x backport to 1.3.x release line · theme/storage · type/bug

2 participants