
[Backport release-1.29] Applier manager improvements #5303

Merged
merged 6 commits into release-1.29 from backport-5172-to-release-1.29 on Dec 10, 2024

Conversation


@k0s-bot commented Nov 29, 2024

Automated backport to release-1.29, triggered by a label in #5172.
See #5171, #5062, #5122.

The map is only ever used in the loop to create and remove stacks, so it
doesn't need to be stored in the struct. This ensures that there can't
be any racy concurrent accesses to it.

Signed-off-by: Tom Wieczorek <[email protected]>
(cherry picked from commit ba547ed)
(cherry picked from commit c6dba06)
(cherry picked from commit 05a768b)
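
For illustration, a minimal Go sketch of the pattern this commit describes (all names are hypothetical, not the actual k0s code): the stacks map is a local variable of the loop goroutine, so no other goroutine can ever reach it.

```go
package main

import "context"

// stack stands in for a per-stack worker handle.
type stack struct{ cancel context.CancelFunc }

// runLoop owns the stacks map as a local variable. Since it is never
// stored on a struct, only this goroutine can touch it, and no locking
// or other synchronization is needed.
func runLoop(ctx context.Context, added, removed <-chan string) {
	stacks := make(map[string]stack)
	for {
		select {
		case <-ctx.Done():
			return
		case name := <-added:
			stackCtx, cancel := context.WithCancel(ctx)
			stacks[name] = stack{cancel: cancel}
			go func() { <-stackCtx.Done() }() // stack-specific work would run here
		case name := <-removed:
			if s, ok := stacks[name]; ok {
				s.cancel() // stop the stack's goroutine
				delete(stacks, name)
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	cancel()
	runLoop(ctx, nil, nil) // returns immediately: the context is already done
}
```

Confining the map to the goroutine that mutates it makes the data race impossible by construction, rather than merely guarded by a mutex.
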
The only reason these channels get closed is if the watcher itself gets
closed. This happens only when the method returns, which in turn only
happens when the context is done. In this case, the loop has already
exited without a select on a potentially closed channel. So the branches
that checked for closed channels were effectively unreachable during
runtime.

Signed-off-by: Tom Wieczorek <[email protected]>
(cherry picked from commit db5e0d2)
(cherry picked from commit 102b7e3)
(cherry picked from commit b9fd9bd)
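
Sketched in isolation, the simplification looks roughly like this (hypothetical names): because the events channel can only be closed after the loop has already returned, a `case ev, ok := <-events` guard for a closed channel could never fire and would be dead code.

```go
package main

import (
	"context"
	"fmt"
)

// watch exits only via ctx.Done(). The events channel is closed by its
// owner only after watch has returned, so the select needs no
// closed-channel check.
func watch(ctx context.Context, events <-chan string) {
	for {
		select {
		case <-ctx.Done():
			return // the only exit; events is guaranteed still open here
		case ev := <-events:
			fmt.Println("event:", ev)
		}
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	events := make(chan string, 1)
	events <- "update"
	cancel()
	watch(ctx, events) // may or may not drain the buffered event, then returns
	close(events)      // closed only after the loop is gone
}
```
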
Rename cancelWatcher to stop and wait until the newly added stopped
channel is closed. Also, add a stopped channel to each stack to do the
same for each stack-specific goroutine.

Signed-off-by: Tom Wieczorek <[email protected]>
(cherry picked from commit 402c728)
(cherry picked from commit dbc286c)
(cherry picked from commit 95e5ec0)
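
The stop/stopped handshake could look like this minimal sketch (the two channel names follow the commit message; everything else is hypothetical):

```go
package main

import (
	"fmt"
	"time"
)

// watcher pairs a stop channel (closed by the caller to request shutdown)
// with a stopped channel (closed by the goroutine once it has fully exited).
type watcher struct {
	stop    chan struct{}
	stopped chan struct{}
}

func startWatcher() *watcher {
	w := &watcher{stop: make(chan struct{}), stopped: make(chan struct{})}
	go func() {
		defer close(w.stopped) // signals that the goroutine has finished
		tick := time.NewTicker(10 * time.Millisecond)
		defer tick.Stop()
		for {
			select {
			case <-w.stop:
				return
			case <-tick.C:
				// the real watch work would happen here
			}
		}
	}()
	return w
}

// Stop requests shutdown and blocks until the goroutine is actually gone,
// so callers never race against a still-running watcher.
func (w *watcher) Stop() {
	close(w.stop)
	<-w.stopped
}

func main() {
	w := startWatcher()
	w.Stop()
	fmt.Println("watcher fully stopped")
}
```

Waiting on stopped turns "shutdown requested" into "shutdown completed", which is what makes teardown of per-stack resources safe.
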
Cancel the contexts with a cause. Add this cause to the log statements
when exiting loops. Rename bundlePath to bundleDir to reflect the fact
that it is a directory, not a file.

Signed-off-by: Tom Wieczorek <[email protected]>
(cherry picked from commit edb105c)
(cherry picked from commit a22902b)
(cherry picked from commit 394198b)
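
Go's context package supports cancellation causes directly since Go 1.20 via context.WithCancelCause; a minimal sketch (the log message and cause text are illustrative):

```go
package main

import (
	"context"
	"errors"
	"log"
)

func main() {
	ctx, cancel := context.WithCancelCause(context.Background())
	done := make(chan struct{})

	go func() {
		defer close(done)
		<-ctx.Done()
		// context.Cause returns the error passed to cancel rather than the
		// generic context.Canceled, so the exit log can say *why*.
		log.Printf("loop exiting: %v", context.Cause(ctx))
	}()

	cancel(errors.New("applier manager is shutting down"))
	<-done
}
```
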
Exit the loop on error and restart it after a one-minute delay to allow
it to recover in a new run. Also replace the bespoke retry loop for
stacks with the Kubernetes client's wait package.

Signed-off-by: Tom Wieczorek <[email protected]>
(cherry picked from commit 404c6cf)
(cherry picked from commit 3058460)
(cherry picked from commit 87ce481)
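
One plausible shape for the stack-retry half of that change, using the wait package from k8s.io/apimachinery (the helper chosen here, the intervals, and all names are illustrative, not the exact calls in k0s):

```go
package main

import (
	"context"
	"errors"
	"log"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// applyStack is a stand-in for the real per-stack apply operation.
func applyStack(ctx context.Context) error {
	return errors.New("transient apply failure")
}

// retryApply polls applyStack until it succeeds or the context ends,
// replacing a hand-rolled sleep-and-retry loop.
func retryApply(ctx context.Context) error {
	return wait.PollUntilContextCancel(ctx, 5*time.Second, true,
		func(ctx context.Context) (bool, error) {
			if err := applyStack(ctx); err != nil {
				log.Printf("apply failed, will retry: %v", err)
				return false, nil // not done yet, keep polling
			}
			return true, nil // success, stop polling
		})
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 12*time.Second)
	defer cancel()
	if err := retryApply(ctx); err != nil {
		log.Printf("giving up: %v", err)
	}
}
```

The one-minute restart delay mentioned in the commit message would sit in the outer loop that re-enters this logic after a failure.
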
Seems to be a remnant from the past.

Signed-off-by: Tom Wieczorek <[email protected]>
(cherry picked from commit c2beea7)
(cherry picked from commit 4b2efbe)
(cherry picked from commit 04ba246)
@k0s-bot requested a review from a team as a code owner on November 29, 2024 at 13:06
@twz123 changed the title from “[Backport release-1.29] [Backport release-1.30] Applier manager improvements” to “[Backport release-1.29] Applier manager improvements” on Nov 29, 2024
@jnummelin merged commit 4a29c7f into release-1.29 on Dec 10, 2024
79 checks passed
@jnummelin deleted the backport-5172-to-release-1.29 branch on December 10, 2024 at 10:49