
kube: notifyproxy: fix lost READY message #15820

Merged: 2 commits merged into containers:main on Sep 26, 2022

Conversation

vrothberg (Member):

See the messages of the two commits.

Does this PR introduce a user-facing change?

Fix a bug in the sd-notify integration of `kube play` where a READY message from a container may get lost.

The openshift-ci bot added the release-note and approved (indicates a PR has been approved by an approver from all required OWNERS files) labels on Sep 15, 2022.
@rhatdan (Member) commented on Sep 15, 2022:

LGTM
@mheon @Luap99 @giuseppe @baude PTAL

@vrothberg (Member, Author):

Almost, still needs some massaging

// goroutines. One waiting for the `READY` message, the other waiting
// for the container to stop running.
errorChan := make(chan error, 1)
readyChan := make(chan bool)
Member:

I think this needs a defer close for both channels inside their goroutines to prevent write after close problems?

Member Author:

Wouldn't that cause a write after close? Let's assume we have a read error and the container exits. If routine 1 closes the channel that routine 2 wants to write the error to, we'd run into this issue.
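For illustration, a minimal self-contained Go sketch of the failure mode described above (not the notifyproxy code, and the channel name is only borrowed from the excerpt): a send on a closed channel panics, so closing a channel in one goroutine while another goroutine may still send on it is unsafe.

```go
package main

import "fmt"

func main() {
	errorChan := make(chan error, 1)
	close(errorChan) // say routine 1 closed the channel after seeing READY

	defer func() {
		// A send on a closed channel panics with "send on closed channel".
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	errorChan <- fmt.Errorf("container is not running") // routine 2's late write panics
}
```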

Member Author:

I made both channels buffered to make sure that neither of the two routines can block. Once both are done, the channels will get garbage collected. But there is no guarantee that both routines will be finished once the function returns. There is a chance that the container sends READY, does something, and exits: routine 1 receives the READY, and routine 2 detects the container isn't running and sends the error before ctx has been cancelled.

@Luap99 (Member), Sep 19, 2022:

I was trying to say close them inside the goroutine with defer, i.e., on lines 121 and 160.

Member Author:

Yes, I understood. You owe an explanation, though: only when the channels are closed are they subject to a write after close. Since they're buffered, the routines cannot block, and the channels will be garbage collected once both routines have returned. I think that closing makes it more complicated than necessary, but I may be missing something.
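For illustration, a hedged sketch of the pattern being discussed: two goroutines, each with its own buffered channel of capacity 1, and a select that returns on whichever event comes first. The function signature, names, and the READY=1 check are illustrative assumptions, not the actual notifyproxy code.

```go
package notifysketch

import (
	"context"
	"errors"
	"net"
	"strings"
	"time"
)

func waitForReady(ctx context.Context, conn net.Conn, containerRunning func() bool) error {
	errorChan := make(chan error, 1)
	readyChan := make(chan bool, 1)

	// Goroutine 1: read from the connection until a READY message arrives.
	go func() {
		buf := make([]byte, 1024)
		for {
			n, err := conn.Read(buf)
			if err != nil {
				errorChan <- err // buffered: never blocks, even if nobody receives
				return
			}
			if strings.Contains(string(buf[:n]), "READY=1") {
				readyChan <- true
				return
			}
		}
	}()

	// Goroutine 2: watch the container and report an error once it stops running.
	go func() {
		for {
			if !containerRunning() {
				errorChan <- errors.New("container stopped before sending READY")
				return
			}
			select {
			case <-ctx.Done():
				return
			case <-time.After(250 * time.Millisecond):
			}
		}
	}()

	// Whichever event happens first wins. The losing goroutine's single
	// buffered send cannot block, so it exits on its own, and both channels
	// are eventually garbage collected; nothing needs to be closed.
	select {
	case err := <-errorChan:
		return err
	case <-readyChan:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}
```

Because each goroutine sends at most once into a channel with one free slot, no sender can ever block on a channel that will never be drained, which is why no close is required in this sketch.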

pkg/systemd/notifyproxy/notifyproxy.go (review comment marked outdated and resolved)
Commit 1:
Use a wait group to a) wait for all proxies in parallel and b) avoid the potential for ABBA deadlocks.

[NO NEW TESTS NEEDED] as it is not changing functionality

Signed-off-by: Valentin Rothberg <[email protected]>
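For illustration, a minimal sketch of the wait-group pattern this commit message describes: one goroutine per proxy, a sync.WaitGroup to wait for all of them in parallel, and a buffered error channel so that no sender can block. The waiter interface and the waitForAllProxies name are placeholders, not the actual podman code.

```go
package notifysketch

import (
	"fmt"
	"strings"
	"sync"
	"time"
)

// waiter is a stand-in for the proxy type; only a Wait method matters here.
type waiter interface {
	Wait(timeout time.Duration) error
}

func waitForAllProxies(proxies []waiter, timeout time.Duration) error {
	errChan := make(chan error, len(proxies)) // one buffered slot per proxy

	var wg sync.WaitGroup
	for _, p := range proxies {
		wg.Add(1)
		go func(p waiter) {
			defer wg.Done()
			// Each proxy is waited on in its own goroutine, so a slow proxy
			// does not serialize the others, and no fixed lock ordering is
			// required (avoiding ABBA-style deadlocks).
			if err := p.Wait(timeout); err != nil {
				errChan <- err
			}
		}(p)
	}

	wg.Wait()
	close(errChan) // safe: wg.Wait() guarantees all senders have returned

	var msgs []string
	for err := range errChan {
		msgs = append(msgs, err.Error())
	}
	if len(msgs) > 0 {
		return fmt.Errorf("waiting for notify proxies: %s", strings.Join(msgs, "; "))
	}
	return nil
}
```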
Commit 2:
The read deadline may cause the READY message to be lost in space.
Instead, use a more Go-idiomatic alternative by using two goroutines;
one reading from the connection, the other watching the container.

[NO NEW TESTS NEEDED] since existing tests are exercising this
functionality already.

Fixes: containers#15800
Signed-off-by: Valentin Rothberg <[email protected]>
@vrothberg (Member, Author):

@containers/podman-maintainers PTAL

@rhatdan (Member) commented on Sep 17, 2022:

LGTM

@vrothberg (Member, Author):

Can we please get this in? If there are concerns left on the code, I'd appreciate it if we could resolve them in a timely manner.

@flouthoc (Collaborator) left a comment:

LGTM
/lgtm
/approve

The openshift-ci bot added the lgtm (indicates that a PR is ready to be merged) label on Sep 26, 2022.
@edsantiago (Member):

I don't feel qualified to lgtm anything having to do with Go channels, but I approve this in principle.

@giuseppe (Member) left a comment:

LGTM

@openshift-ci bot (Contributor) commented on Sep 26, 2022:

[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: flouthoc, giuseppe, vrothberg

Needs approval from an approver in each of these files:
  • OWNERS [flouthoc, giuseppe, vrothberg]

Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.

@openshift-merge-robot openshift-merge-robot merged commit 1d63d9f into containers:main Sep 26, 2022
The github-actions bot added the locked - please file new issue/PR label, then locked the conversation as resolved and limited it to collaborators on Sep 20, 2023.