test/system/250-systemd.bats: fix flake #16852
Conversation
Fix a flake in the kube-template test. After stopping the service, we want to make sure that the service container gets removed. However, there is a small race window: `systemctl stop` will return when the service container _exits_. In between that and the `container exists` check, the service container may not yet have been removed. Hence, add a loop to account for that race.

Fixes: containers#16047

Signed-off-by: Valentin Rothberg <[email protected]>
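The retry pattern described in the commit message can be sketched in plain bash. This is a minimal, self-contained illustration, not the actual test: `check_gone` is a hypothetical stand-in for the real `run_podman 1 container exists $service_container` check, and here it simply simulates a container that disappears after a few polls.

```shell
#!/usr/bin/env bash
# Sketch of the race-tolerant check: poll up to 6 times for the
# condition to hold, sleeping briefly between attempts.

check_gone() {
    # Hypothetical stand-in for the podman check; simulates a
    # container that is gone from the third poll onward.
    [ "$1" -ge 3 ]
}

removed=0
for i in {0..5}; do
    if check_gone "$i"; then
        removed=1
        break
    fi
    sleep 0.1
done

echo "removed=$removed"  # prints removed=1 in this simulation
```

In the real test, the loop body would run the `container exists` check and break once it reports the container is gone, failing the test only if all attempts are exhausted.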
@containers/podman-maintainers PTAL
LGTM
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: giuseppe, vrothberg

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
I always thought `systemctl stop` and `systemctl start` are blocking? So I assume the commands in the unit should all have finished by the time the systemctl command finishes.

You made me doubt. I will do some more/better research with more coffee tomorrow. /hold
I found this in

It only mentions the start part but not stop.
@@ -443,7 +443,14 @@ EOF

    # Clean up
    systemctl stop $service_name
    run_podman 1 container exists $service_container
    for i in {0..5}; do
Could you do a `podman wait $service_container`?
Not yet. Once #16853 is in, we can.
In the recent past, I met the frequent need to wait for a container to exist that, at the same time, may get removed (e.g., system tests in [1]). Add an `--ignore` option to podman-wait which will ignore errors when a specified container is missing and mark its exit code as -1.

Also remove ID fields from the WaitReport. It is actually not used by callers and removing it makes the code simpler and faster. Once merged, we can go over the tests and simplify them.

[1] github.com/containers/pull/16852

Signed-off-by: Valentin Rothberg <[email protected]>
Closing. The test is doing the right thing. Must be something else but I was chasing a ghost.
Does this PR introduce a user-facing change?