system tests: the catch-up game #8720
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: edsantiago. The full list of commands accepted by this bot can be found here; the pull request process is described here.
- run test: minor cleanup to .containerenv test. Basically, make it do only two podman-runs (they're expensive) and tighten up the results checks.
- ps test: add ps -a --storage. Requires a small tweak to the run_podman helper, so we can have "timeout" be an expected result.
- sdnotify test: workaround for containers#8718 (seeing MAINPID=xxx as the last output line instead of READY=1). As found by the newly-added debugging echos, what we are seeing is:

  MAINPID=103530
  READY=1
  MAINPID=103530

  It's not supposed to be that way; it's supposed to be just the first two. But when faced with reality, we must bend to accommodate it, so let's accept READY=1 anywhere in the output stream, not just as the last line (a sketch of that check follows after this list).

Signed-off-by: Ed Santiago <[email protected]>
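To make the sdnotify workaround concrete, here is a minimal, self-contained bash sketch of "accept READY=1 anywhere". This is an illustration, not the PR's actual diff; the sample stream is copied from the description above.

# Sketch only: accept READY=1 anywhere in the captured sdnotify stream,
# tolerating stray MAINPID=... lines before or after it.
output=$'MAINPID=103530\nREADY=1\nMAINPID=103530'   # sample stream from the PR description

if grep -qx 'READY=1' <<<"$output"; then
    echo "ok: READY=1 seen somewhere in sdnotify output"
else
    echo "FAIL: no READY=1 anywhere in sdnotify output" >&2
    exit 1
fi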
@containers/podman-maintainers PTAL (low priority). I have updated the commit message, PR description, and code thanks to a stroke of luck: what was intended as a debug-printf for analyzing flakes actually triggered right here, so I was able to gather data on #8718 and add a workaround. (I'm not going to claim it as fixed, though.)
is "${#lines[@]}" "2" "podman ps -a --storage sees buildah container" | ||
|
||
# This is what deletes the container | ||
# FIXME: why doesn't "podman rm --storage $cid" do anything? |
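For readers without the full diff, here is a rough sketch of the shape of this test. It assumes the run_podman and is helpers from test/system/helpers.bash and the standard $IMAGE test-image variable; the container name and exact layout are illustrative, not the PR's actual code.

@test "podman ps -a --storage shows buildah containers" {
    # Create an "external" container, i.e. one made by buildah, not podman
    run buildah from --name myexternalctr $IMAGE

    # Expect exactly two lines: the header plus the one buildah container
    # (assuming no other containers exist in this test environment)
    run_podman ps -a --storage
    is "${#lines[@]}" "2" "podman ps -a --storage sees buildah container"

    # This is what deletes the container
    # FIXME: why doesn't "podman rm --storage $cid" do anything?
    run_podman rm -f myexternalctr
}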
--storage is really just a flag to add to --all. The combination removes all --storage containers and all podman containers.
But --storage doesn't actually seem to do anything at all: rm -f removes storage containers just by itself. And from looking at the code, I don't see --storage assigned to any option variable that is checked anywhere (but I could be wrong):
podman/cmd/podman/containers/rm.go, lines 71 to 75 in 999d40d:

if !registry.IsRemote() {
	// This option is deprecated, but needs to still exists for backwards compatibility
	flags.Bool("storage", false, "Remove container from storage library")
	_ = flags.MarkHidden("storage")
}
[EDIT: I originally wrote that rm -a removed buildah containers. I meant -f. Corrected above.]
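A quick way to see what MarkHidden means in practice from the CLI (a hypothetical session consistent with the snippet and observations above, not output captured for this PR):

$ podman rm --help | grep -- --storage     <-- no output: the flag is hidden from help
$ podman rm --storage mycontainer          <-- still parses; per this discussion the flag itself appears to be a no-op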
You're right, we don't refer to these as --storage but as --external in the CLI. And I don't see any use of --external in podman rm.
I guess we only use it in podman ps and podman container exists.
But podman rm can remove --external containers if they are specified by name, just not with --all.
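For reference, the two places --external is wired up, per the comment above (a hypothetical session; the container name is illustrative):

$ podman ps -a --external                  <-- lists buildah/storage containers as well
$ podman container exists --external myexternalctr && echo present
present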
So.... playing with this a little more... I'm getting more confused. There seem to be two types of buildah containers: complete ones, and incomplete ones? Let me see if I can express this in a way that makes sense.
Complete buildah containers can be seen and removed by podman without -f:
$ buildah from quay.io/libpod/testimage:20200929
testimage-working-container
$ ./bin/podman rm testimage-working-container
testimage-working-container <--- yay, success
Incomplete buildah containers can only be removed with -f:
$ printf "FROM quay.io/libpod/testimage:20200929\nRUN sleep 30\n"| timeout -v 5 podman build -t thiswillneverexist -
STEP 1: FROM quay.io/libpod/testimage:20200929
STEP 2: RUN sleep 30
timeout: sending signal TERM to command ‘podman’
$ ./bin/podman rm testimage-working-container
Error: no container with name or ID testimage-working-container found: no such container
$ ./bin/podman ps -a --storage
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ef0cfdd5fc92 quay.io/libpod/testimage:20200929 buildah 51 seconds ago storage testimage-working-container
$ ./bin/podman rm ef0cfdd5fc92
Error: no container with name or ID ef0cfdd5fc92 found: no such container
$ ./bin/podman rm -f ef0cfdd5fc92
ef0cfdd5fc92
As someone clueless about storage internals, I find this perplexing. It makes me think I might need to extend this new test so it checks both cases, complete and incomplete containers (or whatever the proper name is).
@rhatdan, @nalind, does this surprise you, or is it just a "duh, of course" that I'm not understanding?
I think ./bin/podman rm ef0cfdd5fc92 failing is a bug. I am not sure why it failed.
It's 100% reproducible though. There must be something different between the two types of containers; I just don't know how to even look for such a difference.
Filed #8735
LGTM

2 similar comments

LGTM

LGTM

/lgtm
run test: minor cleanup to .containerenv test. Basically,
make it do only two podman-runs (they're expensive) and
tighten up the results checks
ps test: add ps -a --storage. Requires small tweak to
run_podman helper, so we can have "timeout" be an expected
result
sdnotify test: workaround for #8718 (CI flake: sdnotify-conmon test:
getting MAINPID=1 instead of READY=1), i.e. seeing MAINPID=xxx as
the last output line instead of READY=1. As found by the
newly-added debugging echos, what we are seeing is:

    MAINPID=103530
    READY=1
    MAINPID=103530
It's not supposed to be that way; it's supposed to be just
the first two. But when faced with reality, we must bend
to accommodate it, so let's accept READY=1 anywhere in
the output stream, not just as the last line.
Signed-off-by: Ed Santiago [email protected]
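The run_podman change itself is not reproduced above. As a rough idea of what "timeout as an expected result" can mean, here is a sketch under the assumption that the helper wraps commands in GNU timeout; this is not the PR's actual helper code. timeout exits with status 124 when it kills the command, so a helper can treat that status as the expected outcome when the caller asks for it.

# Hypothetical helper; the real run_podman lives in test/system/helpers.bash
run_expecting_timeout() {
    timeout --foreground -v 5 "$@"
    local rc=$?
    if [ $rc -eq 124 ]; then
        echo "ok: command timed out, as expected"
    else
        echo "FAIL: expected a timeout (rc=124), got rc=$rc" >&2
        return 1
    fi
}

# Example: a build that sleeps longer than the timeout is expected to be killed
printf 'FROM quay.io/libpod/testimage:20200929\nRUN sleep 30\n' |
    run_expecting_timeout podman build -t thiswillneverexist -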