Skip to content
New issue

Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.

By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.

Already on GitHub? Sign in to your account

system tests: the catch-up game #8720

Merged
merged 1 commit into from
Dec 16, 2020
Merged
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
48 changes: 27 additions & 21 deletions test/system/030-run.bats
Original file line number Diff line number Diff line change
Expand Up @@ -548,27 +548,33 @@ json-file | f
}

@test "Verify /run/.containerenv exist" {
run_podman run --rm $IMAGE ls -1 /run/.containerenv
is "$output" "/run/.containerenv"

run_podman run --privileged --rm $IMAGE sh -c '. /run/.containerenv; echo $engine'
is "$output" ".*podman.*" "failed to identify engine"

run_podman run --privileged --name "testcontainerenv" --rm $IMAGE sh -c '. /run/.containerenv; echo $name'
is "$output" ".*testcontainerenv.*"

run_podman run --privileged --rm $IMAGE sh -c '. /run/.containerenv; echo $image'
is "$output" ".*$IMAGE.*" "failed to idenitfy image"

run_podman run --privileged --rm $IMAGE sh -c '. /run/.containerenv; echo $rootless'
# FIXME: on some CI systems, 'run --privileged' emits a spurious
# warning line about dup devices. Ignore it.
remove_same_dev_warning
if is_rootless; then
is "$output" "1"
else
is "$output" "0"
fi
# Nonprivileged container: file exists, but must be empty
run_podman run --rm $IMAGE stat -c '%s' /run/.containerenv
is "$output" "0" "file size of /run/.containerenv, nonprivileged"

# Prep work: get ID of image; make a cont. name; determine if we're rootless
run_podman inspect --format '{{.ID}}' $IMAGE
local iid="$output"

random_cname=c$(random_string 15 | tr A-Z a-z)
local rootless=0
if is_rootless; then
rootless=1
fi

run_podman run --privileged --rm --name $random_cname $IMAGE \
sh -c '. /run/.containerenv; echo $engine; echo $name; echo $image; echo $id; echo $imageid; echo $rootless'

# FIXME: on some CI systems, 'run --privileged' emits a spurious
# warning line about dup devices. Ignore it.
remove_same_dev_warning

is "${lines[0]}" "podman-.*" 'containerenv : $engine'
is "${lines[1]}" "$random_cname" 'containerenv : $name'
is "${lines[2]}" "$IMAGE" 'containerenv : $image'
is "${lines[3]}" "[0-9a-f]\{64\}" 'containerenv : $id'
is "${lines[4]}" "$iid" 'containerenv : $imageid'
is "${lines[5]}" "$rootless" 'containerenv : $rootless'
}

@test "podman run with --net=host and --port prints warning" {
Expand Down
39 changes: 39 additions & 0 deletions test/system/040-ps.bats
Original file line number Diff line number Diff line change
Expand Up @@ -82,4 +82,43 @@ load helpers
run_podman rm -a
}

@test "podman ps -a --storage" {
skip_if_remote "ps --storage does not work over remote"

# Setup: ensure that we have no hidden storage containers
run_podman ps --storage -a
is "${#lines[@]}" "1" "setup check: no storage containers at start of test"

# Force a buildah timeout; this leaves a buildah container behind
PODMAN_TIMEOUT=5 run_podman 124 build -t thiswillneverexist - <<EOF
FROM $IMAGE
RUN sleep 30
EOF

run_podman ps -a
is "${#lines[@]}" "1" "podman ps -a does not see buildah container"

run_podman ps --storage -a
is "${#lines[@]}" "2" "podman ps -a --storage sees buildah container"
is "${lines[1]}" \
"[0-9a-f]\{12\} \+$IMAGE *buildah .* seconds ago .* storage .* ${PODMAN_TEST_IMAGE_NAME}-working-container" \
"podman ps --storage"

cid="${lines[1]:0:12}"

# 'rm -a' should be a NOP
run_podman rm -a
run_podman ps --storage -a
is "${#lines[@]}" "2" "podman ps -a --storage sees buildah container"

# This is what deletes the container
# FIXME: why doesn't "podman rm --storage $cid" do anything?
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

--storage is really just a flag to add to --all. The combination removes all --storage containers and all podman containers.

Copy link
Member Author

@edsantiago edsantiago Dec 15, 2020

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

But --storage doesn't actually seem to do anything at all: rm -f removes storage containers just by itself. And from looking at the code, I don't see --storage assigned to any option variable that is checked anywhere (but I could be wrong):

if !registry.IsRemote() {
// This option is deprecated, but needs to still exists for backwards compatibility
flags.Bool("storage", false, "Remove container from storage library")
_ = flags.MarkHidden("storage")
}

[EDIT: I originally wrote that rm -a removed buildah containers. I meant -f. Corrected above.]

Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Your right we don't refer to these as --storage but as --external in the CLI. And I don't see any use of --external in podman rm.
I guess we only use it in podman ps and podman container exists.

But podman rm can remove --external containers, if they are specifies by name. But not --all

Copy link
Member Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

So.... playing with this a little more... I'm getting more confused. There seem to be two types of buildah containers: complete ones, and incomplete ones? Let me see if I can express this in a way that makes sense.

Complete buildah containers can be seen and removed by podman without -f:

$ buildah from quay.io/libpod/testimage:20200929
testimage-working-container
$ ./bin/podman rm testimage-working-container
testimage-working-container      <--- yay, success

Incomplete buildah containers can only be removed with -f:

$ printf "FROM quay.io/libpod/testimage:20200929\nRUN sleep 30\n"| timeout -v 5 podman build -t thiswillneverexist -
STEP 1: FROM quay.io/libpod/testimage:20200929
STEP 2: RUN sleep 30
timeout: sending signal TERM to command ‘podman’
$ ./bin/podman rm testimage-working-container
Error: no container with name or ID testimage-working-container found: no such container
$ ./bin/podman ps -a --storage
CONTAINER ID  IMAGE                              COMMAND  CREATED         STATUS   PORTS   NAMES
ef0cfdd5fc92  quay.io/libpod/testimage:20200929  buildah  51 seconds ago  storage          testimage-working-container
$ ./bin/podman rm ef0cfdd5fc92
Error: no container with name or ID ef0cfdd5fc92 found: no such container
$ ./bin/podman rm -f ef0cfdd5fc92
ef0cfdd5fc92

As someone clueless about storage internals, I find this perplexing. It makes me think I might need to extend this new test so it checks both cases, complete and incomplete containers (or whatever the proper name is).

@rhatdan, @nalind, does this surprise you, or is it just a "duh, of course" that I'm not understanding?

Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I think ./bin/podman rm ef0cfdd5fc92 failing is a bug. I am not sure why it failed.

Copy link
Member Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

It's 100% reproducible though. There must be something different between the two types of containers; I just don't know how to even look for such a difference.

Copy link
Member Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Filed #8735

run_podman rm -f "$cid"

run_podman ps --storage -a
is "${#lines[@]}" "1" "storage container has been removed"
}



# vim: filetype=sh
11 changes: 10 additions & 1 deletion test/system/260-sdnotify.bats
Original file line number Diff line number Diff line change
Expand Up @@ -100,8 +100,17 @@ function _assert_mainpid_is_conmon() {
run_podman logs sdnotify_conmon_c
is "$output" "READY" "\$NOTIFY_SOCKET in container"

# The 'echo's help us debug failed runs
run cat $_SOCAT_LOG
is "${lines[-1]}" "READY=1" "final output from sdnotify"
echo "socat log:"
echo "$output"

# ARGH! 'READY=1' should always be the last output line. But sometimes,
# for reasons unknown, we get an extra MAINPID=xxx after READY=1 (#8718).
# Who knows if this is a systemd bug, or conmon, or what. I don't
# even know where to begin asking. So, to eliminate the test flakes,
# we look for READY=1 _anywhere_ in the output, not just the last line.
is "$output" ".*READY=1.*" "sdnotify sent READY=1"

_assert_mainpid_is_conmon "${lines[0]}"

Expand Down
7 changes: 5 additions & 2 deletions test/system/helpers.bash
Original file line number Diff line number Diff line change
Expand Up @@ -168,8 +168,11 @@ function run_podman() {

if [ "$status" -eq 124 ]; then
if expr "$output" : ".*timeout: sending" >/dev/null; then
echo "*** TIMED OUT ***"
false
# It's possible for a subtest to _want_ a timeout
if [[ "$expected_rc" != "124" ]]; then
echo "*** TIMED OUT ***"
false
fi
fi
fi

Expand Down