Fix handling of readonly containers when defined in kube.yaml #16682

Merged
merged 1 commit on Dec 4, 2022

Conversation

@rhatdan (Member) commented Nov 29, 2022

The containers should be able to write to tmpfs mounted directories.

Signed-off-by: Daniel J Walsh [email protected]

Does this PR introduce a user-facing change?

Podman kube play of YAML with the readOnlyTmpfs flag set can now write to tmpfs inside the container.

openshift-ci bot (Contributor) commented Nov 29, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: rhatdan

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Nov 29, 2022
@mheon (Member) commented Nov 29, 2022

LGTM

@vrothberg (Member) left a comment

Code LGTM but I have a couple of comments on the system test.

@edsantiago PTAL

run_podman pod rm -a
run_podman rm -a

cat $YAML
Member

Looks like a debug command.

Member Author

Removed.

run_podman create --pod new:pod1 --name test1 $IMAGE touch /test
run_podman create --pod pod1 --read-only --name test2 $IMAGE touch /test
run_podman create --pod pod1 --read-only --name test3 $IMAGE touch /tmp/test
run_podman kube generate pod1 -f $YAML
Member

Can we check the output to verify that the read-only field is properly set?

Member Author

Fixed.


cat $YAML
run_podman kube play $YAML
run_podman inspect --format "{{.State.ExitCode}}" pod1-test1
Member

Can we also check that the read-only fields are set for the two containers?

run_podman inspect --format "{{.State.ExitCode}}" pod1-test2
is "$output" "1" "File system should be read/only"
run_podman inspect --format "{{.State.ExitCode}}" pod1-test3
is "$output" "0" "File system should be read/write"
Member

I think we can squash the three inspects into one querying all containers at once.

Member

Or use podman wait, but I lean slightly toward the way it is: it's nice to have an exact message for what happened (although the messages could be improved).

I added run_podman logs just before this, to investigate the problem on my laptop, and see:

142d0f92a92c touch: /tmp/test: Permission denied

This causes the pod1-test3 test to fail. It's not an AVC. I am stuck.

Member

Different error messages, please. A pattern I like is that, when I see an error message in a failed test log, I can find exactly one and only one place where that error message appears. So like, for all three "should be" messages above, something like this please?

is ... "Root filesystem in container1 should be read/write"
is ... "Root filesystem in container2 should be read-only"
is ... "/tmp in read-only container should be read/write"

run_podman kube generate pod1 -f $YAML

run_podman pod rm -a
run_podman rm -a
Member

We can remove the two commands by adding --replace to kube play below.

Member Author

I just removed the stray images, to avoid warnings on cleanup.


run_podman kube down - < $YAML
run_podman pod rm -a
run_podman rm -a
Member

The two rms shouldn't be needed after kube down.

@edsantiago (Member)

/hold

rootless tests failing on my f37, need to understand why

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 30, 2022
@edsantiago (Member)

Tests won't pass on my laptop. This can't merge until we figure out why. The only clue I've found is from running mount in all the containers, and what I see is weird:

/dev/mapper/luks-d724ff70-d662-4cdc-a2e1-c7f7cd2c6803 on /tmp type btrfs (rw, ...

For a read-only container, I expected to see tmpfs:

$ bin/podman run --rm --read-only quay.io/libpod/testimage:20221018 mount|grep /tmp
tmpfs on /tmp type tmpfs (rw,...

I need to move on from this, so good luck. Again, please do not merge until this is resolved.

@rhatdan (Member Author) commented Nov 30, 2022

The READ/WRITE container will not have a tmpfs mounted on /tmp. Only the read-only ones.

Fixed up the tests and added some more checks.

@rhatdan rhatdan force-pushed the ro branch 2 times, most recently from 3edc5b9 to bd7914e on November 30, 2022 18:27
@edsantiago (Member)

The READ/WRITE container will not have a tmpfs mounted on /tmp. Only the read-only ones.

That's what I was trying to say: all the containers in the pod, on my system, have /foo/btrfs-whatever mounted on /tmp. None of the containers have a tmpfs. I believe this is incorrect.

And, tests still fail with this latest push.

@edsantiago (Member) left a comment

This. Is. Broken.

I don't know if it's a btrfs problem, or a crun/systemd/kernel difference between my laptop and Cirrus, or what. This cannot merge until someone figures out why this is broken.

@@ -196,6 +196,32 @@ EOF
run_podman rm -a
}

@test "podman kube play read-only" {
YAML=$PODMAN_TMPDIR/test.yml
run_podman create --pod new:pod1 --name test1 $IMAGE touch /test
Member

Suggestion: touch /testrw. And on the next line, touch /testro. And on the next, touch /tmp/testtmp. In my debugging of the btrfs-not-tmpfs bug, I found that that simple change made reading the code much easier for me.


@edsantiago (Member)

On 1minutetip, with ext4, the rootless test passes even though the /tmp mount in the read-only container is still not a tmpfs:

   /dev/vda2 on /run type ext4 (rw,seclabel,relatime)
   /dev/vda2 on /tmp type ext4 (rw,seclabel,relatime)

Same kernel as my laptop, 6.0.9-300.fc37, so that's not the difference. I don't know how to run 1mt with a btrfs root, and I'm spending way too much time on this, so back to you.

@rhatdan (Member Author) commented Nov 30, 2022

I think the issue is that, for some reason, your container is forcing a mount on /tmp, which should not be happening.

If you run this command, what do you see?

$ podman run --rm alpine mount | grep /tmp

I see nothing.

When I run with --read-only I see

$ podman run --rm --read-only alpine mount | grep /tmp
tmpfs on /tmp type tmpfs (rw,context="system_u:object_r:container_file_t:s0:c77,c326",nosuid,nodev,relatime,uid=3267,gid=3267,inode64)
tmpfs on /var/tmp type tmpfs (rw,context="system_u:object_r:container_file_t:s0:c77,c326",nosuid,nodev,relatime,uid=3267,gid=3267,inode64)

@edsantiago (Member)

I'm talking about inside the pod. In the pod, even though the container is read-only, mount shows /tmp as a real device, not tmpfs.

Anyhow, I have a 1minutetip reproducer if you want it, ping me on IRC.

mnt := spec.Mount{
Destination: dest,
Type: define.TypeTmpfs,
Source: "tmpfs",
Member

OMG I think this is it. This is the bug, or maybe it's somewhere else, but I'm kind of sure this is it.

Here is what is happening, in a very brief nutshell:

$ rm -rf tmpfs   # humor me, okay
$ bin/podman create --pod new:pod1 --read-only --name foo quay.io/libpod/testimage:20221018 touch /tmp/oh-hi-there
3d77b20d33a5fe8c27cd7c0051434c922d432fef2f4c320f41795ea8fa407889
$ bin/podman kube generate pod1 -f /tmp/foo.yaml
...
$ bin/podman kube play --replace /tmp/foo.yaml
...blah blah
$ ls -l tmpfs
total 0
-rw-r--r--. 1 esm esm 0 Nov 30 13:14 oh-hi-there
drwxr-xr-t. 1 esm esm 0 Nov 30 13:14 secrets/

That is: podman is creating a subdirectory called "tmpfs" in the current directory. I do not think that is what is intended. I think the intention is to mount an actual tmpfs.

The reason it failed on my laptop is that I ran hack/bats, which first runs as root and creates the root-owned subdirectory tmpfs; the second (rootless) pass then, of course, fails to write to it with EACCES.
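
[Editor's note: a minimal Go sketch of the two mount shapes in play here, using the runtime-spec types that the diff above references. The exact podman code path differs, and the bind-mount variant below is an assumption used only to illustrate how a literal "tmpfs" source can end up as a ./tmpfs subdirectory.]

package main

import (
	"fmt"

	spec "github.com/opencontainers/runtime-spec/specs-go"
)

func main() {
	// Correct shape: a real tmpfs mount. For Type "tmpfs" the kernel
	// ignores Source, so "tmpfs" is only a conventional placeholder.
	good := spec.Mount{
		Destination: "/tmp",
		Type:        "tmpfs",
		Source:      "tmpfs",
		Options:     []string{"rw", "nosuid", "nodev"},
	}

	// Failure shape (hypothetical, for illustration): if the same entry
	// is emitted as a bind mount, Source "tmpfs" is a relative host
	// path, and the runtime creates ./tmpfs in the current directory.
	bad := spec.Mount{
		Destination: "/tmp",
		Type:        "bind",
		Source:      "tmpfs", // relative path -> ./tmpfs subdirectory
		Options:     []string{"rbind", "rw"},
	}

	fmt.Printf("good: %+v\nbad:  %+v\n", good, bad)
}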

Member

Yeah, this is it. If I change "tmpfs" to "sdfsdfsdf", recompile and run, I get a new subdirectory with that name. OK, I'm done with this. Time for a nap. My brain hurts.

Member Author

OK, that is a different bug, not related to this PR. But podman generate kube is generating a bogus mount point for tmpfs mounts.

@edsantiago (Member)

Your new push fixes the "tmpfs-subdirectory" bug, thank you!

Since you have to re-push anyway to fix whitespace lint, could I ask you to consider addressing my other test-usability requests please?

run_podman 1 container exists pod1-test2
run_podman 1 container exists pod1-test3

run_podman rmi -a
Member

Eek! Could you remove this please?

Comment on lines -474 to -475
// Set enableServiceLinks to false as podman doesn't use the service port environment variables
enableServiceLinks := false
Member

This is what's causing e2e test failures:

Value for field 'EnableServiceLinks' failed to satisfy matcher.
           Expected
               <*bool | 0x0>: nil
           to equal
               <*bool | 0xc0007deab0>: false
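
[Editor's note: a minimal self-contained sketch of why removing that assignment changes the output; PodSpec here is a stand-in for the real generated Kubernetes type. With a *bool field, nil ("unset") and pointer-to-false ("explicitly disabled") are distinct values, which is exactly what the matcher above is distinguishing.]

package main

import "fmt"

// PodSpec stands in for the generated Kubernetes type: EnableServiceLinks
// is a *bool, so "unset" (nil) and "explicitly false" serialize differently.
type PodSpec struct {
	EnableServiceLinks *bool
}

func main() {
	// What the removed lines did: take the address of a local false so
	// the generated YAML carries an explicit enableServiceLinks: false.
	enableServiceLinks := false
	explicit := PodSpec{EnableServiceLinks: &enableServiceLinks}

	// Without the assignment the field stays nil, which is why a
	// matcher expecting <*bool>: false now sees <*bool>: nil.
	var unset PodSpec

	fmt.Println(explicit.EnableServiceLinks == nil) // false
	fmt.Println(unset.EnableServiceLinks == nil)    // true
}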

@edsantiago (Member)

System tests LGTM, thank you for addressing my readability concerns. e2e tests failing, should be an easy fix.

I placed a hold because of the failing tests (caused by the tmpfs-subdirectory-not-actually-tmpfs bug, which you've fixed). Removing it.

/hold cancel

@openshift-ci openshift-ci bot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Dec 2, 2022
The containers should be able to write to tmpfs mounted directories.

Also clean up the output of podman kube generate so it does not show default values.

Signed-off-by: Daniel J Walsh <[email protected]>
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 19, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 19, 2023