Default to SELinux private label for play kube mounts #2575

Merged

Conversation

@haircommander (Collaborator) commented Mar 7, 2019:

Before, there were SELinux denials when a volume was mounted by podman play kube. Fix this by hardcoding a shared label into the Volume.

@haircommander (Collaborator, Author):

@ikke-t you can check this out if you want to test, but my logs now spit out some sql server output.

@haircommander changed the title from "Add volume permissions in play kube" to "WIP Add volume permissions in play kube" on Mar 7, 2019
@openshift-ci-robot added the do-not-merge/work-in-progress label (indicates that a PR should not merge because it is a work in progress) on Mar 7, 2019
@@ -282,7 +282,7 @@ func kubeContainerToCreateConfig(containerYAML v1.Container, runtime *libpod.Run
 	if err := validateVolumeCtrDir(volume.MountPath); err != nil {
 		return nil, errors.Wrapf(err, "error in parsing MountPath")
 	}
-	containerConfig.Volumes = append(containerConfig.Volumes, fmt.Sprintf("%s:%s", host_path, volume.MountPath))
+	containerConfig.Volumes = append(containerConfig.Volumes, fmt.Sprintf("%s:%s:z", host_path, volume.MountPath))
mheon (Member) commented on the diff:
Can we make appending :z conditional on SELinux being enabled?
@rhatdan thoughts?

haircommander (Collaborator, Author) replied:

@mheon honestly I'm not even happy with this solution. The k8s API isn't expressive enough for what we need here (what if a user wants ro?). I think I'm going to default to appending :z unless the user specifies an option in the path (<ctr_path>:z) in the YAML. That's the only way I can think of to add the correct permission without messing with the k8s API...
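
For illustration, a minimal sketch of the suffix-defaulting idea floated in this comment. This approach was ultimately not merged, and the function name is hypothetical:

package main

import (
	"fmt"
	"strings"
)

// volumeString builds the host:container volume string, defaulting to the
// ":z" (shared relabel) option unless the YAML author already appended an
// option to the container path, e.g. "/data:ro" or "/data:z".
func volumeString(hostPath, mountPath string) string {
	if strings.Contains(mountPath, ":") {
		return fmt.Sprintf("%s:%s", hostPath, mountPath)
	}
	return fmt.Sprintf("%s:%s:z", hostPath, mountPath)
}

func main() {
	fmt.Println(volumeString("/srv/data", "/data"))    // /srv/data:/data:z
	fmt.Println(volumeString("/srv/data", "/data:ro")) // /srv/data:/data:ro
}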

rhatdan (Member) replied:

This should be handled automatically by the command. The default for volumes is to create them with a shared label. If I create a volume, there is a good chance I will want more than one container to use it.

So there is no need to pass in :z. When the tool creates the volume, it should be labeled correctly.
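
For context on the options being debated: with the go-selinux label package, :z corresponds to a shared relabel (several containers may use the content) while :Z corresponds to a private one. A minimal sketch, assuming github.com/opencontainers/selinux/go-selinux/label; the wrapper name is hypothetical:

package main

import (
	"fmt"

	"github.com/opencontainers/selinux/go-selinux/label"
)

// relabelVolume applies mountLabel to path the way the :z/:Z volume options
// do: shared=true (:z) lets several containers share the content, while
// shared=false (:Z) scopes it to a single container's MCS category pair.
func relabelVolume(path, mountLabel string, shared bool) error {
	if err := label.Relabel(path, mountLabel, shared); err != nil {
		return fmt.Errorf("relabeling %s: %w", path, err)
	}
	return nil
}

func main() {
	// a shared label, as "podman volume create" would apply by default
	if err := relabelVolume("/srv/data", "system_u:object_r:container_file_t:s0", true); err != nil {
		fmt.Println(err)
	}
}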

@haircommander changed the title from "WIP Add volume permissions in play kube" to "Add volume permissions in play kube" on Mar 13, 2019
@openshift-ci-robot removed the do-not-merge/work-in-progress label on Mar 13, 2019
@haircommander (Collaborator, Author) commented Mar 13, 2019:

Forgot to remove WIP. This is good to go. PTAL

(edit: upon further inspection I have deemed that to be a lie)

@haircommander changed the title from "Add volume permissions in play kube" to "WIP Add volume permissions in play kube" on Mar 13, 2019
@openshift-ci-robot added the do-not-merge/work-in-progress label on Mar 13, 2019
@rhatdan changed the title from "WIP Add volume permissions in play kube" to "Add volume permissions in play kube" on Mar 13, 2019
@openshift-ci-robot removed the do-not-merge/work-in-progress label on Mar 13, 2019

@@ -28,6 +29,7 @@ import (
 const (
 	// https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
 	createDirectoryPermission = 0755
+	defaultMountLabel         = "z"
A Member commented on the diff:

Not used?

@rhatdan (Member) commented Mar 21, 2019:

@haircommander Are you going to update this one?

@haircommander (Collaborator, Author) replied:

@rhatdan so, it turns out I don't think DirectoryOrCreate is something we should support right now. We don't have a container's mount label until the container is created, so we'd have to go through all of the volumes a second time and relabel. I think we should do more thinking on how DirectoryOrCreate would work, if someone is looking for support for it.

@haircommander changed the title from "Add volume permissions in play kube" to "Remove DirectoryOrCreate option from play kube" on Mar 21, 2019
@rhatdan (Member) commented Mar 21, 2019:

If a named volume is created, it should be labeled with the shared label. This is what happens when you execute

podman volume create foobar

So if podman play creates a volume, it should create it the same way.

@cevich (Member) commented Mar 21, 2019:

(Last test failures were due to a known certificate problem; rebase against latest master and they should run better.)

@haircommander changed the title from "Remove DirectoryOrCreate option from play kube" to "Default to SELinux shared label for play kube mounts" on Mar 22, 2019
@haircommander (Collaborator, Author):

Okay @rhatdan: per a conversation with @mheon on IRC, we have decided this is the correct solution. We can't call label.Relabel, as we don't have the container's mount label until after it's created, and it would complicate the code to do it afterwards. PTAL

@haircommander force-pushed the hotfix_play_kube branch 3 times, most recently from b2fac48 to c5b64fd on March 22, 2019 at 18:46
@haircommander (Collaborator, Author):

@mheon @rhatdan PTAL

@ikke-t commented Mar 23, 2019:

I'll record my latest failure with the mounts here:

The awxweb container can't access its exported share. Are the mounts done correctly by podman?

[root@ohuska exported_volumes]# podman top awxweb user huser pid label args
USER   HUSER   PID   LABEL                                         COMMAND
1000   1000    1     system_u:system_r:container_t:s0:c719,c893   /tini -- /bin/sh -c /usr/bin/launch_awx.sh 
1000   1000    8     system_u:system_r:container_t:s0:c719,c893   bash /usr/bin/launch_awx.sh 
1000   1000    123   system_u:system_r:container_t:s0:c719,c893   /usr/bin/python2 /usr/bin/supervisord -c /supervisor.conf 
1000   1000    126   system_u:system_r:container_t:s0:c719,c893   python /usr/bin/config-watcher 
1000   1000    127   system_u:system_r:container_t:s0:c719,c893   /var/lib/awx/venv/awx/bin/python3 /var/lib/awx/venv/awx/bin/daphne -b 127.0.0.1 -p 8051 awx.asgi:channel_layer 
1000   1000    128   system_u:system_r:container_t:s0:c719,c893   nginx: master process nginx -g daemon off; 
1000   1000    129   system_u:system_r:container_t:s0:c719,c893   /var/lib/awx/venv/awx/bin/uwsgi --socket 127.0.0.1:8050 --module=awx.wsgi:application --vacuum --processes=5 --harakiri=120 --no-orphans --master --max-requests=1000 --master-fifo=/var/lib/awx/awxfifo --lazy-apps -b 32768 
1000   1000    130   system_u:system_r:container_t:s0:c719,c893   nginx: worker process 
1000   1000    131   system_u:system_r:container_t:s0:c719,c893   /var/lib/awx/venv/awx/bin/uwsgi --socket 127.0.0.1:8050 --module=awx.wsgi:application --vacuum --processes=5 --harakiri=120 --no-orphans --master --max-requests=1000 --master-fifo=/var/lib/awx/awxfifo --lazy-apps -b 32768 
1000   1000    132   system_u:system_r:container_t:s0:c719,c893   /var/lib/awx/venv/awx/bin/uwsgi --socket 127.0.0.1:8050 --module=awx.wsgi:application --vacuum --processes=5 --harakiri=120 --no-orphans --master --max-requests=1000 --master-fifo=/var/lib/awx/awxfifo --lazy-apps -b 32768 
1000   1000    133   system_u:system_r:container_t:s0:c719,c893   /var/lib/awx/venv/awx/bin/uwsgi --socket 127.0.0.1:8050 --module=awx.wsgi:application --vacuum --processes=5 --harakiri=120 --no-orphans --master --max-requests=1000 --master-fifo=/var/lib/awx/awxfifo --lazy-apps -b 32768 
1000   1000    134   system_u:system_r:container_t:s0:c719,c893   /var/lib/awx/venv/awx/bin/uwsgi --socket 127.0.0.1:8050 --module=awx.wsgi:application --vacuum --processes=5 --harakiri=120 --no-orphans --master --max-requests=1000 --master-fifo=/var/lib/awx/awxfifo --lazy-apps -b 32768 
1000   1000    135   system_u:system_r:container_t:s0:c719,c893   /var/lib/awx/venv/awx/bin/uwsgi --socket 127.0.0.1:8050 --module=awx.wsgi:application --vacuum --processes=5 --harakiri=120 --no-orphans --master --max-requests=1000 --master-fifo=/var/lib/awx/awxfifo --lazy-apps -b 32768 

Here you can see the mount points from the host side; it's awx-data that is prevented from access:

[root@ohuska exported_volumes]# ls -laZ
total 88
drwxr-xr-x. 19               33 root  unconfined_u:object_r:container_var_lib_t:s0     4096 Mar 24 00:20 .
drwx------.  5 root             root  unconfined_u:object_r:container_var_lib_t:s0     4096 Feb  6 19:19 ..
drwxr-xr-x.  2             1000 root  unconfined_u:object_r:container_var_lib_t:s0     4096 Mar 24 00:20 awx-data
drwx------. 19 systemd-coredump root  system_u:object_r:container_file_t:s0            4096 Mar 24 00:21 awx-pg
drwxr-xr-x.  2 root             root  unconfined_u:object_r:container_var_lib_t:s0     4096 Mar 18 07:00 backup
drwxr-xr-x.  3              472 root  system_u:object_r:container_file_t:s0:c468,c880  4096 Sep  6  2018 grafana-etc
drwxr-xr-x.  5              472 root  system_u:object_r:container_file_t:s0:c468,c880  4096 Mar 24 00:26 grafana-lib
drwxrwxrwx. 15               33 ikke  system_u:object_r:container_file_t:s0:c15,c770   4096 Mar  3 13:19 nextcloud_www

Note how nextcloud and the others have MCS category numbers in their SELinux labels, whereas the awx directory does not have the same categories that the processes inside the containers have.

The yml file for this is here:
#2752

where you can see how the mounts are done. It seems like podman won't create correct labels for existing dirs now. The data was moved from a docker backup, which is why they are not empty dirs.

@ikke-t commented Mar 26, 2019:

Please ignore my previous comment. I found the user error with directory permissions.

@mheon (Member) commented Mar 27, 2019:

/approve

@openshift-ci-robot:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: haircommander, mheon

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Mar 27, 2019
@mheon (Member) commented Mar 27, 2019:

Code LGTM on my side
@rhatdan PTAL


// LabelVolumePath takes a mount path for a volume and gives it an
// selinux label of either shared or not
func LabelVolumePath(path string, shared bool) error {
rhatdan (Member) commented on the diff:

Why are you passing in shared, and never using it?

haircommander (Collaborator, Author) replied:

ah was supposed to be using it, fixed

@rhatdan (Member) commented Mar 28, 2019:

Remove the shared param, and then it LGTM

@haircommander (Collaborator, Author):

I opted to actually use it instead. PTAL

@rhatdan (Member) commented Mar 28, 2019:

LGTM

return errors.Wrapf(err, "error releasing label %q", mountLabel)
}
if err := label.Relabel(path, mountLabel, shared); err != nil {
return errors.Wrapf(err, "error setting selinux label for %s to %q", path, mountLabel)
A Member commented on the diff:

It might be good to work the shared value into the error here, in case it makes a difference.
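
Piecing the two fragments of this diff together, the full helper plausibly reads as below. This is a reconstruction for readability, not the verbatim merged code; label here is github.com/opencontainers/selinux/go-selinux/label and errors is github.com/pkg/errors:

// LabelVolumePath takes a mount path for a volume and gives it an
// SELinux label of either shared or not
func LabelVolumePath(path string, shared bool) error {
	// Obtain a fresh default mount label, then release it immediately:
	// only the label string is needed, not a reserved MCS category pair.
	_, mountLabel, err := label.InitLabels([]string{})
	if err != nil {
		return errors.Wrapf(err, "error getting default mount label")
	}
	if err := label.ReleaseLabel(mountLabel); err != nil {
		return errors.Wrapf(err, "error releasing label %q", mountLabel)
	}
	if err := label.Relabel(path, mountLabel, shared); err != nil {
		return errors.Wrapf(err, "error setting selinux label for %s to %q", path, mountLabel)
	}
	return nil
}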

@TomSweeneyRedHat (Member):

One suggestion for consideration; also, tests aren't happy atm. Code LGTM otherwise.

@mheon (Member) commented Mar 28, 2019:

Both tests seem like flakes; restarting.

@@ -299,6 +304,9 @@ func kubeContainerToCreateConfig(ctx context.Context, containerYAML v1.Container
 	if err := shared.ValidateVolumeCtrDir(volume.MountPath); err != nil {
 		return nil, errors.Wrapf(err, "error in parsing MountPath")
 	}
+	// At this point, we know the name of the host and the container path, but we don't know the SELinux MountLabel of the container.
+	// Therefore, we can't set the permissions of the mount here. Instead, mark the mount as shared (sharedMountPermission),
+	// and allow libpod to correctly set the permissions once the container is created
 	containerConfig.Volumes = append(containerConfig.Volumes, fmt.Sprintf("%s:%s", host_path, volume.MountPath))
A Member commented on the diff:

I'm not seeing sharedMountPermission anywhere here, though?

haircommander (Collaborator, Author) replied:

oof out of date comment

Before, there were SELinux denials when a volume was bind-mounted by podman play kube.
Partially fix this by setting the default private label for mounts created by play kube (with DirectoryOrCreate).
For volumes mounted as Directory, the user will have to set their own SELinux permissions on the mount point.

Also remove a left-over debugging print statement.

Signed-off-by: Peter Hunt <[email protected]>
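
A sketch of the hostPath handling the commit message describes: only DirectoryOrCreate mounts are created and labeled automatically. The helper name is hypothetical; createDirectoryPermission comes from the diff above, v1 is k8s.io/api/core/v1, and LabelVolumePath is the libpod helper reviewed earlier:

// createHostPathIfNeeded sketches the behavior described in the commit
// message: DirectoryOrCreate mounts get created and given the default
// (private) label; plain Directory mounts are left to the user to relabel.
func createHostPathIfNeeded(hostPath *v1.HostPathVolumeSource) error {
	if hostPath.Type == nil || *hostPath.Type != v1.HostPathDirectoryOrCreate {
		return nil
	}
	if _, err := os.Stat(hostPath.Path); os.IsNotExist(err) {
		if err := os.Mkdir(hostPath.Path, createDirectoryPermission); err != nil {
			return errors.Wrapf(err, "error creating directory %s", hostPath.Path)
		}
		// shared=false yields the private label the PR title refers to
		if err := libpod.LabelVolumePath(hostPath.Path, false); err != nil {
			return errors.Wrapf(err, "error giving directory %s a label", hostPath.Path)
		}
	}
	return nil
}
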
@mheon (Member) commented Mar 28, 2019:

bot, retest this please
/lgtm

@openshift-ci-robot added the lgtm label (indicates that a PR is ready to be merged) on Mar 28, 2019
@rhatdan (Member) commented Mar 29, 2019:

bot, retest this please

@openshift-merge-robot merged commit 9b78935 into containers:master on Mar 29, 2019
@github-actions bot added the locked - please file new issue/PR label on Sep 26, 2023
@github-actions bot locked as resolved and limited conversation to collaborators on Sep 26, 2023