podman run --device-cgroup-rule option is not honored #10302

Closed
vikas-goel opened this issue May 10, 2021 · 1 comment · Fixed by #10895
Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@vikas-goel (Contributor) commented:

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

When the --device-cgroup-rule='b 7:* rmw' option is passed to podman run along with the CAP_MKNOD capability, the container is expected to be able to set up loop devices. However, podman does not honor the option. The same option works with Docker.
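For reference, a --device-cgroup-rule value follows the grammar "TYPE MAJOR:MINOR ACCESS" (per the docker/podman run documentation), where TYPE is a, b, or c, MAJOR and MINOR are device numbers or '*', and ACCESS is some combination of r, w, and m. A small bash sketch of that grammar (the valid_cgroup_rule helper is hypothetical, not part of podman):

```shell
# Hypothetical validator for --device-cgroup-rule values.
# "b 7:* rmw" means: block devices with major number 7 (loop devices),
# any minor number, allowing read, write, and mknod.
valid_cgroup_rule() {
  local re='^[abc] ([0-9]+|\*):([0-9]+|\*) [rwm]{1,3}$'
  [[ $1 =~ $re ]]
}

valid_cgroup_rule 'b 7:* rmw' && echo "valid rule"
valid_cgroup_rule 'b 7 rmw'   || echo "rejected: missing minor"
```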

Steps to reproduce the issue:

  1. Start a container with podman run --device-cgroup-rule='b 7:* rmw' --device /dev/loop-control:/dev/loop-control:rwm --cap-add CAP_MKNOD.

  2. Log into the container (podman exec -it <container-name> bash).

  3. Create a virtual block device image with the dd command and create a filesystem on it with mkfs -t xfs.

  4. Create a loop device node with the mknod command.

  5. Attach the image to the loop device with the losetup command.
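The steps above can be consolidated into a single script. This is a sketch only: the container name (loop-test), the image (registry.access.redhat.com/ubi8), and the file path are placeholders, not taken from the report, and it requires root and a working podman install.

```shell
# Step 1: start the container with the cgroup rule, the loop-control device,
# and CAP_MKNOD (names and image are placeholders).
podman run -d --name loop-test \
  --device-cgroup-rule='b 7:* rmw' \
  --device /dev/loop-control:/dev/loop-control:rwm \
  --cap-add CAP_MKNOD \
  registry.access.redhat.com/ubi8 sleep infinity

# Steps 2-5: create a backing file, a filesystem, a loop node, and attach it.
podman exec -it loop-test bash -c '
  dd if=/dev/zero of=/tmp/vg.img count=208896   # ~100 MB backing file
  mkfs -t xfs /tmp/vg.img
  mknod /dev/loop1 b 7 1
  losetup /dev/loop1 /tmp/vg.img   # this is the call that fails with EPERM
'
```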

Describe the results you received:
losetup fails with "failed to set up loop device: Operation not permitted" error.

[root@flex-vm-02 ~]# podman container inspect tme-mas-02 | grep -i device-cgroup-rule
                "--device-cgroup-rule=b 7:* rmw",
[root@flex-vm-02 ~]# podman exec -it tme-mas-02 bash
bash-4.2# dd if=/dev/zero of=/mnt/nblogs/vg.img count=208896
208896+0 records in
208896+0 records out
106954752 bytes (107 MB) copied, 0.412903 s, 259 MB/s
bash-4.2# mkfs -t xfs /mnt/nblogs/vg.img
meta-data=/mnt/nblogs/vg.img     isize=512    agcount=4, agsize=6528 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=26112, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=855, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
bash-4.2# losetup -f
/dev/loop1
bash-4.2# mknod /dev/loop1 b 7 1
bash-4.2# mkdir /mnt/vg
bash-4.2# losetup -f /mnt/nblogs/vg.img
losetup: /mnt/nblogs/vg.img: failed to set up loop device: Operation not permitted
bash-4.2# losetup /dev/loop1 /mnt/nblogs/vg.img
losetup: /dev/loop1: failed to set up loop device: Operation not permitted
bash-4.2#

Describe the results you expected:
losetup should succeed, as it does under Docker:

[root@flex-vm-03 ~]# ls -l /dev/loop*
crw-rw----. 1 root disk 10, 237 May 10 13:24 /dev/loop-control
[root@flex-vm-03 ~]# docker inspect tme-mas-03 | grep -A 2 DeviceCgroupRules
            "DeviceCgroupRules": [
                "b 7:* rmw"
            ],
[root@flex-vm-03 ~]# docker exec -it tme-mas-03 bash
bash-4.2# dd if=/dev/zero of=/mnt/nblogs/vg.img count=208896
208896+0 records in
208896+0 records out
106954752 bytes (107 MB) copied, 0.361704 s, 296 MB/s
bash-4.2# mkfs -t xfs /mnt/nblogs/vg.img
meta-data=/mnt/nblogs/vg.img     isize=512    agcount=4, agsize=6528 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=26112, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=855, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
bash-4.2# losetup -f
/dev/loop0
bash-4.2# mknod /dev/loop0 b 7 0
bash-4.2# losetup -f /mnt/nblogs/vg.img
bash-4.2# losetup -a
/dev/loop0: [64768]:68590193 (/mnt/nblogs/vg.img)

Additional information you deem important (e.g. issue happens only occasionally):
Consistent

Output of podman version:

Version:      3.1.0-dev
API Version:  3.1.0-dev
Go Version:   go1.16.1
Built:        Fri Mar 26 11:32:03 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.19.8
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.26-1.module+el8.4.0+10198+36d1d0e3.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.26, commit: 0a5175681bdd52b99f1f0f442cbba8f8c126a1c9'
  cpus: 8
  distribution:
    distribution: '"rhel"'
    version: "8.4"
  eventLogger: file
  hostname: flex-vm-02.dc2.ros2100.veritas.com
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-293.el8.x86_64
  linkmode: dynamic
  memFree: 7874523136
  memTotal: 33511845888
  ociRuntime:
    name: runc
    package: runc-1.0.0-70.rc92.module+el8.4.0+10198+36d1d0e3.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.2-dev'
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    selinuxEnabled: true
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 16910295040
  swapTotal: 16924012544
  uptime: 137h 24m 14.8s (Approximately 5.71 days)
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 5
    paused: 0
    running: 2
    stopped: 3
  graphDriverName: overlay
  graphOptions:
    overlay2.size: 10G
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 3
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.1.0-dev
  Built: 1616783523
  BuiltTime: Fri Mar 26 11:32:03 2021
  GitCommit: ""
  GoVersion: go1.16.1
  OsArch: linux/amd64
  Version: 3.1.0-dev

Package info (e.g. output of rpm -q podman or apt list podman):

podman-3.1.0-0.13.module_el8.5.0+733+9bb5dffa.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):
Red Hat Enterprise Linux 8.4 Beta
VMware virtual machine

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label May 10, 2021
@github-actions (bot) commented:

A friendly reminder that this issue had no activity for 30 days.

@rhatdan rhatdan self-assigned this Jul 9, 2021
rhatdan added a commit to rhatdan/podman that referenced this issue Jul 21, 2021
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 21, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023