
podman run: gives error while loading shared libraries: libc.so.6: cannot change memory protections #3234

Closed
sinnykumari opened this issue May 30, 2019 · 46 comments
Labels
locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@sinnykumari

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

kind bug

Description
podman run gives error while trying to run a container

# podman run -i -t registry.fedoraproject.org/fedora bash
bash: error while loading shared libraries: libtinfo.so.6: cannot change memory protections

Describe the results you expected:
podman should run container and give bash prompt inside container

Additional information you deem important (e.g. issue happens only occasionally):

Tried a few things, but none of them fixed it:

  • Reinstalled container-selinux package and restorecon -R -v /var/lib/containers
  • Reinstalled podman and ran restorecon -R -v /var/lib/containers
  • Removed everything from /var/lib/containers and /home/root/containers/

Note: podman run works with SELinux set to permissive

Output of podman version: podman-1.2.0-2.git3bd528e.fc29.aarch64

# podman version
Version:            1.2.0
RemoteAPI Version:  1
Go Version:         go1.11.5
OS/Arch:            linux/arm64

Output of podman info --debug:

# podman info --debug
debug:
  compiler: gc
  git commit: ""
  go version: go1.11.5
  podman version: 1.2.0
host:
  BuildahVersion: 1.7.2
  Conmon:
    package: podman-1.2.0-2.git3bd528e.fc29.aarch64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.12.0-dev, commit: a5b8d77e006ee972d9bbfd37699da552c934e33a'
  Distribution:
    distribution: fedora
    version: "29"
  MemFree: 15248670720
  MemTotal: 16781996032
  OCIRuntime:
    package: runc-1.0.0-93.dev.gitb9b6cc6.fc29.aarch64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc8+dev
      commit: b8b7b8ec668cd816610ec7be29cf2cef2b62c8ae
      spec: 1.0.1-dev
  SwapFree: 8480878592
  SwapTotal: 8480878592
  arch: arm64
  cpus: 8
  hostname: apm-mustang-ev3-04.lab.eng.brq.redhat.com
  kernel: 5.0.17-200.fc29.aarch64
  os: linux
  rootless: false
  uptime: 59m 36.72s
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 2
  GraphDriverName: overlay
  GraphOptions:
  - overlay.mountopt=nodev
  GraphRoot: /home/root/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 1
  RunRoot: /home/root/containers/storage
  VolumePath: /home/root/containers/storage/volumes

Additional environment details (AWS, VirtualBox, physical, etc.):

  • F29, Physical machine (X-Gene Mustang Board), aarch64
@mheon
Member

mheon commented May 30, 2019

Alright, we're probably getting an AVC. Can you retrieve it? If you run the failing podman command again and immediately journalctl -b 0 | grep AVC you should be able to grab it. If you can pastebin it, we can figure out how to adjust SELinux to prevent this.
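For later readers: the useful fields in an AVC line are scontext (the label of the container process) and tcontext (the label of the file it was denied access to). A small shell sketch, using the AVC line posted further down this thread as sample data:

```shell
# Illustrative AVC line (sample data, not live output); real lines come
# from `journalctl -b 0 | grep AVC` or `ausearch -m avc -ts recent`.
avc_line='AVC avc: denied { read } for pid=5944 comm="bash" path="/usr/lib64/libtinfo.so.6.1" scontext=system_u:system_r:container_t:s0:c288,c778 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=0'

# Extract the source (process) and target (file) SELinux contexts.
scontext=$(echo "$avc_line" | grep -o 'scontext=[^ ]*')
tcontext=$(echo "$avc_line" | grep -o 'tcontext=[^ ]*')
echo "$scontext"   # what the confined container process is running as
echo "$tcontext"   # what label the blocked file actually carries
```

Here the tcontext of user_home_t on a system library is the smoking gun: the file is labeled as if it lived in a home directory.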

@rhatdan
Member

rhatdan commented May 30, 2019

Is this using an arm image? Or did it pull an X86 image?

@jcajka

jcajka commented May 30, 2019

@rhatdan aarch64 image, seen the same thing there. Also for the record I have done relabel there.

@rhatdan
Member

rhatdan commented May 30, 2019

Well, it could be SELinux. @jcajka, does it work if you do setenforce 0?

@sinnykumari
Author

sinnykumari commented May 30, 2019

yes, it works with setenforce 0

@rhatdan
Member

rhatdan commented May 30, 2019

Then can you get the AVC messages either by executing

ausearch -m avc -ts recent

Or
journalctl -b | grep -i avc

@rhatdan
Member

rhatdan commented May 30, 2019

Did you change the location of container/storage?

@sinnykumari
Author

journalctl -b 0 | grep AVC

Output immediately after running the failing command:

May 30 15:19:00 apm-mustang-ev3-04.lab.eng.brq.redhat.com audit[5944]: AVC avc:  denied  { read } for  pid=5944 comm="bash" path="/usr/lib64/libtinfo.so.6.1" dev="dm-2" ino=23727484 scontext=system_u:system_r:container_t:s0:c288,c778 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=0

@sinnykumari
Author

Did you change the location of container/storage?

Don't think so. Is there a way to check that? I already removed contents from /var/lib/containers and /home/root/containers/ but no luck

@rhatdan
Member

rhatdan commented May 30, 2019

What does podman info state?

@rhatdan
Member

rhatdan commented May 30, 2019

Basically, this AVC indicates that /usr/lib64/libtinfo.so.6.1 is labeled as if it were stored in a user's home directory (under /home).

@sinnykumari
Author

What does podman info state?

podman info output is in #3234 (comment)

@sinnykumari
Author

Basically, this AVC indicates that /usr/lib64/libtinfo.so.6.1 is labeled as if it were stored in a user's home directory (under /home).

$ ls -lZ /usr/lib64/libtinfo.so.6.1 
-rwxr-xr-x. 1 root root system_u:object_r:lib_t:s0 215712 Sep 25  2018 /usr/lib64/libtinfo.so.6.1

@rhatdan
Member

rhatdan commented May 31, 2019

OK, I missed this up front. If you are going to move container storage to another location, you will need to fix the labels.

/home/root/containers/storage

# semanage fcontext -a -e /var/lib/containers /home/root/containers
# restorecon -R -v /home/root/containers

Also you should not move RunRoot, just leave it on /run.

RunRoot: /home/root/containers/storage

The default location is available to root users, and keeping the RunRoot on a tmpfs means a reboot cleans it out:

RunRoot: /var/run/containers/storage
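The fix above can be sketched as one reviewable script. The run helper and DRY_RUN guard are hypothetical conveniences (not semanage or restorecon options) so the commands can be inspected before running them for real as root:

```shell
# Relabeling fix for a graph root moved to /home/root/containers.
# DRY_RUN=1 prints each command instead of executing it.
NEW_ROOT=/home/root/containers
DRY_RUN=1
run() { [ "$DRY_RUN" = 1 ] && echo "$@" || "$@"; }

# Tell SELinux the new location is labeled equivalently to the default...
run semanage fcontext -a -e /var/lib/containers "$NEW_ROOT"
# ...then apply the labels recursively.
run restorecon -R -v "$NEW_ROOT"
```

With DRY_RUN=0 and root privileges this applies the equivalence; matchpathcon on the new path should then report container labels (container_var_lib_t) instead of user_home_t.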

@rhatdan
Member

rhatdan commented May 31, 2019

This is not a bug, but a configuration issue.

@rhatdan rhatdan closed this as completed May 31, 2019
@tremes

tremes commented Jul 15, 2021

Hi,
I just encountered this issue after upgrading to Fedora 33. I tried the commands mentioned here (and in https://github.com/containers/podman/blob/main/troubleshooting.md#11-changing-the-location-of-the-graphroot-leads-to-permission-denied) to change the labels, but it didn't help. What am I missing, please?

I didn't change anything in my config. The graphRoot is /home/tremes/.local/share/containers/storage and volumePath is /home/tremes/.local/share/containers/storage/volumes. Is that OK?

@phhu

phhu commented Jul 15, 2021

I have the same issue, also on Fedora 33. Same paths as tremes above; same error "/bin/sh: error while loading shared libraries: libc.so.6: cannot change memory protections" when running podman run -i -p 8080:80/tcp docker.io/library/httpd. It works with SELinux disabled, and when running as root instead of my own user account. Not sure what to do.

journalctl -b 0 | grep AVC gives:

AVC avc: denied { read } for pid=16198 comm="httpd-foregroun" path="/lib/x86_64-linux-gnu/libc-2.28.so" dev="dm-0" ino=5068683 scontext=system_u:system_r:container_t:s0:c12,c883 tcontext=unconfined_u:object_r:data_home_t:s0 tclass=file permissive=0

@rhatdan
Member

rhatdan commented Jul 15, 2021

Looks like container-selinux-2.163.0-2.fc33 was never built for F33; you will need it with the latest kernel.

https://koji.fedoraproject.org/koji/taskinfo?taskID=71960590

@dustymabe
Contributor

https://koji.fedoraproject.org/koji/taskinfo?taskID=71960590

can we get that in a bodhi update?

@rhatdan
Member

rhatdan commented Jul 15, 2021

Way ahead of you.
https://bodhi.fedoraproject.org/updates/FEDORA-2021-862d1936a6

@tremes

tremes commented Jul 16, 2021

Nice. Thank you @rhatdan.

@jonasbartho

jonasbartho commented Jul 22, 2021

@rhatdan, I am experiencing the same on fedora 33(kernel 5.13.4) running rootless podman v3.1.2 in combination with container-selinux-2.160.2-1.fc33.noarch.

I also get the following when trying to build with rootless podman:
Error relocating /lib/ld-musl-x86_64.so.1: RELRO protection failed: Permission denied
Error relocating /bin/sh: RELRO protection failed: Permission denied

Are we obliged to download container-selinux-2.163.0-2.fc33 through bodhi/koji to fix this in fedora33 or is a newer version of that package going to be released soon in the official repos?

@rhatdan
Member

rhatdan commented Jul 22, 2021

You should be able to get it via updates-testing, and it should be available soon in release.

@rhatdan
Member

rhatdan commented Jul 22, 2021

Looks like it will be pushed to stable in 5 days.

@jonasbartho

Super, thanks!

@stbischof

I still have this on fedora-silverblue rawhide

Fri 23 Jul 2021 14:23:07 CEST

podman run -it --rm -p 27017:27017 mongo  
/bin/bash: error while loading shared libraries: libtinfo.so.5: cannot change memory protections

@rhatdan
Member

rhatdan commented Jul 24, 2021

Could you run restorecon -R -v $HOME/.local/share/containers

This might be a problem on Silverblue or any rpm-ostree based OS, since rpm post-install scripts do not run there.
Basically, container-selinux had to fix labels in users' home directories because of a change in the Linux kernel.

@stbischof

Sorry, no. Switched to Ubuntu yesterday.

@torvitas

torvitas commented Jul 28, 2021

I am running Fedora 33 as well. I had the same issue after doing a dnf update yesterday. I just did another dnf update today, and everything is back to normal. I can confirm that container-selinux did a restorecon on almost the whole system.

Though I have to admit that I can feel the pain @stbischof has.
Things like that made me switch from arch to fedora. Things like that made me wait several months after a fedora release before actually upgrading. Having issues like that on my work-desktop by simply updating somehow worries me. I mean that's the default container runtime on fedora, isn't it? How could that get past the quality gates?

Almost forgot to thank you @rhatdan for quickly fixing the issue!

@rhatdan
Member

rhatdan commented Jul 29, 2021

This was caused by a kernel update allowing for a new feature. We saw this coming, and fixed it in F34 and Rawhide, before it hit, or as soon as it hit. We had a fix for this in F33, but the package was not building, and no one noticed it until people started complaining.

@nccurry

nccurry commented Feb 17, 2022

Based on the discussion in bugzilla 1868590 executing the following fixed this issue on Fedora 35 for me:

# Note: This will reset your podman configuration to the default
$ podman system reset

@rawfoxDE

That fixed the issue for me, thanks a ton ^^

@pjstirling

I'm just starting out with podman, and I got this message on fedora 35, upgraded to 36, and am still seeing it:

[peter@fedora local-projects]$ podman system reset
WARNING! This will remove:
- all containers
- all pods
- all images
- all networks
- all build cache
- all machines
Are you sure you want to continue? [y/N] y
[peter@fedora local-projects]$ podman run -t fedora
Resolved "fedora" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull registry.fedoraproject.org/fedora:latest...
Getting image source signatures
Copying blob 62946078034b done
Copying config 2ecb6df959 done
Writing manifest to image destination
Storing signatures
/bin/bash: error while loading shared libraries: /lib64/libc.so.6: cannot apply additional memory protection after relocation: Permission denied
[peter@fedora local-projects]$ journalctl -b 0 | grep AVC
Aug 21 17:30:45 fedora audit[71642]: AVC avc: denied { read } for pid=71642 comm="bash" path="/usr/lib64/libc.so.6" dev="dm-0" ino=16845778 scontext=system_u:system_r:container_t:s0:c208,c770 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=0
Aug 21 17:35:26 fedora audit[71832]: AVC avc: denied { read } for pid=71832 comm="bash" path="/usr/lib64/libc.so.6" dev="dm-0" ino=17116473 scontext=system_u:system_r:container_t:s0:c76,c467 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=0

@mheon
Member

mheon commented Aug 22, 2022

@rhatdan PTAL

@rhatdan
Member

rhatdan commented Aug 23, 2022

This usually means container-selinux is not properly installed

$ sudo yum reinstall container-selinux
$ restorecon -R -v $HOME

Should fix the problem.

@pjstirling

This usually means container-selinux is not properly installed

$ sudo yum reinstall container-selinux
$ restorecon -R -v $HOME

Should fix the problem.

This did indeed fix things for me, thanks!

@fischer-felix

Hi, I have moved the storage path for rootless podman by setting rootless_storage_path = "/shared/ol9-arm/podman-storage", however now the SELinux labels are incorrect.

I have tried running
sudo semanage fcontext -a -e ~/.local/share/containers /shared/ol9-arm/podman-storage
and then
sudo restorecon -R -vv /shared/ol9-arm/podman-storage
but this did not work.

Is it even possible to have the storage location for rootless podman changed when using SELinux?

@rhatdan
Member

rhatdan commented Sep 17, 2022

What does matchpathcon /shared/ol9-arm/podman-storage show?

What AVCs are you seeing?

@fischer-felix

Everything in /shared/ol9-arm/podman-storage is labeled unconfined_u:object_r:data_home_t:s0 (including the actual directory)

The AVC I get when running sudo ausearch -m avc -ts recent is

time->Sat Sep 17 15:18:50 2022
type=PROCTITLE msg=audit(1663420730.400:140057): proctitle="bash"
type=SYSCALL msg=audit(1663420730.400:140057): arch=c00000b7 syscall=226 success=no exit=-13 a0=ffff8445a000 a1=12000 a2=0 a3=ffff84470910 items=0 ppid=2091754 pid=2091757 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=8 comm="bash" exe="/usr/bin/bash" subj=system_u:system_r:container_t:s0:c178,c541 key=(null)
type=AVC msg=audit(1663420730.400:140057): avc:  denied  { read } for  pid=2091757 comm="bash" path="/usr/lib64/libtinfo.so.6.2" dev="dm-2" ino=109270516 scontext=system_u:system_r:container_t:s0:c178,c541 tcontext=unconfined_u:object_r:data_home_t:s0 tclass=file permissive=0

@fischer-felix

Never mind, I just set the fcontext equivalence incorrectly.

Turns out I set ~/.local/share/containers = /shared/ol9-arm/podman-storage

This meant that when running sudo restorecon -R -F /shared/ol9-arm/podman-storage/, the descent into the directories did not work correctly and everything was labelled unconfined_u:object_r:data_home_t:s0, when in actuality overlay, overlay-images, and overlay-layers should be labelled unconfined_u:object_r:container_ro_file_t:s0.

Changing this to /shared/ol9-arm/podman-storage = /home/opc/.local/share/containers/storage and running restorecon again fixed the problem.
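For anyone hitting the same trap: the equivalence has to map the new path to the exact default directory it replaces (.../containers/storage, not its parent .../containers). A sketch using the paths from this comment, echoed rather than executed so the rule can be checked first:

```shell
# Paths from this comment: the relocated rootless storage and the
# default directory it replaces. The equivalence must target the exact
# directory being substituted (.../containers/storage, not .../containers).
NEW_PATH=/shared/ol9-arm/podman-storage
DEFAULT_PATH=/home/opc/.local/share/containers/storage

# Printed for review; drop the echo (and run as root) to apply.
echo semanage fcontext -a -e "$DEFAULT_PATH" "$NEW_PATH"
echo restorecon -R -F "$NEW_PATH"
```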

@Timmmm

Timmmm commented Jan 9, 2023

I'm using RHEL 8.7 and get this issue. I tried this:

sudo dnf reinstall container-selinux
podman system reset # Necessary otherwise restorecon complains about lack of permissions for some files.
restorecon -R -v $HOME

Unfortunately it still fails:

> docker run --rm --name buildfarm-redis -p 6379:6379 redis:5.0.9
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
Resolved "redis" as an alias (/home/codasip.com/timothy.hutt/.cache/containers/short-name-aliases.conf)
...
Storing signatures
/bin/sh: error while loading shared libraries: libc.so.6: cannot change memory protections

Is there anything else to try? I haven't changed any configurations at all except adding myself to subuid/subgid and enabling user namespaces.

@Timmmm

Timmmm commented Jan 9, 2023

Ah the workaround listed here worked. Basically disable SELinux.

sudo setenforce 0

Not ideal but it'll do for now.

@rhatdan
Member

rhatdan commented Mar 3, 2023

Please don't jump onto an issue that is years old. Guaranteed the homedir is mislabeled.

I would figure
/home/codasip.com/timothy.hutt/

is your homedir? What is its label?
ls -lZ /home/codasip.com/timothy.hutt/.local/share/containers

@rhatdan
Member

rhatdan commented Mar 3, 2023

Open a new discussion, and I will help you fix the labels.

@ksinkar

ksinkar commented Mar 21, 2023

@rhatdan @Timmmm

I was facing the same issue. Just applying the correct SELinux labels does not help.

Background

$ lsb_release

LSB Version:	:core-4.1-amd64:core-4.1-noarch
Distributor ID:	Fedora
Description:	Fedora release 37 (Thirty Seven)
Release:	37
Codename:	ThirtySeven


$ ls -lZ .local/share/containers

drwx------. 1 user user system_u:object_r:container_var_lib_t:s0  50 Mar 21 11:29 cache
drwx------. 1 user user system_u:object_r:container_var_lib_t:s0 222 Mar 21 11:29 storage

$ ls -lZ /var/lib/containers

drwxr-xr-x. 1 root root system_u:object_r:container_var_lib_t:s0 0 Feb 14 14:42 sigstore
drwxr-xr-x. 1 root root system_u:object_r:container_var_lib_t:s0 6 Mar 12 22:06 storage

Even the containers folder has the same label:

drwx------. 1 user user system_u:object_r:container_var_lib_t:s0 24 Mar 21 11:29 containers

Error

$ podman run -p 5000:5000 -it centos bash
bash: error while loading shared libraries: libtinfo.so.6: cannot change memory protections

journalctl output

Mar 21 15:58:43 fedora audit[13656]: AVC avc: denied { read } for pid=13656 comm="bash" path="/usr/lib64/libtinfo.so.6.1" dev="dm-1" ino=49555 scontext=system_u:system_r:container_t:s0:c693,c759 tcontext=unconfined_u:object_r:container_var_lib_t:s0 tclass=file permissive=0
Mar 21 15:58:43 fedora objective_noyce[13654]: bash: error while loading shared libraries: libtinfo.so.6: cannot change memory protections
Mar 21 15:58:43 fedora podman[13620]: 2023-03-21 15:58:43.298953158 +0100 CET m=+0.460734705 container died 86048bdd9672d88751181690ab305657f75da60453df943df718d33bb64d2b8b (image=quay.io/centos/centos:latest, name=objective_noyce, org.label-schema.vendor=CentOS, org.label-schema.build-date=20201204, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Base Image, org.label-schema.schema-version=1.0)
Mar 21 15:58:43 fedora podman[13658]: 2023-03-21 15:58:43.517813776 +0100 CET m=+0.188416488 container cleanup 86048bdd9672d88751181690ab305657f75da60453df943df718d33bb64d2b8b (image=quay.io/centos/centos:latest, name=objective_noyce, org.label-schema.vendor=CentOS, org.label-schema.build-date=20201204, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Base Image, org.label-schema.schema-version=1.0)

Solution

After correctly applying the SELinux labels, please run

podman system reset
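For rootless setups, the relabel-then-reset sequence can be sketched as below; the commands are printed for review rather than executed, since podman system reset removes all containers, images, pods, and networks:

```shell
# Recovery plan for a rootless user, printed for review. Run the two
# commands manually only once you are sure you can lose your existing
# containers and images.
plan() {
  printf '%s\n' \
    "restorecon -R -v $HOME/.local/share/containers" \
    "podman system reset"
}
plan
```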

@rhatdan
Member

rhatdan commented Mar 21, 2023

Please open a new discussion.

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Aug 29, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Aug 29, 2023