
[Feature]: Set size limit for auto tmpfs mounts in systemd mode #17037

Closed
rahbari opened this issue Jan 9, 2023 · 10 comments · Fixed by #17207
Labels
Good First Issue This issue would be a good issue for a first time contributor to undertake. kind/feature Categorizes issue or PR as related to a new feature. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

rahbari commented Jan 9, 2023

Feature request description

When starting a container in systemd mode, podman mounts tmpfs file systems on directories such as /run and /run/lock, sized at half the host system memory.

[screenshot of mount output]

Is there any way to limit this size, as is possible for /dev/shm? This effectively makes --memory= and --storage-opt size= meaningless, because apps inside the container can use more memory and disk space than those limits while the container is running.
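As a possible manual workaround (not from this thread; it assumes podman's documented `--mount type=tmpfs` support with its `tmpfs-size` option), the automatic mounts can be shadowed with explicitly size-limited tmpfs mounts:

```shell
# Sketch of a workaround: override the auto-mounted tmpfs directories with
# explicit size-limited tmpfs mounts. tmpfs-size is a documented option of
# podman run --mount; the 64m/16m values here are illustrative, not defaults.
podman run --systemd=always \
  --mount type=tmpfs,destination=/run,tmpfs-size=64m \
  --mount type=tmpfs,destination=/run/lock,tmpfs-size=16m \
  fedora mount | grep -E ' /run(/lock)? '
```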

@rahbari rahbari added the kind/feature Categorizes issue or PR as related to a new feature. label Jan 9, 2023
mheon (Member) commented Jan 9, 2023

Easy enough to do; we would just need to figure out good sizes for the extra mounts.

rahbari (Author) commented Jan 9, 2023

Is it possible to set it now? At the moment, even though the limit is shared, a guest can fill half of the host memory.
It would be nice for this to be configurable with a minimum limit, perhaps based on the defined --memory.

mheon (Member) commented Jan 9, 2023

No, but the code changes are very easy (just add a mount option to https://github.com/containers/podman/blob/main/libpod/container_internal_linux.go#L217 and to the mount structs in that function). Making it configurable would be a bit more complex, as it would need a containers.conf field, but it is also not particularly difficult.

rhatdan (Member) commented Jan 9, 2023

First, when run with a memory cgroup, a container can only use half of the available memory across all tmpfs mounts combined, not half of system memory.

Each mount does not get half individually; all tmpfs mounts within the memory cgroup together cannot exceed 50%.

rahbari (Author) commented Jan 10, 2023

@rhatdan yes, I know, but any container can fill that space, which is not good. I created a 30 GB file in one of those directories in a container with a 1 GB memory limit and a 1 GB disk limit, and half of the system's memory was filled.
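The scenario described above can be reproduced roughly as follows (a sketch of the report, not a command from this thread; per the report, the write into the auto-mounted tmpfs consumed host memory well beyond the container's configured limits):

```shell
# Sketch reproducing the report: a container with a 1 GB memory limit
# writes a ~30 GB file into the auto-mounted /run tmpfs, which is sized
# by default at half the host's RAM rather than by the container limits.
podman run --systemd=always --memory=1g --rm fedora \
  dd if=/dev/zero of=/run/bigfile bs=1M count=30000
```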

rhatdan (Member) commented Jan 10, 2023

So you want a --systemd-shm-size option, which would be applied to all tmpfs mounts created via the systemd flag and ignored otherwise?

rhatdan (Member) commented Jan 10, 2023

# podman run --systemd=always fedora mount | grep tmpfs.*rw
tmpfs on /dev type tmpfs (rw,nosuid,noexec,context="system_u:object_r:container_file_t:s0:c472,c905",size=65536k,mode=755,inode64)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,relatime,context="system_u:object_r:container_file_t:s0:c472,c905",inode64)
tmpfs on /run type tmpfs (rw,nosuid,nodev,relatime,context="system_u:object_r:container_file_t:s0:c472,c905",inode64)
tmpfs on /etc/hostname type tmpfs (rw,seclabel,size=13120028k,nr_inodes=819200,mode=755,inode64)
tmpfs on /run/.containerenv type tmpfs (rw,seclabel,size=13120028k,nr_inodes=819200,mode=755,inode64)
tmpfs on /run/secrets type tmpfs (rw,seclabel,size=13120028k,nr_inodes=819200,mode=755,inode64)
shm on /dev/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,context="system_u:object_r:container_file_t:s0:c472,c905",size=64000k,inode64)
tmpfs on /etc/resolv.conf type tmpfs (rw,seclabel,size=13120028k,nr_inodes=819200,mode=755,inode64)
tmpfs on /etc/hosts type tmpfs (rw,seclabel,size=13120028k,nr_inodes=819200,mode=755,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,relatime,context="system_u:object_r:container_file_t:s0:c472,c905",inode64)
tmpfs on /var/log/journal type tmpfs (rw,nosuid,nodev,relatime,context="system_u:object_r:container_file_t:s0:c472,c905",inode64)
devtmpfs on /proc/kcore type devtmpfs (rw,seclabel,size=4096k,nr_inodes=1048576,mode=755,inode64)
devtmpfs on /proc/keys type devtmpfs (rw,seclabel,size=4096k,nr_inodes=1048576,mode=755,inode64)
devtmpfs on /proc/latency_stats type devtmpfs (rw,seclabel,size=4096k,nr_inodes=1048576,mode=755,inode64)
devtmpfs on /proc/timer_list type devtmpfs (rw,seclabel,size=4096k,nr_inodes=1048576,mode=755,inode64)

These are the read/write tmpfs mounts available in --systemd=always mode.

rahbari (Author) commented Jan 19, 2023

@rhatdan yes, that would be great.
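For reference, the fix linked above (#17207) appears to have landed as a --shm-size-systemd option on podman run; assuming that flag, capping the systemd-specific tmpfs mounts would look like this:

```shell
# Sketch assuming the --shm-size-systemd flag from the linked fix: cap the
# systemd-specific tmpfs mounts (e.g. /run, /run/lock) at 64 MB and verify.
podman run --systemd=always --shm-size-systemd=64m fedora mount | grep 'tmpfs.*rw'
```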

@rhatdan rhatdan added the Good First Issue This issue would be a good issue for a first time contributor to undertake. label Jan 19, 2023
danishprakash (Contributor) commented:

@rhatdan mind if I take a look at this?

rhatdan (Member) commented Jan 20, 2023

I love volunteers.

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 1, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 1, 2023