Open mounts all over the place on userdata/shm #9191

Closed
xrow opened this issue Feb 2, 2021 · 44 comments · Fixed by #9240

xrow commented Feb 2, 2021

/kind bug

Description

It seems podman is not able to clean up layers. I have no pod running, but hundreds of mounts. The system gets slow and awkward. A reboot helps.

shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/b7d38b15781e1cdfd7a2801a9782ba08cec8830430269a240b1bff0c0dbdaea4/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/ad9982869d9083252ce5740850c91e09258ccbceab899d0f8b1684341b6ca440/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/32de96aa5678c7d1e828fc9749b2cc2af69fdb0181611a6403336981dcd0599a/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/ab57b11b29913f903cefb10b421cd44be06562bde1c7764b22987650bd0c33ae/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/f9ec808492d5f6f4d2611815612271e56460f62c55690a35aa22b7950faed307/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/1d4c2557fb49999ccafad42fbe322a6eafcc3b97084d656324340e65f736b387/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/6e34cb240ad464f33385ffcecde3d966e6e347021c758192706b1a5090b41540/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/6dc80979b859d678765d14441cebcfa4cdf0a9bd60e3bba79994d996bf48e653/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/56762539661e3db998423486908532e6f7c6600071b149ace96cb330bbf00ed1/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/fd0f46ec33abce37d4cde7cd691824968b77ee5f374e281266e3c17bea55af32/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/91487dcefa77870323cd65583806bfea9a93080c7189fabbae1fa92a1e3aa841/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/95a859fbd42caa2b4df44c03a30144ca55ba5d4c1036c51322c732f19bcbf257/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/ba2d61725cde9a491b46d37a66e96659e4f97d2c36ffce65ce8c9543243f7185/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/324c57cce77485b312a80bc35136786bf8036c1b220e67ce1dcdd7a3c3b614d7/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/312f2ac095322ec827d032c917a60b9c40a8a255b6e95eed3c78c89a0b054f5d/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/f08234d585f29585acd12adb3546d1640109bc32cc9ce8e2cb41b292beaec902/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/a9769d1c0282ac157821a0416cb8b5f6166ffaaae4606a719fc9bc599287834b/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/b6783ec8b9d739c678790ad430e6b8e3becf1d42aca7ee594e1e840b4d715279/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/4a896270c1a0446166dd03bd2c5fd23fc1d2a1a2df73220ca5e3f105b23fdfa6/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/a0f50dbd77d93876e205c9f528c77d1fc0a4391345adda81b97de33560f0bddf/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/e104bb85bbb01fa6a3179532a10ab5b854271b6b58a52b0df90a81dbeb1c51a2/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/65a1bd97a6231c2339bd6598c0431c351c7b047b1785b83727c78109cdf250c5/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/88fc1edb3b4a9962e5661ce7a38849f2880cabec1e5ce33913f0a10d614b592e/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/9eaa1f8ebdd1b2a6ff87ddd5c424c2374f13c867e8be8a9568c054dabc3b472e/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/d5b5184389a2a1a27355cfff2d4a9bcae28d37834c95a163d913f8ba449d5443/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/b0401d42f70ba09365b32156580605ced3180468b6c07396b095d139fb01c2cf/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/cdd98cf0090c536046605d654e7008275c3eeb98cf6a9e06a308adf847b9c4cc/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/3ce771fa961b1d7916cc79c9b00ca9f0f29628cb358b89e5144559a3c261a4b9/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/d9f7ee9e0c16ca61b623e09a4d652a9e643c0ff260019f60be8faeb1b0672a85/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/7e18f3297faa52efab0a16839186b71c4677bd4a943cc01c46d4d636a0587328/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/272e9f457a12c6a6802adc1baacda058f2b697c2650e5779dfab3eb299aa7ce2/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/8a88ccde66efbd4e4bd4745474f7ff1bd42ba7b2461be649603341e18fd79a68/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/f65ec736e995c8d0200f5c04269b5ef32103c63470e7f0ba1fd73e181168b006/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/2f9eef9d6116df46c13d26c967e2b3e289cfdad2064e02014d1e682ffd4de901/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/f98d370f78a87e4ad022f6ab57c00e7a42c442a4d21f3a2d60885aba9b1020a6/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/d34b762aacada02d4184e61e2bc729beb4800c71f1242016a2a3be71bdc38278/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/59e9046391d87ade927c7694be47fd39f0d09e628493424f57598133f419ce70/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/9046deb98837de786fe30dff1a9f4ab92ebcd938c5e01ce1677e5dda5ce6ba42/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/9e2c96d3398ee691b44f7a62bc96f808a72ba62cd20ff9d8c7fa3e3771540621/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/51b2a39df496336801dd462174fe1f64f27f4eead5f58349111fb80427632f48/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/97b5110b7e149c538f336cafbf78a19a4dfaf031ad8350638d3aeb8b41baec12/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/a63003c74eba526c7d5d8e9833c8932be68af1d81dab993359a249775512660d/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/916756f6669006f44e72bb5f456531f7a4cf6c95b21049723841101424fa6211/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/10f04a4408146a70a24fc75cfa8c3a25ad706423344e2a11cdd654d0f2f6cd8c/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/90e328450dc8ca449e1f9b7cbd032b4909fbdb5fc5164b5324659e86789d92a5/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/62965ca1c798bb7a3fff4bdd1f720894428ff63fc194bc97505bd8b9cf4337fe/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/f2287b7e04e26c47783fad0fb1b44c93dee366aa313eb4a21327fb4aca0848dd/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/4ff897d1510a2d513b7bc67d5e27d52bb4410c1b592891544ee1ca441f47aa96/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/b0e9118a5da8a946b05fa1941f9db994071f350586e1c25f159bda1c68067416/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/169a21c9dea45090dfb7480a4752c11642bfa6fe6c70148ca6576834ece2d321/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/a87aed3f40800b4950123625de883ae118366362ec6621e881bcd47f52799292/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/71f5efc1962c6f7e4c06c0398482ef689f93492d9039e85fc2e724e149b4273c/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/7ea6451d85865a74f54a08f7f8e08ff4f5e8310ab6e45bd93844ac300a894ee8/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/c86451fd9994bc75f13c6dc98c73052fb134bdf419ea47778e723477f4470022/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/7c1270b3ce590d01cef1f68579f64d5b7f57a2c5a1a08fa8d1e8d63ca4e66219/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/1280f488f2fb6be07beff35da96ac6a32b759f6c95583d2bcf8da23230ffa179/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/11402ef813a425ca903922d8f6284dc56dde713cd8a63ab30893da6e652c86be/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/099ecb721e7cc0bcf4619b4bcf7f9c522f9a330e47fad016632ada6d31e241ed/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/9a0a6c759fe4fafdd1f8d1cc062f1915d62b78a25e2f85fb4b628eddce43f4ce/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/b55351e1aad0b0b6e7b0b44ebabdb1285324c0ace9b2c2c6720fe36fb7bce4cc/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/80bec29f1a54637e78a44e659bd0dfd0f8c1e9174ff0dad68e0fae4ccab93086/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/96b0f711baac62dc6d3f82466b51dd3e2684712e16ade8e26269bf36384e5a2f/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/8ea66382306d77def51a4100d5e844950754446b1cda7a0daa65abcfd9269345/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/557805b86392ce0f7dcc543025cc0ab9e4188a18d1c208666eff8067df475973/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/20984ecead828da486fc505d1bdfb19da5b4a1f584c13e8be137224d07426c26/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/ec5d09efa74bf0b9785ccd72d4b461b8be09bd4b1beef6dd18bf30f1a1d352cc/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/93a1997c541a154db2d1ddbe9199f7c16890c1fe2e43502e4747041817a6c358/userdata/shm
fuse-overlayfs 461148160 67227484 393920676 15% /var/lib/containers/storage/overlay/6a55d90560ae89d1b5dd605c9240060a3548d9072df2d37423fb9807b063fb88/merged
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/6a7910bde66510ede36e3bf65a0b79bb1818b685d6da05a0103f8d1698754de8/userdata/shm
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/851e4918433b5aa8cca0aa50061f7a2de2480fafbcedcd3cfaf389a67cf0cab2/userdata/shm
fuse-overlayfs 461148160 67227484 393920676 15% /var/lib/containers/storage/overlay/ee108ee23ceac5da14efc0f41ff44df76e547a8b68b05a509b0850ab18f1c4c3/merged
shm 64000 0 64000 0% /var/lib/containers/storage/overlay-containers/c2ba14cc6b8ce5d5e87a41a7a217fac5ff4a99d2fc0af863ef95870f6a1ff49b/userdata/shm
shm 64000 84 63916 1% /var/lib/containers/storage/overlay-containers/6ba5f13b59fdcaf23bd2461122ceecf396f4f85f1fabdf49ff4a643966237a5b/userdata/shm
fuse-overlayfs 461148160 67227484 393920676 15% /var/lib/containers/storage/overlay/a8dd5b9e2fd01b85d8136f2a482c7928c8a6eb3a60c503e761156bf44aa8e794/merged

Steps to reproduce the issue:

Actually, I do not know why this happens. I just use podman together with GitLab Runner over the podman API.

  1. Use podman a lot, probably also with processes that use shm inside the container.

Describe the results you received:

Over time podman becomes unresponsive. "df" will give:

shm 64000 84 63916 1% /var/lib/containers/storage/overlay-containers/984863663a3075f20f60d614c442099cfea2c7816e71a38b6321363258bd2c4e/userdata/shm
fuse-overlayfs 461148160 67227484 393920676 15% /var/lib/containers/storage/overlay/1b14e5f72bfa3fe8d2e89e982ca8f3ccf5ce8c58b76402476c6460a4b45f2625/merged
shm 64000 84 63916 1% /var/lib/containers/storage/overlay-containers/c92f4a5e42b00fe3d6ed4e6052bc170786e0756ed8ce0c4f1f927b190e0d2fd8/userdata/shm
shm 64000 84 63916 1% /var/lib/containers/storage/overlay-containers/406ef9981e7882ae25881aa88b494a6a791e8cf18325012fd76218c79b34e619/userdata/shm
fuse-overlayfs 461148160 67227484 393920676 15% /var/lib/containers/storage/overlay/c0dd2a5f8a815a80dcb9489a904a92598ef151ce40498e1b41ed2d0e2370005f/merged
fuse-overlayfs 461148160 67227484 393920676 15% /var/lib/containers/storage/overlay/cda762cc0a1542cbbbcb169e604558e364a35af32a41b18dabfe795796b535ce/merged
shm 64000 84 63916 1% /var/lib/containers/storage/overlay-containers/35b56188d7371130c0a24722ae3b04f0b40353ff5e0448f9257720064249d08e/userdata/shm
fuse-overlayfs 461148160 67227484 393920676 15% /var/lib/containers/storage/overlay/6f38a02f2f82b5bfa1abd7043d701d8633b10e5f2ec5bc348b372b577db44659/merged
shm 64000 84 63916 1% /var/lib/containers/storage/overlay-containers/c5856cc3f7e85fb92ac715f0d1f574f94a44f310a5851dc7838b81000b5582bb/userdata/shm
fuse-overlayfs 461148160 67227484 393920676 15% /var/lib/containers/storage/overlay/2f5d0660bcd0af04cd869a8935c9e0f952e21ec74d5ba9809b630ef797bd6227/merged
fuse-overlayfs 461148160 67227484 393920676 15% /var/lib/containers/storage/overlay/ece51234dd70e563b467c7d86efd2ce250a0455fc62279da502fe1ef5c8870ae/merged

Describe the results you expected:

Podman should clean up nicely.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

[root@server005 ~]# podman version
Version:      2.2.1
API Version:  2.1.0
Go Version:   go1.15.5
Built:        Thu Jan 21 03:45:02 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

[root@server005 ~]# podman info --debug
host:
  arch: amd64
  buildahVersion: 1.18.0
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.25-1.el8.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.25, commit: a5a3a4f0087fa0281b1c43eacd1116d9153233a6'
  cpus: 12
  distribution:
    distribution: '"centos"'
    version: "8"
  eventLogger: journald
  hostname: server005.dc02.xrow.net
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-257.el8.x86_64
  linkmode: dynamic
  memFree: 98860855296
  memTotal: 101076951040
  ociRuntime:
    name: runc
    package: runc-1.0.0-145.rc91.git24a3cf8.el8.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.2-dev'
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  rootless: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 21474832384
  swapTotal: 21474832384
  uptime: 5m 28.26s
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 20
    paused: 0
    running: 0
    stopped: 20
  graphDriverName: overlay
  graphOptions:
    overlay.ignore_chown_errors: "true"
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.4.0-1.el8.x86_64
      Version: |-
        fusermount3 version: 3.2.1
        fuse-overlayfs: version 1.4
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 50
  runRoot: /var/run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 2.1.0
  Built: 1611197102
  BuiltTime: Thu Jan 21 03:45:02 2021
  GitCommit: ""
  GoVersion: go1.15.5
  OsArch: linux/amd64
  Version: 2.2.1

Package info (e.g. output of rpm -q podman or apt list podman):

[root@server005 ~]# rpm -q podman
podman-2.2.1-1.el8.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes. Nightly is even worse.

Additional environment details (AWS, VirtualBox, physical, etc.):

Baremetal

openshift-ci-robot added the kind/bug label Feb 2, 2021

mheon commented Feb 2, 2021

Can you run a container and, after it exits, run podman inspect --format '{{ .State.Status }}' on it and see if it is in Stopped or Exited state?

xrow commented Feb 2, 2021

[root@server005 ~]# podman inspect a3251ac01907 --format '{{ .State.Status }}'
exited

xrow commented Feb 2, 2021

If I take one ID from that shm list, it gives me two different results:

[root@server005 ~]# podman inspect a86d19efac7770ccf04ff7d7e34bc7262230a9b933f456cdea38b41667f9d41c --format '{{ .State.Status }}'
Error: error inspecting object: no such object: "a86d19efac7770ccf04ff7d7e34bc7262230a9b933f456cdea38b41667f9d41c"

or

[root@server005 ~]# podman inspect bd3d7f88a33f9b6ba15e9a02db27213e60325c17150f602be9292a88aae62a34 --format '{{ .State.Status }}'
configured

It showed configured state after it showed running... Is this an expected result at all?

mheon commented Feb 2, 2021

Hmmm. That could be a symptom of Podman losing track of container state - which usually happens when we detect a reboot in error (something wiped our temporary files directory). Sometimes we've seen systemd do this (systemd-tmpfiles will "clean up" our directory, and delete files we use)

xrow commented Feb 3, 2021

What's next? Do you want me to run more tests? Is this an already-known bug that I missed?

mheon commented Feb 3, 2021

First thing to do would be to check if any containers are still running - especially any of the containers that say configured. Checking ps to see if the container process is running in the background would be helpful. Trying to remove such a container should also fail with errors about how it is still mounted.

xrow commented Feb 3, 2021

Nothing is running. I should mention that in my GitLab Runner setup I am using DinD or PinP.

[root@server005 ~]# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[root@server005 ~]# mount | grep shm | wc
262 1572 49376

mheon commented Feb 3, 2021

If we've lost track of them, podman ps won't work - you'll have to use the actual ps binary. Doing a ps and grepping for the command of containers that are in the Configured state would identify these.
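A rough sketch of that check from the host (the process names in the pattern are only examples):

# look for leftover container runtime/monitor processes that podman may have lost track of
ps -eo pid,ppid,cmd | grep -E 'conmon|runc' | grep -v grep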

xrow commented Feb 3, 2021

Like this? There are 430 shm mounts and 0-1 podman processes:

[root@server005 ~]# mount | grep shm | wc
430 2580 81128
[root@server005 ~]# ps -ax | grep podman
3227385 ? Ssl 3:12 /usr/bin/podman --log-level=info system service
3277267 ? Ssl 0:00 /usr/bin/conmon --api-version 1 -c 71c4474a0f3abc2236f345d36870a6d5a79a052f85398f07bf4c9f3d642e95c2 -u 71c4474a0f3abc2236f345d36870a6d5a79a052f85398f07bf4c9f3d642e95c2 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/71c4474a0f3abc2236f345d36870a6d5a79a052f85398f07bf4c9f3d642e95c2/userdata -p /var/run/containers/storage/overlay-containers/71c4474a0f3abc2236f345d36870a6d5a79a052f85398f07bf4c9f3d642e95c2/userdata/pidfile -n runner-vdq3sydx-project-16870131-concurrent-9-bf017d419c3c1af8-build-2 --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket -s -l k8s-file:/var/lib/containers/storage/overlay-containers/71c4474a0f3abc2236f345d36870a6d5a79a052f85398f07bf4c9f3d642e95c2/userdata/ctr.log --log-level info --runtime-arg --log-format=json --runtime-arg --log --runtime-arg=/var/run/containers/storage/overlay-containers/71c4474a0f3abc2236f345d36870a6d5a79a052f85398f07bf4c9f3d642e95c2/userdata/oci-log -i --conmon-pidfile /var/run/containers/storage/overlay-containers/71c4474a0f3abc2236f345d36870a6d5a79a052f85398f07bf4c9f3d642e95c2/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg info --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.ignore_chown_errors=true --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/usr/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 71c4474a0f3abc2236f345d36870a6d5a79a052f85398f07bf4c9f3d642e95c2
3382567 pts/1 S+ 0:00 grep --color=auto podman

mheon commented Feb 3, 2021

And podman ps shows no running containers?

If so, that is 100% evidence that something is wiping Podman's state.

mheon commented Feb 3, 2021

To verify which tmp path is in use, can you do a podman info --log-level=debug and provide the full output? That will help determine what might be doing this.

xrow commented Feb 3, 2021

And podman ps shows no running containers? Yes

[root@server005 ~]# podman ps
CONTAINER ID  IMAGE   COMMAND  CREATED  STATUS  PORTS   NAMES
[root@server005 ~]# podman info --log-level=debug
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called info.PersistentPreRunE(podman info --log-level=debug) 
DEBU[0000] Reading configuration file "/usr/share/containers/containers.conf" 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.29.0 Annotations:[] CgroupNS:host Cgroups:enabled DefaultCapabilities:[CHOWN DAC_OVERRIDE FOWNER FSETID KILL NET_BIND_SERVICE SETFCAP SETGID SETPCAP SETUID SYS_CHROOT] DefaultSysctls:[net.ipv4.ping_group_range=0 0] DefaultUlimits:[nproc=4194304:4194304] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableKeyring:true EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:false Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:bridge NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{ImageBuildFormat:oci CgroupCheck:false CgroupManager:systemd ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/var/run/libpod/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand: InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NoPivotRoot:false NumLocks:2048 OCIRuntime:runc OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/var/lib/containers/storage/libpod StopTimeout:10 TmpDir:/var/run/libpod VolumePath:/var/lib/containers/storage/volumes} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/etc/cni/net.d/}} 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/lib/containers/storage 
DEBU[0000] Using run root /var/run/containers/storage   
DEBU[0000] Using static dir /var/lib/containers/storage/libpod 
DEBU[0000] Using tmp dir /var/run/libpod                
DEBU[0000] Using volume path /var/lib/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: ignore_chown_errors=true            
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument 
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument 
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist 
WARN[0000] Default CNI network name podman is unchangeable 
INFO[0000] Setting parallel job count to 37             
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf" 
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/shortnames.conf" 
host:
  arch: amd64
  buildahVersion: 1.18.0
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.25-1.el8.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.25, commit: a5a3a4f0087fa0281b1c43eacd1116d9153233a6'
  cpus: 12
  distribution:
    distribution: '"centos"'
    version: "8"
  eventLogger: journald
  hostname: server005.dc02.xrow.net
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-257.el8.x86_64
  linkmode: dynamic
  memFree: 4809969664
  memTotal: 101076951040
  ociRuntime:
    name: runc
    package: runc-1.0.0-145.rc91.git24a3cf8.el8.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.2-dev'
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  rootless: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 21451501568
  swapTotal: 21474832384
  uptime: 42h 31m 30.26s (Approximately 1.75 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 11
    paused: 0
    running: 0
    stopped: 11
  graphDriverName: overlay
  graphOptions:
    overlay.ignore_chown_errors: "true"
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.4.0-1.el8.x86_64
      Version: |-
        fusermount3 version: 3.2.1
        fuse-overlayfs: version 1.4
        FUSE library version 3.2.1
        using FUSE kernel interface version 7.26
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 265
  runRoot: /var/run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 2.1.0
  Built: 1611197102
  BuiltTime: Thu Jan 21 03:45:02 2021
  GitCommit: ""
  GoVersion: go1.15.5
  OsArch: linux/amd64
  Version: 2.2.1

DEBU[0000] Called info.PersistentPostRunE(podman info --log-level=debug) 

mheon commented Feb 3, 2021

Alright. What I think is happening is that something (not Podman) is deleting /var/run/libpod/alive while Podman is still running. We use this file to detect system reboots, and on detecting a reboot will perform a state wipe. Podman itself will never remove that file, so it has to be something else. We have seen cases where this something is systemd-tmpfiles.

I recommend adding https://github.com/containers/podman/blob/master/contrib/tmpfile/podman.conf to /usr/lib/tmpfiles.d. If systemd-tmpfiles is deleting our temporary files, this will prevent it from doing so.
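A minimal sketch of installing it, assuming the raw URL simply mirrors the repository path above:

# download the upstream tmpfiles.d snippet and apply it
curl -L -o /usr/lib/tmpfiles.d/podman.conf \
  https://raw.githubusercontent.com/containers/podman/master/contrib/tmpfile/podman.conf
# create any directories it declares; the exclusion entries take effect on the next cleanup run
systemd-tmpfiles --create podman.conf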

xrow commented Feb 3, 2021

The file was already there with different contents; see below. I added the file from git.

[root@server005 ~]# cat /usr/lib/tmpfiles.d/podman.conf 
# /tmp/podman-run-* directory can contain content for Podman containers that have run
# for many days. This following line prevents systemd from removing this content.
x /tmp/podman-run-.*
d /run/podman 0700 root root

rhatdan commented Feb 3, 2021

Well, you have to find out what is blowing these files away. This is not a Podman bug but most likely a systemd process or cron job that is cleaning up content in the /tmp directory. Not much Podman can do about this.

@edsantiago

Could this be related to the /var/run vs /run mess?

mheon commented Feb 3, 2021

Umm. Podman itself shouldn't care because they're linked to each other. My initial impression is no.

xrow commented Feb 3, 2021

Hmmm, ok, my feeling is I need to provide a better reproducible use case, even for myself. So far I am just seeing those things without knowing how they happen.

One last question: I mentioned I was using podman in podman. Could this be the cause of the disappearing items?

rhatdan commented Feb 3, 2021

Any chance you are using "shared" on your volume mounts?

mheon commented Feb 3, 2021

If some of your outer containers mount in /var/lib/containers/ (or subdirectories) but not /var/run/podman that could be a cause

mheon commented Feb 3, 2021

(By outer containers, meaning the containers that Podman is running inside)

xrow commented Feb 3, 2021

Here are two test cases for an S2I podman-in-podman build that adds one extra persistent shm layer to my machine. This is partly a response to @mheon's suggestion to add /var/run/podman. Since my builds work over the podman API, I suspect podman uses something like "Case 1" to start a new process.

I hope I did it correctly and this is just an unexpected result of PinP. Can you advise?

#!/bin/bash

echo "SHM LAYERS BEFORE"

mount | grep shm | wc

podman rm --force test

rm -Rf test
mkdir test

cd test

cat << EOF > Dockerfile
FROM ubi8/php-74

CMD /usr/libexec/s2i/run
EOF

cd ..

echo "Case 1 with volumes"

podman run --device /dev/fuse \
--volume /tmp:/var/lib/containers  \
--volume /var/run/podman:/var/run/podman \
--privileged \
-it --name test -v ./test:/opt/app-root/src registry.gitlab.com/xrow-public/ci-tools/tools:3.0 \
 bash -c "podman --version; cd /opt/app-root/src; podman build --events-backend=file --format docker . || true; sleep 10; ps -ax "

echo "Case 2 without"

podman rm --force test

podman run --device /dev/fuse \
--privileged \
-it --name test -v ./test:/opt/app-root/src registry.gitlab.com/xrow-public/ci-tools/tools:3.0 \
 bash -c "podman --version; cd /opt/app-root/src; podman build --events-backend=file --format docker . ||
 true; sleep 10; ps -ax "

echo "SHM LAYERS AFTER"

mount | grep shm | wc

xrow commented Feb 3, 2021

By the way, the result from my test script looks like this:

It also happens if the mount is --volume /var/lib/containers:/var/lib/containers instead of --volume /tmp:/var/lib/containers.

[root@server005 ~]# sh test.sh 
SHM LAYERS BEFORE
     29     174    5339
0ec3051c7809d452959d9e61e4d73f3fccd0bd3e9bbfd8946263f70627d1b6dc
Case 1 with volumes
podman version 2.2.1
STEP 1: FROM ubi8/php-74
STEP 2: CMD /usr/libexec/s2i/run
--> Using cache 5539b3fcf8b9c27652b474236cafc57edd64f3b86e9b6ae36d366bff771087d6
--> 5539b3fcf8b
5539b3fcf8b9c27652b474236cafc57edd64f3b86e9b6ae36d366bff771087d6
    PID TTY      STAT   TIME COMMAND
      1 pts/0    Ss+    0:00 bash -c podman --version; cd /opt/app-root/src; podman build --events-backen
     72 pts/0    R+     0:00 ps -ax
Case 2 without
5828996c8097c3c4a1bb5f63140ce504e8ab202d6bc93844d14f3757234c6971
podman version 2.2.1
STEP 1: FROM ubi8/php-74
Completed short name "ubi8/php-74" with unqualified-search registries (origin: /etc/containers/registries.conf)
Getting image source signatures
Copying blob 620696f92fec done  
Copying blob a108724c930f done  
Copying blob 480c3e9a5295 done  
Copying blob d9e72d058dc5 done  
Copying blob cca21acb641a done  
Copying config dcdace2982 done  
Writing manifest to image destination
Storing signatures
Error: error creating build container: 4 errors occurred while pulling:
 * Error initializing source docker://registry.fedoraproject.org/ubi8/php-74:latest: Error reading manifest latest in registry.fedoraproject.org/ubi8/php-74: manifest unknown: manifest unknown
 * Error committing the finished image: error adding layer with blob "sha256:cca21acb641a96561e0cf9a0c1c7b7ffbaaefc92185bd8a9440f6049c838e33b": Error processing tar file(exit status 1): open /root/buildinfo/.wh..wh..opq: invalid argument
 * Error initializing source docker://registry.centos.org/ubi8/php-74:latest: Error reading manifest latest in registry.centos.org/ubi8/php-74: manifest unknown: manifest unknown
 * Error initializing source docker://ubi8/php-74:latest: Error reading manifest latest in docker.io/ubi8/php-74: errors:
denied: requested access to the resource is denied
unauthorized: authentication required

    PID TTY      STAT   TIME COMMAND
      1 pts/0    Ss+    0:00 bash -c podman --version; cd /opt/app-root/src; podman build --events-backen
     90 pts/0    R+     0:00 ps -ax
SHM LAYERS AFTER
     30     180    5528

rhatdan commented Feb 4, 2021

First off what is this for?

--volume /var/run/podman:/var/run/podman \

No reason to do this.

mheon commented Feb 4, 2021

I would only do that if you are actually mounting /var/lib/containers/ into the container to share host storage.

rhatdan commented Feb 4, 2021

First I would not use /tmp for my images. I would use /var/lib/mycontainers or something like that on permanent storage.

This would at least eliminate something coming up and cleaning up /tmp. Also /tmp is probably a tmpfs and storing lots of images there is probably not a great idea.
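For example, something along these lines (the directory is just the one suggested above, and quay.io/podman/stable is used as a stand-in image):

# keep the nested podman's storage on persistent disk instead of /tmp
mkdir -p /var/lib/mycontainers
podman run --rm --device /dev/fuse --privileged \
  --volume /var/lib/mycontainers:/var/lib/containers \
  quay.io/podman/stable podman info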

xrow commented Feb 4, 2021

I would only do that if you are actually mounting /var/lib/containers/ into the container to share host storage.

@mheon yes, I wanted to share. Plus, otherwise I get other errors.

Let me try to do an updated script.

xrow commented Feb 4, 2021

@rhatdan --volume /var/run/podman:/var/run/podman was a response to the comment:
If some of your outer containers mount in /var/lib/containers/ (or subdirectories) but not /var/run/podman that could be a cause

rhatdan commented Feb 4, 2021

I prefer to share /var/lib/containers as a read-only share rather than a read/write share.

I cover some of these topics here.

https://developers.redhat.com/blog/2019/08/14/best-practices-for-running-buildah-in-a-container/
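Purely as an illustration of the read-only form (image and inner command are placeholders; the blog post above covers the full setup, e.g. using the share as an additional image store):

podman run --rm --privileged \
  -v /var/lib/containers:/var/lib/containers:ro \
  quay.io/podman/stable ls /var/lib/containers/storage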

rhatdan commented Feb 4, 2021

If you are sharing /var/lib/containers with the host, then I believe that any containers run on the host will leak into the container, and this could prevent the unmount of the shm on the host until the container exits.
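One way to watch for this from the host is to count the shm mounts around such a run (a sketch; the image and inner command are only examples):

mount | grep -c 'userdata/shm'
podman run --rm --privileged -v /var/lib/containers:/var/lib/containers \
  quay.io/podman/stable podman run alpine echo hello
mount | grep -c 'userdata/shm'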

rhatdan commented Feb 4, 2021

Ok I got the leak.

# podman run --privileged -v /var/lib/containers:/var/lib/containers quay.io/podman/stable podman run alpine echo hello
hello

Causes the leak.

rhatdan commented Feb 4, 2021

I am not sure I can stop the leak.

xrow commented Feb 4, 2021

@rhatdan OK, meaning you can reproduce this now? So you don't need any more help from my side?

xrow commented Feb 4, 2021

Is there a workaround? Because actually I am using podman over the new API.

rhatdan commented Feb 4, 2021

Why do you want to run a container with the /var/lib/containers volume mounted into it? I am trying to figure out how to prevent this. Will be talking to a kernel engineer later.

xrow commented Feb 4, 2021

Actually I do not want this, but the same bug happens when you trigger podman over the API and have podman running inside. See https://gitlab.com/gitlab-org/gitlab-runner/-/issues/27119

rhatdan commented Feb 5, 2021

Turns out the issue is that the podman inside of the container sees its environment as a fresh boot and launches a storage reset. This causes the database of the external container to be reset and causes it to not clean up properly.
Spent the day looking at leaked mounts, mount propagation... and the problem ends up being a lot simpler.

xrow commented Feb 5, 2021

Sounds cool... I'm ready to test whatever you ship.

rhatdan commented Feb 5, 2021

You could just build the new command and test it.

Here is the command I used to test it, run from within the podman GitHub checkout directory.

./bin/podman run -ti --rm --privileged -v ./bin/podman:/usr/bin/podman -v /var/lib/containers:/var/lib/containers quay.io/podman/stable podman run alpine echo hello

xrow commented Feb 5, 2021

@rhatdan Is there a nightly RPM I can use for testing, or do I need to set up a build environment? I can't find a binary download anywhere.

xrow commented Feb 5, 2021

@rhatdan I do have additional questions, one for a supported RH customer of mine here in Germany. In what container-tools release will the planned backport be available?

Next, towards testing: do I need to update just the binary in the container, or both, including the one on the host?

rhatdan commented Feb 5, 2021

podman 3.0 will be in RHEL8.4 with this fix.

You only need the new podman inside of the container.

rhatdan commented Feb 5, 2021

Currently we don't make new versions available for binaries, although you should be able to get a new version of Fedora podman rc3 if this fix gets in.

xrow commented Feb 5, 2021

OK, I will test when I have access...

rhatdan added a commit to rhatdan/podman that referenced this issue Feb 16, 2021
Currently if the host shares container storage with a container
running podman, the podman inside of the container resets the
storage on the host. This can cause issues on the host, as well as
cause the podman command running the container to fail to unmount /dev/shm.

podman run -ti --rm --privileged -v /var/lib/containers:/var/lib/containers quay.io/podman/stable podman run alpine echo hello
	* unlinkat /var/lib/containers/storage/overlay-containers/a7f3c9deb0656f8de1d107e7ddff2d3c3c279c11c1635f233a0bffb16051fb2c/userdata/shm: device or resource busy
	* unlinkat /var/lib/containers/storage/overlay-containers/a7f3c9deb0656f8de1d107e7ddff2d3c3c279c11c1635f233a0bffb16051fb2c/userdata/shm: device or resource busy

Since podman is volume mounting in the graphroot, it will add a flag to
/run/.containerenv to tell the podman inside of the container whether to reset storage or not.

Since the inner podman is running inside of the container, there is no reason to assume this is a fresh reboot, so if the "container" environment variable is set then skip the reset of storage.

Also added tests to make sure /run/.containerenv is working correctly.

Fixes: containers#9191

Signed-off-by: Daniel J Walsh <[email protected]>
mheon pushed a commit to mheon/libpod that referenced this issue Feb 18, 2021
Currently if the host shares container storage with a container
running podman, the podman inside of the container resets the
storage on the host. This can cause issues on the host, as well as
cause the podman command running the container to fail to unmount /dev/shm.

podman run -ti --rm --privileged -v /var/lib/containers:/var/lib/containers quay.io/podman/stable podman run alpine echo hello
	* unlinkat /var/lib/containers/storage/overlay-containers/a7f3c9deb0656f8de1d107e7ddff2d3c3c279c11c1635f233a0bffb16051fb2c/userdata/shm: device or resource busy
	* unlinkat /var/lib/containers/storage/overlay-containers/a7f3c9deb0656f8de1d107e7ddff2d3c3c279c11c1635f233a0bffb16051fb2c/userdata/shm: device or resource busy

Since podman is volume mounting in the graphroot, it will add a flag to
/run/.containerenv to tell the podman inside of the container whether to reset storage or not.

Since the inner podman is running inside of the container, there is no reason to assume this is a fresh reboot, so if the "container" environment variable is set then skip the reset of storage.

Also added tests to make sure /run/.containerenv is working correctly.

Fixes: containers#9191

Signed-off-by: Daniel J Walsh <[email protected]>

<MH: Fixed cherry-pick conflicts>

Signed-off-by: Matthew Heon <[email protected]>
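
To see the signal this fix keys off, one can look inside any podman container for the container environment variable and the /run/.containerenv file (a quick check, not code from the patch itself):

podman run --rm alpine sh -c 'ls -l /run/.containerenv; env | grep ^container='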
github-actions bot added the "locked - please file new issue/PR" label Sep 22, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023