If you run podman -d containers in a loop, some containers try to use existing names #11735
Comments
I cannot reproduce. I increased the number but it never failed for me.
@Luap99 @ericcurtin Since we have a finite number of …

@ericcurtin For your use case, using a custom $UUID is much better since you are spawning so many containers that names don't make sense at all here (which you are already doing).

--- a/libpod/runtime.go
+++ b/libpod/runtime.go
@@ -913,7 +913,7 @@ func (r *Runtime) Info() (*define.Info, error) {
 // generateName generates a unique name for a container or pod.
 func (r *Runtime) generateName() (string, error) {
 	for {
-		name := namesgenerator.GetRandomName(0)
+		name := namesgenerator.GetRandomName(1)
 		// Make sure container with this name does not exist
 		if _, err := r.state.LookupContainer(name); err == nil {
 			continue
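For context, a tiny, hedged illustration of what the proposed one-character change does. As far as I can tell from the namesgenerator package the diff refers to (docker's github.com/docker/docker/pkg/namesgenerator), GetRandomName(0) returns a bare adjective_surname name, while a positive argument appends a random digit. That only enlarges the name space a little; it does not remove the race between concurrent podman processes.

```go
// Minimal sketch, assuming the docker namesgenerator package is available at
// this import path; the printed names are random, the comments only show the
// general shape of the output.
package main

import (
	"fmt"

	"github.com/docker/docker/pkg/namesgenerator"
)

func main() {
	fmt.Println(namesgenerator.GetRandomName(0)) // e.g. "frosty_thompson"
	fmt.Println(namesgenerator.GetRandomName(1)) // e.g. "frosty_thompson3" (random digit appended)
}
```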
I can see how this might be a problem if the …
I may have run that loop with an ampersand at the end of the docker run statement.
Yes
Definitely could, but the current runtime lock is in memory only, not
shared - we’d need to replace it with a file lock. I’m a bit concerned
about what that would do to create performance since it would be almost
entirely serialized, plus the added cost of the new lock.
On Mon, Sep 27, 2021 at 04:50, Paul Holzinger wrote:

> Yes, for i in $(seq 1 100); do podman run -d alpine echo & done reproduces for me.
> @mheon Should we lock the runtime before we get the random name and only unlock after we wrote the container to the db?
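To make the locking idea concrete, here is a minimal sketch (not Podman's implementation) of serializing the generate-name-then-record window across processes with an advisory flock(2). The lock path and the withNameLock helper are made up for illustration; the point is that an in-memory mutex cannot protect two separate podman invocations, while a shared file lock can, at the cost of serializing that section.

```go
// A minimal sketch, not Podman's actual code: hold an exclusive file lock
// while picking a free name and committing it, so a second process cannot
// pick the same name in between.
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

const lockPath = "/tmp/podman-name.lock" // hypothetical location

// withNameLock runs fn while holding an exclusive flock on lockPath.
func withNameLock(fn func() error) error {
	f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()

	if err := unix.Flock(int(f.Fd()), unix.LOCK_EX); err != nil {
		return err
	}
	defer unix.Flock(int(f.Fd()), unix.LOCK_UN)

	return fn()
}

func main() {
	err := withNameLock(func() error {
		// Inside the critical section: pick a free name and write the
		// container record before any other process can take the name.
		fmt.Println("generate name + write container record here")
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```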
I am not too worried about the lock. Container creation may involve pulling an image and certainly mounting the image, where we're touching various lock files many times. @Luap99 time to tackle it?
Please do a perf comparison if you do. Serializing small parts is not the
same as serializing the entire process.
Maybe we should add name retry logic later in the process, if c/storage
says the name is taken when we try to create the container there.
That is a good idea. In any case, these checks will be performed anyway, so we'd spare one lookup etc.
A friendly reminder that this issue had no activity for 30 days.
I don't believe anyone has looked into this yet. Anyone interested in fixing this problem?
Yes, I have some cycles. Thank you for the reminder!
Address the TOCTOU when generating random names by having at most 10 attempts to assign a random name when creating a pod or container.

[NO TESTS NEEDED] since I do not know a way to force a conflict with randomly generated names in a reasonable time frame.

Fixes: containers#11735

Signed-off-by: Valentin Rothberg <[email protected]>
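A rough sketch of the bounded-retry pattern the commit message describes, not the literal Podman patch: regenerate the random name and retry creation up to 10 times when the storage layer reports the name as already taken. createContainer and errNameInUse are stand-ins for the real Podman/c/storage calls and error values.

```go
// Sketch of bounded retries around name generation; only name conflicts are
// retried, any other error aborts immediately.
package main

import (
	"errors"
	"fmt"

	"github.com/docker/docker/pkg/namesgenerator"
)

var errNameInUse = errors.New("name is already in use") // stand-in error

// createContainer pretends to create a container; the real code would call
// into c/storage, which fails if another process grabbed the name first.
func createContainer(name string) error {
	return nil
}

func createWithRetry() (string, error) {
	const maxAttempts = 10
	var lastErr error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		name := namesgenerator.GetRandomName(attempt)
		lastErr = createContainer(name)
		if lastErr == nil {
			return name, nil
		}
		if !errors.Is(lastErr, errNameInUse) {
			return "", lastErr // only retry on name conflicts
		}
	}
	return "", fmt.Errorf("giving up after %d attempts: %w", maxAttempts, lastErr)
}

func main() {
	name, err := createWithRetry()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("created container", name)
}
```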
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
Error: error creating container storage: the container name "frosty_thompson" is already in use by "22a61c8f4ae4192e2070d519257c00f8b29a2eb56c06ff4493d0e77a26f0ef88". You have to remove that container to be able to reuse that name.: that name is already in use
Steps to reproduce the issue:
Run something like

for i in $(seq 1 100); do podman run -d fedora bash; done

The more iterations, the more likely it is to happen.

Describe the results you received:
Container creation will fail because of an attempt to use a duplicate name. My workaround is to use --name $(uuidgen) with podman, but that's just a workaround.

Describe the results you expected:
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:

Output of podman info --debug:

Package info (e.g. output of rpm -q podman or apt list podman):

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)
No
Additional environment details (AWS, VirtualBox, physical, etc.):
Fedora 34 physical