
[BUG] --wait shouldn't have any effect when using a single service #10200

Closed
leonardoheld opened this issue Jan 24, 2023 · 3 comments · Fixed by #10209

@leonardoheld

Description

Pretty much the title: if you use "--wait" with a single service, docker compose up hangs indefinitely. This makes some sense: with only one service, --wait never receives a Healthy or Running status from anyone else (I think...).

Steps To Reproduce

Run docker compose up --wait and docker compose up on the provided docker-compose.yml.
Adding "restart: always", or adding another service that emits a Healthy or Running status, clears the problem, hence my conjecture.

Compose Version

Docker Compose version v2.14.1

Docker Environment

$ docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.9.1-docker)
  compose: Docker Compose (Docker Inc., v2.14.1)
  scan: Docker Scan (Docker Inc., v0.23.0)

Server:
 Containers: 20
  Running: 1
  Paused: 0
  Stopped: 19
 Images: 107
 Server Version: 20.10.22
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: false
  userxattr: true
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 5b842e528e99d4d4c1686467debf2bd4b88ecd86
 runc version: v1.1.4-0-g5fd4c4d
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
  rootless
  cgroupns
 Kernel Version: 5.15.0-58-generic
 Operating System: Ubuntu 22.04.1 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 20
 Total Memory: 15.37GiB
 Name: toradex
 ID: R7MP:XSGO:BLYX:7HHW:OFHR:U3S4:B4AN:B4AL:54BV:KRXB:MKXX:3MSF
 Docker Root Dir: /home/ljh/.local/share/docker
 Debug Mode: false
 Username: leonardoheldattoradex
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
WARNING: No cpu shares support
WARNING: No cpuset support
WARNING: No io.weight support
WARNING: No io.weight (per device) support
WARNING: No io.max (rbps) support
WARNING: No io.max (wbps) support
WARNING: No io.max (riops) support
WARNING: No io.max (wiops) support

Anything else?

No response

@laurazard
Contributor

laurazard commented Jan 25, 2023

This isn't quite accurate, although it's a bit complicated.

If using a Compose file such as:

services:
  a:
    image: alpine
    command: top

Running compose up --wait does work here: the running_or_healthy condition is applied, and the container stays alive (since top doesn't exit) long enough for its status to be running when we make the inspect call.
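As a side note (my own sketch, not something proposed in this thread): declaring an explicit healthcheck gives --wait a concrete "healthy" condition to observe, instead of relying on catching the container while it still reports "running". A hypothetical variant of the file above:

services:
  a:
    image: alpine
    command: top
    healthcheck:
      test: ["CMD", "true"]
      interval: 1s
      retries: 3

This only helps long-running services; it doesn't change the short-lived case below.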

However, if using a Compose file such as:

services:
  a:
    image: alpine
    command: echo hello world

Running compose up --wait indeed hangs forever: the container exits almost immediately, and by the time Compose makes the container inspect call, it's already dead.

This is a racy issue, and we'll have to look into whether we can rearchitect this to fix it.
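One general way to sidestep such a race (sketched below in Python purely as an illustration; this is not how Compose or the eventual fix is actually implemented, and the state names are made up) is to treat a clean exit as a terminal success state while polling, so a container that dies before the first inspect call is never waited on forever:

```python
import time

# Hypothetical state strings, standing in for what a "docker inspect"
# call might report; none of this is Compose's real API.
RUNNING, HEALTHY, EXITED_OK, EXITED_ERR = "running", "healthy", "exited(0)", "exited(1)"

def wait_for_services(inspect, services, timeout=5.0, poll=0.05):
    """Poll each service until it is ready or has terminated.

    The key point: a clean exit counts as terminal success, so a
    short-lived container that exits before the first poll does not
    leave us waiting forever on a "running" status that never comes.
    """
    deadline = time.monotonic() + timeout
    pending = set(services)
    while pending:
        for svc in list(pending):
            state = inspect(svc)
            if state in (RUNNING, HEALTHY, EXITED_OK):
                pending.discard(svc)  # ready, or finished cleanly
            elif state == EXITED_ERR:
                raise RuntimeError(f"service {svc!r} exited with an error")
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"still waiting on: {sorted(pending)}")
            time.sleep(poll)
    return True

# A fake inspect: the service has already exited cleanly,
# like the "echo hello world" example above.
print(wait_for_services(lambda svc: EXITED_OK, ["a"]))  # prints True, no hang
```

The same polling loop hangs on the short-lived container if the success set is only {RUNNING, HEALTHY}, which mirrors the behavior described above.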

In the meantime, however, compose up --wait does work and has value, even for single-service Compose files.

@ndeloof
Contributor

ndeloof commented Jan 26, 2023

Thanks to @laurazard's diagnostic, I've proposed #10209

@leonardoheld
Author

Hi y'all!
Laura's explanation makes a lot more sense than mine; thanks for understanding the problem despite my bad description. I had to deal with a hellish Yocto recipe situation to test it, but Nicolas's fixes seem to work fine for the cases I tried :-)

Shall we keep this issue open until #10209 is merged?

Thank you again!
