
Fails to handle the service_healthy condition of a depends_on element #866

Open
candleindark opened this issue Mar 6, 2024 · 7 comments
Labels: bug

@candleindark

Describe the bug
podman-compose fails to handle the service_healthy condition of a depends_on element.

To Reproduce
Steps to reproduce the behavior:

  1. Create the following docker-compose.yml file:
services:
  base:
    image: docker.io/debian
    command: [ "tail", "-f", "/dev/null" ]
    healthcheck:
      test: [ "CMD", "false" ]
      interval: 30s
      timeout: 30s
      retries: 3
      start_period: 1m

  dependent:
    image: docker.io/debian
    depends_on:
      base:
        condition: service_healthy
    command: [ "tail", "-f", "/dev/null" ]
  2. Run podman-compose -f docker-compose.yml up -d in the containing directory.

Expected behavior
The container corresponding to the dependent service never starts, since base can never become healthy (its healthcheck always fails).

Actual behavior
The container corresponding to the dependent service always starts.

Output

podman-compose version
podman-compose version: 1.0.6
['podman', '--version', '']
using podman version: 4.3.1
podman-compose version 1.0.6
podman --version 
podman version 4.3.1
exit code: 0

Environment:

  • OS: Debian GNU/Linux 12 (bookworm)
  • podman version: 4.3.1
  • podman-compose version: 1.0.6
@pfeileon

Could be a duplicate, as the correct implementation of healthchecks is actually a 5-year-old issue: #23

@flixman

flixman commented Oct 7, 2024

I am now hitting the same bug: I have a container that is supposed to start once 4 other containers are up and healthy, but instead it starts right away :-/

@chaserhkj

This feature is currently completely unimplemented in podman-compose, according to the code here. The script just takes the keys from the deps dict and drops all subtree items, effectively treating every dependency as service_started.

As for #23, that issue is about the implementation of the healthcheck directive, which I think is mostly there, albeit with the caveat that it won't work for containers without access to /bin/sh.

I'll try to get a PR up for this if I have time to work on it in the next week.
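
For illustration, a minimal sketch of what preserving the condition could look like; the helper name and return shape here are hypothetical, not podman-compose's actual API:

# Hypothetical sketch: normalize both depends_on forms while keeping the
# declared condition, instead of collapsing everything to service_started.
def normalize_depends_on(depends_on):
    deps = {}
    if isinstance(depends_on, list):
        # Short form: ["base"] implies service_started
        for name in depends_on:
            deps[name] = "service_started"
    else:
        # Long form: {"base": {"condition": "service_healthy"}}
        for name, spec in depends_on.items():
            deps[name] = (spec or {}).get("condition", "service_started")
    return deps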

@flixman

flixman commented Nov 24, 2024

@chaserhkj I have just run into this issue as well. Are you already working on a PR? I can give it a look, but I am not familiar with the podman-compose code base (meaning it will take me a while), and I do not want to step on your work.

Update: After giving it some thought, I think I would update the code in the reference you provided to return a list of strings or dictionaries, depending on the content of the file, and then update this block so that the task first verifies that any healthcheck dependencies are fulfilled before starting the service.

What do you think about this approach?

@chaserhkj

I haven't had a chance to look into this further yet, so please go ahead if you'd like to work on it.

As for your implementation plans, I would generally prefer to handle it the same way we handle normal dependencies here. But since podman does not support specifying healthcheck dependencies, I think we need to check them on our end in any case.

We can probably call podman wait --condition healthy in the block you mentioned to achieve this (implemented in containers/podman#18974).
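
A minimal sketch of that idea, assuming we shell out to podman the way podman-compose does elsewhere; the function name here is made up:

import subprocess

# Hypothetical sketch: block until a dependency container reports healthy,
# using `podman wait --condition healthy` (containers/podman#18974).
# check=True raises CalledProcessError if podman exits non-zero.
def wait_until_healthy(container_name: str) -> None:
    subprocess.run(
        ["podman", "wait", "--condition", "healthy", container_name],
        check=True,
    )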

@flixman

flixman commented Nov 27, 2024

@chaserhkj I think this is not as simple as it seems. From what I have seen, the dependencies are actually managed in this block. Since the dependencies are currently unconditional, this block works ok because it just starts all of them in one go.

What I have implemented is that the _deps set contains ServiceDependency objects (hashable) instead of strings, and I updated the corresponding locations where _deps is used to refer to the dependency's name, to avoid breaking current functionality.

Update: I think I am looking at this the wrong way. Given that I have a set of the dependencies that the service needs to be healthy, and all of them need to be fulfilled before the main service can start, I can just concatenate the names and run podman wait once.

This is the work in progress: #1078
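
For readers following along, a rough sketch of that shape; the class and helper below are illustrative, not the actual code from the PR:

import subprocess
from dataclasses import dataclass

# Illustrative sketch: a hashable dependency record, plus one batched wait
# for all service_healthy dependencies (podman wait accepts multiple
# container names, so a single call suffices).
@dataclass(frozen=True)  # frozen=True makes instances hashable, so they fit in a set
class ServiceDependency:
    name: str
    condition: str  # e.g. "service_started" or "service_healthy"

def wait_for_healthy(deps: set) -> None:
    names = [d.name for d in deps if d.condition == "service_healthy"]
    if names:
        subprocess.run(
            ["podman", "wait", "--condition", "healthy", *names],
            check=True,
        )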

@devurandom

From what I have seen, the dependencies are actually managed in this block. Since the dependencies are currently unconditional, this block works ok because it just starts all of them in one go.

My experience with dependencies is different, far from what I would call "works ok":

Is this specific to my setup?
