Podman socket: Build API produces different results than Docker's Build API #16360
Comments
Please try a more recent Podman, I believe this has been fixed |
Alright, I am not sure if this is the latest Podman, but I have installed Podman Desktop, initialized a VM, and it still fails; the Podman engine reports v4.2.1 - is this correct? If so, the latest also fails. I have made a repo to reproduce it here - https://github.com/jakegt1/podman-16360-repro |
@flouthoc PTAL |
Seems that it's related to this - #12063. The problem (not that I truly understand how the API works) is that the response for the final two objects (aux and stream) comes out on the same line. In the Docker API they are separate. |
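To sketch why that breaks clients (a minimal illustration, not docker-py's actual code; the chunk contents are made up): a consumer that decodes each received chunk as one JSON object chokes when aux and stream arrive in the same chunk, while a consumer that splits on newlines first handles both shapes.

```python
import json

# Simulated chunks as they might arrive from the build API over the socket.
docker_style = [
    b'{"aux":{"ID":"sha256:abc"}}\r\n',
    b'{"stream":"Successfully built abc\\n"}\r\n',
]
podman_style = [
    b'{"aux":{"ID":"sha256:abc"}}\n{"stream":"Successfully built abc\\n"}\n',
]

def decode_per_chunk(chunks):
    # Naive: assume exactly one JSON object per chunk.
    # Raises json.JSONDecodeError ("Extra data") on the combined chunk.
    return [json.loads(c) for c in chunks]

def decode_per_line(chunks):
    # Robust: split each chunk on newlines before decoding.
    objs = []
    for chunk in chunks:
        for line in chunk.splitlines():
            if line.strip():
                objs.append(json.loads(line))
    return objs
```

`decode_per_chunk(docker_style)` succeeds because each flush carried one object, but `decode_per_chunk(podman_style)` raises `json.JSONDecodeError`; `decode_per_line` handles both.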
I have made a tentative MR for this, although I am not sure how to run the tests or open a merge request for it. It seems the only change needed is to flush aux before stream, rather than flushing them together. |
I have at least fixed my use case. |
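The fix described above can be illustrated with a toy Python transport (hypothetical code, not the actual Go handler in images_build.go): flushing aux before writing stream means each flush delivers exactly one JSON line to the client.

```python
import io
import json

class FlushCountingWriter:
    """Toy transport that records what each flush delivers as one chunk."""
    def __init__(self):
        self._buffer = io.BytesIO()
        self.chunks = []

    def write(self, data):
        self._buffer.write(data)

    def flush(self):
        data = self._buffer.getvalue()
        if data:
            self.chunks.append(data)
        self._buffer = io.BytesIO()

def send_batched(w, aux, stream):
    # Before the fix: aux and stream go out in one flush -> one chunk.
    w.write(json.dumps({"aux": aux}).encode() + b"\n")
    w.write(json.dumps({"stream": stream}).encode() + b"\n")
    w.flush()

def send_separately(w, aux, stream):
    # After the fix: flush aux before writing stream -> two chunks.
    w.write(json.dumps({"aux": aux}).encode() + b"\n")
    w.flush()
    w.write(json.dumps({"stream": stream}).encode() + b"\n")
    w.flush()
```

In this model, `send_batched` delivers one chunk containing two objects (the Podman behaviour reported here), while `send_separately` delivers two single-object chunks (the Docker behaviour).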
@jakegt1 Would you like to open a PR or is it a hack for your use-case ? |
It seems logical to me that this fixes it, but I'm not that confident, as I've only just started touching the code. I can see why the original author thought this should work (the objects are still written on different lines, so surely they should come out separately). It's plausible that this is actually some kind of "bug" in docker-py rather than in the Podman API - but the "bug" doesn't arise with Docker because there are no cases where two objects are sent at once. So I'd consider it a bug in Podman's API because it doesn't behave the same as Docker's API, even though it arguably shouldn't be one. I can open a PR; should I write some tests first, though? |
@jakegt1 Sure open a PR a test would be nice. |
Closes containers#16360 Signed-off-by: Jake Torrance <[email protected]>
Have made the MR. I think the test covers a typical use case of APIClient in docker-py, and it fails without the change I made. Podman also currently fails to build with Jenkins' docker-plugin, so I think this is a good test, as other consumers are likely to write similar code. Since there is no case where Docker flushes more than one object at once (which makes sense anyway, to simplify API consumption), I think this is a good change. |
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
As I am migrating from OL7 -> OL8 (RHEL based), I want to keep my current code working - for that I am relying on the Podman socket to keep my docker-compose / docker-py usage functional.
docker-compose seems fine, but docker-py is not; there is an issue where the return value from the build API is different:
podman/pkg/api/handlers/compat/images_build.go, line 803 (commit 40e8bcb)
The final response includes two lines rather than a single line, because the aux object is added. This isn't really an issue and I can work around it, but I was wondering if it could be changed to be in line with the Docker API.

Steps to reproduce the issue:
Use docker py to build an image with docker and podman socket
See that the final response from the api is different
Describe the results you received:
{"aux":{"ID":"sha256:2d49cc4bc02bb457a9459736c461c52944e68e2a40cf2bb1207494b2b72c5f82"}}\n{"stream":"Successfully built 2d49cc4bc02b\n"}\n
Describe the results you expected:
{"stream":"Successfully built 2d49cc4bc02b\n"}\n
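The workaround mentioned above can be sketched as a line-buffering decoder (illustrative code; raw_chunks stands in for whatever iterator of bytes the HTTP client yields): it decodes one JSON object per newline, regardless of how the server grouped its flushes.

```python
import json

def iter_build_events(raw_chunks):
    """Yield one decoded JSON object per line, buffering partial lines.

    Works whether the server flushes one object per chunk (Docker)
    or several objects in a single chunk (Podman, as reported here).
    """
    buf = b""
    for chunk in raw_chunks:
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            if line.strip():
                yield json.loads(line)
    if buf.strip():
        yield json.loads(buf)

# Example: a Podman-style combined chunk, with the second object
# split across two reads for good measure.
chunks = [
    b'{"aux":{"ID":"sha256:2d49cc4bc02b"}}\n{"stream":"Successfully ',
    b'built 2d49cc4bc02b\\n"}\n',
]
events = list(iter_build_events(chunks))
```

Here `events` contains two separate objects, the aux record and the stream message, even though they did not arrive one per chunk.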
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:

Output of podman info:

Package info (e.g. output of rpm -q podman or apt list podman or brew info podman):

Have tested with OL9, which is 4.2.x Podman
Additional environment details (AWS, VirtualBox, physical, etc.):
Cloud-based VM