Podman socket: Build API produces different results than Docker's Build API #16360

Closed
jakegt1 opened this issue Oct 31, 2022 · 11 comments · Fixed by #16418
Labels
kind/bug: Categorizes issue or PR as related to a bug.
locked - please file new issue/PR: Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@jakegt1
Contributor

jakegt1 commented Oct 31, 2022

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

As I am migrating from OL7 to OL8 (RHEL-based), I want to keep my current code working; for that I am relying on the Podman socket to keep my docker-compose / docker-py usage functional.

docker-compose seems fine, but docker-py is not: the return value from the build API differs from Docker's:

m.Aux = []byte(fmt.Sprintf(`{"ID":"sha256:%s"}`, imageID))

The final response includes two lines rather than a single one, because this aux object is added. This isn't really a blocker and I can work around it, but I was wondering whether it could be changed to match the Docker API.

Steps to reproduce the issue:

  1. Use docker-py to build an image against both the Docker and Podman sockets (see the sketch below)

  2. See that the final response from the API is different
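
For reference, here is a minimal docker-py reproduction sketch. The socket path is the rootless Podman socket reported by podman info below, and the build path and tag are placeholders:

```python
import docker

# Point docker-py at the Podman socket (rootless path from `podman info` below);
# pointing it at the Docker socket instead shows Docker's behaviour for comparison.
client = docker.APIClient(base_url="unix:///run/user/1001/podman/podman.sock")

# Stream the raw build output so the chunk boundaries are visible.
for chunk in client.build(path=".", tag="repro:latest", rm=True):
    print(repr(chunk))
```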

Describe the results you received:
{"aux":{"ID":"sha256:2d49cc4bc02bb457a9459736c461c52944e68e2a40cf2bb1207494b2b72c5f82"}}\n{"stream":"Successfully built 2d49cc4bc02b\n"}\n

Describe the results you expected:
{"stream":"Successfully built 2d49cc4bc02b\n"}\n

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Client:       Podman Engine
Version:      4.1.1
API Version:  4.1.1
Go Version:   go1.17.12
Built:        Wed Oct 26 15:12:06 2022
OS/Arch:      linux/amd64

Output of podman info:

host:
  arch: amd64
  buildahVersion: 1.26.2
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.1.3-1.module+el8.6.0+20857+bf01bdf2.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.3, commit: 0e2e290c54cc97af44a8b96f004ff81e8abd8956'
  cpuUtilization:
    idlePercent: 98.48
    systemPercent: 0.21
    userPercent: 1.31
  cpus: 104
  distribution:
    distribution: '"ol"'
    variant: server
    version: "8.6"
  eventLogger: file
  hostname: cicd-bm-standard-ol8
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1001
      size: 1
    - container_id: 1
      host_id: 165536
      size: 65536
  kernel: 5.4.17-2136.310.7.1.el8uek.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 630402412544
  memTotal: 809608286208
  networkBackend: cni
  ociRuntime:
    name: runc
    package: runc-1.1.3-2.module+el8.6.0+20857+bf01bdf2.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.1.3
      spec: 1.0.2-dev
      go: go1.17.12
      libseccomp: 2.5.2
  os: linux
  remoteSocket:
    path: /run/user/1001/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /bin/slirp4netns
    package: slirp4netns-1.2.0-2.module+el8.6.0+20857+bf01bdf2.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 8539598848
  swapTotal: 8539598848
  uptime: 4h 35m 58.6s (Approximately 0.17 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - container-registry.oracle.com
  - docker.io
store:
  configFile: /home/oracle/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/oracle/.local/share/containers/storage
  graphRootAllocated: 2187084447744
  graphRootUsed: 178915131392
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /run/user/1001/containers
  volumePath: /home/oracle/.local/share/containers/storage/volumes
version:
  APIVersion: 4.1.1
  Built: 1666797126
  BuiltTime: Wed Oct 26 15:12:06 2022
  GitCommit: ""
  GoVersion: go1.17.12
  Os: linux
  OsArch: linux/amd64
  Version: 4.1.1

Package info (e.g. output of rpm -q podman or apt list podman or brew info podman):

podman-4.1.1-7.module+el8.6.0+20857+bf01bdf2.x86_64

I have also tested with OL9, which ships Podman 4.2.x.

Additional environment details (AWS, VirtualBox, physical, etc.):
Cloud-based VM

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Oct 31, 2022
@mheon
Member

mheon commented Oct 31, 2022

Please try a more recent Podman; I believe this has been fixed.

@jakegt1
Contributor Author

jakegt1 commented Oct 31, 2022

Alright, I am not sure whether this is the latest Podman, but I have installed Podman Desktop and initialized a VM, and it still fails; the Podman engine reports v4.2.1 - is this correct?

If so, the latest also fails. I have made a repo to reproduce it here: https://github.com/jakegt1/podman-16360-repro

@rhatdan
Member

rhatdan commented Oct 31, 2022

@flouthoc PTAL

@jakegt1
Contributor Author

jakegt1 commented Oct 31, 2022

It seems that this is related to #12063.

It seems the problem (not that I truly understand how the API works) is that the final two messages (aux and stream) come out together in the same chunk, whereas in the Docker API they arrive separately.
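
As an illustration only (this is deliberately simplified and not docker-py's actual parsing code), a consumer that expects exactly one JSON document per chunk chokes on the combined payload:

```python
import json

# One JSON document per chunk, as Docker delivers it.
docker_chunks = [
    b'{"aux":{"ID":"sha256:2d49cc4bc02b..."}}\n',
    b'{"stream":"Successfully built 2d49cc4bc02b\\n"}\n',
]

# Both documents delivered in a single chunk, as observed through the Podman socket.
podman_chunk = (
    b'{"aux":{"ID":"sha256:2d49cc4bc02b..."}}\n'
    b'{"stream":"Successfully built 2d49cc4bc02b\\n"}\n'
)

for chunk in docker_chunks:
    json.loads(chunk)          # fine: exactly one document per chunk

try:
    json.loads(podman_chunk)   # raises "Extra data" after the first document
except json.JSONDecodeError as err:
    print("combined chunk breaks a one-document-per-chunk consumer:", err)
```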

@jakegt1
Contributor Author

jakegt1 commented Nov 1, 2022

I have made a tentative MR for this, although I am not sure how to run the tests or open a merge request for it. It seems the only change needed is to flush aux before stream, rather than flushing them together.

@jakegt1
Contributor Author

jakegt1 commented Nov 1, 2022

I have at least fixed my use case.

@flouthoc
Collaborator

flouthoc commented Nov 2, 2022

@jakegt1 Would you like to open a PR, or is it a hack for your use case?

@jakegt1
Contributor Author

jakegt1 commented Nov 2, 2022

It seems logical to me that this fixes it, but I'm not that confident, as I've only just started touching the code. I can see why the original author thought this should work (they are still written on different lines, so surely they should come out separately).

It's plausible to me that this is actually some kind of "bug" in docker-py rather than a bug in the Podman API - but the "bug" never arises with Docker because there is no case where two lines are sent at once. So I consider it a bug in Podman's API in the sense that it doesn't behave the same as Docker's API, even though it arguably shouldn't be one.

I can open a PR; should I write a test first, though?

@jakegt1
Contributor Author

jakegt1 commented Nov 3, 2022

@flouthoc

@flouthoc
Collaborator

flouthoc commented Nov 3, 2022

@jakegt1 Sure, open a PR; a test would be nice.

jakegt1 added a commit to jakegt1/podman that referenced this issue Nov 6, 2022
@jakegt1
Contributor Author

jakegt1 commented Nov 6, 2022

I have made the MR.

I think the test reflects a fairly typical use case for docker-py's APIClient. It fails without the change I made.

Podman also currently fails to build with Jenkins's docker-plugin, so I think this is a good test, as other consumers are likely to write similar code. Since there is no case where Docker flushes more than one object at once (which makes sense anyway, as it simplifies API consumption), I think this is a good change.
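
For context, the usage pattern the test exercises looks roughly like this (a hypothetical sketch, not the actual test added in the PR; socket path and tag are placeholders):

```python
import docker

client = docker.APIClient(base_url="unix:///run/user/1001/podman/podman.sock")

# With decode=True docker-py yields one parsed dict per JSON message; as with
# Docker, the final message should be the "Successfully built ..." stream line.
messages = list(client.build(path=".", tag="repro:latest", rm=True, decode=True))
assert messages and "stream" in messages[-1]
assert messages[-1]["stream"].startswith("Successfully built")
```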

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 11, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 11, 2023