Docker compatibility: local images cannot be used with testcontainers #15306

Closed
asbachb opened this issue Aug 13, 2022 · 29 comments
Labels
kind/bug: Categorizes issue or PR as related to a bug.
locked - please file new issue/PR: Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

asbachb commented Aug 13, 2022

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Steps to reproduce the issue:
Podman fails to resolve a local image when Testcontainers requests it via the Docker-compatible socket.

asbachb@nixos-t14s  ~  podman image list | grep aaa
localhost/aaa/hello-world              1.0              4e404aca34bd  42 hours ago   18.8 kB
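
For context, a locally tagged image like this can be produced along these lines (a sketch; the exact commands used are not part of this report and are illustrative only):

# build directly into the local store under the localhost/ prefix
podman build -t localhost/aaa/hello-world:1.0 .
# or retag an existing image
podman tag docker.io/library/hello-world:latest localhost/aaa/hello-world:1.0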

Java test class example

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

@Testcontainers
public class TestcontainerIT {

    // container backed by the locally built image
    @Container
    GenericContainer<?> container = new GenericContainer<>(DockerImageName.parse("localhost/aaa/hello-world:1.0"));

    @Test
    public void test() throws Exception {
        System.out.println("");
    }
}

Testcontainers output

[main] INFO org.testcontainers.utility.ImageNameSubstitutor - Image name substitution will be performed by: DefaultImageNameSubstitutor (composite of 'ConfigurationFileImageNameSubstitutor' and 'PrefixingImageNameSubstitutor')
[main] INFO org.testcontainers.dockerclient.DockerClientProviderStrategy - Loaded org.testcontainers.dockerclient.UnixSocketClientProviderStrategy from ~/.testcontainers.properties, will try it first
[main] INFO org.testcontainers.dockerclient.DockerClientProviderStrategy - Found Docker environment with local Unix socket (unix:///var/run/docker.sock)
[main] INFO org.testcontainers.DockerClientFactory - Docker host IP address is localhost
[main] INFO org.testcontainers.DockerClientFactory - Connected to docker: 
  Server Version: 4.2.0
  API Version: 1.41
  Operating System: nixos
  Total Memory: 31328 MB
[main] INFO 🐳 [testcontainers/ryuk:0.3.3] - Creating container for image: testcontainers/ryuk:0.3.3
[main] INFO org.testcontainers.utility.RegistryAuthLocator - Failure when attempting to lookup auth config. Please ignore if you don't have images in an authenticated registry. Details: (dockerImageName: testcontainers/ryuk:0.3.3, configFile: /home/asbachb/.docker/config.json. Falling back to docker-java default behaviour. Exception message: /home/asbachb/.docker/config.json (No such file or directory)
[main] INFO 🐳 [testcontainers/ryuk:0.3.3] - Container testcontainers/ryuk:0.3.3 is starting: 5ab272fb85fa90bece4f59a1612b5f26d3e11ee508e8282523f6b9af427b3e42
[main] INFO 🐳 [testcontainers/ryuk:0.3.3] - Container testcontainers/ryuk:0.3.3 started in PT0.491840678S
[main] INFO org.testcontainers.utility.RyukResourceReaper - Ryuk started - will monitor and terminate Testcontainers containers on JVM exit
[main] INFO org.testcontainers.DockerClientFactory - Checking the system...
[main] INFO org.testcontainers.DockerClientFactory - ✔︎ Docker server version should be at least 1.6.0
[main] INFO 🐳 [aaa/hello-world:1.0] - Pulling docker image: aaa/hello-world:1.0. Please be patient; this may take some time but only needs to be done once.
[docker-java-stream-455719528] INFO 🐳 [aaa/hello-world:1.0] - Starting to pull image
[docker-java-stream-455719528] ERROR com.github.dockerjava.api.async.ResultCallbackTemplate - Error during callback
java.lang.NullPointerException: Cannot invoke "String.matches(String)" because the return value of "com.github.dockerjava.api.model.PullResponseItem.getStatus()" is null
	at com.github.dockerjava.api.command.PullImageResultCallback.checkForDockerSwarmResponse(PullImageResultCallback.java:48)
	at com.github.dockerjava.api.command.PullImageResultCallback.onNext(PullImageResultCallback.java:35)
	at org.testcontainers.images.LoggedPullImageResultCallback.onNext(LoggedPullImageResultCallback.java:48)
	at org.testcontainers.images.TimeLimitedLoggedPullImageResultCallback.onNext(TimeLimitedLoggedPullImageResultCallback.java:73)
	at org.testcontainers.images.TimeLimitedLoggedPullImageResultCallback.onNext(TimeLimitedLoggedPullImageResultCallback.java:24)
	at org.testcontainers.shaded.com.github.dockerjava.core.exec.AbstrAsyncDockerCmdExec$1.onNext(AbstrAsyncDockerCmdExec.java:41)
	at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder$JsonSink.accept(DefaultInvocationBuilder.java:315)
	at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder$JsonSink.accept(DefaultInvocationBuilder.java:298)
	at org.testcontainers.shaded.com.github.dockerjava.core.DefaultInvocationBuilder.lambda$executeAndStream$1(DefaultInvocationBuilder.java:275)
	at java.base/java.lang.Thread.run(Thread.java:833)

podman socket log

Aug 13 14:02:52 nixos-t14s systemd[1]: Started Podman API Service.
Aug 13 14:02:52 nixos-t14s podman[5358]: time="2022-08-13T14:02:52+04:00" level=info msg="/nix/store/y4zrkqsxi0x4z6pcyi9g8xidw2b88vf2-podman-4.2.0/bin/podman filtering at log level info"
Aug 13 14:02:52 nixos-t14s podman[5358]: time="2022-08-13T14:02:52+04:00" level=info msg="Setting parallel job count to 49"
Aug 13 14:02:52 nixos-t14s podman[5358]: time="2022-08-13T14:02:52+04:00" level=info msg="Using systemd socket activation to determine API endpoint"
Aug 13 14:02:52 nixos-t14s podman[5358]: time="2022-08-13T14:02:52+04:00" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"/run/podman/podman.sock\""
Aug 13 14:02:52 nixos-t14s podman[5358]: time="2022-08-13T14:02:52+04:00" level=info msg="API service listening on \"/run/podman/podman.sock\""
Aug 13 14:02:52 nixos-t14s podman[5358]: time="2022-08-13T14:02:52+04:00" level=warning msg="IdleTracker: StateClosed transition by connection marked un-managed" X-Reference-Id=0xc0005297c0
Aug 13 14:02:53 nixos-t14s podman[5358]: 2022-08-13 14:02:53.059212345 +0400 +04 m=+0.193852535 container died ba0b05233771394a7071e471b7442b841b2b9315b82a0699aa7151fdf80ad55a (image=docker.io/testcontainers/ryuk:0.3.3, name=testcontainers-ryuk-8a2f26d8-48d8-4eb6-9b0a-9fe97bebb451, health_status=)
Aug 13 14:02:53 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:02:53 +0400] "GET /v1.32/info HTTP/1.1" 200 2190 "" "Apache-HttpClient/5.0.3 (Java/17.0.3)"
Aug 13 14:02:53 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:02:53 +0400] "GET /v1.32/version HTTP/1.1" 200 780 "" "Apache-HttpClient/5.0.3 (Java/17.0.3)"
Aug 13 14:02:53 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:02:53 +0400] "GET /v1.32/images/json HTTP/1.1" 200 1843 "" "Apache-HttpClient/5.0.3 (Java/17.0.3)"
Aug 13 14:02:53 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:02:53 +0400] "GET /v1.32/images/testcontainers%2Fryuk:0.3.3/json HTTP/1.1" 200 1992 "" "Apache-HttpClient/5.0.3 (Java/17.0.3)"
Aug 13 14:02:53 nixos-t14s podman[5358]:
Aug 13 14:02:53 nixos-t14s podman[5358]: 2022-08-13 14:02:53.415811412 +0400 +04 m=+0.550451533 container create 5ab272fb85fa90bece4f59a1612b5f26d3e11ee508e8282523f6b9af427b3e42 (image=docker.io/testcontainers/ryuk:0.3.3, name=testcontainers-ryuk-a6929220-5e29-4341-b842-993fe5e87358, health_status=, org.testcontainers=true)
Aug 13 14:02:53 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:02:53 +0400] "POST /v1.32/containers/create?name=testcontainers-ryuk-a6929220-5e29-4341-b842-993fe5e87358 HTTP/1.1" 201 88 "" "Apache-HttpClient/5.0.3 (Java/17.0.3)"
Aug 13 14:02:53 nixos-t14s dnsmasq[3426]: read /run/containers/cni/dnsname/podman/addnhosts - 3 addresses
Aug 13 14:02:53 nixos-t14s podman[5358]: time="2022-08-13T14:02:53+04:00" level=info msg="Running conmon under slice machine.slice and unitName libpod-conmon-5ab272fb85fa90bece4f59a1612b5f26d3e11ee508e8282523f6b9af427b3e42.scope"
Aug 13 14:02:53 nixos-t14s podman[5358]: time="2022-08-13T14:02:53+04:00" level=info msg="Got Conmon PID as 5583"
Aug 13 14:02:53 nixos-t14s podman[5358]: 2022-08-13 14:02:53.547504477 +0400 +04 m=+0.682144668 container init 5ab272fb85fa90bece4f59a1612b5f26d3e11ee508e8282523f6b9af427b3e42 (image=docker.io/testcontainers/ryuk:0.3.3, name=testcontainers-ryuk-a6929220-5e29-4341-b842-993fe5e87358, health_status=, org.testcontainers=true)
Aug 13 14:02:53 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:02:53 +0400] "GET /_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Aug 13 14:02:53 nixos-t14s podman[5358]: 2022-08-13 14:02:53.555161782 +0400 +04 m=+0.689801484 container start 5ab272fb85fa90bece4f59a1612b5f26d3e11ee508e8282523f6b9af427b3e42 (image=docker.io/testcontainers/ryuk:0.3.3, name=testcontainers-ryuk-a6929220-5e29-4341-b842-993fe5e87358, health_status=, org.testcontainers=true)
Aug 13 14:02:53 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:02:53 +0400] "POST /v1.32/containers/5ab272fb85fa90bece4f59a1612b5f26d3e11ee508e8282523f6b9af427b3e42/start HTTP/1.1" 204 0 "" "Apache-HttpClient/5.0.3 (Java/17.0.3)"
Aug 13 14:02:53 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:02:53 +0400] "GET /v1.32/containers/5ab272fb85fa90bece4f59a1612b5f26d3e11ee508e8282523f6b9af427b3e42/json HTTP/1.1" 200 4787 "" "Apache-HttpClient/5.0.3 (Java/17.0.3)"
Aug 13 14:02:53 nixos-t14s podman[5358]: time="2022-08-13T14:02:53+04:00" level=info msg="Request Failed(Not Found): failed to find image aaa/hello-world:1.0: docker.io/aaa/hello-world:1.0: No such image"
Aug 13 14:02:53 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:02:53 +0400] "GET /v1.32/images/aaa%2Fhello-world:1.0/json HTTP/1.1" 404 213 "" "Apache-HttpClient/5.0.3 (Java/17.0.3)"
Aug 13 14:02:53 nixos-t14s podman[5358]: Trying to pull docker.io/aaa/hello-world:1.0...
Aug 13 14:02:53 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:02:53 +0400] "GET /v1.32/containers/5ab272fb85fa90bece4f59a1612b5f26d3e11ee508e8282523f6b9af427b3e42/logs?stdout=true&stderr=true&follow=true&since=0 HTTP/1.1" 200 192 "" "Apache-HttpClient/5.0.3 (Java/17.0.3)"
Aug 13 14:02:58 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:02:53 +0400] "POST /v1.32/images/create?fromImage=aaa%2Fhello-world&tag=1.0 HTTP/1.1" 200 464 "" "Apache-HttpClient/5.0.3 (Java/17.0.3)"
Aug 13 14:02:58 nixos-t14s podman[5358]: 2022-08-13 14:02:53.731278598 +0400 +04 m=+0.865918719 image pull  docker.io/aaa/hello-world:1.0
Aug 13 14:03:08 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:03:08 +0400] "GET /v1.29/containers/json?all=1&filters=%7B%22label%22%3A%7B%22org.testcontainers.sessionId%3Da6929220-5e29-4341-b842-993fe5e87358%22%3Atrue%2C%22org.testcontainers%3Dtrue%22%3Atrue%7D%7D&limit=0 HTTP/1.1" 200 3 "" "Go-http-client/1.1"
Aug 13 14:03:08 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:03:08 +0400] "POST /v1.29/networks/prune?filters=%7B%22label%22%3A%7B%22org.testcontainers.sessionId%3Da6929220-5e29-4341-b842-993fe5e87358%22%3Atrue%2C%22org.testcontainers%3Dtrue%22%3Atrue%7D%7D HTTP/1.1" 200 23 "" "Go-http-client/1.1"
Aug 13 14:03:08 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:03:08 +0400] "POST /v1.29/volumes/prune?filters=%7B%22label%22%3A%7B%22org.testcontainers.sessionId%3Da6929220-5e29-4341-b842-993fe5e87358%22%3Atrue%2C%22org.testcontainers%3Dtrue%22%3Atrue%7D%7D HTTP/1.1" 200 41 "" "Go-http-client/1.1"
Aug 13 14:03:08 nixos-t14s podman[5358]: time="2022-08-13T14:03:08+04:00" level=info msg="Request Failed(Internal Server Error): specifying \"dangling\" filter more than once with different values is not supported"
Aug 13 14:03:08 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:03:08 +0400] "POST /v1.29/images/prune?filters=%7B%22dangling%22%3A%7B%22false%22%3Atrue%7D%2C%22label%22%3A%7B%22org.testcontainers.sessionId%3Da6929220-5e29-4341-b842-993fe5e87358%22%3Atrue%2C%22org.testcontainers%3Dtrue%22%3Atrue%7D%7D HTTP/1.1" 500 209 "" "Go-http-client/1.1"
Aug 13 14:03:09 nixos-t14s podman[5358]: time="2022-08-13T14:03:09+04:00" level=info msg="Request Failed(Internal Server Error): specifying \"dangling\" filter more than once with different values is not supported"
Aug 13 14:03:09 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:03:09 +0400] "POST /v1.29/images/prune?filters=%7B%22dangling%22%3A%7B%22false%22%3Atrue%7D%2C%22label%22%3A%7B%22org.testcontainers.sessionId%3Da6929220-5e29-4341-b842-993fe5e87358%22%3Atrue%2C%22org.testcontainers%3Dtrue%22%3Atrue%7D%7D HTTP/1.1" 500 209 "" "Go-http-client/1.1"
Aug 13 14:03:10 nixos-t14s podman[5358]: time="2022-08-13T14:03:10+04:00" level=info msg="Request Failed(Internal Server Error): specifying \"dangling\" filter more than once with different values is not supported"
Aug 13 14:03:10 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:03:10 +0400] "POST /v1.29/images/prune?filters=%7B%22dangling%22%3A%7B%22false%22%3Atrue%7D%2C%22label%22%3A%7B%22org.testcontainers.sessionId%3Da6929220-5e29-4341-b842-993fe5e87358%22%3Atrue%2C%22org.testcontainers%3Dtrue%22%3Atrue%7D%7D HTTP/1.1" 500 209 "" "Go-http-client/1.1"
Aug 13 14:03:11 nixos-t14s podman[5358]: time="2022-08-13T14:03:11+04:00" level=info msg="Request Failed(Internal Server Error): specifying \"dangling\" filter more than once with different values is not supported"
Aug 13 14:03:11 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:03:11 +0400] "POST /v1.29/images/prune?filters=%7B%22dangling%22%3A%7B%22false%22%3Atrue%7D%2C%22label%22%3A%7B%22org.testcontainers.sessionId%3Da6929220-5e29-4341-b842-993fe5e87358%22%3Atrue%2C%22org.testcontainers%3Dtrue%22%3Atrue%7D%7D HTTP/1.1" 500 209 "" "Go-http-client/1.1"
Aug 13 14:03:12 nixos-t14s podman[5358]: time="2022-08-13T14:03:12+04:00" level=info msg="Request Failed(Internal Server Error): specifying \"dangling\" filter more than once with different values is not supported"
Aug 13 14:03:12 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:03:12 +0400] "POST /v1.29/images/prune?filters=%7B%22dangling%22%3A%7B%22false%22%3Atrue%7D%2C%22label%22%3A%7B%22org.testcontainers.sessionId%3Da6929220-5e29-4341-b842-993fe5e87358%22%3Atrue%2C%22org.testcontainers%3Dtrue%22%3Atrue%7D%7D HTTP/1.1" 500 209 "" "Go-http-client/1.1"
Aug 13 14:03:13 nixos-t14s podman[5358]: time="2022-08-13T14:03:13+04:00" level=info msg="Request Failed(Internal Server Error): specifying \"dangling\" filter more than once with different values is not supported"
Aug 13 14:03:13 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:03:13 +0400] "POST /v1.29/images/prune?filters=%7B%22dangling%22%3A%7B%22false%22%3Atrue%7D%2C%22label%22%3A%7B%22org.testcontainers.sessionId%3Da6929220-5e29-4341-b842-993fe5e87358%22%3Atrue%2C%22org.testcontainers%3Dtrue%22%3Atrue%7D%7D HTTP/1.1" 500 209 "" "Go-http-client/1.1"
Aug 13 14:03:14 nixos-t14s podman[5358]: time="2022-08-13T14:03:14+04:00" level=info msg="Request Failed(Internal Server Error): specifying \"dangling\" filter more than once with different values is not supported"
Aug 13 14:03:14 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:03:14 +0400] "POST /v1.29/images/prune?filters=%7B%22dangling%22%3A%7B%22false%22%3Atrue%7D%2C%22label%22%3A%7B%22org.testcontainers.sessionId%3Da6929220-5e29-4341-b842-993fe5e87358%22%3Atrue%2C%22org.testcontainers%3Dtrue%22%3Atrue%7D%7D HTTP/1.1" 500 209 "" "Go-http-client/1.1"
Aug 13 14:03:15 nixos-t14s podman[5358]: time="2022-08-13T14:03:15+04:00" level=info msg="Request Failed(Internal Server Error): specifying \"dangling\" filter more than once with different values is not supported"
Aug 13 14:03:15 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:03:15 +0400] "POST /v1.29/images/prune?filters=%7B%22dangling%22%3A%7B%22false%22%3Atrue%7D%2C%22label%22%3A%7B%22org.testcontainers.sessionId%3Da6929220-5e29-4341-b842-993fe5e87358%22%3Atrue%2C%22org.testcontainers%3Dtrue%22%3Atrue%7D%7D HTTP/1.1" 500 209 "" "Go-http-client/1.1"
Aug 13 14:03:16 nixos-t14s podman[5358]: time="2022-08-13T14:03:16+04:00" level=info msg="Request Failed(Internal Server Error): specifying \"dangling\" filter more than once with different values is not supported"
Aug 13 14:03:16 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:03:16 +0400] "POST /v1.29/images/prune?filters=%7B%22dangling%22%3A%7B%22false%22%3Atrue%7D%2C%22label%22%3A%7B%22org.testcontainers.sessionId%3Da6929220-5e29-4341-b842-993fe5e87358%22%3Atrue%2C%22org.testcontainers%3Dtrue%22%3Atrue%7D%7D HTTP/1.1" 500 209 "" "Go-http-client/1.1"
Aug 13 14:03:17 nixos-t14s podman[5358]: time="2022-08-13T14:03:17+04:00" level=info msg="Request Failed(Internal Server Error): specifying \"dangling\" filter more than once with different values is not supported"
Aug 13 14:03:17 nixos-t14s podman[5358]: @ - - [13/Aug/2022:14:03:17 +0400] "POST /v1.29/images/prune?filters=%7B%22dangling%22%3A%7B%22false%22%3Atrue%7D%2C%22label%22%3A%7B%22org.testcontainers.sessionId%3Da6929220-5e29-4341-b842-993fe5e87358%22%3Atrue%2C%22org.testcontainers%3Dtrue%22%3Atrue%7D%7D HTTP/1.1" 500 209 "" "Go-http-client/1.1"
Aug 13 14:03:22 nixos-t14s systemd[1]: podman.service: Deactivated successfully.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Client:       Podman Engine
Version:      4.2.0
API Version:  4.2.0
Go Version:   go1.18.4
Built:        Tue Jan  1 04:00:00 1980
OS/Arch:      linux/amd64

Output of podman info:

host:
  arch: amd64
  buildahVersion: 1.27.0
  cgroupControllers:
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: Unknown
    path: /nix/store/b7affffkal5bx7bsnrc10kpswjmr2z2d-conmon-2.1.3/bin/conmon
    version: 'conmon version 2.1.3, commit: '
  cpuUtilization:
    idlePercent: 98.75
    systemPercent: 0.21
    userPercent: 1.05
  cpus: 16
  distribution:
    codename: raccoon
    distribution: nixos
    version: "22.11"
  eventLogger: journald
  hostname: nixos-t14s
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 100
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.19.0
  linkmode: dynamic
  logDriver: journald
  memFree: 27650396160
  memTotal: 32850124800
  networkBackend: cni
  ociRuntime:
    name: crun
    package: Unknown
    path: /nix/store/0ilkx07arbcp0267lbfq0bdnf9qij294-crun-1.5/bin/crun
    version: |-
      crun version 1.5
      commit: 1.5
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_MKNOD,CAP_NET_BIND_SERVICE,CAP_NET_RAW,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: ""
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /nix/store/jgj1i03v37rj3bzsyyn7722x1bgvzm12-slirp4netns-1.2.0/bin/slirp4netns
    package: Unknown
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 0
  swapTotal: 0
  uptime: 0h 27m 40.00s
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /home/asbachb/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/asbachb/.local/share/containers/storage
  graphRootAllocated: 772921810944
  graphRootUsed: 366314885120
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 56
  runRoot: /run/user/1000/containers
  volumePath: /home/asbachb/.local/share/containers/storage/volumes
version:
  APIVersion: 4.2.0
  Built: 315532800
  BuiltTime: Tue Jan  1 04:00:00 1980
  GitCommit: ""
  GoVersion: go1.18.4
  Os: linux
  OsArch: linux/amd64
  Version: 4.2.0

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):
physical

openshift-ci bot added the kind/bug label Aug 13, 2022
rhatdan (Member) commented Aug 13, 2022

@vrothberg PTAL. We should find this in the local storage?

vrothberg (Member) commented Aug 13, 2022 via email

asbachb (Author) commented Aug 13, 2022

Is it somehow possible to target the local image storage instead of a registry with the Docker API?

vrothberg (Member) commented Aug 13, 2022 via email

asbachb (Author) commented Aug 13, 2022

@vrothberg From my understanding I already used the full image name:

I tried both localhost/aaa/hello-world:1.0 and aaa/hello-world:1.0, and neither resolves the local image.

vrothberg (Member) commented Aug 13, 2022 via email

asbachb (Author) commented Aug 13, 2022

I made a test repo showing the issue: https://github.com/asbachb/podman-testcontainers

asbachb (Author) commented Aug 14, 2022

@vrothberg Could you give some more details on how you tested the full image name?

vrothberg (Member) commented:

Apologies for the late reply, I was traveling last week.

Actually, I am not sure why it's not working on your end.

curl -XGET --unix-socket /run/user/1000/podman/podman.sock http:/v1.32/images/aaa%2Fhello-world:1.0/json

and

curl -XGET --unix-socket /run/user/1000/podman/podman.sock http:/v1.32/images/localhost%2Faaa%2Fhello-world:1.0/json

Both work for me.

asbachb (Author) commented Aug 22, 2022

@vrothberg No need for apologies ;)

I did some further investigation and noticed that /run/user/1000/podman/podman.sock behaves differently than /var/run/docker.sock:

[nix-shell:~]$ curl -XGET --unix-socket /run/user/1000/podman/podman.sock http:/v1.32/images/aaa%2Fhello-world:1.0/json
{"Id":"sha256:4e404aca34bd8257c9b08a2d6dfeb4f59102fa90f1ac2bfeec857657783db45f","RepoTags":["localhost/aaa/hello-world:1.0"],"RepoDigests":["localhost/aaa/hello-world@sha256:6a6afb8cc611df0c1cf7818ee5b6ae177b95d7cba376bb902fc67672ecf587fa"],"Parent":"feb5d9fea6a5e9606aa995e879d862b825965ba48de054caab5ef356dc6b3412","Comment":"","Created":"2022-08-11T16:07:33.012781066Z","Container":"","ContainerConfig":{"Hostname":"4e404aca34b","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":null,"Image":"","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":null},"DockerVersion":"","Author":"","Config":{"Hostname":"","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"],"Cmd":["/hello"],"Image":"","Volumes":null,"WorkingDir":"","Entrypoint":null,"OnBuild":null,"Labels":{"io.buildah.version":"1.26.1"}},"Architecture":"amd64","Os":"linux","Size":18801,"VirtualSize":18801,"GraphDriver":{"Data":{"LowerDir":"/home/asbachb/.local/share/containers/storage/overlay/e07ee1baac5fae6a26f30cabfe54a36d3402f96afda318fe0a96cec4ca393359/diff","UpperDir":"/home/asbachb/.local/share/containers/storage/overlay/9c6ea37341006cbbcc9e37bf1ffe04b8bc0a53ec9e4c35082872d91d5e4029ce/diff","WorkDir":"/home/asbachb/.local/share/containers/storage/overlay/9c6ea37341006cbbcc9e37bf1ffe04b8bc0a53ec9e4c35082872d91d5e4029ce/work"},"Name":"overlay"},"RootFS":{"Type":"layers","Layers":["sha256:e07ee1baac5fae6a26f30cabfe54a36d3402f96afda318fe0a96cec4ca393359","sha256:9624a5a5fdb04406c0759643002249345dc804ecc36d010e4f05c7fc5d3b7a43"]},"Metadata":{"LastTagTime":"0001-01-01T00:00:00Z"}}

[nix-shell:~]$ curl -XGET --unix-socket /var/run/docker.sock http:/v1.32/images/aaa%2Fhello-world:1.0/json
{"cause":"failed to find image aaa/hello-world:1.0: docker.io/aaa/hello-world:1.0: No such image","message":"failed to find image aaa/hello-world:1.0: docker.io/aaa/hello-world:1.0: No such image","response":404}

[nix-shell:~]$ ls -l /var/run/docker.sock
lrwxrwxrwx 1 root root 23 Aug 22 11:28 /var/run/docker.sock -> /run/podman/podman.sock

[nix-shell:~]$ ls -l /run/podman/podman.sock
srw-rw---- 1 root podman 0 Aug 22 11:28 /run/podman/podman.sock

[nix-shell:~]$ ls -l /run/user/1000/podman/podman.sock
srw-rw---- 1 asbachb users 0 Aug 22 11:31 /run/user/1000/podman/podman.sock

vrothberg (Member) commented:

@asbachb, can you also share the output of podman images and docker images? Just to be sure that both have the same image.

asbachb (Author) commented Aug 22, 2022

@vrothberg I just tried to clean up my system a little bit. These are the images that somehow could not be removed:

 asbachb@nixos-t14s  ~  podman images --all
REPOSITORY                         TAG         IMAGE ID      CREATED      SIZE
<none>                             <none>      b7da1c8450f3  2 weeks ago  271 MB
<none>                             <none>      2098b6b136cf  2 weeks ago  271 MB
<none>                             <none>      37ca1790740f  2 weeks ago  271 MB
<none>                             <none>      c31593d9c1a8  2 weeks ago  271 MB
<none>                             <none>      53c7f3b38c03  3 weeks ago  271 MB
<none>                             <none>      226d43b46339  3 weeks ago  271 MB
<none>                             <none>      2579260e4b1f  3 weeks ago  271 MB
<none>                             <none>      593cef60250a  3 weeks ago  271 MB
docker.io/library/eclipse-temurin  17-jre      dfbdb43d129b  3 weeks ago  271 MB
 asbachb@nixos-t14s  ~  docker images --all
REPOSITORY                         TAG         IMAGE ID      CREATED      SIZE
<none>                             <none>      b7da1c8450f3  2 weeks ago  271 MB
<none>                             <none>      2098b6b136cf  2 weeks ago  271 MB
<none>                             <none>      37ca1790740f  2 weeks ago  271 MB
<none>                             <none>      c31593d9c1a8  2 weeks ago  271 MB
<none>                             <none>      53c7f3b38c03  3 weeks ago  271 MB
<none>                             <none>      226d43b46339  3 weeks ago  271 MB
<none>                             <none>      2579260e4b1f  3 weeks ago  271 MB
<none>                             <none>      593cef60250a  3 weeks ago  271 MB
docker.io/library/eclipse-temurin  17-jre      dfbdb43d129b  3 weeks ago  271 MB

vrothberg (Member) commented:

Thanks, @asbachb.

What I wanted to figure out was which images were present when running http:/v1.32/images/aaa%2Fhello-world:1.0/json against the endpoints.

asbachb (Author) commented Aug 22, 2022

@vrothberg

 asbachb@nixos-t14s  ~  docker images --all | grep aaa
localhost/aaa/hello-world          1.0         9ae4f6030c86  29 minutes ago  18.8 kB
 asbachb@nixos-t14s  ~  podman images --all | grep aaa
localhost/aaa/hello-world          1.0         9ae4f6030c86  29 minutes ago  18.8 kB

[nix-shell:~]$ curl -XGET --unix-socket /run/podman/podman.sock http:/v1.32/images/localhost%2Faaa%2Fhello-world:1.0/json | jq
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   233  100   233    0     0   4609      0 --:--:-- --:--:-- --:--:--  4660
{
  "cause": "failed to find image localhost/aaa/hello-world:1.0: localhost/aaa/hello-world:1.0: No such image",
  "message": "failed to find image localhost/aaa/hello-world:1.0: localhost/aaa/hello-world:1.0: No such image",
  "response": 404
}

[nix-shell:~]$ curl -XGET --unix-socket /run/user/1000/podman/podman.sock http:/v1.32/images/localhost%2Faaa%2Fhello-world:1.0/json | jq
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1839  100  1839    0     0  47779      0 --:--:-- --:--:-- --:--:-- 48394
{
  "Id": "sha256:9ae4f6030c86cd7500159c002e15c33761597c4304f5492391928594eb405ff7",
  "RepoTags": [
    "localhost/aaa/hello-world:1.0"
  ],
  "RepoDigests": [
    "localhost/aaa/hello-world@sha256:0eae6bda33576b1a139425f697150a5559b9a40c65296605e5f4fc1029770337"
  ],
  "Parent": "feb5d9fea6a5e9606aa995e879d862b825965ba48de054caab5ef356dc6b3412",
  "Comment": "",
  "Created": "2022-08-22T12:21:02.818201952Z",
  "Container": "",
  "ContainerConfig": {
    "Hostname": "9ae4f6030c8",
    "Domainname": "",
    "User": "",
    "AttachStdin": false,
    "AttachStdout": false,
    "AttachStderr": false,
    "Tty": false,
    "OpenStdin": false,
    "StdinOnce": false,
    "Env": null,
    "Cmd": null,
    "Image": "",
    "Volumes": null,
    "WorkingDir": "",
    "Entrypoint": null,
    "OnBuild": null,
    "Labels": null
  },
  "DockerVersion": "",
  "Author": "",
  "Config": {
    "Hostname": "",
    "Domainname": "",
    "User": "",
    "AttachStdin": false,
    "AttachStdout": false,
    "AttachStderr": false,
    "Tty": false,
    "OpenStdin": false,
    "StdinOnce": false,
    "Env": [
      "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    ],
    "Cmd": [
      "/hello"
    ],
    "Image": "",
    "Volumes": null,
    "WorkingDir": "",
    "Entrypoint": null,
    "OnBuild": null,
    "Labels": {
      "io.buildah.version": "1.27.0"
    }
  },
  "Architecture": "amd64",
  "Os": "linux",
  "Size": 18801,
  "VirtualSize": 18801,
  "GraphDriver": {
    "Data": {
      "LowerDir": "/home/asbachb/.local/share/containers/storage/overlay/e07ee1baac5fae6a26f30cabfe54a36d3402f96afda318fe0a96cec4ca393359/diff",
      "UpperDir": "/home/asbachb/.local/share/containers/storage/overlay/9c6ea37341006cbbcc9e37bf1ffe04b8bc0a53ec9e4c35082872d91d5e4029ce/diff",
      "WorkDir": "/home/asbachb/.local/share/containers/storage/overlay/9c6ea37341006cbbcc9e37bf1ffe04b8bc0a53ec9e4c35082872d91d5e4029ce/work"
    },
    "Name": "overlay"
  },
  "RootFS": {
    "Type": "layers",
    "Layers": [
      "sha256:e07ee1baac5fae6a26f30cabfe54a36d3402f96afda318fe0a96cec4ca393359",
      "sha256:9624a5a5fdb04406c0759643002249345dc804ecc36d010e4f05c7fc5d3b7a43"
    ]
  },
  "Metadata": {
    "LastTagTime": "0001-01-01T00:00:00Z"
  }
}

vrothberg (Member) commented:

curl -XGET --unix-socket /run/podman/podman.sock

Can you do that against the docker socket? Thanks a lot for collaborating. I am sure we'll find the source of the issue.

asbachb (Author) commented Aug 22, 2022

[nix-shell:~]$ curl -XGET --unix-socket /run/podman/podman.sock http:/v1.32/images/localhost%2Faaa%2Fhello-world:1.0/json | jq
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   233  100   233    0     0   4609      0 --:--:-- --:--:-- --:--:--  4660
{
  "cause": "failed to find image localhost/aaa/hello-world:1.0: localhost/aaa/hello-world:1.0: No such image",
  "message": "failed to find image localhost/aaa/hello-world:1.0: localhost/aaa/hello-world:1.0: No such image",
  "response": 404
}
Aug 22 17:03:50 nixos-t14s podman[14225]: time="2022-08-22T17:03:50+04:00" level=info msg="/nix/store/f5saxg50gkwkqawhga7qfh8h059kzl9a-podman-4.2.0/bin/podman filtering at log level info"
Aug 22 17:03:50 nixos-t14s podman[14225]: time="2022-08-22T17:03:50+04:00" level=info msg="Setting parallel job count to 49"
Aug 22 17:03:50 nixos-t14s podman[14225]: time="2022-08-22T17:03:50+04:00" level=info msg="Using systemd socket activation to determine API endpoint"
Aug 22 17:03:50 nixos-t14s podman[14225]: time="2022-08-22T17:03:50+04:00" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"/run/podman/podman.sock\""
Aug 22 17:03:50 nixos-t14s podman[14225]: time="2022-08-22T17:03:50+04:00" level=info msg="API service listening on \"/run/podman/podman.sock\""
Aug 22 17:03:50 nixos-t14s podman[14225]: time="2022-08-22T17:03:50+04:00" level=info msg="Request Failed(Not Found): failed to find image localhost/aaa/hello-world:1.0: localhost/aaa/hello-world:1.0: No such image"
Aug 22 17:03:50 nixos-t14s podman[14225]: @ - - [22/Aug/2022:17:03:50 +0400] "GET /images/localhost%2Faaa%2Fhello-world:1.0/json HTTP/1.1" 404 233 "" "curl/7.84.0"

vrothberg (Member) commented:

/run/podman/podman.sock is the rootful Podman socket. Can you try with /var/run/docker.sock and also list the images there?

asbachb (Author) commented Aug 22, 2022

My distribution maps /var/run/docker.sock to /run/podman/podman.sock. Is that the way the socket should be symlinked?

[nix-shell:~]$ ls -l /var/run/docker.sock
lrwxrwxrwx 1 root root 23 Aug 22 11:28 /var/run/docker.sock -> /run/podman/podman.sock

mheon (Member) commented Aug 22, 2022

For mapping root Docker to root Podman, that does seem appropriate. The difference could be that we do not support the Docker group, so non-root users that are part of that group cannot access the root Podman socket (which was a terrible idea anyways, with the socket effectively being passwordless root access to the system).
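
For reference, a non-root user can point clients at their own rootless socket instead; a sketch, assuming systemd socket activation as shown in the logs above:

# enable the per-user Podman API socket
systemctl --user enable --now podman.socket
# it is served from the user's runtime directory (matches remoteSocket in the podman info output above)
curl -XGET --unix-socket /run/user/1000/podman/podman.sock http:/v1.32/images/json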

vrothberg (Member) commented:

@asbachb can you run sudo podman images? Rootless and rootful Podman do not share images and containers.

asbachb (Author) commented Aug 23, 2022

@vrothberg I guess that's the problem: I created that image with rootless Podman. Testcontainers is using the Docker compat socket, which links to the rootful Podman socket, which does not know about that image.
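
A quick way to confirm that split, reusing the commands from earlier in the thread (a sketch; only the rootless store contains the image):

podman images | grep aaa          # rootless store: localhost/aaa/hello-world:1.0
sudo podman images | grep aaa     # rootful store, served via /var/run/docker.sock -> /run/podman/podman.sock: no match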

rhatdan (Member) commented Aug 23, 2022

Since this does not seem to be a bug in Podman, I am closing; the conversation can continue.

rhatdan closed this as completed Aug 23, 2022
asbachb (Author) commented Aug 23, 2022

I wonder if it's the user's expectation that rootless and rootful Podman should share the same image files?

asbachb (Author) commented Aug 23, 2022

Just for future reference: on NixOS, docker.sock is mapped to the rootful podman.sock, so an image needs to be written into the rootful image store to be found there.

When you want to run Testcontainers with rootless Podman, it makes more sense to configure that socket manually:

            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-failsafe-plugin</artifactId>
                <configuration>
                    <environmentVariables>
                        <DOCKER_HOST>unix:///var/run/user/1000/podman/podman.sock</DOCKER_HOST>
                    </environmentVariables>
                </configuration>
            </plugin>
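
Equivalently (a sketch, not tied to Maven), the same variable can be exported in the shell that runs the build; as an assumption, Testcontainers should also honor a docker.host entry in ~/.testcontainers.properties:

# same effect from the shell that launches the build
export DOCKER_HOST=unix:///run/user/1000/podman/podman.sock
mvn verify
# (assumption) alternatively in ~/.testcontainers.properties:
#   docker.host=unix:///run/user/1000/podman/podman.sock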

vrothberg (Member) commented Oct 11, 2022 via email

fedinskiy commented:

@vrothberg could you please tell us more about this compat_api? The only documentation [1] about containers.conf that I was able to find doesn't mention such an option.

I have the same problem as the OP. I made a small reproducer [2] for it and followed this guide [3] to install and configure Podman.

[1] https://man.archlinux.org/man/containers.conf.5.en
[2] https://github.com/fedinskiy/reproducer/tree/reproducer/podman-testcontainers
[3] https://quarkus.io/blog/quarkus-devservices-testcontainers-podman/

vrothberg (Member) commented:

@fedinskiy, it looks like the option isn't documented in containers.conf. I will open a PR to address that.

I'd appreciate a reproducer that uses Podman/Docker directly. Setting up external tools such as Testcontainers or Quarkus is very time consuming, as I need to dig into that code (and I don't speak Java anymore).

vrothberg (Member) commented:

The option is documented in containers.conf (see https://github.com/containers/common/blob/main/pkg/config/containers.conf#L369-L372).

vrothberg (Member) commented:

It's also turned on by default.
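
For anyone landing here later: the linked lines concern the engine setting that makes the compat API complete short names against Docker Hub, which matches the docker.io/aaa/hello-world:1.0 resolution seen in the podman socket log above (assumption: the option is compat_api_enforce_docker_hub). A quick way to check what a host ships (a sketch; the path may vary by distribution):

# show the option and its surrounding comments in the packaged default config
grep -n -A 3 'compat_api' /usr/share/containers/containers.conf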

github-actions bot added the locked - please file new issue/PR label Sep 12, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 12, 2023