Unable to access macOS host from container via host.containers.internal #11642

Closed
cpopp opened this issue Sep 18, 2021 · 5 comments · Fixed by #11649

@cpopp

cpopp commented Sep 18, 2021

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

I'm trying to access a service listening on a TCP port on my macOS host from a container by using host.containers.internal, but it does not work. The same steps using host.docker.internal work with Docker.

Steps to reproduce the issue:

  1. Listen for socket connections on the macOS host (nc -lk 4444)

  2. Verify Docker is able to connect from a container (docker run --rm docker.io/subfuzion/netcat -zv host.docker.internal 4444)

  3. Start a Podman machine (podman machine init and podman machine start)

  4. See that Podman fails to connect to the port (podman run --rm docker.io/subfuzion/netcat -zv host.containers.internal 4444)
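
For convenience, the same four steps as one shell session (a sketch assuming nc is available on the macOS host and both Docker and Podman are installed):

# 1. On the macOS host, listen on TCP port 4444 (blocks; use a separate terminal)
nc -lk 4444

# 2. Sanity check: Docker can reach the host listener
docker run --rm docker.io/subfuzion/netcat -zv host.docker.internal 4444

# 3. Bring up a Podman machine
podman machine init
podman machine start

# 4. The same check through Podman fails with "network unreachable"
podman run --rm docker.io/subfuzion/netcat -zv host.containers.internal 4444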

Describe the results you received:

Docker prints that a connection succeeded and Podman prints that the network is unreachable.

Describe the results you expected:

That the container running with Podman would be able to successfully establish a connection to the host.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Client:
Version:      3.3.1
API Version:  3.3.1
Go Version:   go1.17
Built:        Mon Aug 30 14:15:26 2021
OS/Arch:      darwin/amd64

Server:
Version:      3.3.1
API Version:  3.3.1
Go Version:   go1.16.6
Built:        Mon Aug 30 15:46:36 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.22.3
  cgroupControllers: []
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.29-2.fc34.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.29, commit: '
  cpus: 2
  distribution:
    distribution: fedora
    version: "34"
  eventLogger: journald
  hostname: localhost
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.13.13-200.fc34.x86_64
  linkmode: dynamic
  memFree: 3641667584
  memTotal: 4104507392
  ociRuntime:
    name: crun
    package: crun-1.0-1.fc34.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.0
      commit: 139dc6971e2f1d931af520188763e984d6cdfbf8
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.12-2.fc34.x86_64
    version: |-
      slirp4netns version 1.1.12
      commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.0
  swapFree: 0
  swapTotal: 0
  uptime: 7m 3.8s
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /var/home/core/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/core/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 2
  runRoot: /run/user/1000/containers
  volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
  APIVersion: 3.3.1
  Built: 1630356396
  BuiltTime: Mon Aug 30 20:46:36 2021
  GitCommit: ""
  GoVersion: go1.16.6
  OsArch: linux/amd64
  Version: 3.3.1

Package info (e.g. output of rpm -q podman or apt list podman):

Gathered from rpm -q podman after podman machine ssh:

podman-3.3.1-1.fc34.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

physical macOS host

openshift-ci bot added the kind/bug label Sep 18, 2021
@guillaumerose
Contributor

Hello,

Good catch! I added a fix for that and it should land in the next gvproxy release.
containers/gvisor-tap-vsock@1108ea4

Thanks

@Luap99
Member

Luap99 commented Sep 20, 2021

@guillaumerose I think we have to patch Podman as well: Podman adds this entry to /etc/hosts, so the container will use that instead of the DNS server.
I will open a PR to fix this.
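
One way to observe the shadowing described here (an illustrative sketch; the alpine image is an arbitrary choice): the container's resolver consults /etc/hosts before DNS, while busybox nslookup queries the configured DNS server directly, so the two commands below can disagree:

# The entry Podman writes into the container's /etc/hosts
podman run --rm docker.io/library/alpine grep host.containers.internal /etc/hosts

# What the gvproxy DNS server answers (nslookup bypasses /etc/hosts)
podman run --rm docker.io/library/alpine nslookup host.containers.internal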

Luap99 self-assigned this Sep 20, 2021
@guillaumerose
Contributor

@Luap99 /etc/hosts can work, but I suspect we will need to make gvproxy's subnet configurable. 192.168.127.0/24 is a good random subnet, but at some point we will get an issue about it.
I am not sure it's a good idea to hardcode host.containers.internal as 192.168.127.254 in the /etc/hosts file, though.

@Luap99
Member

Luap99 commented Sep 20, 2021

@guillaumerose I think you misunderstand me: right now we use /etc/hosts for this name. I want to patch Podman to not add this entry to /etc/hosts when we run inside a podman machine, so that the container will use the response from the gvproxy DNS server.
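
Once a patched Podman stops writing the entry, resolution should fall through to gvproxy's DNS server, which maps the name to the host gateway (192.168.127.254 on gvproxy's default subnet, per the comment above), so the original reproducer should succeed (a hedged expectation, not captured output):

# Expected to succeed once /etc/hosts no longer pins the name
podman run --rm docker.io/subfuzion/netcat -zv host.containers.internal 4444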

@guillaumerose
Contributor

Oh OK, I understand now. Sorry, thanks!

Luap99 added a commit to Luap99/libpod that referenced this issue Sep 20, 2021
Let the gvproxy dns server handle the host.containers.internal entry.
Support for this is already added to gvproxy. [1]

To make sure the container uses the dns response from gvproxy we should
not add host.containers.internal to /etc/hosts in this case.

[NO TESTS NEEDED] podman machine has no tests :/

Fixes containers#11642

[1] containers/gvisor-tap-vsock@1108ea4

Signed-off-by: Paul Holzinger <[email protected]>