host.lima.internal should be defined *inside* running containers #491

Closed
jandubois opened this issue Dec 22, 2021 · 21 comments
Labels
enhancement (New feature or request), guest/alpine (Guest: Alpine)

Comments

@jandubois
Member

A note though - host.lima.internal should be defined inside running containers. That's what Docker Desktop has always done; it provides that name resolution. People don't really want to get to the host just from inside the lima container; they want to get to the host from inside containers they're running.

Originally posted by @rfay in #389 (comment)

jandubois added the enhancement label Dec 22, 2021
@jandubois
Member Author

I thought this would already work automatically, but I can see how this might fail on non-systemd setups (Alpine) when e.g. coredns just forwards DNS queries to the nameserver from /etc/resolv.conf, which in this case would forward to the host resolver and not the guest resolver.

@rfay Did you observe this issue with Alpine (e.g. Rancher Desktop or the latest colima), or also with other guests?

@AkihiroSuda
Member

RFC containerd/nerdctl#355

@rfay
Contributor

rfay commented Dec 22, 2021

Yeah, same thing on colima: you see host.lima.internal resolved inside colima ssh but not inside a running container.

Docker Desktop has always provided this name resolution (for host.docker.internal, of course).

On Linux, ddev figures out the correct host IP address from docker inspect bridge, but that doesn't seem to work with lima.
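
A minimal sketch of that kind of lookup on plain Linux (the exact --format template here is illustrative, not necessarily what ddev runs); it prints the default bridge gateway, typically 172.17.0.1:

docker network inspect bridge --format '{{ (index .IPAM.Config 0).Gateway }}'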

I'm just running any old Debian container,

docker run debian:buster ping -c 1 host.lima.internal
ping: host.lima.internal: Name or service not known

@jandubois
Member Author

I'm just running any old Debian container,

Yes, but what is the guest OS? I think colima used to use Ubuntu but has switched to Alpine (or is just about to make the switch). My question is whether this happens even when the guest runs systemd-resolved, or not.

@rfay
Contributor

rfay commented Dec 22, 2021

The guest OS here is Debian, the way I think about it. The host is macOS arm64. But colima ssh gives "Alpine Linux 3.14.3". It's a confusing world where you have two levels of host.

@jandubois
Member Author

jandubois commented Dec 22, 2021

I would use this terminology:

  • host: macOS
  • guest: Alpine
  • container: Debian

Anyway, you have confirmed that you observe the issue on an Alpine guest. I still think it should work correctly on any systemd-based guest:

  • on Alpine, /etc/resolv.conf points to a nameserver running on the macOS host (to be able to resolve names via VPN). That means any names defined in /etc/hosts in the guest are invisible to DNS queries (and therefore to containers).

  • with systemd, the nameserver points to 127.0.0.53, which is implemented by systemd-resolved. It will try to resolve names inside the guest first (so it can see /etc/hosts), but still forwards other queries to the server on the macOS host (to resolve names via VPN).

At least that's what I think is going on, so we need a solution just for the Alpine use case, which doesn't have systemd.
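
A quick way to check which setup a given guest uses (the instance name "default" and the 192.168.5.3 address are just the usual defaults, not guaranteed):

limactl shell default cat /etc/resolv.conf   # Alpine: points at the host-side resolver (typically 192.168.5.3)
limactl shell default resolvectl status      # systemd guests only: shows the 127.0.0.53 stub and its upstream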

jandubois added the guest/alpine label Dec 22, 2021
@rfay
Contributor

rfay commented Dec 22, 2021

And, just checked on Rancher Desktop: it's also Alpine 3.14.3. Same issue.

@jandubois
Member Author

RFC containerd/nerdctl#355

I'm not convinced about this proposal, as it would only work for nerdctl. E.g. it would not work for k8s because coredns would just delegate DNS lookup to the nameserver from /etc/resolv.conf. Would the feature be useful for nerdctl outside the context of lima?
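
For illustration, this is the relevant directive from a typical default Corefile (e.g. as deployed by kubeadm); anything coredns cannot answer itself is sent to the nameserver from /etc/resolv.conf, so names that exist only in the guest's /etc/hosts never resolve:

.:53 {
    # ... other plugins ...
    forward . /etc/resolv.conf
}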

I rather think that the lima DNS forwarder needs to know about the /etc/hosts names and resolve them internally.

We could do this by forwarding the names via the guest agent, but I think it would be simpler to add a hostnames section to lima.yaml:

hostnames:
# - host.lima.internal (always defined)
- host.docker.internal
- my-custom-hostname

I don't know if there is a valid use case for adding other local names with fixed IP addresses to the guest. I would argue they should be added on the host and then will work automatically. This mechanism is only for adding aliases for the host inside the guest.

If the internal DNS server can resolve them, then we also no longer need to add them to /etc/hosts inside the guest at all.
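
To make the gap concrete (the commands and the alpine image are only illustrative):

getent hosts host.lima.internal                      # in the guest: answered from /etc/hosts
nerdctl run --rm alpine nslookup host.lima.internal  # in a container: goes straight to the DNS server, so it fails unless that server knows the name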

@jandubois
Member Author

This mechanism is only for adding aliases for the host inside the guest.

I just realized that we might still want to define additional aliases for external IP addresses provided via vmnet, so the mechanism needs to be more flexible. Will think through this at a later time.

@swalkinshaw

Did #650 fix this?

@jandubois
Member Author

Did #650 fix this?

Thanks for the reminder, it does indeed fix this issue, so I'm going to close this.

I just realized that we might still want to define additional aliases for external IP addresses provided via vmnet, so the mechanism needs to be more flexible. Will think through this at a later time.

I'm not sure why I thought we might want this. #650 doesn't provide any names for additional interfaces, but it defines the hostname (lima-$INSTANCE) with the SLIRP IP of the instance itself (192.168.5.15).

@rfay
Contributor

rfay commented Feb 22, 2022

host.lima.internal is there now, but really host.docker.internal should be there for compatibility. Seems a shame to skip that.

@jandubois
Member Author

host.lima.internal is there now, but really host.docker.internal should be there for compatibility. Seems a shame to skip that.

You can easily do this yourself, and it is defined in examples/docker.yaml. Both colima and Rancher Desktop define it as well.

Not sure why it should be defined for other configurations.

@rfay
Contributor

rfay commented Feb 22, 2022

Yes, ddev automatically adds host.docker.internal already, but AFAIK neither colima nor Rancher Desktop does it (I just tested with colima, and it doesn't; I haven't tested with Rancher Desktop in the last few weeks). It's a shame to just add a one-off like host.lima.internal when host.docker.internal has been the standard for years.

@jandubois
Member Author

jandubois commented Feb 22, 2022

neither colima nor Rancher Desktop do it (I just tested with colima, it doesn't, haven't tested with Rancher Desktop in the last few weeks).

It is my understanding that both colima and Rancher Desktop add host.docker.internal in /etc/hosts already, but that is not available inside containers when using the host resolver. This new mechanism makes it possible to add arbitrary names to the host resolver, so they resolve both inside the guest and inside containers. It will require new releases to make use of this feature.

It's a shame to just add a one-off like host.lima.internal when host.docker.internal has been the standard for years.

I fail to see the issue. lima is a general tool providing Linux VMs. Why does it need to include special settings for Docker Desktop compatibility even in VMs that are unrelated to Docker? It is a tool that provides the generalized capabilities, and other tools built on top of it (like colima and Rancher Desktop) can add whatever specializations they want (like installing moby and defining host.docker.internal).

It is really trivial to do, and you can even do it yourself in a $LIMA_HOME/_config/override.yaml file with:

hostResolver:
  hosts:
    host.docker.internal: host.lima.internal

This should work right now with current releases of lima and colima and will define the alias in all your instances under that $LIMA_HOME.
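
A quick sanity check after restarting an instance (the instance name is just an example); the name should then resolve via the guest's DNS:

limactl stop default && limactl start default
limactl shell default getent hosts host.docker.internal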

@rfay
Contributor

rfay commented Feb 22, 2022

You're the maintainer, of course, but loads of existing code and docs point to "host.docker.internal", and lima is a replacement for docker, so in many ways it makes sense for it to provide compatibility. It does in many, many other ways, so it must be a priority.

Yes, I think that "host.docker.internal" may be there but not inside a running container, which is the only place (that I know of) that it's useful.

Again, ddev, which I maintain, has had to add host.docker.internal itself on Linux for years, so I added it for colima/lima/Rancher Desktop, no big deal. The point is, when aiming for compatibility as a design goal, which lima has done wonderfully, this is the small stuff and easy to do.

@jandubois
Member Author

jandubois commented Feb 22, 2022

Hi @rfay, I just realized that one reason I was pushing back here is that you kept repeating "it is a shame", a condemnation of a moral failure on my part, which took me aback:

It is a shame when a country is unwilling to protect its children from being murdered at school. But this issue is more akin to your favourite restaurant not providing a plastic straw with your soft drink unless you explicitly ask for it. The restaurant has a reason for the policy, and while you may not agree with the reasons behind it, the policy is not a moral failure of the restaurant. It is just an inconvenience to you. You still get your straw, free of charge, you just have to ask for it. No need for moral judgement and shaming...

Maybe I'm overly sensitive to this (and there is a wider context beyond software behind this, but I don't want to go into it here), so back to technicalities:

lima is a replacement for docker

As I wrote before, lima has a much wider purpose than being a replacement for docker. E.g. one could consider it a replacement for vagrant, and your request is more like asking that vagrant should unconditionally inject host.docker.internal into each VM it manages.

Yes, I think that "host.docker.internal" may be there but not inside a running container

This is not correct:

$ limactl start examples/docker.yaml
? Creating an instance "docker" Proceed with the default configuration
[...]
$ docker info | grep Name:
 Name: lima-docker
$ docker run --rm busybox ping -c 1 host.docker.internal
PING host.docker.internal (192.168.5.2): 56 data bytes
64 bytes from 192.168.5.2: seq=0 ttl=254 time=0.568 ms
[...]

For colima and Rancher Desktop you will need new releases before you see this working (they are not yet available at this point in time).

So the only thing that remains is the request that users of containerd/podman/apptainer, or of just plain Linux VMs, also get host.docker.internal pre-configured.

I continue to fail to see the need and don't really think we have to promote the docker name outside the docker context, but I also don't see much harm in defining it either.

So if @AkihiroSuda agrees with you that we should globally define this alias in the lima core, then I will create a PR for it.

@AkihiroSuda
Member

So if @AkihiroSuda agrees with you that we should globally define this alias in the lima core, then I will create a PR for it.

No 😛

@rfay
Contributor

rfay commented Feb 22, 2022

@jandubois thanks for letting me know about your reaction.

I just realized that one reason I was pushing back here is that you kept repeating "it is a shame", a condemnation of a moral failure on my part, which was taking me aback

I've never heard of that reaction to the phrase "it's a shame" or "what a shame". To me that's a "transparent metaphor" which has nothing at all to do with shame. It's not transparent for me any more though! I appreciate you letting me know how it affects you.

As a note on my own background reaction to this issue... I was subscribed to the docker (Linux) issue about adding resolution of host.docker.internal for all those years; it is still open and still confusing people. As you know, I worked around the lack years ago for Linux (and now Lima), but wow, has there been so much angst and confusion about that over the years.

Lima is a wonderful project with oh-so-responsive maintainers. My hope is that it can grow and mature with good governance and without using up the maintainers.

Do you plan to do GitHub Sponsors or another way to financially support the project? ddev has recently set up GitHub Sponsors and one of the stated goals is to send 10% to upstream projects, and Lima would be an obvious target.

@jandubois
Member Author

I've never heard of that reaction to the phrase "it's a shame" or "what a shame". To me that's a "transparent metaphor" which has nothing at all to do with shame.

Thank you for letting me know; I guess I've been misunderstanding the phrase. In my mind, the bad-worse-worst progression went "it is unfortunate", "it is a disgrace", "it is a shame". I'm going to remember that it doesn't necessarily have the literal meaning. Sorry for the misunderstanding!

Do you plan to do GitHub Sponsors or another way to financially support the project?

I can't speak for @AkihiroSuda, but personally I'm not looking for support and would refuse it if offered: I work for SUSE on Rancher Desktop, and probably 80% of what I do for Lima could be considered in support of it, so I'm already being paid for my work. If anything were set up for Lima, I would not want to be involved in the administration of it either.

@rfay
Contributor

rfay commented Feb 22, 2022

I didn't know you were paid for your work, but appreciate the work and its quality, and Rancher's support of it! It's such a delight to work with responsive maintainers. Anyway, Lima is a great upstream for ddev now, and lots of people are trying to get farther away from Docker Desktop of course, so it's great for them. And this looks like it will get better all the time.
