curl command is not working in k3s container #763
Comments
I have this too; I couldn't even install |
Install k3s (on a Civo instance). The step above hangs, then gives an i/o timeout. |
@erikwilson what do you need from us on this one? It's blocking my use of k3s. |
Hmmm, I feel like it might be helpful for us to have a bug reporting tool to collect system info. Is there a firewall enabled? (If yes, does temporarily disabling the firewall resolve the issue?) Otherwise: how is k3s installed? |
@alexellis I can't reproduce this on Ubuntu 18.04.2: $ uname -a |
I've just been thinking about this and it may be "Docker + OpenStack" specific. (I'm not normally network focused, so some of the following terminology may be wrong.) OpenStack adds some overhead to the packet size, so we need to reconfigure Docker on OpenStack instances to use an MTU that is 50 bytes smaller. For some history on the bug: https://medium.com/@sylwit/how-we-spent-a-full-day-figuring-out-a-mtu-issue-with-docker-4d81fdfe2caf The fix for Civo (and other OpenStack, or otherwise affected, virtualised instances) is here: https://www.civo.com/learn/fixing-networking-for-docker I'll put this fix into our managed K3s service, but if you're installing manually or using …, @alexellis, hope this fixes it for you. |
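For anyone hitting this with plain Docker, here is a minimal sketch of the fix described in that Civo article, assuming the instance NIC reports the usual 1500-byte MTU (the 1450 value and the ens3 interface name are assumptions; check with `ip link` first, and note this overwrites any existing daemon.json):

```bash
# Check what MTU the instance interface actually has (ens3 is an assumption).
ip link show ens3

# Tell the Docker daemon to use an MTU 50 bytes smaller than the host NIC.
# Warning: this replaces any existing /etc/docker/daemon.json.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "mtu": 1450
}
EOF
sudo systemctl restart docker
```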
Actually, I've just realised K3s doesn't use Docker - so this may be a complete red herring or the fix may be similar? |
There's some info on setting the MTU for flannel at flannel-io/flannel#841 (and again saying use networkMTU - 50), but I don't know how this applies to K3s. |
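I'm not sure how that maps to K3s either, but a quick way to see what the embedded flannel actually chose on a node, plus a blunt host-level workaround, would be something like this (the ens3 name and 1450 value are assumptions for a Civo/OpenStack instance):

```bash
# flannel writes the MTU it computed (host interface MTU minus VXLAN overhead) here:
cat /run/flannel/subnet.env      # look for FLANNEL_MTU=...
ip link show flannel.1           # the VXLAN device flannel created

# Lowering the underlying interface MTU makes flannel derive a smaller value
# on the next start (ens3/1450 are placeholders for your setup):
sudo ip link set dev ens3 mtu 1450
```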
|
From the host:
|
Sorry for the verbose output 😄
|
We at Civo use a 172.31.0.0 address for our "pseudo-public" interface. There's more description around this networking style and why we did it at https://www.civo.com/blog/changes-in-ip-address-usage-at-civo |
I wonder if ens3 routing is conflicting with the pod & service CIDRs configured for k3s. |
starting k3s with running |
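For anyone following along, this is roughly the shape of the flags being discussed, i.e. moving the pod and service CIDRs off the 172.31.0.0 range used by the host network (the exact ranges below are illustrative, not the ones Civo settled on):

```bash
# Install the k3s server with non-default pod/service CIDRs so they can't
# collide with the instance's 172.31.x.x interface (ranges are examples only).
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server \
  --cluster-cidr 192.168.0.0/17 \
  --service-cidr 192.168.128.0/17" sh -
```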
I've just switched our managed K3s service to install with those flags (as we aren't in closed beta yet). Below is some more debug:
Process is running with flags, so network check:
Checking to see if we can download files over HTTP:
I remembered from when I've done stuff like this (I don't do it that often) that sometimes you need to
So it's all working? Just for extra stuff, I did the
and it's off into the ether of never coming back, but it certainly left our network and the |
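A small aside for reproducing this kind of check: giving curl explicit timeouts and verbose output makes it easier to tell a DNS stall from a connection that leaves the network and never comes back (the URL is just an example):

```bash
# -v shows the resolve / connect / TLS stages; the timeouts stop a hung
# request from disappearing into the ether indefinitely.
curl -v --connect-timeout 5 --max-time 30 -o /dev/null https://github.com/

# And time the DNS lookup on its own.
time nslookup github.com
```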
Interesting, looks like it is working! Thought about the need for |
Weirdly the second (and further nodes) aren't connecting to the cluster now though. I'm installing them with (effectively):
Is that right? Before now it didn't have an |
Never mind, I realised the agent doesn't need those flags (I assume it pulls them from the server). |
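For completeness, the usual shape of joining extra nodes matches that assumption: the agent only needs the server URL and token, and picks the cluster networking up from the server (the IP and token below are placeholders):

```bash
# On the server: read the join token.
sudo cat /var/lib/rancher/k3s/server/node-token

# On each additional node (replace the placeholder IP and token):
curl -sfL https://get.k3s.io | \
  K3S_URL=https://<server-ip>:6443 \
  K3S_TOKEN=<node-token> sh -
```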
Just launched a new cluster with the new CIDR and it all seems OK for me - @alexellis ?
|
Here's a bare-metal Intel NUC at my house with Ubuntu 18.04:
(I believe it's using Weave net.) |
With no changes to the Civo/OpenStack VM, it worked at least once:
The second time it failed:
Someone asked about firewalls / security groups. There is only a security group on incoming traffic to block VXLAN from the outside.
|
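On the firewall point: with the default flannel VXLAN backend, nodes also need to reach each other on UDP 8472, and the API server on TCP 6443, so it's worth confirming node-to-node traffic isn't caught by the same rules. A sanity check assuming ufw on Ubuntu (the 10.0.0.0/24 range is a placeholder for whatever private network the nodes share):

```bash
# Show the active ruleset.
sudo ufw status verbose

# Allow flannel VXLAN between nodes and access to the k3s API server.
sudo ufw allow from 10.0.0.0/24 to any port 8472 proto udp
sudo ufw allow 6443/tcp
```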
@erikwilson in light of this issue from the Civo blog about MTUs, which Andy mentioned, can we configure an MTU setting for containerd? Perhaps since networking is out of scope for containerd, this would actually be done in CNI with Flannel? |
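As far as I know containerd itself doesn't take an MTU; the closest k3s-level knob I'm aware of is telling the embedded flannel which interface to work from, so it derives its MTU from the right NIC (ens3 here is an assumption):

```bash
# Point k3s's embedded flannel at a specific interface; flannel computes its
# VXLAN MTU from that interface's MTU minus the encapsulation overhead.
k3s server --flannel-iface ens3
```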
I'm seeing this:
|
With those values though, it looks like this is an old cluster without the 192 change made this afternoon?
…On 28 Aug 2019, 18:53 +0100, Alex Ellis ***@***.***>, wrote:
I'm seeing this:
***@***.***:~# cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.42.0.0/16
FLANNEL_SUBNET=10.42.0.1/24
FLANNEL_MTU=1400
FLANNEL_IPMASQ=true
***@***.***:~#
|
The MTU stuff is being set up in flannel and is applying a -50; looking at the network configurations, I think it is okay. I would expect it to fail consistently. The weird thing is the error |
I edited the file to |
You guys can use my sample curl image to try out other sites:
From my home network this takes |
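If anyone wants to repeat that test without building an image, a throwaway pod does the same job (curlimages/curl is just a suggestion, not the sample image mentioned above):

```bash
# One-off pod that curls an external site with a hard timeout, then is removed.
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -sv --max-time 30 -o /dev/null https://github.com/
```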
Ah, I think the |
I've just checked from a container and it seems to take a while to resolve a DNS entry, then consistently resolves it. However, I just checked the underlying instance and it shows the same problem. I'll have our engineers look into it tomorrow; I have a hacky fix for it but would like them to try to properly fix it first. |
Might be worth checking for DNSSEC errors in … I think there is a 5s timeout for DNS queries, so it could also be upstream DNS response times. |
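A couple of quick in-cluster DNS checks for anyone chasing the 5-second symptom, following the standard Kubernetes DNS-debugging approach (the image tag and label are the usual ones; adjust if your cluster differs):

```bash
# Resolve a cluster-internal name from a throwaway pod.
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
  nslookup kubernetes.default

# CoreDNS logs often show upstream timeouts or SERVFAILs directly.
kubectl -n kube-system logs -l k8s-app=kube-dns
```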
Hi, I am experiencing the same issue. Did you manage to solve it? |
We're not experiencing it any more, but unfortunately I can't remember what changed. Sorry I can't help. |
Okay, should I create a separate issue for this or piggyback on this one? |
Okay, in my case I didn't really solve the issue, but when using Cilium instead of flannel it's not happening. |
@jmichalek132 Thank you, using Cilium solved all the timeouts for me. |
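For anyone who wants to try the same swap, the rough shape of it on k3s is to start the server without the bundled flannel and then install Cilium separately (the Cilium CLI shown here is only one of several install methods):

```bash
# Start k3s with no built-in CNI or network policy controller.
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server \
  --flannel-backend=none --disable-network-policy" sh -

# Then deploy Cilium, e.g. with its CLI (Helm also works).
cilium install
```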
I think this can be closed, certainly hasn't been an issue for a while. |
This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions. |
Describe the bug
curl command does not work in k3s container