Unable to reach nginx on all nodes when using nodeport #53

Closed
clincha opened this issue Nov 12, 2022 · 9 comments · Fixed by #54

Labels: ansible, bug (Something isn't working), kubernetes

Comments


clincha commented Nov 12, 2022

When I try to reach the NGINX pod via its NodePort on nodes that aren't hosting it, I get a connection error instead of a response.

[clincha@bri-runner-01 ~]$ curl 192.168.1.24:30689
curl: (7) Failed to connect to 192.168.1.24 port 30689: No route to host
[clincha@bri-runner-01 ~]$ curl 192.168.1.21:30689
curl: (7) Failed to connect to 192.168.1.21 port 30689: No route to host
[clincha@bri-runner-01 ~]$ curl 192.168.1.22:30689
curl: (7) Failed to connect to 192.168.1.22 port 30689: No route to host
[clincha@bri-runner-01 ~]$ curl 192.168.1.2:30689
3^C
[clincha@bri-runner-01 ~]$ curl 192.168.1.23:30689
<html>
<head><title>400 The plain HTTP request was sent to HTTPS port</title></head>
<body>
<center><h1>400 Bad Request</h1></center>
<center>The plain HTTP request was sent to HTTPS port</center>
<hr><center>nginx/1.23.2</center>
</body>
</html>
[clincha@bri-runner-01 ~]$
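
For context, a quick sanity check (not from the original report) to confirm which service owns NodePort 30689 and which node the pod actually landed on:

# Find the service exposing 30689 and the node hosting the NGINX pod.
kubectl get svc --all-namespaces -o wide | grep 30689
kubectl get pods --all-namespaces -o wide | grep nginx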

clincha commented Nov 12, 2022

This was recommended elsewhere but made no difference:

systemctl stop kubelet
systemctl stop cri-o
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start cri-o
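
If kube-proxy is running in iptables mode (an assumption), the NodePort rules should be rebuilt after the restart; one way to check:

# KUBE-NODEPORTS is the nat-table chain kube-proxy maintains for NodePort services.
iptables -t nat -L KUBE-NODEPORTS -n | grep 30689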


clincha commented Nov 12, 2022

Had to pin it to a node. Not happy about that...
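
For reference, a minimal sketch of pinning the pod with a nodeSelector (the node name, label, and deployment name here are hypothetical):

# Hypothetical names throughout -- swap in the real node and deployment.
kubectl label node bri-node-03 pin=nginx
kubectl patch deployment nginx -p '{"spec":{"template":{"spec":{"nodeSelector":{"pin":"nginx"}}}}}'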

@clincha clincha closed this as completed Nov 12, 2022

clincha commented Nov 12, 2022

Ugh, this is happening to all services, not just NGINX.

@clincha clincha reopened this Nov 12, 2022

clincha commented Nov 12, 2022

kubernetes/kubernetes#100434

That issue suggests that moving from Calico to Flannel should fix it. Giving that a go now.


clincha commented Nov 12, 2022

Remove Calico

kubectl delete -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl delete -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

Install Flannel

kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
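
A hedged sanity check after swapping the CNI: Flannel runs as a DaemonSet, so every node should get a pod and the nodes should return to Ready.

# Check the Flannel pods and node status after the CNI swap.
kubectl get pods --all-namespaces -o wide | grep flannel
kubectl get nodes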


clincha commented Nov 12, 2022

Edited the controller manager manifest:

vi /etc/kubernetes/manifests/kube-controller-manager.yaml

https://gist.github.com/rkaramandi/44c7cea91501e735ea99e356e9ae7883
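
For reference, the edit for Flannel usually comes down to the pod CIDR flags on the controller manager; a hedged way to check what is currently set (10.244.0.0/16 is Flannel's default, not necessarily this cluster's value):

# Check whether node CIDR allocation is on and what the cluster CIDR is set to.
grep -E 'allocate-node-cidrs|cluster-cidr' /etc/kubernetes/manifests/kube-controller-manager.yaml
# For Flannel's default network the expected flags look like:
#   - --allocate-node-cidrs=true
#   - --cluster-cidr=10.244.0.0/16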

flannel-io/flannel#728

ip link set cni0 down && ip link set flannel.1 down 
ip link delete cni0 && ip link delete flannel.1
systemctl restart cri-o && systemctl restart kubelet
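
Once the Flannel pods come back up, the interfaces should be recreated; a quick check:

# flannel.1 (the VXLAN device) and cni0 (the bridge) should reappear once Flannel restarts.
ip link show flannel.1
ip link show cni0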


clincha commented Nov 12, 2022

After switching over to Flannel the issue is still not resolved. Apparently Weave has sorted this out, so I guess I'll try that next, although setting sudo iptables -P FORWARD ACCEPT didn't seem to help, which it should have if Weave was going to fix it.
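
For what it's worth, the default FORWARD policy can be checked directly; a DROP policy here would be one explanation for cross-node NodePort traffic being dropped:

# The first line printed is the chain's default policy, e.g. "-P FORWARD ACCEPT".
sudo iptables -S FORWARD | head -1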


clincha commented Nov 12, 2022

This worked: https://stackoverflow.com/questions/62540512/k8s-1-18-1-api-not-reachable-since-update-to-1-18-1

I needed to change the NET_ADMIN capability to privileged in the flannel YAML.
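
A rough sketch of that change, assuming the stock kube-flannel.yml was downloaded locally (field placement and container names can differ between Flannel versions):

# In the Flannel DaemonSet's main container, swap the capability-based
# securityContext:
#
#   securityContext:
#     capabilities:
#       add: ["NET_ADMIN"]
#
# for a privileged one:
#
#   securityContext:
#     privileged: true
#
# then re-apply the edited manifest:
kubectl apply -f kube-flannel.yml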


clincha commented Nov 12, 2022

It's so good to see it working. I'll get everything written up properly later but the gist of it is this:

  • I was missing a working container network, so I installed Flannel
  • Kubernetes wasn't happy because old virtual interfaces from the previous CNI were still hanging around
  • Flannel needed more than the default permissions (privileged rather than just the NET_ADMIN capability) so it could change iptables rules

@clincha clincha transferred this issue from clincha/proxy Nov 13, 2022
@clincha clincha added the bug (Something isn't working), kubernetes and ansible labels Nov 13, 2022
@clincha clincha linked a pull request Nov 13, 2022 that will close this issue