kube-dns fails to start with - FailedCreatePodSandBox error #587
Comments
Seems like you haven't installed any pod network plugin.
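If it helps anyone triaging this, a minimal check (a sketch assuming the kubeadm defaults; pod names will differ):

```bash
# Without a pod network add-on, nodes typically report NotReady and
# kube-dns sits in Pending/ContainerCreating.
kubectl get nodes
kubectl -n kube-system get pods -o wide
```
|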
I'm seeing very similar behavior as of a few hours ago. I can confirm that it occurs even with the flanneld container up and running, applied from the standard recommended pod-network plugin yaml. This is a bare-metal install on a blade system. (The restarts are normal for us, as pulling images often takes several tries for some reason on our net.)
|
Figured this out. Flanneld requires the portmap binary when using the default plugin yaml. A pull request is in at utf18/ansible-kubeadm#3 To work around the issue you can grab portmap and place it at /opt/cni/bin/portmap with permissions set to 0755.
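For reference, the manual workaround amounts to something like this (a sketch; where you obtain the portmap binary is an assumption, e.g. a containernetworking/plugins release):

```bash
# Copy a portmap binary into the CNI bin directory with mode 0755.
# The ./portmap path is a placeholder for wherever you downloaded it.
sudo install -m 0755 ./portmap /opt/cni/bin/portmap
```
|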
@Inevitable, thanks. In the meantime I got this running. I am curious as to why there are 6 options in the install instructions. More specifically, there is no default choice or a clear reason why I (blissfully ignorant of the nuances of networking) would choose one over the other. Would it make sense to have an officially supported version, or an initial recommendation? |
@Inevitable - how did you figure out this was related to portmap? I want to confirm this solved the issue for me too, and that the same issue was causing similar behaviour. |
Bit of search-fu based on the flanneld pod log led me to flannel-io/flannel#890 From there it was just a simple test to see if my situation was the same.
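For anyone else retracing this, the check is just reading the flannel pod's log (a sketch; the pod name is a placeholder):

```bash
# Locate the flannel pod, then look for CNI/portmap errors in its log.
kubectl -n kube-system get pods | grep flannel
kubectl -n kube-system logs <kube-flannel-pod-name>
```
|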
Thanks @Inevitable for the clear solution! |
I think this is still a bug. It is happening for me with 1.8.4 in GKE... I've tried deleting the host node to get it on a new one and it has the same problem over and over. Deployment config:

```yaml
---
# ØMQ forwarder
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    maintainers: "Peter Novotnak <[email protected]>,Jane Doe <[email protected]>"
    source: https://github.com/myorg/myproj
  labels:
    name: myprojorwhatever
    tier: backend
  name: zmq-fwd
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: zmq
    spec:
      containers:
      - name: zmq-fwd
        image: gcr.io/myproj/zmq-dev
        command:
        - invoke
        - zmq-forwarder
        env:
        - name: ZMQ_FORWARDER_PULL
          value: 'tcp://*:5556'
        - name: ZMQ_FORWARDER_PUB
          value: 'tcp://*:5557'
        ports:
        - containerPort: 5556
          name: zmq-fwd-pull
          protocol: TCP
        - containerPort: 5557
          name: zmq-fwd-pub
          protocol: TCP
        resources:
          requests:
            cpu: "1"
            memory: "100m"
          limits:
            cpu: "1"
            memory: "300m"
```

Associated service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: zmq-fwd
spec:
  ports:
  - name: zmq-fwd-pull
    port: 5556
    protocol: TCP
    targetPort: zmq-fwd-pull
  - name: zmq-fwd-pub
    port: 5557
    protocol: TCP
    targetPort: zmq-fwd-pub
  selector:
    name: zmq-fwd
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
```

The result of a
|
I don't know if I can provide access to our cluster but I can provide uncensored logs to anyone looking into this. |
Ah, in my case this is because the requests/limits I have configured are written incorrectly.
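For context, the mistake is the memory quantities: in Kubernetes resource notation the "m" suffix means milli, so memory: "100m" requests 0.1 bytes rather than 100 mebibytes. A corrected resources block would look roughly like this (the target values are just the presumably intended ones):

```yaml
resources:
  requests:
    cpu: "1"
    memory: "100Mi"   # was "100m", i.e. 100 millibytes = 0.1 bytes
  limits:
    cpu: "1"
    memory: "300Mi"   # likewise, "300m" was 0.3 bytes
```
|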
Closing; this is a heavily validated area, and this is typically a configuration or setup issue.
What keywords did you search in kubeadm issues before filing this one?
It was suggested in #507 that I create a new issue.
Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
Versions
kubeadm version (use `kubeadm version`):
Environment:
- Kubernetes version (use `kubectl version`):
- OS: ubuntu/xenial
- Kernel (`uname -a`): Linux ubuntu-xenial 4.4.0-101-generic #124-Ubuntu SMP Fri Nov 10 18:29:59 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
What happened?
The kube-dns pod fails to start when creating a cluster with kubeadm.
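The symptom is the kube-dns pod stuck with FailedCreatePodSandBox events; roughly (the pod name is a placeholder):

```bash
kubectl -n kube-system get pods
# Look for FailedCreatePodSandBox in the Events section.
kubectl -n kube-system describe pod <kube-dns-pod-name>
```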
What you expected to happen?
How to reproduce it (as minimally and precisely as possible)?
This gist contains the scripts I use to create the cluster:
To repro, you can just run:
Then, to ssh into the machine, run `vagrant ssh`.
Anything else we need to know?
kubelet logs: