
kube-dns fails to start with - FailedCreatePodSandBox error #587

Closed
srossross opened this issue Dec 4, 2017 · 11 comments
What keywords did you search in kubeadm issues before filing this one?

I was advised in #507 to open a new issue.

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

Versions

kubeadm version (use kubeadm version):

kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:17:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (use kubectl version):
    kubectl version
    Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", 
    GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", 
    BuildDate:"2017-11-20T05:28:34Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", 
    GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", 
    BuildDate:"2017-11-20T05:17:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
    
  • Cloud provider or hardware configuration: vagrant
  • OS (e.g. from /etc/os-release): ubuntu/xenial
  • Kernel (uname -a): Linux ubuntu-xenial 4.4.0-101-generic #124-Ubuntu SMP Fri Nov 10 18:29:59 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

What happened?

The kube-dns pod fails to start when creating a cluster with kubeadm.

kubectl --namespace kube-system describe pods kube-dns-545bc4bfd4-djsm4
...
Events:
  Type     Reason                  Age               From                    Message
  ----     ------                  ----              ----                    -------
  Normal   Scheduled               6m                default-scheduler       Successfully assigned kube-dns-545bc4bfd4-djsm4 to ubuntu-xenial
  Normal   SuccessfulMountVolume   6m                kubelet, ubuntu-xenial  MountVolume.SetUp succeeded for volume "kube-dns-config"
  Normal   SuccessfulMountVolume   6m                kubelet, ubuntu-xenial  MountVolume.SetUp succeeded for volume "kube-dns-token-qlqjp"
  Warning  FailedCreatePodSandBox  6m                kubelet, ubuntu-xenial  Failed create pod sandbox.
  Warning  FailedSync              5m (x11 over 6m)  kubelet, ubuntu-xenial  Error syncing pod
  Normal   SandboxChanged          1m (x26 over 6m)  kubelet, ubuntu-xenial  Pod sandbox changed, it will be killed and re-created.
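
The events above only report "Failed create pod sandbox." with no underlying reason. One way to surface the root cause is to grep the kubelet journal for sandbox/CNI-related lines; this is just a triage sketch, and the PATTERN below is my own choice, not a standard filter:

```shell
# Pattern for triaging sandbox failures (assumes a systemd-managed kubelet).
PATTERN='sandbox|cni|NetworkPluginNotReady'
# On the node:
#   journalctl -u kubelet --no-pager | grep -iE "$PATTERN" | tail -n 20
# Demonstration against one captured log line:
echo 'W1204 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d' \
  | grep -iE "$PATTERN"
```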

What did you expect to happen?

kube-dns to start successfully.

How to reproduce it (as minimally and precisely as possible)?

This gist contains the scripts I use to create the cluster:

# provision.sh as root
apt-get update
apt-get install -qy docker.io apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubelet kubeadm kubernetes-cni

kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=0.0.0.0 \
  --kubernetes-version stable-1.8
su ubuntu -c "mkdir -p /home/ubuntu/.kube"
cp /etc/kubernetes/admin.conf /home/ubuntu/.kube/config
chown "$(id -u ubuntu):$(id -g ubuntu)" /home/ubuntu/.kube/config

# kubectl.sh as ubuntu user
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
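
Until the flannel pod is actually running and has written its config, the kubelet keeps logging "Unable to update cni config: No networks found in /etc/cni/net.d", which is what FailedCreatePodSandBox often traces back to. A minimal check sketch (the check_cni helper is my own, not part of any tool; the assumption is that flannel drops a conflist into the standard /etc/cni/net.d directory once its DaemonSet pod starts):

```shell
# Check whether any CNI network config has been written yet.
check_cni() {
  dir="${1:-/etc/cni/net.d}"
  if [ -d "$dir" ] && [ -n "$(ls -A "$dir" 2>/dev/null)" ]; then
    echo "CNI config present in $dir"
  else
    echo "CNI config missing in $dir - flannel pod has probably not started"
  fi
}
check_cni /etc/cni/net.d
```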

To reproduce, run:

git clone https://gist.github.com/srossross/94d8c74165a89a6be967c0f53c6bfd3b reproduce-error
cd reproduce-error
vagrant up

Then, to SSH into the machine, run vagrant ssh.

Anything else we need to know?

kubelet logs:

-- Logs begin at Mon 2017-12-04 22:52:27 UTC, end at Mon 2017-12-04 22:59:31 UTC. --
Dec 04 22:54:30 ubuntu-xenial systemd[1]: Started kubelet: The Kubernetes Node Agent.
Dec 04 22:54:30 ubuntu-xenial systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Dec 04 22:54:30 ubuntu-xenial systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Dec 04 22:54:30 ubuntu-xenial systemd[1]: Started kubelet: The Kubernetes Node Agent.
Dec 04 22:54:30 ubuntu-xenial kubelet[4258]: I1204 22:54:30.435471    4258 feature_gate.go:156] feature gates: map[]
Dec 04 22:54:30 ubuntu-xenial kubelet[4258]: I1204 22:54:30.435603    4258 controller.go:114] kubelet config controller: starting controller
Dec 04 22:54:30 ubuntu-xenial kubelet[4258]: I1204 22:54:30.435613    4258 controller.go:118] kubelet config controller: validating combination of defaults and flags
Dec 04 22:54:30 ubuntu-xenial kubelet[4258]: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
Dec 04 22:54:30 ubuntu-xenial systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 04 22:54:30 ubuntu-xenial systemd[1]: kubelet.service: Unit entered failed state.
Dec 04 22:54:30 ubuntu-xenial systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 04 22:54:40 ubuntu-xenial systemd[1]: kubelet.service: Service hold-off time over, scheduling restart.
Dec 04 22:54:40 ubuntu-xenial systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Dec 04 22:54:40 ubuntu-xenial systemd[1]: Started kubelet: The Kubernetes Node Agent.
Dec 04 22:54:40 ubuntu-xenial kubelet[4344]: I1204 22:54:40.996819    4344 feature_gate.go:156] feature gates: map[]
Dec 04 22:54:40 ubuntu-xenial kubelet[4344]: I1204 22:54:40.996887    4344 controller.go:114] kubelet config controller: starting controller
Dec 04 22:54:40 ubuntu-xenial kubelet[4344]: I1204 22:54:40.996958    4344 controller.go:118] kubelet config controller: validating combination of defaults and flags
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.019288    4344 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.019355    4344 client.go:95] Start docker client with request timeout=2m0s
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: W1204 22:54:41.022241    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.026147    4344 feature_gate.go:156] feature gates: map[]
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: W1204 22:54:41.026623    4344 server.go:289] --cloud-provider=auto-detect is deprecated. The desired cloud provider should be set explicitly
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.042691    4344 certificate_manager.go:361] Requesting new certificate.
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: E1204 22:54:41.043137    4344 certificate_manager.go:284] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://10.0.2.15:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.043793    4344 manager.go:149] cAdvisor running in container: "/sys/fs/cgroup/cpu,cpuacct/system.slice/kubelet.service"
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: W1204 22:54:41.052581    4344 manager.go:157] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: W1204 22:54:41.052986    4344 manager.go:166] unable to connect to CRI-O api service: Get http://%2Fvar%2Frun%2Fcrio.sock/info: dial unix /var/run/crio.sock: connect: no such file or directory
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.061736    4344 fs.go:139] Filesystem UUIDs: map[2017-12-01-19-41-20-00:/dev/sdb 7150905e-db03-4f6c-b8a9-54656d784602:/dev/sda1]
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.061761    4344 fs.go:140] Filesystem partitions: map[tmpfs:{mountpoint:/run major:0 minor:18 fsType:tmpfs blockSize:0} /dev/sda1:{mountpoint:/var/lib/docker/aufs major:8 minor:1 fsType:ext4 blockSize:0}]
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.062556    4344 manager.go:216] Machine: {NumCores:2 CpuFrequency:3099998 MemoryCapacity:1040322560 HugePages:[{PageSize:2048 NumPages:0}] MachineID:65d62946951642019ccccafd7be4edde SystemUUID:4C13101A-7A17-4D8E-B231-5AE52DDAAA3C BootID:e2779b2d-3c39-4af0-b830-57f311b72e7d Filesystems:[{Device:tmpfs DeviceMajor:0 DeviceMinor:18 Capacity:104034304 Type:vfs Inodes:126992 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:10340831232 Type:vfs Inodes:1280000 HasInodes:true}] DiskMap:map[8:0:{Name:sda Major:8 Minor:0 Size:10737418240 Scheduler:deadline} 8:16:{Name:sdb Major:8 Minor:16 Size:10485760 Scheduler:deadline}] NetworkDevices:[{Name:enp0s3 MacAddress:02:b9:83:f0:a4:db Speed:1000 Mtu:1500}] Topology:[{Id:0 Memory:1040322560 Cores:[{Id:0 Threads:[0] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:4194304 Type:Unified Level:3}]} {Id:1 Threads:[1] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2} {Size:4194304 Type:Unified Level:3}]}] Caches:[]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.063239    4344 manager.go:222] Version: {KernelVersion:4.4.0-101-generic ContainerOsVersion:Ubuntu 16.04.3 LTS DockerVersion:1.13.1 DockerAPIVersion:1.26 CadvisorVersion: CadvisorRevision:}
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.063652    4344 server.go:422] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.067183    4344 container_manager_linux.go:252] container manager verified user specified cgroup-root exists: /
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.067214    4344 container_manager_linux.go:257] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s}
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.067305    4344 container_manager_linux.go:288] Creating device plugin handler: false
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.067381    4344 kubelet.go:273] Adding manifest file: /etc/kubernetes/manifests
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.067414    4344 kubelet.go:283] Watching apiserver
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: E1204 22:54:41.071975    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.0.2.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dubuntu-xenial&resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: E1204 22:54:41.074971    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.0.2.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dubuntu-xenial&resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: E1204 22:54:41.075065    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.0.2.15:6443/api/v1/services?resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: W1204 22:54:41.080048    4344 kubelet_network.go:69] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.080095    4344 kubelet.go:517] Hairpin mode set to "hairpin-veth"
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: W1204 22:54:41.080203    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: W1204 22:54:41.085838    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: W1204 22:54:41.106696    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.106890    4344 docker_service.go:207] Docker cri networking managed by cni
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.129380    4344 docker_service.go:212] Docker Info: &{ID:27WJ:PXFZ:HMW3:JEVZ:LA65:VXJU:UXMA:6FYF:ZHHS:POWO:3NRN:FEMC Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:aufs DriverStatus:[[Root Dir /var/lib/docker/aufs] [Backing Filesystem extfs] [Dirs 0] [Dirperm1 Supported true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:16 OomKillDisable:true NGoroutines:23 SystemTime:2017-12-04T22:54:41.107866091Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.4.0-101-generic OperatingSystem:Ubuntu 16.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc4202c5e30 NCPU:2 MemTotal:1040322560 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-xenial Labels:[] ExperimentalBuild:false ServerVersion:1.13.1 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:0xc4200c6c80} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:aa8187dbd3b7ad67d8e5e3a15115d3eef43a7ed1} RuncCommit:{ID:N/A Expected:9df8b306d01f59d3a8029be411de015b7304dd8f} InitCommit:{ID:N/A Expected:949e6facb77383876aeff8a6944dde66b3089574} SecurityOptions:[name=apparmor name=seccomp,profile=default]}
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.129953    4344 docker_service.go:225] Setting cgroupDriver to cgroupfs
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.143055    4344 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.144809    4344 kuberuntime_manager.go:178] Container runtime docker initialized, version: 1.13.1, apiVersion: 1.26.0
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: W1204 22:54:41.145387    4344 probe.go:215] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.146531    4344 server.go:718] Started kubelet v1.8.4
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: E1204 22:54:41.147041    4344 kubelet.go:1234] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data for container /
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.147563    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.148007    4344 server.go:128] Starting to listen on 0.0.0.0:10250
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.148613    4344 server.go:296] Adding debug handlers to kubelet server.
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: E1204 22:54:41.151142    4344 event.go:209] Unable to write event: 'Post https://10.0.2.15:6443/api/v1/namespaces/default/events: dial tcp 10.0.2.15:6443: getsockopt: connection refused' (may retry after sleeping)
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.164456    4344 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.164490    4344 status_manager.go:140] Starting to sync pod status with apiserver
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.164499    4344 kubelet.go:1768] Starting kubelet main sync loop.
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.164512    4344 kubelet.go:1779] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: E1204 22:54:41.165220    4344 container_manager_linux.go:603] [ContainerManager]: Fail to get rootfs information unable to find data for container /
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.165243    4344 volume_manager.go:246] Starting Kubelet Volume Manager
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: W1204 22:54:41.169166    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: E1204 22:54:41.170643    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.180786    4344 factory.go:355] Registering Docker factory
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: W1204 22:54:41.180816    4344 manager.go:265] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: W1204 22:54:41.180912    4344 manager.go:276] Registration of the crio container factory failed: Get http://%2Fvar%2Frun%2Fcrio.sock/info: dial unix /var/run/crio.sock: connect: no such file or directory
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.180923    4344 factory.go:54] Registering systemd factory
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.181271    4344 factory.go:86] Registering Raw factory
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.181578    4344 manager.go:1140] Started watching for new ooms in manager
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.182133    4344 manager.go:311] Starting recovery of all containers
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.238260    4344 manager.go:316] Recovery completed
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.269794    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.271630    4344 kubelet_node_status.go:83] Attempting to register node ubuntu-xenial
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: E1204 22:54:41.272049    4344 kubelet_node_status.go:107] Unable to register node "ubuntu-xenial" with API server: Post https://10.0.2.15:6443/api/v1/nodes: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: E1204 22:54:41.275491    4344 eviction_manager.go:238] eviction manager: unexpected err: failed to get node info: node 'ubuntu-xenial' not found
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.472905    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.475820    4344 kubelet_node_status.go:83] Attempting to register node ubuntu-xenial
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: E1204 22:54:41.476373    4344 kubelet_node_status.go:107] Unable to register node "ubuntu-xenial" with API server: Post https://10.0.2.15:6443/api/v1/nodes: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: E1204 22:54:41.477976    4344 event.go:209] Unable to write event: 'Post https://10.0.2.15:6443/api/v1/namespaces/default/events: dial tcp 10.0.2.15:6443: getsockopt: connection refused' (may retry after sleeping)
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.876699    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: I1204 22:54:41.878763    4344 kubelet_node_status.go:83] Attempting to register node ubuntu-xenial
Dec 04 22:54:41 ubuntu-xenial kubelet[4344]: E1204 22:54:41.879136    4344 kubelet_node_status.go:107] Unable to register node "ubuntu-xenial" with API server: Post https://10.0.2.15:6443/api/v1/nodes: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:42 ubuntu-xenial kubelet[4344]: E1204 22:54:42.075417    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.0.2.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dubuntu-xenial&resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:42 ubuntu-xenial kubelet[4344]: E1204 22:54:42.080380    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.0.2.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dubuntu-xenial&resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:42 ubuntu-xenial kubelet[4344]: E1204 22:54:42.090785    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.0.2.15:6443/api/v1/services?resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:42 ubuntu-xenial kubelet[4344]: I1204 22:54:42.679412    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:54:42 ubuntu-xenial kubelet[4344]: I1204 22:54:42.681376    4344 kubelet_node_status.go:83] Attempting to register node ubuntu-xenial
Dec 04 22:54:42 ubuntu-xenial kubelet[4344]: E1204 22:54:42.681863    4344 kubelet_node_status.go:107] Unable to register node "ubuntu-xenial" with API server: Post https://10.0.2.15:6443/api/v1/nodes: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:43 ubuntu-xenial kubelet[4344]: E1204 22:54:43.077107    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.0.2.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dubuntu-xenial&resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:43 ubuntu-xenial kubelet[4344]: E1204 22:54:43.081271    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.0.2.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dubuntu-xenial&resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:43 ubuntu-xenial kubelet[4344]: E1204 22:54:43.091845    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.0.2.15:6443/api/v1/services?resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:44 ubuntu-xenial kubelet[4344]: E1204 22:54:44.078076    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.0.2.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dubuntu-xenial&resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:44 ubuntu-xenial kubelet[4344]: E1204 22:54:44.082537    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.0.2.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dubuntu-xenial&resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:44 ubuntu-xenial kubelet[4344]: E1204 22:54:44.093080    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.0.2.15:6443/api/v1/services?resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:44 ubuntu-xenial kubelet[4344]: I1204 22:54:44.282177    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:54:44 ubuntu-xenial kubelet[4344]: I1204 22:54:44.285534    4344 kubelet_node_status.go:83] Attempting to register node ubuntu-xenial
Dec 04 22:54:44 ubuntu-xenial kubelet[4344]: E1204 22:54:44.285810    4344 kubelet_node_status.go:107] Unable to register node "ubuntu-xenial" with API server: Post https://10.0.2.15:6443/api/v1/nodes: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:45 ubuntu-xenial kubelet[4344]: E1204 22:54:45.078714    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.0.2.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dubuntu-xenial&resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:45 ubuntu-xenial kubelet[4344]: E1204 22:54:45.083940    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.0.2.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dubuntu-xenial&resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:45 ubuntu-xenial kubelet[4344]: E1204 22:54:45.093964    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.0.2.15:6443/api/v1/services?resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: E1204 22:54:46.083379    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.0.2.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dubuntu-xenial&resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: E1204 22:54:46.085415    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.0.2.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dubuntu-xenial&resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: E1204 22:54:46.095033    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.0.2.15:6443/api/v1/services?resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: I1204 22:54:46.165875    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: I1204 22:54:46.169131    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: I1204 22:54:46.169723    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: I1204 22:54:46.171081    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: I1204 22:54:46.171966    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: I1204 22:54:46.176941    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: W1204 22:54:46.178287    4344 status_manager.go:431] Failed to get status for pod "etcd-ubuntu-xenial_kube-system(d76e26fba3bf2bfd215eb29011d55250)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/etcd-ubuntu-xenial: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: E1204 22:54:46.178431    4344 kubelet.go:1612] Failed creating a mirror pod for "etcd-ubuntu-xenial_kube-system(d76e26fba3bf2bfd215eb29011d55250)": Post https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: I1204 22:54:46.178623    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: E1204 22:54:46.179474    4344 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: W1204 22:54:46.180342    4344 status_manager.go:431] Failed to get status for pod "kube-apiserver-ubuntu-xenial_kube-system(61bbfe2414e8482550c5a2bb216e2bb2)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-xenial: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: I1204 22:54:46.181465    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: E1204 22:54:46.186330    4344 kubelet.go:1612] Failed creating a mirror pod for "kube-controller-manager-ubuntu-xenial_kube-system(9e739eac222404d177c06d9b6eb3683c)": Post https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: W1204 22:54:46.186813    4344 status_manager.go:431] Failed to get status for pod "kube-controller-manager-ubuntu-xenial_kube-system(9e739eac222404d177c06d9b6eb3683c)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ubuntu-xenial: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: E1204 22:54:46.188139    4344 kubelet.go:1612] Failed creating a mirror pod for "kube-apiserver-ubuntu-xenial_kube-system(61bbfe2414e8482550c5a2bb216e2bb2)": Post https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: W1204 22:54:46.188781    4344 status_manager.go:431] Failed to get status for pod "kube-scheduler-ubuntu-xenial_kube-system(ca97fd23ad8837acfa829af8dfc86a7e)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ubuntu-xenial: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: E1204 22:54:46.229330    4344 kubelet.go:1612] Failed creating a mirror pod for "kube-scheduler-ubuntu-xenial_kube-system(ca97fd23ad8837acfa829af8dfc86a7e)": Post https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: I1204 22:54:46.265846    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd" (UniqueName: "kubernetes.io/host-path/d76e26fba3bf2bfd215eb29011d55250-etcd") pod "etcd-ubuntu-xenial" (UID: "d76e26fba3bf2bfd215eb29011d55250")
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: I1204 22:54:46.266379    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/61bbfe2414e8482550c5a2bb216e2bb2-k8s-certs") pod "kube-apiserver-ubuntu-xenial" (UID: "61bbfe2414e8482550c5a2bb216e2bb2")
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: I1204 22:54:46.266637    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/61bbfe2414e8482550c5a2bb216e2bb2-ca-certs") pod "kube-apiserver-ubuntu-xenial" (UID: "61bbfe2414e8482550c5a2bb216e2bb2")
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: W1204 22:54:46.277114    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: E1204 22:54:46.277628    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: I1204 22:54:46.365796    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/9e739eac222404d177c06d9b6eb3683c-k8s-certs") pod "kube-controller-manager-ubuntu-xenial" (UID: "9e739eac222404d177c06d9b6eb3683c")
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: I1204 22:54:46.366359    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/9e739eac222404d177c06d9b6eb3683c-ca-certs") pod "kube-controller-manager-ubuntu-xenial" (UID: "9e739eac222404d177c06d9b6eb3683c")
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: I1204 22:54:46.366710    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/9e739eac222404d177c06d9b6eb3683c-kubeconfig") pod "kube-controller-manager-ubuntu-xenial" (UID: "9e739eac222404d177c06d9b6eb3683c")
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: I1204 22:54:46.366753    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/9e739eac222404d177c06d9b6eb3683c-flexvolume-dir") pod "kube-controller-manager-ubuntu-xenial" (UID: "9e739eac222404d177c06d9b6eb3683c")
Dec 04 22:54:46 ubuntu-xenial kubelet[4344]: I1204 22:54:46.366791    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/ca97fd23ad8837acfa829af8dfc86a7e-kubeconfig") pod "kube-scheduler-ubuntu-xenial" (UID: "ca97fd23ad8837acfa829af8dfc86a7e")
Dec 04 22:54:47 ubuntu-xenial kubelet[4344]: E1204 22:54:47.084707    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://10.0.2.15:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dubuntu-xenial&resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:47 ubuntu-xenial kubelet[4344]: E1204 22:54:47.087081    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://10.0.2.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dubuntu-xenial&resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:47 ubuntu-xenial kubelet[4344]: E1204 22:54:47.096450    4344 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://10.0.2.15:6443/api/v1/services?resourceVersion=0: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:47 ubuntu-xenial kubelet[4344]: I1204 22:54:47.486263    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:54:47 ubuntu-xenial kubelet[4344]: I1204 22:54:47.488695    4344 kubelet_node_status.go:83] Attempting to register node ubuntu-xenial
Dec 04 22:54:47 ubuntu-xenial kubelet[4344]: E1204 22:54:47.489550    4344 kubelet_node_status.go:107] Unable to register node "ubuntu-xenial" with API server: Post https://10.0.2.15:6443/api/v1/nodes: dial tcp 10.0.2.15:6443: getsockopt: connection refused
[... the same three reflector.go:205 errors ("Failed to list *v1.Pod", "*v1.Node", "*v1.Service ... dial tcp 10.0.2.15:6443: getsockopt: connection refused") repeat once per second from 22:54:48 through 22:54:51; omitted ...]
Dec 04 22:54:51 ubuntu-xenial kubelet[4344]: E1204 22:54:51.275691    4344 eviction_manager.go:238] eviction manager: unexpected err: failed to get node info: node 'ubuntu-xenial' not found
Dec 04 22:54:51 ubuntu-xenial kubelet[4344]: W1204 22:54:51.279452    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:54:51 ubuntu-xenial kubelet[4344]: E1204 22:54:51.279812    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 04 22:54:51 ubuntu-xenial kubelet[4344]: E1204 22:54:51.478896    4344 event.go:209] Unable to write event: 'Post https://10.0.2.15:6443/api/v1/namespaces/default/events: dial tcp 10.0.2.15:6443: getsockopt: connection refused' (may retry after sleeping)
[... identical reflector.go:205 "connection refused" errors repeat at 22:54:52 and 22:54:53; omitted ...]
Dec 04 22:54:53 ubuntu-xenial kubelet[4344]: I1204 22:54:53.895391    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:54:53 ubuntu-xenial kubelet[4344]: I1204 22:54:53.900634    4344 kubelet_node_status.go:83] Attempting to register node ubuntu-xenial
Dec 04 22:54:53 ubuntu-xenial kubelet[4344]: E1204 22:54:53.901023    4344 kubelet_node_status.go:107] Unable to register node "ubuntu-xenial" with API server: Post https://10.0.2.15:6443/api/v1/nodes: dial tcp 10.0.2.15:6443: getsockopt: connection refused
[... identical reflector.go:205 "connection refused" errors repeat at 22:54:54, 22:54:55, and 22:54:56; omitted ...]
Dec 04 22:54:56 ubuntu-xenial kubelet[4344]: I1204 22:54:56.234791    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:54:56 ubuntu-xenial kubelet[4344]: W1204 22:54:56.237301    4344 status_manager.go:431] Failed to get status for pod "kube-apiserver-ubuntu-xenial_kube-system(61bbfe2414e8482550c5a2bb216e2bb2)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ubuntu-xenial: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:56 ubuntu-xenial kubelet[4344]: E1204 22:54:56.237642    4344 kubelet.go:1612] Failed creating a mirror pod for "kube-apiserver-ubuntu-xenial_kube-system(61bbfe2414e8482550c5a2bb216e2bb2)": Post https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:54:56 ubuntu-xenial kubelet[4344]: W1204 22:54:56.281430    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:54:56 ubuntu-xenial kubelet[4344]: E1204 22:54:56.281541    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
[... identical reflector.go:205 "connection refused" errors repeat at 22:54:57; omitted ...]
Dec 04 22:54:57 ubuntu-xenial kubelet[4344]: I1204 22:54:57.239128    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:54:57 ubuntu-xenial kubelet[4344]: E1204 22:54:57.245856    4344 kubelet.go:1612] Failed creating a mirror pod for "kube-apiserver-ubuntu-xenial_kube-system(61bbfe2414e8482550c5a2bb216e2bb2)": Post https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.0.2.15:6443: getsockopt: connection refused
[... identical reflector.go:205 "connection refused" errors repeat at 22:54:58, 22:54:59, and 22:55:00; omitted ...]
Dec 04 22:55:00 ubuntu-xenial kubelet[4344]: I1204 22:55:00.261206    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:55:00 ubuntu-xenial kubelet[4344]: W1204 22:55:00.264205    4344 status_manager.go:431] Failed to get status for pod "kube-scheduler-ubuntu-xenial_kube-system(ca97fd23ad8837acfa829af8dfc86a7e)": Get https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ubuntu-xenial: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:55:00 ubuntu-xenial kubelet[4344]: E1204 22:55:00.264371    4344 kubelet.go:1612] Failed creating a mirror pod for "kube-scheduler-ubuntu-xenial_kube-system(ca97fd23ad8837acfa829af8dfc86a7e)": Post https://10.0.2.15:6443/api/v1/namespaces/kube-system/pods: dial tcp 10.0.2.15:6443: getsockopt: connection refused
Dec 04 22:55:00 ubuntu-xenial kubelet[4344]: I1204 22:55:00.901442    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:55:00 ubuntu-xenial kubelet[4344]: I1204 22:55:00.902923    4344 kubelet_node_status.go:83] Attempting to register node ubuntu-xenial
Dec 04 22:55:01 ubuntu-xenial kubelet[4344]: E1204 22:55:01.279610    4344 eviction_manager.go:238] eviction manager: unexpected err: failed to get node info: node 'ubuntu-xenial' not found
Dec 04 22:55:01 ubuntu-xenial kubelet[4344]: W1204 22:55:01.298368    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:55:01 ubuntu-xenial kubelet[4344]: E1204 22:55:01.298648    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 04 22:55:01 ubuntu-xenial kubelet[4344]: I1204 22:55:01.302442    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:55:06 ubuntu-xenial kubelet[4344]: W1204 22:55:06.300948    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:55:06 ubuntu-xenial kubelet[4344]: E1204 22:55:06.301048    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 04 22:55:10 ubuntu-xenial kubelet[4344]: I1204 22:55:10.344599    4344 kubelet_node_status.go:280] Setting node annotation to enable volume controller attach/detach
Dec 04 22:55:11 ubuntu-xenial kubelet[4344]: E1204 22:55:11.283906    4344 eviction_manager.go:238] eviction manager: unexpected err: failed to get node info: node 'ubuntu-xenial' not found
Dec 04 22:55:11 ubuntu-xenial kubelet[4344]: W1204 22:55:11.307344    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:55:11 ubuntu-xenial kubelet[4344]: E1204 22:55:11.309746    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 04 22:55:12 ubuntu-xenial kubelet[4344]: E1204 22:55:12.608029    4344 event.go:200] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ubuntu-xenial.14fd39539d104c68", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{sec:0, nsec:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ubuntu-xenial", UID:"ubuntu-xenial", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ubuntu-xenial"}, FirstTimestamp:v1.Time{Time:time.Time{sec:63648024881, nsec:146514536, loc:(*time.Location)(0x5329a40)}}, LastTimestamp:v1.Time{Time:time.Time{sec:63648024881, nsec:146514536, loc:(*time.Location)(0x5329a40)}}, Count:1, Type:"Normal"}': 'namespaces "default" not found' (will not retry!)
[... three more near-identical event.go:200 "Server rejected event" errors follow at 22:55:12, for Reason NodeHasSufficientDisk, NodeHasSufficientMemory, and NodeHasNoDiskPressure, each rejected with 'namespaces "default" not found' (will not retry!); omitted ...]
Dec 04 22:55:13 ubuntu-xenial kubelet[4344]: I1204 22:55:13.949393    4344 kubelet_node_status.go:86] Successfully registered node ubuntu-xenial
Dec 04 22:55:13 ubuntu-xenial kubelet[4344]: E1204 22:55:13.951233    4344 kubelet_node_status.go:390] Error updating node status, will retry: error getting node "ubuntu-xenial": nodes "ubuntu-xenial" not found
Dec 04 22:55:16 ubuntu-xenial kubelet[4344]: W1204 22:55:16.311975    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:55:16 ubuntu-xenial kubelet[4344]: E1204 22:55:16.313411    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 04 22:55:21 ubuntu-xenial kubelet[4344]: W1204 22:55:21.298429    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
[... the "Unable to update cni config: No networks found in /etc/cni/net.d" / "Container runtime network not ready ... cni config uninitialized" pair repeats every five seconds, and the "no observation found for eviction signal allocatableNodeFs.available" warning every ten seconds, through 22:55:31; omitted ...]
Dec 04 22:55:31 ubuntu-xenial kubelet[4344]: E1204 22:55:31.580357    4344 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Dec 04 22:55:31 ubuntu-xenial kubelet[4344]: I1204 22:55:31.770705    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/3acf53e3-d946-11e7-a964-02b983f0a4db-kube-proxy") pod "kube-proxy-4xsdk" (UID: "3acf53e3-d946-11e7-a964-02b983f0a4db")
Dec 04 22:55:31 ubuntu-xenial kubelet[4344]: I1204 22:55:31.771108    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-mmnt4" (UniqueName: "kubernetes.io/secret/3acf53e3-d946-11e7-a964-02b983f0a4db-kube-proxy-token-mmnt4") pod "kube-proxy-4xsdk" (UID: "3acf53e3-d946-11e7-a964-02b983f0a4db")
Dec 04 22:55:31 ubuntu-xenial kubelet[4344]: I1204 22:55:31.771509    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/3acf53e3-d946-11e7-a964-02b983f0a4db-xtables-lock") pod "kube-proxy-4xsdk" (UID: "3acf53e3-d946-11e7-a964-02b983f0a4db")
Dec 04 22:55:31 ubuntu-xenial kubelet[4344]: I1204 22:55:31.771681    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/3acf53e3-d946-11e7-a964-02b983f0a4db-lib-modules") pod "kube-proxy-4xsdk" (UID: "3acf53e3-d946-11e7-a964-02b983f0a4db")
Dec 04 22:55:33 ubuntu-xenial kubelet[4344]: I1204 22:55:33.984950    4344 kuberuntime_manager.go:899] updating runtime config through cri with podcidr 10.244.0.0/24
Dec 04 22:55:33 ubuntu-xenial kubelet[4344]: I1204 22:55:33.985091    4344 docker_service.go:307] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
Dec 04 22:55:33 ubuntu-xenial kubelet[4344]: I1204 22:55:33.985302    4344 kubelet_network.go:276] Setting Pod CIDR:  -> 10.244.0.0/24
Dec 04 22:55:36 ubuntu-xenial kubelet[4344]: W1204 22:55:36.333466    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:55:36 ubuntu-xenial kubelet[4344]: E1204 22:55:36.334289    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 04 22:55:39 ubuntu-xenial kubelet[4344]: E1204 22:55:39.016099    4344 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Dec 04 22:55:41 ubuntu-xenial kubelet[4344]: W1204 22:55:41.326697    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:55:41 ubuntu-xenial kubelet[4344]: W1204 22:55:41.336233    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:55:41 ubuntu-xenial kubelet[4344]: E1204 22:55:41.336599    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 04 22:55:46 ubuntu-xenial kubelet[4344]: W1204 22:55:46.338532    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:55:46 ubuntu-xenial kubelet[4344]: E1204 22:55:46.338859    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 04 22:55:51 ubuntu-xenial kubelet[4344]: W1204 22:55:51.337149    4344 conversion.go:110] Could not get instant cpu stats: different number of cpus
Dec 04 22:55:51 ubuntu-xenial kubelet[4344]: W1204 22:55:51.339166    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:55:51 ubuntu-xenial kubelet[4344]: W1204 22:55:51.340927    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:55:51 ubuntu-xenial kubelet[4344]: E1204 22:55:51.341373    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 04 22:55:56 ubuntu-xenial kubelet[4344]: W1204 22:55:56.343873    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:55:56 ubuntu-xenial kubelet[4344]: E1204 22:55:56.346240    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 04 22:56:01 ubuntu-xenial kubelet[4344]: W1204 22:56:01.343370    4344 conversion.go:110] Could not get instant cpu stats: different number of cpus
Dec 04 22:56:01 ubuntu-xenial kubelet[4344]: W1204 22:56:01.355749    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:56:01 ubuntu-xenial kubelet[4344]: E1204 22:56:01.356210    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 04 22:56:01 ubuntu-xenial kubelet[4344]: W1204 22:56:01.356395    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:56:06 ubuntu-xenial kubelet[4344]: W1204 22:56:06.357650    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:56:06 ubuntu-xenial kubelet[4344]: E1204 22:56:06.358258    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 04 22:56:11 ubuntu-xenial kubelet[4344]: W1204 22:56:11.363461    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:56:11 ubuntu-xenial kubelet[4344]: E1204 22:56:11.364027    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 04 22:56:11 ubuntu-xenial kubelet[4344]: W1204 22:56:11.374240    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:56:16 ubuntu-xenial kubelet[4344]: W1204 22:56:16.365823    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:56:16 ubuntu-xenial kubelet[4344]: E1204 22:56:16.366023    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 04 22:56:21 ubuntu-xenial kubelet[4344]: W1204 22:56:21.368305    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:56:21 ubuntu-xenial kubelet[4344]: E1204 22:56:21.368696    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 04 22:56:21 ubuntu-xenial kubelet[4344]: W1204 22:56:21.390084    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:56:24 ubuntu-xenial kubelet[4344]: E1204 22:56:24.726696    4344 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Dec 04 22:56:24 ubuntu-xenial kubelet[4344]: I1204 22:56:24.883164    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/5a7d715a-d946-11e7-a964-02b983f0a4db-flannel-cfg") pod "kube-flannel-ds-v6tnw" (UID: "5a7d715a-d946-11e7-a964-02b983f0a4db")
Dec 04 22:56:24 ubuntu-xenial kubelet[4344]: I1204 22:56:24.883681    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "flannel-token-w696z" (UniqueName: "kubernetes.io/secret/5a7d715a-d946-11e7-a964-02b983f0a4db-flannel-token-w696z") pod "kube-flannel-ds-v6tnw" (UID: "5a7d715a-d946-11e7-a964-02b983f0a4db")
Dec 04 22:56:24 ubuntu-xenial kubelet[4344]: I1204 22:56:24.883963    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "run" (UniqueName: "kubernetes.io/host-path/5a7d715a-d946-11e7-a964-02b983f0a4db-run") pod "kube-flannel-ds-v6tnw" (UID: "5a7d715a-d946-11e7-a964-02b983f0a4db")
Dec 04 22:56:24 ubuntu-xenial kubelet[4344]: I1204 22:56:24.884211    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "cni" (UniqueName: "kubernetes.io/host-path/5a7d715a-d946-11e7-a964-02b983f0a4db-cni") pod "kube-flannel-ds-v6tnw" (UID: "5a7d715a-d946-11e7-a964-02b983f0a4db")
Dec 04 22:56:26 ubuntu-xenial kubelet[4344]: W1204 22:56:26.369777    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:56:26 ubuntu-xenial kubelet[4344]: E1204 22:56:26.369878    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 04 22:56:31 ubuntu-xenial kubelet[4344]: W1204 22:56:31.371965    4344 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 04 22:56:31 ubuntu-xenial kubelet[4344]: E1204 22:56:31.372074    4344 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 04 22:56:31 ubuntu-xenial kubelet[4344]: W1204 22:56:31.407399    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:56:33 ubuntu-xenial kubelet[4344]: I1204 22:56:33.383028    4344 kuberuntime_manager.go:500] Container {Name:kube-flannel Image:quay.io/coreos/flannel:v0.9.1-amd64 Command:[/opt/bin/flanneld] Args:[--ip-masq --kube-subnet-mgr] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:POD_NAME Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {Name:POD_NAMESPACE Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:run ReadOnly:false MountPath:/run SubPath: MountPropagation:<nil>} {Name:flannel-cfg ReadOnly:false MountPath:/etc/kube-flannel/ SubPath: MountPropagation:<nil>} {Name:flannel-token-w696z ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath: MountPropagation:<nil>}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,} Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Dec 04 22:56:41 ubuntu-xenial kubelet[4344]: W1204 22:56:41.432892    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:56:51 ubuntu-xenial kubelet[4344]: W1204 22:56:51.442731    4344 conversion.go:110] Could not get instant cpu stats: different number of cpus
Dec 04 22:56:51 ubuntu-xenial kubelet[4344]: W1204 22:56:51.443924    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:57:01 ubuntu-xenial kubelet[4344]: W1204 22:57:01.453871    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:57:11 ubuntu-xenial kubelet[4344]: W1204 22:57:11.469270    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:57:21 ubuntu-xenial kubelet[4344]: W1204 22:57:21.477859    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:57:31 ubuntu-xenial kubelet[4344]: W1204 22:57:31.487534    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:57:34 ubuntu-xenial kubelet[4344]: E1204 22:57:34.661513    4344 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Dec 04 22:57:34 ubuntu-xenial kubelet[4344]: I1204 22:57:34.762886    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-dns-config" (UniqueName: "kubernetes.io/configmap/3ad5aebc-d946-11e7-a964-02b983f0a4db-kube-dns-config") pod "kube-dns-545bc4bfd4-hjnmp" (UID: "3ad5aebc-d946-11e7-a964-02b983f0a4db")
Dec 04 22:57:34 ubuntu-xenial kubelet[4344]: I1204 22:57:34.762921    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-dns-token-qlqjp" (UniqueName: "kubernetes.io/secret/3ad5aebc-d946-11e7-a964-02b983f0a4db-kube-dns-token-qlqjp") pod "kube-dns-545bc4bfd4-hjnmp" (UID: "3ad5aebc-d946-11e7-a964-02b983f0a4db")
Dec 04 22:57:35 ubuntu-xenial kubelet[4344]: E1204 22:57:35.333309    4344 cni.go:301] Error adding network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:57:35 ubuntu-xenial kubelet[4344]: E1204 22:57:35.333336    4344 cni.go:250] Error while adding to cni network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:57:35 ubuntu-xenial kubelet[4344]: E1204 22:57:35.425464    4344 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "kube-dns-545bc4bfd4-hjnmp_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:57:35 ubuntu-xenial kubelet[4344]: E1204 22:57:35.425523    4344 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-dns-545bc4bfd4-hjnmp_kube-system(3ad5aebc-d946-11e7-a964-02b983f0a4db)" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "kube-dns-545bc4bfd4-hjnmp_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:57:35 ubuntu-xenial kubelet[4344]: E1204 22:57:35.425534    4344 kuberuntime_manager.go:633] createPodSandbox for pod "kube-dns-545bc4bfd4-hjnmp_kube-system(3ad5aebc-d946-11e7-a964-02b983f0a4db)" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "kube-dns-545bc4bfd4-hjnmp_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:57:35 ubuntu-xenial kubelet[4344]: E1204 22:57:35.425601    4344 pod_workers.go:182] Error syncing pod 3ad5aebc-d946-11e7-a964-02b983f0a4db ("kube-dns-545bc4bfd4-hjnmp_kube-system(3ad5aebc-d946-11e7-a964-02b983f0a4db)"), skipping: failed to "CreatePodSandbox" for "kube-dns-545bc4bfd4-hjnmp_kube-system(3ad5aebc-d946-11e7-a964-02b983f0a4db)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-545bc4bfd4-hjnmp_kube-system(3ad5aebc-d946-11e7-a964-02b983f0a4db)\" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod \"kube-dns-545bc4bfd4-hjnmp_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:57:35 ubuntu-xenial kubelet[4344]: W1204 22:57:35.490447    4344 pod_container_deletor.go:77] Container "27edc42c7b071c5b15ab395823c0a61d71e5ef91067248762a701f859b812f1b" not found in pod's containers
Dec 04 22:57:35 ubuntu-xenial kubelet[4344]: W1204 22:57:35.795239    4344 cni.go:265] CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "27edc42c7b071c5b15ab395823c0a61d71e5ef91067248762a701f859b812f1b"
Dec 04 22:57:35 ubuntu-xenial kubelet[4344]: E1204 22:57:35.795843    4344 cni.go:319] Error deleting network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:57:35 ubuntu-xenial kubelet[4344]: E1204 22:57:35.796714    4344 remote_runtime.go:115] StopPodSandbox "27edc42c7b071c5b15ab395823c0a61d71e5ef91067248762a701f859b812f1b" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "kube-dns-545bc4bfd4-hjnmp_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:57:35 ubuntu-xenial kubelet[4344]: E1204 22:57:35.796801    4344 kuberuntime_manager.go:781] Failed to stop sandbox {"docker" "27edc42c7b071c5b15ab395823c0a61d71e5ef91067248762a701f859b812f1b"}
Dec 04 22:57:35 ubuntu-xenial kubelet[4344]: E1204 22:57:35.796835    4344 kuberuntime_manager.go:581] killPodWithSyncResult failed: failed to "KillPodSandbox" for "3ad5aebc-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-hjnmp_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:57:35 ubuntu-xenial kubelet[4344]: E1204 22:57:35.796893    4344 pod_workers.go:182] Error syncing pod 3ad5aebc-d946-11e7-a964-02b983f0a4db ("kube-dns-545bc4bfd4-hjnmp_kube-system(3ad5aebc-d946-11e7-a964-02b983f0a4db)"), skipping: failed to "KillPodSandbox" for "3ad5aebc-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-hjnmp_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:57:41 ubuntu-xenial kubelet[4344]: W1204 22:57:41.495608    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:57:51 ubuntu-xenial kubelet[4344]: W1204 22:57:51.475331    4344 cni.go:265] CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "27edc42c7b071c5b15ab395823c0a61d71e5ef91067248762a701f859b812f1b"
Dec 04 22:57:51 ubuntu-xenial kubelet[4344]: E1204 22:57:51.475520    4344 cni.go:319] Error deleting network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:57:51 ubuntu-xenial kubelet[4344]: E1204 22:57:51.476596    4344 remote_runtime.go:115] StopPodSandbox "27edc42c7b071c5b15ab395823c0a61d71e5ef91067248762a701f859b812f1b" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "kube-dns-545bc4bfd4-hjnmp_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:57:51 ubuntu-xenial kubelet[4344]: E1204 22:57:51.476671    4344 kuberuntime_manager.go:781] Failed to stop sandbox {"docker" "27edc42c7b071c5b15ab395823c0a61d71e5ef91067248762a701f859b812f1b"}
Dec 04 22:57:51 ubuntu-xenial kubelet[4344]: E1204 22:57:51.476709    4344 kuberuntime_manager.go:581] killPodWithSyncResult failed: failed to "KillPodSandbox" for "3ad5aebc-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-hjnmp_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:57:51 ubuntu-xenial kubelet[4344]: E1204 22:57:51.476730    4344 pod_workers.go:182] Error syncing pod 3ad5aebc-d946-11e7-a964-02b983f0a4db ("kube-dns-545bc4bfd4-hjnmp_kube-system(3ad5aebc-d946-11e7-a964-02b983f0a4db)"), skipping: failed to "KillPodSandbox" for "3ad5aebc-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-hjnmp_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:57:51 ubuntu-xenial kubelet[4344]: W1204 22:57:51.506821    4344 conversion.go:110] Could not get instant cpu stats: different number of cpus
Dec 04 22:57:51 ubuntu-xenial kubelet[4344]: W1204 22:57:51.507313    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:58:01 ubuntu-xenial kubelet[4344]: W1204 22:58:01.529062    4344 conversion.go:110] Could not get instant cpu stats: different number of cpus
Dec 04 22:58:01 ubuntu-xenial kubelet[4344]: W1204 22:58:01.534229    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:58:05 ubuntu-xenial kubelet[4344]: W1204 22:58:05.468838    4344 cni.go:265] CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "27edc42c7b071c5b15ab395823c0a61d71e5ef91067248762a701f859b812f1b"
Dec 04 22:58:05 ubuntu-xenial kubelet[4344]: E1204 22:58:05.469755    4344 cni.go:319] Error deleting network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:05 ubuntu-xenial kubelet[4344]: E1204 22:58:05.470840    4344 remote_runtime.go:115] StopPodSandbox "27edc42c7b071c5b15ab395823c0a61d71e5ef91067248762a701f859b812f1b" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "kube-dns-545bc4bfd4-hjnmp_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:05 ubuntu-xenial kubelet[4344]: E1204 22:58:05.470888    4344 kuberuntime_manager.go:781] Failed to stop sandbox {"docker" "27edc42c7b071c5b15ab395823c0a61d71e5ef91067248762a701f859b812f1b"}
Dec 04 22:58:05 ubuntu-xenial kubelet[4344]: E1204 22:58:05.470968    4344 kuberuntime_manager.go:581] killPodWithSyncResult failed: failed to "KillPodSandbox" for "3ad5aebc-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-hjnmp_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:58:05 ubuntu-xenial kubelet[4344]: E1204 22:58:05.470994    4344 pod_workers.go:182] Error syncing pod 3ad5aebc-d946-11e7-a964-02b983f0a4db ("kube-dns-545bc4bfd4-hjnmp_kube-system(3ad5aebc-d946-11e7-a964-02b983f0a4db)"), skipping: failed to "KillPodSandbox" for "3ad5aebc-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-hjnmp_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:58:11 ubuntu-xenial kubelet[4344]: W1204 22:58:11.544206    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:58:11 ubuntu-xenial kubelet[4344]: E1204 22:58:11.904576    4344 helpers.go:468] PercpuUsage had 0 cpus, but the actual number is 2; ignoring extra CPUs
Dec 04 22:58:11 ubuntu-xenial kubelet[4344]: I1204 22:58:11.982728    4344 reconciler.go:186] operationExecutor.UnmountVolume started for volume "kube-dns-token-qlqjp" (UniqueName: "kubernetes.io/secret/3ad5aebc-d946-11e7-a964-02b983f0a4db-kube-dns-token-qlqjp") pod "3ad5aebc-d946-11e7-a964-02b983f0a4db" (UID: "3ad5aebc-d946-11e7-a964-02b983f0a4db")
Dec 04 22:58:11 ubuntu-xenial kubelet[4344]: I1204 22:58:11.983067    4344 reconciler.go:186] operationExecutor.UnmountVolume started for volume "kube-dns-config" (UniqueName: "kubernetes.io/configmap/3ad5aebc-d946-11e7-a964-02b983f0a4db-kube-dns-config") pod "3ad5aebc-d946-11e7-a964-02b983f0a4db" (UID: "3ad5aebc-d946-11e7-a964-02b983f0a4db")
Dec 04 22:58:11 ubuntu-xenial kubelet[4344]: I1204 22:58:11.983389    4344 operation_generator.go:535] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ad5aebc-d946-11e7-a964-02b983f0a4db-kube-dns-config" (OuterVolumeSpecName: "kube-dns-config") pod "3ad5aebc-d946-11e7-a964-02b983f0a4db" (UID: "3ad5aebc-d946-11e7-a964-02b983f0a4db"). InnerVolumeSpecName "kube-dns-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 04 22:58:11 ubuntu-xenial kubelet[4344]: I1204 22:58:11.999684    4344 operation_generator.go:535] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ad5aebc-d946-11e7-a964-02b983f0a4db-kube-dns-token-qlqjp" (OuterVolumeSpecName: "kube-dns-token-qlqjp") pod "3ad5aebc-d946-11e7-a964-02b983f0a4db" (UID: "3ad5aebc-d946-11e7-a964-02b983f0a4db"). InnerVolumeSpecName "kube-dns-token-qlqjp". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 04 22:58:12 ubuntu-xenial kubelet[4344]: I1204 22:58:12.092661    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-dns-config" (UniqueName: "kubernetes.io/configmap/9a5ea476-d946-11e7-a964-02b983f0a4db-kube-dns-config") pod "kube-dns-545bc4bfd4-djsm4" (UID: "9a5ea476-d946-11e7-a964-02b983f0a4db")
Dec 04 22:58:12 ubuntu-xenial kubelet[4344]: I1204 22:58:12.092801    4344 reconciler.go:212] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-dns-token-qlqjp" (UniqueName: "kubernetes.io/secret/9a5ea476-d946-11e7-a964-02b983f0a4db-kube-dns-token-qlqjp") pod "kube-dns-545bc4bfd4-djsm4" (UID: "9a5ea476-d946-11e7-a964-02b983f0a4db")
Dec 04 22:58:12 ubuntu-xenial kubelet[4344]: I1204 22:58:12.092834    4344 reconciler.go:290] Volume detached for volume "kube-dns-token-qlqjp" (UniqueName: "kubernetes.io/secret/3ad5aebc-d946-11e7-a964-02b983f0a4db-kube-dns-token-qlqjp") on node "ubuntu-xenial" DevicePath ""
Dec 04 22:58:12 ubuntu-xenial kubelet[4344]: I1204 22:58:12.092849    4344 reconciler.go:290] Volume detached for volume "kube-dns-config" (UniqueName: "kubernetes.io/configmap/3ad5aebc-d946-11e7-a964-02b983f0a4db-kube-dns-config") on node "ubuntu-xenial" DevicePath ""
Dec 04 22:58:12 ubuntu-xenial kubelet[4344]: E1204 22:58:12.756105    4344 cni.go:301] Error adding network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:12 ubuntu-xenial kubelet[4344]: E1204 22:58:12.756128    4344 cni.go:250] Error while adding to cni network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:12 ubuntu-xenial kubelet[4344]: E1204 22:58:12.833803    4344 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "kube-dns-545bc4bfd4-djsm4_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:12 ubuntu-xenial kubelet[4344]: E1204 22:58:12.833848    4344 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-dns-545bc4bfd4-djsm4_kube-system(9a5ea476-d946-11e7-a964-02b983f0a4db)" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "kube-dns-545bc4bfd4-djsm4_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:12 ubuntu-xenial kubelet[4344]: E1204 22:58:12.833861    4344 kuberuntime_manager.go:633] createPodSandbox for pod "kube-dns-545bc4bfd4-djsm4_kube-system(9a5ea476-d946-11e7-a964-02b983f0a4db)" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "kube-dns-545bc4bfd4-djsm4_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:12 ubuntu-xenial kubelet[4344]: E1204 22:58:12.833909    4344 pod_workers.go:182] Error syncing pod 9a5ea476-d946-11e7-a964-02b983f0a4db ("kube-dns-545bc4bfd4-djsm4_kube-system(9a5ea476-d946-11e7-a964-02b983f0a4db)"), skipping: failed to "CreatePodSandbox" for "kube-dns-545bc4bfd4-djsm4_kube-system(9a5ea476-d946-11e7-a964-02b983f0a4db)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-545bc4bfd4-djsm4_kube-system(9a5ea476-d946-11e7-a964-02b983f0a4db)\" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod \"kube-dns-545bc4bfd4-djsm4_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:58:13 ubuntu-xenial kubelet[4344]: W1204 22:58:13.706377    4344 pod_container_deletor.go:77] Container "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b" not found in pod's containers
Dec 04 22:58:14 ubuntu-xenial kubelet[4344]: W1204 22:58:14.013768    4344 cni.go:265] CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b"
Dec 04 22:58:14 ubuntu-xenial kubelet[4344]: E1204 22:58:14.014073    4344 cni.go:319] Error deleting network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:14 ubuntu-xenial kubelet[4344]: E1204 22:58:14.014661    4344 remote_runtime.go:115] StopPodSandbox "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "kube-dns-545bc4bfd4-djsm4_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:14 ubuntu-xenial kubelet[4344]: E1204 22:58:14.014685    4344 kuberuntime_manager.go:781] Failed to stop sandbox {"docker" "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b"}
Dec 04 22:58:14 ubuntu-xenial kubelet[4344]: E1204 22:58:14.014713    4344 kuberuntime_manager.go:581] killPodWithSyncResult failed: failed to "KillPodSandbox" for "9a5ea476-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-djsm4_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:58:14 ubuntu-xenial kubelet[4344]: E1204 22:58:14.014727    4344 pod_workers.go:182] Error syncing pod 9a5ea476-d946-11e7-a964-02b983f0a4db ("kube-dns-545bc4bfd4-djsm4_kube-system(9a5ea476-d946-11e7-a964-02b983f0a4db)"), skipping: failed to "KillPodSandbox" for "9a5ea476-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-djsm4_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:58:15 ubuntu-xenial kubelet[4344]: W1204 22:58:15.012912    4344 cni.go:265] CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b"
Dec 04 22:58:15 ubuntu-xenial kubelet[4344]: E1204 22:58:15.013046    4344 cni.go:319] Error deleting network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:15 ubuntu-xenial kubelet[4344]: E1204 22:58:15.013549    4344 remote_runtime.go:115] StopPodSandbox "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "kube-dns-545bc4bfd4-djsm4_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:15 ubuntu-xenial kubelet[4344]: E1204 22:58:15.013574    4344 kuberuntime_manager.go:781] Failed to stop sandbox {"docker" "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b"}
Dec 04 22:58:15 ubuntu-xenial kubelet[4344]: E1204 22:58:15.013606    4344 kuberuntime_manager.go:581] killPodWithSyncResult failed: failed to "KillPodSandbox" for "9a5ea476-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-djsm4_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:58:15 ubuntu-xenial kubelet[4344]: E1204 22:58:15.013620    4344 pod_workers.go:182] Error syncing pod 9a5ea476-d946-11e7-a964-02b983f0a4db ("kube-dns-545bc4bfd4-djsm4_kube-system(9a5ea476-d946-11e7-a964-02b983f0a4db)"), skipping: failed to "KillPodSandbox" for "9a5ea476-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-djsm4_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:58:21 ubuntu-xenial kubelet[4344]: W1204 22:58:21.553137    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:58:29 ubuntu-xenial kubelet[4344]: W1204 22:58:29.470536    4344 cni.go:265] CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b"
Dec 04 22:58:29 ubuntu-xenial kubelet[4344]: E1204 22:58:29.470689    4344 cni.go:319] Error deleting network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:29 ubuntu-xenial kubelet[4344]: E1204 22:58:29.471257    4344 remote_runtime.go:115] StopPodSandbox "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "kube-dns-545bc4bfd4-djsm4_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:29 ubuntu-xenial kubelet[4344]: E1204 22:58:29.471283    4344 kuberuntime_manager.go:781] Failed to stop sandbox {"docker" "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b"}
Dec 04 22:58:29 ubuntu-xenial kubelet[4344]: E1204 22:58:29.471357    4344 kuberuntime_manager.go:581] killPodWithSyncResult failed: failed to "KillPodSandbox" for "9a5ea476-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-djsm4_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:58:29 ubuntu-xenial kubelet[4344]: E1204 22:58:29.471373    4344 pod_workers.go:182] Error syncing pod 9a5ea476-d946-11e7-a964-02b983f0a4db ("kube-dns-545bc4bfd4-djsm4_kube-system(9a5ea476-d946-11e7-a964-02b983f0a4db)"), skipping: failed to "KillPodSandbox" for "9a5ea476-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-djsm4_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:58:31 ubuntu-xenial kubelet[4344]: W1204 22:58:31.563187    4344 conversion.go:110] Could not get instant cpu stats: different number of cpus
Dec 04 22:58:31 ubuntu-xenial kubelet[4344]: W1204 22:58:31.567766    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:58:40 ubuntu-xenial kubelet[4344]: W1204 22:58:40.466896    4344 cni.go:265] CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b"
Dec 04 22:58:40 ubuntu-xenial kubelet[4344]: E1204 22:58:40.467216    4344 cni.go:319] Error deleting network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:40 ubuntu-xenial kubelet[4344]: E1204 22:58:40.467935    4344 remote_runtime.go:115] StopPodSandbox "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "kube-dns-545bc4bfd4-djsm4_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:40 ubuntu-xenial kubelet[4344]: E1204 22:58:40.467961    4344 kuberuntime_manager.go:781] Failed to stop sandbox {"docker" "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b"}
Dec 04 22:58:40 ubuntu-xenial kubelet[4344]: E1204 22:58:40.468046    4344 kuberuntime_manager.go:581] killPodWithSyncResult failed: failed to "KillPodSandbox" for "9a5ea476-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-djsm4_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:58:40 ubuntu-xenial kubelet[4344]: E1204 22:58:40.468065    4344 pod_workers.go:182] Error syncing pod 9a5ea476-d946-11e7-a964-02b983f0a4db ("kube-dns-545bc4bfd4-djsm4_kube-system(9a5ea476-d946-11e7-a964-02b983f0a4db)"), skipping: failed to "KillPodSandbox" for "9a5ea476-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-djsm4_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:58:41 ubuntu-xenial kubelet[4344]: W1204 22:58:41.261640    4344 cni.go:265] CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "27edc42c7b071c5b15ab395823c0a61d71e5ef91067248762a701f859b812f1b"
Dec 04 22:58:41 ubuntu-xenial kubelet[4344]: E1204 22:58:41.261919    4344 cni.go:319] Error deleting network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:41 ubuntu-xenial kubelet[4344]: E1204 22:58:41.262701    4344 remote_runtime.go:115] StopPodSandbox "27edc42c7b071c5b15ab395823c0a61d71e5ef91067248762a701f859b812f1b" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "kube-dns-545bc4bfd4-hjnmp_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:41 ubuntu-xenial kubelet[4344]: E1204 22:58:41.262851    4344 kuberuntime_gc.go:152] Failed to stop sandbox "27edc42c7b071c5b15ab395823c0a61d71e5ef91067248762a701f859b812f1b" before removing: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "kube-dns-545bc4bfd4-hjnmp_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:41 ubuntu-xenial kubelet[4344]: W1204 22:58:41.572225    4344 conversion.go:110] Could not get instant cpu stats: different number of cpus
Dec 04 22:58:41 ubuntu-xenial kubelet[4344]: W1204 22:58:41.577474    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:58:51 ubuntu-xenial kubelet[4344]: W1204 22:58:51.585679    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:58:52 ubuntu-xenial kubelet[4344]: W1204 22:58:52.472475    4344 cni.go:265] CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b"
Dec 04 22:58:52 ubuntu-xenial kubelet[4344]: E1204 22:58:52.473106    4344 cni.go:319] Error deleting network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:52 ubuntu-xenial kubelet[4344]: E1204 22:58:52.474318    4344 remote_runtime.go:115] StopPodSandbox "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "kube-dns-545bc4bfd4-djsm4_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:58:52 ubuntu-xenial kubelet[4344]: E1204 22:58:52.474612    4344 kuberuntime_manager.go:781] Failed to stop sandbox {"docker" "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b"}
Dec 04 22:58:52 ubuntu-xenial kubelet[4344]: E1204 22:58:52.474815    4344 kuberuntime_manager.go:581] killPodWithSyncResult failed: failed to "KillPodSandbox" for "9a5ea476-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-djsm4_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:58:52 ubuntu-xenial kubelet[4344]: E1204 22:58:52.475025    4344 pod_workers.go:182] Error syncing pod 9a5ea476-d946-11e7-a964-02b983f0a4db ("kube-dns-545bc4bfd4-djsm4_kube-system(9a5ea476-d946-11e7-a964-02b983f0a4db)"), skipping: failed to "KillPodSandbox" for "9a5ea476-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-djsm4_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:59:01 ubuntu-xenial kubelet[4344]: W1204 22:59:01.596103    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:59:05 ubuntu-xenial kubelet[4344]: W1204 22:59:05.472647    4344 cni.go:265] CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b"
Dec 04 22:59:05 ubuntu-xenial kubelet[4344]: E1204 22:59:05.472772    4344 cni.go:319] Error deleting network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:59:05 ubuntu-xenial kubelet[4344]: E1204 22:59:05.473462    4344 remote_runtime.go:115] StopPodSandbox "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "kube-dns-545bc4bfd4-djsm4_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:59:05 ubuntu-xenial kubelet[4344]: E1204 22:59:05.473708    4344 kuberuntime_manager.go:781] Failed to stop sandbox {"docker" "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b"}
Dec 04 22:59:05 ubuntu-xenial kubelet[4344]: E1204 22:59:05.473969    4344 kuberuntime_manager.go:581] killPodWithSyncResult failed: failed to "KillPodSandbox" for "9a5ea476-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-djsm4_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:59:05 ubuntu-xenial kubelet[4344]: E1204 22:59:05.474197    4344 pod_workers.go:182] Error syncing pod 9a5ea476-d946-11e7-a964-02b983f0a4db ("kube-dns-545bc4bfd4-djsm4_kube-system(9a5ea476-d946-11e7-a964-02b983f0a4db)"), skipping: failed to "KillPodSandbox" for "9a5ea476-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-djsm4_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:59:11 ubuntu-xenial kubelet[4344]: W1204 22:59:11.610481    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:59:17 ubuntu-xenial kubelet[4344]: W1204 22:59:17.467388    4344 cni.go:265] CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b"
Dec 04 22:59:17 ubuntu-xenial kubelet[4344]: E1204 22:59:17.467878    4344 cni.go:319] Error deleting network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:59:17 ubuntu-xenial kubelet[4344]: E1204 22:59:17.468594    4344 remote_runtime.go:115] StopPodSandbox "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "kube-dns-545bc4bfd4-djsm4_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:59:17 ubuntu-xenial kubelet[4344]: E1204 22:59:17.468751    4344 kuberuntime_manager.go:781] Failed to stop sandbox {"docker" "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b"}
Dec 04 22:59:17 ubuntu-xenial kubelet[4344]: E1204 22:59:17.468907    4344 kuberuntime_manager.go:581] killPodWithSyncResult failed: failed to "KillPodSandbox" for "9a5ea476-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-djsm4_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:59:17 ubuntu-xenial kubelet[4344]: E1204 22:59:17.469041    4344 pod_workers.go:182] Error syncing pod 9a5ea476-d946-11e7-a964-02b983f0a4db ("kube-dns-545bc4bfd4-djsm4_kube-system(9a5ea476-d946-11e7-a964-02b983f0a4db)"), skipping: failed to "KillPodSandbox" for "9a5ea476-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-djsm4_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:59:21 ubuntu-xenial kubelet[4344]: W1204 22:59:21.620819    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available
Dec 04 22:59:29 ubuntu-xenial kubelet[4344]: W1204 22:59:29.475086    4344 cni.go:265] CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b"
Dec 04 22:59:29 ubuntu-xenial kubelet[4344]: E1204 22:59:29.475622    4344 cni.go:319] Error deleting network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:59:29 ubuntu-xenial kubelet[4344]: E1204 22:59:29.476376    4344 remote_runtime.go:115] StopPodSandbox "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "kube-dns-545bc4bfd4-djsm4_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 04 22:59:29 ubuntu-xenial kubelet[4344]: E1204 22:59:29.476403    4344 kuberuntime_manager.go:781] Failed to stop sandbox {"docker" "f79efeabb3ae9964d8daf7419b5f91049d2e3fb517464156505aaf8f0fc8687b"}
Dec 04 22:59:29 ubuntu-xenial kubelet[4344]: E1204 22:59:29.476511    4344 kuberuntime_manager.go:581] killPodWithSyncResult failed: failed to "KillPodSandbox" for "9a5ea476-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-djsm4_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:59:29 ubuntu-xenial kubelet[4344]: E1204 22:59:29.476529    4344 pod_workers.go:182] Error syncing pod 9a5ea476-d946-11e7-a964-02b983f0a4db ("kube-dns-545bc4bfd4-djsm4_kube-system(9a5ea476-d946-11e7-a964-02b983f0a4db)"), skipping: failed to "KillPodSandbox" for "9a5ea476-d946-11e7-a964-02b983f0a4db" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-djsm4_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 04 22:59:31 ubuntu-xenial kubelet[4344]: W1204 22:59:31.664895    4344 helpers.go:847] eviction manager: no observation found for eviction signal allocatableNodeFs.available

@Lion-Wei

Lion-Wei commented Dec 5, 2017

It looks like you haven't installed a pod network plugin.
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network

@Inevitable

I'm seeing very similar behavior as of a few hours ago. I can confirm that it occurs even with the flanneld container up and running, deployed from the standard recommended pod-network plugin YAML.

This is a bare-metal install on a blade system. (The restarts are normal for us; pulling images often takes several tries for some reason on our network.)

Dec 05 00:11:43 nid00000 kubelet[174288]: W1205 00:11:43.770086  174288 cni.go:265] CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container "7217abc0f4e244518c7dcf7b4a8a716883123efe05299a5fdec406e35278cc2e"
Dec 05 00:11:43 nid00000 kubelet[174288]: E1205 00:11:43.770350  174288 cni.go:319] Error deleting network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 05 00:11:43 nid00000 kubelet[174288]: E1205 00:11:43.771273  174288 remote_runtime.go:115] StopPodSandbox "7217abc0f4e244518c7dcf7b4a8a716883123efe05299a5fdec406e35278cc2e" from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod "kube-dns-545bc4bfd4-nb9tq_kube-system" network: failed to find plugin "portmap" in path [/opt/flannel/bin /opt/cni/bin]
Dec 05 00:11:43 nid00000 kubelet[174288]: E1205 00:11:43.771334  174288 kuberuntime_manager.go:781] Failed to stop sandbox {"docker" "7217abc0f4e244518c7dcf7b4a8a716883123efe05299a5fdec406e35278cc2e"}
Dec 05 00:11:43 nid00000 kubelet[174288]: E1205 00:11:43.771411  174288 kuberuntime_manager.go:581] killPodWithSyncResult failed: failed to "KillPodSandbox" for "4dd9ebb1-d97f-11e7-a834-001e67d337ea" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-nb9tq_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
Dec 05 00:11:43 nid00000 kubelet[174288]: E1205 00:11:43.771448  174288 pod_workers.go:182] Error syncing pod 4dd9ebb1-d97f-11e7-a834-001e67d337ea ("kube-dns-545bc4bfd4-nb9tq_kube-system(4dd9ebb1-d97f-11e7-a834-001e67d337ea)"), skipping: failed to "KillPodSandbox" for "4dd9ebb1-d97f-11e7-a834-001e67d337ea" with KillPodSandboxError: "rpc error: code = Unknown desc = NetworkPlugin cni failed to teardown pod \"kube-dns-545bc4bfd4-nb9tq_kube-system\" network: failed to find plugin \"portmap\" in path [/opt/flannel/bin /opt/cni/bin]"
kubectl get po -n kube-system                                                                              Tue Dec  5 00:13:19 2017
NAME                               READY     STATUS              RESTARTS   AGE
etcd-nid00000                      1/1       Running             1          29m
kube-apiserver-nid00000            1/1       Running             2          28m
kube-controller-manager-nid00000   1/1       Running             1          29m
kube-dns-545bc4bfd4-nb9tq          0/3       ContainerCreating   0          29m
kube-flannel-ds-trrcp              1/1       Running             8          28m
kube-proxy-lsc77                   1/1       Running             1          29m
kube-scheduler-nid00000            1/1       Running             2          29m

@Inevitable

Inevitable commented Dec 5, 2017

Figured this out.

Flannel requires the portmap CNI plugin binary when using the default plugin YAML.

A pull request is in at utf18/ansible-kubeadm#3

To work around the issue, grab the portmap binary and install it at /opt/cni/bin/portmap with mode 0755.
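The kubelet error above is a plain path-search failure, so you can check for the missing binary directly on a node. A minimal sketch (the two search paths are taken from the log; `find_cni_plugin` is a hypothetical helper for illustration, not a real kubelet command):

```shell
#!/bin/sh
# Mimic kubelet's CNI plugin lookup: scan each directory in order and
# print the first executable file named after the plugin.
find_cni_plugin() {
  plugin="$1"; shift
  for dir in "$@"; do
    if [ -x "$dir/$plugin" ]; then
      echo "$dir/$plugin"
      return 0
    fi
  done
  return 1   # mirrors the kubelet error: plugin not found in any path
}

# Paths from the log above; on an affected node this reports the plugin missing.
find_cni_plugin portmap /opt/flannel/bin /opt/cni/bin \
  || echo "portmap missing: install it into /opt/cni/bin with mode 0755"
```

If it reports portmap missing, the workaround above applies: copy the portmap binary (it ships in the CNI plugins release tarball) into /opt/cni/bin with mode 0755.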

@srossross
Author

@Inevitable, thanks. In the meantime, I got this running with kube-router.

I am curious as to why there are six options in the install instructions. More specifically, there is no default choice and no clear reason why I (blissfully ignorant of the nuances of networking) would choose one over the other. Would it make sense to have an officially supported version, or at least an initial recommendation?

@vidarno

vidarno commented Dec 7, 2017

@Inevitable - how did you figure out this was related to Portmap?

I want to confirm this solved the issue for me too, and that the same issue was causing similar behaviour
with Calico, which is also working after fetching portmap.

@Inevitable

A bit of search-fu based on the flanneld pod log led me to flannel-io/flannel#890. From there it was just a simple test to see if my situation was the same.

@cmoscardi

Thanks @Inevitable for the clear solution!

@pnovotnak

pnovotnak commented Jan 9, 2018

I think this is still a bug. It is happening for me with 1.8.4 on GKE... I've tried deleting the host node to get the pod onto a new one, and it hits the same problem over and over.

Deployment config:

---
# ØMQ forwarder
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    maintainers: "Peter Novotnak <[email protected]>,Jane Doe <[email protected]>"
    source: https://github.com/myorg/myproj
  labels:
    name: myprojorwhatever
    tier: backend
  name: zmq-fwd
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: zmq
    spec:
      containers:
      - name: zmq-fwd
        image: gcr.io/myproj/zmq-dev
        command:
        - invoke
        - zmq-forwarder
        env:
        - name: ZMQ_FORWARDER_PULL
          value: 'tcp://*:5556'
        - name: ZMQ_FORWARDER_PUB
          value: 'tcp://*:5557'
        ports:
        - containerPort: 5556
          name: zmq-fwd-pull
          protocol: TCP
        - containerPort: 5557
          name: zmq-fwd-pub
          protocol: TCP
        resources:
          requests:
            cpu: "1"
            memory: "100m"
          limits:
            cpu: "1"
            memory: "300m"

Associated service:

apiVersion: v1
kind: Service
metadata:
  name: zmq-fwd
spec:
  ports:
  - name: zmq-fwd-pull
    port: 5556
    protocol: TCP
    targetPort: zmq-fwd-pull
  - name: zmq-fwd-pub
    port: 5557
    protocol: TCP
    targetPort: zmq-fwd-pub
  selector:
    name: zmq-fwd
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

The result of a kubectl describe <pod>:

$ kubectl describe zmq-fwd-54b4cf586-tdzzk
Name:           zmq-fwd-54b4cf586-tdzzk
Namespace:      development
Node:           gke-node-pool-1-25cv/10.230.0.100
Start Time:     Tue, 09 Jan 2018 14:43:29 -0800
Labels:         app=myapp
                pod-template-hash=106079142
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"development","name":"zmq-fwd-54b4cf586","uid":...
Status:         Pending
IP:
Controlled By:  ReplicaSet/zmq-fwd-54b4cf586
Containers:
  zmq-fwd:
    Container ID:
    Image:         gcr.io/myproj/zmq-dev
    Image ID:
    Ports:         5556/TCP, 5557/TCP
    Command:
      invoke
      zmq-forwarder
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  300m
    Requests:
      cpu:     1
      memory:  100m
    Environment:
      ZMQ_FORWARDER_PULL:  tcp://*:5556
      ZMQ_FORWARDER_PUB:   tcp://*:5557
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-asdf (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  default-token-asdf:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-asdf
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age               From                                                          Message
  ----     ------                  ----              ----                                                          -------
  Normal   Scheduled               12s               default-scheduler                                             Successfully assigned zmq-fwd-54b4cf586-tdzzk to gke-node-pool-1-25cv
  Normal   SuccessfulMountVolume   12s               kubelet, gke-node-pool-1-25cv  MountVolume.SetUp succeeded for volume "default-token-asdf"
  Warning  FailedCreatePodSandBox  4s (x8 over 11s)  kubelet, gke-node-pool-1-25cv  Failed create pod sandbox.
  Warning  FailedSync              4s (x8 over 11s)  kubelet, gke-node-pool-1-25cv  Error syncing pod
  Normal   SandboxChanged          4s (x8 over 11s)  kubelet, gke-node-pool-1-25cv  Pod sandbox changed, it will be killed and re-created.

@pnovotnak

I don't know if I can provide access to our cluster but I can provide uncensored logs to anyone looking into this.

@pnovotnak

Ah, in my case this is because the requests/limits I have configured are written incorrectly.
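For anyone hitting the same thing: the likely culprit is the memory quantities in the Deployment above. In Kubernetes resource quantities the `m` suffix means milli-units, so `memory: "100m"` requests a tenth of a byte, not 100 megabytes. A sketch of what was presumably intended (`Mi` is my assumption about the intent):

```yaml
resources:
  requests:
    cpu: "1"
    memory: "100Mi"   # was "100m" == 0.1 bytes; Mi is the mebibyte suffix
  limits:
    cpu: "1"
    memory: "300Mi"   # was "300m"
```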

@timothysc
Copy link
Member

Closing: this is a heavily validated area, and this failure is typically a configuration or setup issue.
