* ==> Docker <==
* -- Logs begin at Mon 2021-01-11 23:12:17 UTC, end at Mon 2021-01-11 23:16:50 UTC. --
* Jan 11 23:12:38 minikube dockerd[2433]: time="2021-01-11T23:12:38.817046880Z" level=info msg="Removing stale sandbox 7a5c1a15186af5ec820e851230e99cd39d59e7b99be019cd8495d7c0626017d0 (3d0bbab9e5da62c954629be448f566f613632af376497278839cfac708aded81)"
* Jan 11 23:12:38 minikube dockerd[2433]: time="2021-01-11T23:12:38.817949963Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5d2f8797640b866a95769ba8280d980490f8ea5aaf027601a508762f58db2497 aa022ae57f2c16af2eb7e6427a977edaada1c81d4c690469b3d3b9d08c72a2ba], retrying...."
* Jan 11 23:12:38 minikube dockerd[2433]: time="2021-01-11T23:12:38.902588118Z" level=info msg="Removing stale sandbox 931460ac70a01e8c9577bd801c4fc2df717f68e7d648f0045af4171ed942c9a3 (d8e71157d57834d9ffd6521901aac1cbbf8717efeb0d77f42966c3fe359eaac3)"
* Jan 11 23:12:38 minikube dockerd[2433]: time="2021-01-11T23:12:38.903550300Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 5d2f8797640b866a95769ba8280d980490f8ea5aaf027601a508762f58db2497 26a2f8b1f136ccb0dbed5a2318041ec6d43439367b9135e8ce5de55bd5d52f8a], retrying...."
* Jan 11 23:12:38 minikube dockerd[2433]: time="2021-01-11T23:12:38.979182558Z" level=info msg="Removing stale sandbox ac6fb04fe3db0e94d42fe93456a8b82e473628830a37bc19dd28030f7c40bcf9 (084d529de11a825b4d2c1120f36c51416d1af20c7d668ccc904d8a20011e300a)"
* Jan 11 23:12:38 minikube dockerd[2433]: time="2021-01-11T23:12:38.980972231Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 6eb2e02dd72f4795191f106e2c8408b2301414c6e04601680235482dc6b2e521 7108ef938c047676f0f1b2b19659b5f8caae18a11c7e11c0108d525134b569d7], retrying...."
* Jan 11 23:12:39 minikube dockerd[2433]: time="2021-01-11T23:12:39.064065096Z" level=info msg="Removing stale sandbox d4a916cc5ca3702643b3c03df7a74e0acfe12b9cd54afff88d691d3cf8ea8e3f (c6318dfa1a0af2f9092acb69c57493cab7273b630da293f5bdeb8366f6a8d90e)"
* Jan 11 23:12:39 minikube dockerd[2433]: time="2021-01-11T23:12:39.065756774Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 6eb2e02dd72f4795191f106e2c8408b2301414c6e04601680235482dc6b2e521 6208d825fa3a7a842957ada0e52a5f31f1980cb026b996d0d5822fd8c9bfa935], retrying...."
* Jan 11 23:12:39 minikube dockerd[2433]: time="2021-01-11T23:12:39.142839339Z" level=info msg="Removing stale sandbox df4bad1b3ccb0ac2c9a2176a846b286d2656992304abec2fef1b7a809cce06d9 (cffaa5328299291e9671098ffa7a47c8ae3d5878a6566f666b2d69ff42cf0171)"
* Jan 11 23:12:39 minikube dockerd[2433]: time="2021-01-11T23:12:39.144575955Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 6eb2e02dd72f4795191f106e2c8408b2301414c6e04601680235482dc6b2e521 5fa2177ad982b60f6b051213b40ea63cee9cf523c5ccb03556ba2711f0854a13], retrying...."
* Jan 11 23:12:39 minikube dockerd[2433]: time="2021-01-11T23:12:39.227021853Z" level=info msg="Removing stale sandbox e20bda78a3fa19ecaed96213446315291ab394ebec65fa33f6af4ef50933d694 (478b2af40f69e5af16aab7487c878de4a70a48935ec0af236914d18915bfe151)"
* Jan 11 23:12:39 minikube dockerd[2433]: time="2021-01-11T23:12:39.228725934Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 6eb2e02dd72f4795191f106e2c8408b2301414c6e04601680235482dc6b2e521 854b52beb43554f8fbcc992f2430170f621a6f9076d5199b3f9a4bf2ec7f7cb0], retrying...."
* Jan 11 23:12:39 minikube dockerd[2433]: time="2021-01-11T23:12:39.265149639Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
* Jan 11 23:12:39 minikube dockerd[2433]: time="2021-01-11T23:12:39.325131139Z" level=info msg="Loading containers: done."
* Jan 11 23:12:39 minikube dockerd[2433]: time="2021-01-11T23:12:39.357701451Z" level=info msg="Docker daemon" commit=4484c46 graphdriver(s)=overlay2 version=19.03.13
* Jan 11 23:12:39 minikube dockerd[2433]: time="2021-01-11T23:12:39.358961010Z" level=info msg="Daemon has completed initialization"
* Jan 11 23:12:39 minikube dockerd[2433]: time="2021-01-11T23:12:39.400287042Z" level=info msg="API listen on /var/run/docker.sock"
* Jan 11 23:12:39 minikube systemd[1]: Started Docker Application Container Engine.
* Jan 11 23:12:39 minikube dockerd[2433]: time="2021-01-11T23:12:39.400665050Z" level=info msg="API listen on [::]:2376"
* Jan 11 23:13:05 minikube dockerd[2440]: time="2021-01-11T23:13:05.951265477Z" level=info msg="shim containerd-shim started" address=/containerd-shim/9acf4f57ee867c4a2da62dedb7e3c8c484f166f5855f9120afeab1a5537a0af5.sock debug=false pid=3904
* Jan 11 23:13:05 minikube dockerd[2440]: time="2021-01-11T23:13:05.952184371Z" level=info msg="shim containerd-shim started" address=/containerd-shim/f82b42b64fbec2314378bf43aa8819b90c404052e296e54c3bf66879de8d761d.sock debug=false pid=3905
* Jan 11 23:13:05 minikube dockerd[2440]: time="2021-01-11T23:13:05.971039756Z" level=info msg="shim containerd-shim started" address=/containerd-shim/bf523d5e777049f57acece2f422d3c06bd6ebf2eceb71192f1cff3b20fae98dd.sock debug=false pid=3917
* Jan 11 23:13:07 minikube dockerd[2440]: time="2021-01-11T23:13:07.440380672Z" level=info msg="shim containerd-shim started" address=/containerd-shim/6151787239b2621abbe128a6503f6e7acb1b34b6f2f2c5c0c41b08eb30aaed7b.sock debug=false pid=4140
* Jan 11 23:13:07 minikube dockerd[2440]: time="2021-01-11T23:13:07.559904747Z" level=info msg="shim containerd-shim started" address=/containerd-shim/dc608dbd1187e4c8a032078566245150895f17a35b9d912f757f286b0674e5a0.sock debug=false pid=4167
* Jan 11 23:13:07 minikube dockerd[2440]: time="2021-01-11T23:13:07.562294485Z" level=info msg="shim containerd-shim started" address=/containerd-shim/68951141dc95b16460f3c65b05bebc6215a55fb73d28e06f3968e4e47d516782.sock debug=false pid=4168
* Jan 11 23:13:07 minikube dockerd[2440]: time="2021-01-11T23:13:07.584238349Z" level=info msg="shim containerd-shim started" address=/containerd-shim/7ea9f9a3a60be618a1a11ed35909da9162324d43c1040088da154dfef215bc30.sock debug=false pid=4197
* Jan 11 23:13:11 minikube dockerd[2440]: time="2021-01-11T23:13:11.306299188Z" level=info msg="shim containerd-shim started" address=/containerd-shim/84d5207ebdd245423afa67bbf01ed097bba39b967d26e64f6be1cb9f8fad41b4.sock debug=false pid=4432
* Jan 11 23:13:49 minikube dockerd[2440]:
time="2021-01-11T23:13:49.988202605Z" level=info msg="shim containerd-shim started" address=/containerd-shim/a59337de4227d02d97fa7cdf168146331bfaaec1b64f58762ced6a24958c03f7.sock debug=false pid=5032 * Jan 11 23:13:50 minikube dockerd[2440]: time="2021-01-11T23:13:50.162391108Z" level=info msg="shim containerd-shim started" address=/containerd-shim/402ac19937c09a623969897ded2a5fa23c1c76c981f5ac791bcd3108f8084fb2.sock debug=false pid=5116 * Jan 11 23:13:50 minikube dockerd[2440]: time="2021-01-11T23:13:50.308811331Z" level=info msg="shim containerd-shim started" address=/containerd-shim/70baad1fcc50e6097289b096ac1353e9fedd9c8b2c302e4438f571e807124f10.sock debug=false pid=5145 * Jan 11 23:13:50 minikube dockerd[2440]: time="2021-01-11T23:13:50.420102922Z" level=info msg="shim containerd-shim started" address=/containerd-shim/59af27cdd7313cf749feca5c44376869afb42e5bfc9d725500b1fdcff7bd2de6.sock debug=false pid=5173 * Jan 11 23:13:50 minikube dockerd[2440]: time="2021-01-11T23:13:50.897732836Z" level=info msg="shim containerd-shim started" address=/containerd-shim/b96f4dd9f7315a11e8ed3d1bf26ae0ea103c6712cca83428728ac1a9c66eb29b.sock debug=false pid=5283 * Jan 11 23:13:50 minikube dockerd[2440]: time="2021-01-11T23:13:50.917324033Z" level=info msg="shim containerd-shim started" address=/containerd-shim/8f0f43baa4bbb0675460ef5bdd8d4e4772a3f04b31a21204cc1d733ba600c27c.sock debug=false pid=5289 * Jan 11 23:13:51 minikube dockerd[2440]: time="2021-01-11T23:13:51.659092789Z" level=info msg="shim containerd-shim started" address=/containerd-shim/c384d346fce397bb9eb79a6f06ed3af52f1652a589802beea5456636a6ecb5a4.sock debug=false pid=5467 * Jan 11 23:14:12 minikube dockerd[2440]: time="2021-01-11T23:14:12.540862425Z" level=info msg="shim containerd-shim started" address=/containerd-shim/4693fcb1fc679f700665ba73b2ed16a679e70da60ee798daa926b3517a8bf528.sock debug=false pid=6220 * Jan 11 23:14:12 minikube dockerd[2440]: time="2021-01-11T23:14:12.587330038Z" level=info msg="shim containerd-shim started" address=/containerd-shim/1d2506677f188d3d7287293692d1ecf059f6ad6884c5b76a61b0d8a2b2130fec.sock debug=false pid=6236 * Jan 11 23:14:18 minikube dockerd[2440]: time="2021-01-11T23:14:18.859471534Z" level=info msg="shim containerd-shim started" address=/containerd-shim/fd8eda72d34cf5f8f13aef337c5e90cd6175aae4754a57b041e5b0386c3126dc.sock debug=false pid=6361 * Jan 11 23:14:22 minikube dockerd[2440]: time="2021-01-11T23:14:22.691255072Z" level=info msg="shim containerd-shim started" address=/containerd-shim/2552a37ccaa6ebc3e09ce391a721ed164604879b63c2f1d2b1fd1853636cfad5.sock debug=false pid=6403 * Jan 11 23:14:23 minikube dockerd[2440]: time="2021-01-11T23:14:23.277224958Z" level=info msg="shim containerd-shim started" address=/containerd-shim/c930679238c18177354c2710265bb2081736cfed678aff75f26eb50a9b429716.sock debug=false pid=6437 * Jan 11 23:14:27 minikube dockerd[2440]: time="2021-01-11T23:14:27.910852825Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e74b0660a80747d77d459667b32c4321d99c810873a6e44911e04777cd2ea7f0.sock debug=false pid=6519 * Jan 11 23:14:30 minikube dockerd[2440]: time="2021-01-11T23:14:30.560273303Z" level=info msg="shim containerd-shim started" address=/containerd-shim/9966215ebb676ff5a599e013381289585c86a2492de3549c81a641d32db1520b.sock debug=false pid=6564 * Jan 11 23:14:31 minikube dockerd[2440]: time="2021-01-11T23:14:31.136804617Z" level=info msg="shim containerd-shim started" 
address=/containerd-shim/f825b179c9797d9a09e74aca48f1997fd41623610bff26dce15e38770be388ba.sock debug=false pid=6599 * Jan 11 23:14:34 minikube dockerd[2440]: time="2021-01-11T23:14:34.489698169Z" level=info msg="shim containerd-shim started" address=/containerd-shim/4e4f825cc810add0ac92e3ab0d3509487775de7d6bba57fa077ee518562f7107.sock debug=false pid=6658 * Jan 11 23:14:36 minikube dockerd[2440]: time="2021-01-11T23:14:36.104291944Z" level=info msg="shim containerd-shim started" address=/containerd-shim/106694ce97f0f5a5221a36f5811e49a6019ad8b5d292c2bf773452ac6f051df6.sock debug=false pid=6697 * Jan 11 23:15:11 minikube dockerd[2440]: time="2021-01-11T23:15:11.313310433Z" level=info msg="shim reaped" id=369e34081d9246b50f1bcfbf1a4bfc2b35bcd7f806baf91af43bd1a68dceef49 * Jan 11 23:15:11 minikube dockerd[2433]: time="2021-01-11T23:15:11.324888875Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Jan 11 23:15:11 minikube dockerd[2440]: time="2021-01-11T23:15:11.683920726Z" level=info msg="shim reaped" id=52d6974654fd7e39da3b448e9cc8430b181eb23814718399b48460d80b6ca7f1 * Jan 11 23:15:11 minikube dockerd[2433]: time="2021-01-11T23:15:11.694442081Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Jan 11 23:15:12 minikube dockerd[2440]: time="2021-01-11T23:15:12.546130440Z" level=info msg="shim containerd-shim started" address=/containerd-shim/13b631cb0f2c6c9b07424d3c2fb8ebc496d9c596c1249b9e65bd0a5bf5d6d723.sock debug=false pid=7051 * Jan 11 23:15:14 minikube dockerd[2440]: time="2021-01-11T23:15:14.016114977Z" level=info msg="shim reaped" id=b16a1dd6c1680c19aeb288324db10b9a3805b45c27dd447f2596734570a984dc * Jan 11 23:15:14 minikube dockerd[2433]: time="2021-01-11T23:15:14.029538725Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Jan 11 23:15:16 minikube dockerd[2440]: time="2021-01-11T23:15:16.904534520Z" level=info msg="shim containerd-shim started" address=/containerd-shim/464a6e84d30ba9b71333831a4e6d23d46b6fd834123f15a647c74708d39b5b9e.sock debug=false pid=7135 * Jan 11 23:15:42 minikube dockerd[2440]: time="2021-01-11T23:15:42.798920528Z" level=info msg="shim reaped" id=7a96b939d2247d27f01a562f667846bcabd3b3c8fe2e969a093c53f0e4768b35 * Jan 11 23:15:42 minikube dockerd[2433]: time="2021-01-11T23:15:42.809851915Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Jan 11 23:15:56 minikube dockerd[2440]: time="2021-01-11T23:15:56.868143241Z" level=info msg="shim containerd-shim started" address=/containerd-shim/6079ab341d4341c613843295405c6791a3fc4d5b05d8a69de5400bbe70a9dd76.sock debug=false pid=7317 * Jan 11 23:16:05 minikube dockerd[2440]: time="2021-01-11T23:16:05.139752115Z" level=info msg="shim containerd-shim started" address=/containerd-shim/22e43030e3478d7048f289e430ef5bf0ee1162e6efbc5d3d923d7726410a2c1d.sock debug=false pid=7473 * Jan 11 23:16:27 minikube dockerd[2440]: time="2021-01-11T23:16:27.159937839Z" level=info msg="shim reaped" id=9d5c582f535d1172b4005f9efada220a64ce992b694b2ad1f9f57724f8435376 * Jan 11 23:16:27 minikube dockerd[2433]: time="2021-01-11T23:16:27.170174857Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" * Jan 11 23:16:39 minikube dockerd[2440]: time="2021-01-11T23:16:39.900951912Z" level=info msg="shim reaped" 
id=ff2e1c61afc2f46dd9d18b90df0fba58bfe90400216e43944cb5791573bf6c43
* Jan 11 23:16:39 minikube dockerd[2433]: time="2021-01-11T23:16:39.911656177Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
*
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
* ff2e1c61afc2f mongo-express@sha256:6ae44c697cd2381772f8ea8f0571008b62e36301305b113df7f35f2e683e8255 48 seconds ago Exited mongo-express 1002 14ad2ad443748
* 9d5c582f535d1 503bc4b7440b9 57 seconds ago Exited kubernetes-dashboard 2 b8e27c9fb0404
* 8522d01ea5f2a bfe3a36ebd252 About a minute ago Running coredns 4 f15ee4d3dfa96
* 726d935754c15 a90209bb39e3d 2 minutes ago Running echoserver 2 89021d8cf28d4
* 52d6974654fd7 bad58561c4be7 2 minutes ago Exited storage-provisioner 6496 435e1a90b6134
* 77b35ee8344f7 86262685d9abb 2 minutes ago Running dashboard-metrics-scraper 0 930eecb2430fc
* 060175f0d8701 mongo@sha256:7722bd2778a299b6f4a62b93a0d2741c734ba7332a090131030ca28261a9a198 2 minutes ago Running mongodb 2 6b1c85dce9d35
* e02a7ac0eb424 635b36f4d89f0 2 minutes ago Running kube-proxy 4 02680772feda0
* 54a78451fba3a nginx@sha256:d20aa6d1cae56fd17cd458f4807e0de462caf2336f0b70b5eeb69fcaaf30dd9c 2 minutes ago Running nginx 2 a6588ec836a9d
* f89b69dd7fecf 0369cf4303ffd 3 minutes ago Running etcd 4 a602cd664e052
* 14b738828ad89 14cd22f7abe78 3 minutes ago Running kube-scheduler 4 3b31473d8a1e6
* df42e2879a017 b15c6247777d7 3 minutes ago Running kube-apiserver 4 91c5207ea3eb1
* 2df7e8a9e2b6d 4830ab6185860 3 minutes ago Running kube-controller-manager 4 a7e3b4d6ef19b
* 8f9d02606607f 503bc4b7440b9 59 minutes ago Exited kubernetes-dashboard 10 084d529de11a8
* d81b7aee1e071 nginx@sha256:d20aa6d1cae56fd17cd458f4807e0de462caf2336f0b70b5eeb69fcaaf30dd9c 2 hours ago Exited nginx 1 fb306a2a2e61b
* accb6b364d9fa mongo@sha256:7722bd2778a299b6f4a62b93a0d2741c734ba7332a090131030ca28261a9a198 2 hours ago Exited mongodb 1 478b2af40f69e
* f01fdbbbf9d25 86262685d9abb 2 hours ago Exited dashboard-metrics-scraper 0 eb54a0d4dbcda
* 8c1a9bf86f5b4 a90209bb39e3d 2 hours ago Exited echoserver 1 c6318dfa1a0af
* c714d48a07603 635b36f4d89f0 2 hours ago Exited kube-proxy 3 5bb38d87badc8
* a2fb58d671408 bfe3a36ebd252 2 hours ago Exited coredns 3 cffaa53282992
* 297528ea90b20 0369cf4303ffd 2 hours ago Exited etcd 3 731ec64fc5080
* a4abc1499fee5 4830ab6185860 2 hours ago Exited kube-controller-manager 3 f3ddfa7a23817
* 2286a719fc179 14cd22f7abe78 2 hours ago Exited kube-scheduler 3 d8e71157d5783
* 57116d2c074b7 b15c6247777d7 2 hours ago Exited kube-apiserver 3 ee91ffe3c7097
* 03e42c26afc43 503bc4b7440b9 2 hours ago Exited kubernetes-dashboard 6837 6994e1052f809
* 7a3f22110d31f 86262685d9abb 3 weeks ago Exited dashboard-metrics-scraper 1 36e056b53dd37
*
* ==> coredns [8522d01ea5f2] <==
* I0111 23:15:48.529279 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11 23:15:18.500742752 +0000 UTC m=+1.149581075) (total time: 30.001405332s):
* Trace[2019727887]: [30.001405332s] [30.001405332s] END
* E0111 23:15:48.529332 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
* I0111 23:15:48.529359 1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11
23:15:18.486197831 +0000 UTC m=+1.135036206) (total time: 30.016210587s): * Trace[1427131847]: [30.016210587s] [30.016210587s] END * E0111 23:15:48.529366 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout * I0111 23:15:48.529596 1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11 23:15:18.486228678 +0000 UTC m=+1.135067062) (total time: 30.043344911s): * Trace[911902081]: [30.043344911s] [30.043344911s] END * E0111 23:15:48.529608 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout * I0111 23:16:19.386987 1 trace.go:116] Trace[336122540]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11 23:15:49.38582771 +0000 UTC m=+32.034666029) (total time: 30.00113697s): * Trace[336122540]: [30.00113697s] [30.00113697s] END * E0111 23:16:19.387163 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout * I0111 23:16:19.680173 1 trace.go:116] Trace[646203300]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11 23:15:49.679694519 +0000 UTC m=+32.328532833) (total time: 30.000456882s): * Trace[646203300]: [30.000456882s] [30.000456882s] END * E0111 23:16:19.680421 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout * I0111 23:16:19.881448 1 trace.go:116] Trace[1747278511]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11 23:15:49.880575596 +0000 UTC m=+32.529413925) (total time: 30.000850516s): * Trace[1747278511]: [30.000850516s] [30.000850516s] END * E0111 23:16:19.881507 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout * I0111 23:16:51.144641 1 trace.go:116] Trace[817455089]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11 23:16:21.143111612 +0000 UTC m=+63.791949941) (total time: 30.001469099s): * Trace[817455089]: [30.001469099s] [30.001469099s] END * E0111 23:16:51.144666 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout * I0111 23:16:51.825483 1 trace.go:116] Trace[1006933274]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11 23:16:21.8248538 +0000 UTC m=+64.473692071) (total time: 30.000606047s): * Trace[1006933274]: [30.000606047s] [30.000606047s] END * E0111 23:16:51.825519 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: 
Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout * I0111 23:16:52.106531 1 trace.go:116] Trace[629431445]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11 23:16:22.105289771 +0000 UTC m=+64.754128031) (total time: 30.001152602s): * Trace[629431445]: [30.001152602s] [30.001152602s] END * E0111 23:16:52.106560 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout * .:53 * [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7 * CoreDNS-1.7.0 * linux/amd64, go1.14.4, f59c03d * [INFO] plugin/ready: Still waiting on: "kubernetes" * [INFO] plugin/ready: Still waiting on: "kubernetes" * [INFO] plugin/ready: Still waiting on: "kubernetes" * [INFO] plugin/ready: Still waiting on: "kubernetes" * [INFO] plugin/ready: Still waiting on: "kubernetes" * [INFO] plugin/ready: Still waiting on: "kubernetes" * [INFO] plugin/ready: Still waiting on: "kubernetes" * [INFO] plugin/ready: Still waiting on: "kubernetes" * [INFO] plugin/ready: Still waiting on: "kubernetes" * * ==> coredns [a2fb58d67140] <== * I0111 22:13:19.798909 1 trace.go:116] Trace[842277839]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11 22:12:49.797917192 +0000 UTC m=+1585.280440862) (total time: 30.000960598s): * Trace[842277839]: [30.000960598s] [30.000960598s] END * E0111 22:13:19.799280 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout * I0111 22:13:49.431628 1 trace.go:116] Trace[1996325786]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11 22:13:19.430036933 +0000 UTC m=+1614.912560630) (total time: 30.001566859s): * Trace[1996325786]: [30.001566859s] [30.001566859s] END * E0111 22:13:49.431644 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout * I0111 22:13:53.276229 1 trace.go:116] Trace[1263958076]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11 22:13:23.274709035 +0000 UTC m=+1618.757232723) (total time: 30.001484867s): * Trace[1263958076]: [30.001484867s] [30.001484867s] END * E0111 22:13:53.276283 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout * I0111 22:14:37.484054 1 trace.go:116] Trace[1186657351]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11 22:14:07.482827203 +0000 UTC m=+1662.965350877) (total time: 30.001115787s): * Trace[1186657351]: [30.001115787s] [30.001115787s] END * E0111 22:14:37.484107 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout * I0111 22:15:05.631714 1 
trace.go:116] Trace[2057190364]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11 22:14:35.629465444 +0000 UTC m=+1691.111989186) (total time: 30.002100423s): * Trace[2057190364]: [30.002100423s] [30.002100423s] END * E0111 22:15:05.631914 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout * I0111 22:15:06.595228 1 trace.go:116] Trace[230599183]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11 22:14:36.594642664 +0000 UTC m=+1692.077166346) (total time: 30.000553544s): * Trace[230599183]: [30.000553544s] [30.000553544s] END * E0111 22:15:06.595631 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout * I0111 22:15:52.234073 1 trace.go:116] Trace[844500090]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11 22:15:22.232990141 +0000 UTC m=+1737.715513826) (total time: 30.001049452s): * Trace[844500090]: [30.001049452s] [30.001049452s] END * E0111 22:15:52.234126 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout * I0111 22:16:29.549536 1 trace.go:116] Trace[408092258]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11 22:15:59.548463818 +0000 UTC m=+1775.030987489) (total time: 30.001036533s): * Trace[408092258]: [30.001036533s] [30.001036533s] END * E0111 22:16:29.549639 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout * I0111 22:16:30.088353 1 trace.go:116] Trace[1404543231]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11 22:16:00.0870077 +0000 UTC m=+1775.569531382) (total time: 30.001309963s): * Trace[1404543231]: [30.001309963s] [30.001309963s] END * E0111 22:16:30.088640 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout * I0111 22:16:56.147963 1 trace.go:116] Trace[1117508154]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11 22:16:26.147211834 +0000 UTC m=+1801.629735511) (total time: 30.000717939s): * Trace[1117508154]: [30.000717939s] [30.000717939s] END * E0111 22:16:56.148393 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout * I0111 22:17:33.041559 1 trace.go:116] Trace[326081223]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-01-11 22:17:03.04093731 +0000 UTC m=+1838.523460983) (total time: 30.000595719s): * Trace[326081223]: 
[30.000595719s] [30.000595719s] END
* E0111 22:17:33.041574 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
* [INFO] plugin/ready: Still waiting on: "kubernetes"
*
* ==> describe nodes <==
* Name: minikube
* Roles: master
* Labels: beta.kubernetes.io/arch=amd64
* beta.kubernetes.io/os=linux
* kubernetes.io/arch=amd64
* kubernetes.io/hostname=minikube
* kubernetes.io/os=linux
* minikube.k8s.io/commit=23f40a012abb52eff365ff99a709501a61ac5876
* minikube.k8s.io/name=minikube
* minikube.k8s.io/updated_at=2020_12_16T09_03_01_0700
* minikube.k8s.io/version=v1.15.1
* node-role.kubernetes.io/master=
* Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
* node.alpha.kubernetes.io/ttl: 0
* volumes.kubernetes.io/controller-managed-attach-detach: true
* CreationTimestamp: Wed, 16 Dec 2020 14:02:58 +0000
* Taints:
* Unschedulable: false
* Lease:
* HolderIdentity: minikube
* AcquireTime:
* RenewTime: Mon, 11 Jan 2021 23:16:54 +0000
* Conditions:
* Type Status LastHeartbeatTime LastTransitionTime Reason Message
* ---- ------ ----------------- ------------------ ------ -------
* MemoryPressure False Mon, 11 Jan 2021 23:13:56 +0000 Wed, 16 Dec 2020 14:02:53 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
* DiskPressure False Mon, 11 Jan 2021 23:13:56 +0000 Wed, 16 Dec 2020 14:02:53 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
* PIDPressure False Mon, 11 Jan 2021 23:13:56 +0000 Wed, 16 Dec 2020 14:02:53 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
* Ready True Mon, 11 Jan 2021 23:13:56 +0000 Wed, 16 Dec 2020 14:03:18 +0000 KubeletReady kubelet is posting ready status
* Addresses:
* InternalIP: 192.168.99.100
* Hostname: minikube
* Capacity:
* cpu: 2
* ephemeral-storage: 17784752Ki
* hugepages-2Mi: 0
* memory: 2993488Ki
* pods: 110
* Allocatable:
* cpu: 2
* ephemeral-storage: 17784752Ki
* hugepages-2Mi: 0
* memory: 2993488Ki
* pods: 110
* System Info:
* Machine ID: ae255b9a41c74c73b3f6057f62d66386
* System UUID: ff8c27ac-bc3a-7349-8f9d-bd3a072954ba
* Boot ID: 56815eeb-2655-4628-a53e-2f3e61b152df
* Kernel Version: 4.19.150
* OS Image: Buildroot 2020.02.7
* Operating System: linux
* Architecture: amd64
* Container Runtime Version: docker://19.3.13
* Kubelet Version: v1.19.4
* Kube-Proxy Version: v1.19.4
* Non-terminated Pods: (14 in total)
* Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
* --------- ---- ------------ ---------- --------------- ------------- ---
* default hello-minikube-6ddfcc9757-c4lvb 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3d2h
* default mongo-express-78fcf796b8-kfz2x 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d
* default mongodb-deployment-8f6675bc5-62nmm 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4d
* default nginx-conrad-685b84d6d-td8km 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5d10h
* kube-system coredns-f9fd979d6-lghmf 100m (5%) 0 (0%) 70Mi (2%) 170Mi (5%) 26d
* kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 26d
* kube-system ingress-nginx-controller-558664778f-qfxjw 100m (5%) 0 (0%) 90Mi (3%) 0 (0%) 126m
* kube-system kube-apiserver-minikube 250m (12%) 0 (0%) 0 (0%) 0 (0%) 26d
* kube-system kube-controller-manager-minikube 200m (10%) 0 (0%) 0 (0%) 0 (0%) 26d
* kube-system kube-proxy-tc2bg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 26d
* kube-system kube-scheduler-minikube 100m (5%) 0 (0%) 0 (0%) 0 (0%) 26d
* kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 26d
* kubernetes-dashboard dashboard-metrics-scraper-c95fcf479-8zd2v 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m46s
* kubernetes-dashboard kubernetes-dashboard-584f46694c-zmvhr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m2s
* Allocated resources:
* (Total limits may be over 100 percent, i.e., overcommitted.)
* Resource Requests Limits
* -------- -------- ------
* cpu 750m (37%) 0 (0%)
* memory 160Mi (5%) 170Mi (5%)
* ephemeral-storage 0 (0%) 0 (0%)
* hugepages-2Mi 0 (0%) 0 (0%)
* Events:
* Type Reason Age From Message
* ---- ------ ---- ---- -------
* Normal Starting 90m kubelet Starting kubelet.
* Normal NodeHasSufficientMemory 90m (x8 over 90m) kubelet Node minikube status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 90m (x8 over 90m) kubelet Node minikube status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 90m (x7 over 90m) kubelet Node minikube status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 90m kubelet Updated Node Allocatable limit across pods
* Normal Starting 90m kube-proxy Starting kube-proxy.
* Normal Starting 3m55s kubelet Starting kubelet.
* Normal NodeHasSufficientMemory 3m54s (x8 over 3m55s) kubelet Node minikube status is now: NodeHasSufficientMemory
* Normal NodeHasNoDiskPressure 3m54s (x8 over 3m55s) kubelet Node minikube status is now: NodeHasNoDiskPressure
* Normal NodeHasSufficientPID 3m54s (x7 over 3m55s) kubelet Node minikube status is now: NodeHasSufficientPID
* Normal NodeAllocatableEnforced 3m54s kubelet Updated Node Allocatable limit across pods
* Normal Starting 2m21s kube-proxy Starting kube-proxy.
*
* ==> dmesg <==
* [ +2.575174] hpet1: lost 318 rtc interrupts
* [ +5.002382] hpet1: lost 320 rtc interrupts
* [ +5.005217] hpet_rtc_timer_reinit: 105 callbacks suppressed
* [ +0.000022] hpet1: lost 318 rtc interrupts
* [ +3.375814] systemd-fstab-generator[2937]: Ignoring "noauto" for root device
* [ +1.623704] hpet1: lost 318 rtc interrupts
* [ +5.001757] hpet1: lost 318 rtc interrupts
* [ +0.051923] systemd-fstab-generator[3131]: Ignoring "noauto" for root device
* [ +4.950619] hpet1: lost 318 rtc interrupts
* [Jan11 23:13] hpet1: lost 318 rtc interrupts
* [ +10.002488] hpet_rtc_timer_reinit: 40 callbacks suppressed
* [ +0.000002] hpet1: lost 318 rtc interrupts
* [ +5.002360] hpet1: lost 319 rtc interrupts
* [ +5.002984] hpet1: lost 318 rtc interrupts
* [ +5.003233] hpet1: lost 318 rtc interrupts
* [ +5.002882] hpet1: lost 319 rtc interrupts
* [ +5.004636] hpet1: lost 319 rtc interrupts
* [ +5.003143] hpet1: lost 319 rtc interrupts
* [ +5.004778] hpet1: lost 318 rtc interrupts
* [ +5.004423] hpet_rtc_timer_reinit: 6 callbacks suppressed
* [ +0.000001] hpet1: lost 319 rtc interrupts
* [ +5.009011] hpet1: lost 319 rtc interrupts
* [Jan11 23:14] hpet1: lost 318 rtc interrupts
* [ +5.006932] hpet1: lost 319 rtc interrupts
* [ +5.004980] hpet1: lost 318 rtc interrupts
* [ +5.003735] hpet1: lost 318 rtc interrupts
* [ +4.253889] NFSD: Unable to end grace period: -110
* [ +0.748544] hpet1: lost 318 rtc interrupts
* [ +4.999816] hpet1: lost 318 rtc interrupts
* [ +5.003743] hpet1: lost 319 rtc interrupts
* [ +5.001984] hpet1: lost 318 rtc interrupts
* [ +4.997545] hpet1: lost 319 rtc interrupts
* [ +5.001458] hpet1: lost 318 rtc interrupts
* [ +5.005649] hpet1: lost 318 rtc interrupts
* [ +5.005347] hpet1: lost 318 rtc interrupts
* [Jan11 23:15] hpet1: lost 319 rtc interrupts
* [ +5.003409] hpet1: lost 319 rtc interrupts
* [ +5.003143] hpet1: lost 318 rtc interrupts
* [ +5.002945] hpet1: lost 318 rtc interrupts
* [ +5.005368] hpet1: lost 319 rtc interrupts
* [ +5.006346] hpet1: lost 318 rtc interrupts
* [ +5.001779] hpet1: lost 318 rtc interrupts
* [ +5.001375] hpet1: lost 318 rtc interrupts
* [ +5.002793] hpet1: lost 319 rtc interrupts
* [ +5.002086] hpet1: lost 318 rtc interrupts
* [ +5.006209] hpet1: lost 318 rtc interrupts
* [ +5.007244] hpet1: lost 319 rtc interrupts
* [Jan11 23:16] hpet1: lost 317 rtc interrupts
* [ +5.000756] hpet1: lost 319 rtc interrupts
* [ +5.004499] hpet1: lost 318 rtc interrupts
* [ +4.997732] hpet1: lost 318 rtc interrupts
* [ +5.005625] hpet1: lost 318 rtc interrupts
* [ +5.000106] hpet1: lost 318 rtc interrupts
* [ +5.003699] hpet1: lost 318 rtc interrupts
* [ +5.001357] hpet1: lost 318 rtc interrupts
* [ +5.003094] hpet1: lost 319 rtc interrupts
* [ +5.000833] hpet1: lost 318 rtc interrupts
* [ +5.007125] hpet1: lost 318 rtc interrupts
* [ +5.009770] hpet1: lost 319 rtc interrupts
* [Jan11 23:17] hpet1: lost 318 rtc interrupts
*
* ==> etcd [297528ea90b2] <==
* 2021-01-11 22:08:17.538935 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 22:08:27.537790 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 22:08:37.538262 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 22:08:47.539466 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 22:08:57.538432 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 22:09:07.538931 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 22:09:17.538831 I | etcdserver/api/etcdhttp: /health OK (status code 200)
*
2021-01-11 22:09:27.539142 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:09:37.539585 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:09:47.538579 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:09:57.538502 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:10:07.538210 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:10:17.538652 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:10:27.538888 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:10:37.538710 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:10:47.539773 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:10:57.537802 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:11:07.538672 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:11:17.538123 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:11:18.048693 I | mvcc: store.index: compact 599654 * 2021-01-11 22:11:18.049880 I | mvcc: finished scheduled compaction at 599654 (took 835.091µs) * 2021-01-11 22:11:27.541564 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:11:37.537794 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:11:47.538363 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:11:57.537604 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:12:07.537869 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:12:17.538818 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:12:27.537658 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:12:37.539281 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:12:47.538541 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:12:57.538284 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:13:07.539309 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:13:17.539671 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:13:27.537966 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:13:37.538516 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:13:47.538480 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:13:57.538031 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:14:07.538424 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:14:17.537775 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:14:27.537822 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:14:37.538829 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:14:47.538081 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:14:57.538258 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:15:07.538757 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:15:17.538858 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:15:27.539674 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:15:37.539343 I | etcdserver/api/etcdhttp: /health OK (status code 200) * 2021-01-11 22:15:47.539733 I | etcdserver/api/etcdhttp: /health OK (status code 200) 
* 2021-01-11 22:15:57.538038 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 22:16:07.538663 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 22:16:17.538246 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 22:16:18.056145 I | mvcc: store.index: compact 599744
* 2021-01-11 22:16:18.059153 I | mvcc: finished scheduled compaction at 599744 (took 1.575983ms)
* 2021-01-11 22:16:27.538029 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 22:16:37.537577 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 22:16:47.537829 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 22:16:57.538305 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 22:17:07.539270 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 22:17:17.538061 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 22:17:27.539342 I | etcdserver/api/etcdhttp: /health OK (status code 200)
*
* ==> etcd [f89b69dd7fec] <==
* 2021-01-11 23:13:44.229091 I | etcdserver: starting server... [version: 3.4.13, cluster version: 3.4]
* 2021-01-11 23:13:44.237599 I | etcdserver: 7feb3ee23ce5b4a7 as single-node; fast-forwarding 9 ticks (election ticks 10)
* 2021-01-11 23:13:44.247680 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
* 2021-01-11 23:13:44.247932 I | embed: listening for metrics on http://127.0.0.1:2381
* 2021-01-11 23:13:44.272695 I | embed: listening for peers on 192.168.99.100:2380
* raft2021/01/11 23:13:44 INFO: 7feb3ee23ce5b4a7 is starting a new election at term 5
* raft2021/01/11 23:13:44 INFO: 7feb3ee23ce5b4a7 became candidate at term 6
* raft2021/01/11 23:13:44 INFO: 7feb3ee23ce5b4a7 received MsgVoteResp from 7feb3ee23ce5b4a7 at term 6
* raft2021/01/11 23:13:44 INFO: 7feb3ee23ce5b4a7 became leader at term 6
* raft2021/01/11 23:13:44 INFO: raft.node: 7feb3ee23ce5b4a7 elected leader 7feb3ee23ce5b4a7 at term 6
* 2021-01-11 23:13:44.779245 I | etcdserver: published {Name:minikube ClientURLs:[https://192.168.99.100:2379]} to cluster a9449303b0ccd8c0
* 2021-01-11 23:13:44.779414 I | embed: ready to serve client requests
* 2021-01-11 23:13:44.780461 I | embed: ready to serve client requests
* 2021-01-11 23:13:44.798042 I | embed: serving client requests on 127.0.0.1:2379
* 2021-01-11 23:13:44.799473 I | embed: serving client requests on 192.168.99.100:2379
* 2021-01-11 23:13:53.267135 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:13:54.436086 W | etcdserver: request "header: txn: success:> failure:<>>" with result "size:18" took too long (103.168584ms) to execute
* 2021-01-11 23:13:54.694962 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:13:55.737896 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/token-cleaner\" " with result "range_response_count:1 size:237" took too long (745.853571ms) to execute
* 2021-01-11 23:13:55.754443 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (634.118061ms) to execute
* 2021-01-11 23:13:56.134633 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (124.738523ms) to execute
* 2021-01-11 23:13:56.692847 W | etcdserver: request "header: txn: success:> failure:<>>" with result "size:18" took too long (107.270445ms) to execute
* 2021-01-11 23:13:56.698881 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/ingress-nginx-controller-558664778f-qfxjw\" " with result "range_response_count:1 size:5385" took too long (304.446044ms) to execute
* 2021-01-11 23:13:56.700075 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/etcd-minikube.165950748c7d3f47\" " with result "range_response_count:1 size:805" took too long (212.999612ms) to execute
* 2021-01-11 23:14:04.691080 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:14:14.690137 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:14:24.688881 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:14:28.750327 W | etcdserver: request "header: txn: success:> failure:<>>" with result "size:18" took too long (220.350265ms) to execute
* 2021-01-11 23:14:29.078497 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (105.152825ms) to execute
* 2021-01-11 23:14:31.304387 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (259.236528ms) to execute
* 2021-01-11 23:14:31.304960 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (339.048819ms) to execute
* 2021-01-11 23:14:32.201553 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (235.882465ms) to execute
* 2021-01-11 23:14:32.202225 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:2 size:6829" took too long (203.306558ms) to execute
* 2021-01-11 23:14:33.121001 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (153.847579ms) to execute
* 2021-01-11 23:14:34.169416 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (186.679052ms) to execute
* 2021-01-11 23:14:34.688718 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:14:35.629353 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (184.386602ms) to execute
* 2021-01-11 23:14:35.630060 W | etcdserver: read-only range request "key:\"/registry/pods/default/mongo-express-78fcf796b8-kfz2x\" " with result "range_response_count:1 size:3749" took too long (248.105939ms) to execute
* 2021-01-11 23:14:35.930794 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/storage-provisioner\" " with result "range_response_count:1 size:4160" took too long (274.623697ms) to execute
* 2021-01-11 23:14:36.142381 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/coredns-f9fd979d6-lghmf.16595081f07778d2\" " with result "range_response_count:1 size:795" took too long (364.343502ms) to execute
* 2021-01-11 23:14:36.235383 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (269.813604ms) to execute
* 2021-01-11 23:14:36.382064 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-f9fd979d6-lghmf\" " with result "range_response_count:1 size:5008" took too long (229.219906ms) to execute
* 2021-01-11 23:14:38.749686 W | etcdserver: request "header: txn: success:> failure: >>" with result "size:18" took too long (102.684239ms) to execute
* 2021-01-11 23:14:39.090727 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (118.017179ms) to execute
* 2021-01-11 23:14:41.115488 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:6" took too long (143.323517ms) to execute
* 2021-01-11 23:14:44.689416 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:14:54.689467 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:15:04.688951 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:15:14.689769 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:15:24.691300 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:15:34.689126 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:15:44.689838 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:15:54.690829 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:16:04.689111 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:16:14.689415 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:16:24.688818 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:16:34.689333 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:16:44.690272 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:16:54.692142 I | etcdserver/api/etcdhttp: /health OK (status code 200)
* 2021-01-11 23:17:04.689477 I | etcdserver/api/etcdhttp: /health OK (status code 200)
*
* ==> kernel <==
* 23:17:11 up 5 min, 0 users, load average: 0.52, 1.18, 0.61
* Linux minikube 4.19.150 #1 SMP Fri Nov 6 15:58:07 PST 2020 x86_64 GNU/Linux
* PRETTY_NAME="Buildroot 2020.02.7"
*
* ==> kube-apiserver [57116d2c074b] <==
* I0111 22:04:42.027153 1 client.go:360] parsed scheme: "passthrough"
* I0111 22:04:42.027237 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I0111 22:04:42.027256 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0111 22:05:17.165231 1 client.go:360] parsed scheme: "passthrough"
* I0111 22:05:17.165289 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I0111 22:05:17.165300 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0111 22:05:48.075548 1 client.go:360] parsed scheme: "passthrough"
* I0111 22:05:48.075678 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I0111 22:05:48.075697 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0111 22:06:26.021963 1 client.go:360] parsed scheme: "passthrough"
* I0111 22:06:26.022461 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I0111 22:06:26.022556 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0111 22:07:03.694637 1 client.go:360] parsed scheme: "passthrough"
* I0111 22:07:03.694771 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
* I0111 22:07:03.694789 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
* I0111 22:07:38.174279 1 client.go:360] parsed scheme: "passthrough"
* I0111 22:07:38.174415 1
passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 22:07:38.174434 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0111 22:08:21.939456 1 client.go:360] parsed scheme: "passthrough" * I0111 22:08:21.939676 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 22:08:21.939848 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0111 22:09:04.781168 1 client.go:360] parsed scheme: "passthrough" * I0111 22:09:04.781224 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 22:09:04.781240 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0111 22:09:43.175713 1 client.go:360] parsed scheme: "passthrough" * I0111 22:09:43.175777 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 22:09:43.175790 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0111 22:10:26.608557 1 client.go:360] parsed scheme: "passthrough" * I0111 22:10:26.608777 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 22:10:26.608903 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0111 22:10:59.858997 1 client.go:360] parsed scheme: "passthrough" * I0111 22:10:59.859146 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 22:10:59.859210 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0111 22:11:44.043052 1 client.go:360] parsed scheme: "passthrough" * I0111 22:11:44.043303 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 22:11:44.043370 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0111 22:12:28.242841 1 client.go:360] parsed scheme: "passthrough" * I0111 22:12:28.242919 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 22:12:28.243005 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0111 22:13:05.119793 1 client.go:360] parsed scheme: "passthrough" * I0111 22:13:05.119878 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 22:13:05.119896 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0111 22:13:50.122940 1 client.go:360] parsed scheme: "passthrough" * I0111 22:13:50.123020 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 22:13:50.123097 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0111 22:14:28.997145 1 client.go:360] parsed scheme: "passthrough" * I0111 22:14:28.997230 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 22:14:28.997247 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0111 22:15:05.004373 1 client.go:360] parsed scheme: "passthrough" * I0111 22:15:05.004938 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 22:15:05.005117 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0111 22:15:44.806971 1 client.go:360] parsed scheme: "passthrough" * I0111 22:15:44.807053 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 22:15:44.807073 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0111 
22:16:17.681971 1 client.go:360] parsed scheme: "passthrough" * I0111 22:16:17.682152 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 22:16:17.682175 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0111 22:17:01.691311 1 client.go:360] parsed scheme: "passthrough" * I0111 22:17:01.691679 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 22:17:01.691915 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * * ==> kube-apiserver [df42e2879a01] <== * I0111 23:13:49.170822 1 autoregister_controller.go:141] Starting autoregister controller * I0111 23:13:49.170883 1 cache.go:32] Waiting for caches to sync for autoregister controller * I0111 23:13:49.171315 1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key * I0111 23:13:49.171404 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt * I0111 23:13:49.171477 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt * I0111 23:13:49.205296 1 crdregistration_controller.go:111] Starting crd-autoregister controller * I0111 23:13:49.205445 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister * I0111 23:13:49.255002 1 controller.go:86] Starting OpenAPI controller * I0111 23:13:49.255047 1 naming_controller.go:291] Starting NamingConditionController * I0111 23:13:49.255060 1 establishing_controller.go:76] Starting EstablishingController * I0111 23:13:49.255072 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController * I0111 23:13:49.255092 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController * I0111 23:13:49.255102 1 crd_finalizer.go:266] Starting CRDFinalizer * E0111 23:13:49.391457 1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service * I0111 23:13:49.405667 1 shared_informer.go:247] Caches are synced for crd-autoregister * I0111 23:13:49.463082 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller * I0111 23:13:49.472473 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller * I0111 23:13:49.488022 1 cache.go:39] Caches are synced for AvailableConditionController controller * I0111 23:13:49.489568 1 cache.go:39] Caches are synced for autoregister controller * I0111 23:13:50.161106 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue). * I0111 23:13:50.161131 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). * I0111 23:13:50.190966 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist. 
* I0111 23:13:52.526505 1 controller.go:606] quota admission added evaluator for: serviceaccounts * I0111 23:13:52.591004 1 controller.go:606] quota admission added evaluator for: deployments.apps * I0111 23:13:52.697707 1 controller.go:606] quota admission added evaluator for: daemonsets.apps * I0111 23:13:52.724819 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io * I0111 23:13:52.745521 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io * I0111 23:13:54.439862 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io * I0111 23:13:55.746885 1 trace.go:205] Trace[1830524490]: "Get" url:/api/v1/namespaces/kube-system/serviceaccounts/token-cleaner,user-agent:kube-controller-manager/v1.19.4 (linux/amd64) kubernetes/d360454/kube-controller-manager,client:192.168.99.100 (11-Jan-2021 23:13:54.991) (total time: 755ms): * Trace[1830524490]: ---"About to write a response" 755ms (23:13:00.746) * Trace[1830524490]: [755.357787ms] [755.357787ms] END * I0111 23:13:55.755785 1 trace.go:205] Trace[724315254]: "GuaranteedUpdate etcd3" type:*core.Pod (11-Jan-2021 23:13:54.992) (total time: 763ms): * Trace[724315254]: ---"Transaction committed" 761ms (23:13:00.755) * Trace[724315254]: [763.129182ms] [763.129182ms] END * I0111 23:13:55.757391 1 trace.go:205] Trace[162378449]: "Patch" url:/api/v1/namespaces/kubernetes-dashboard/pods/kubernetes-dashboard-584f46694c-qg2hk/status,user-agent:kubelet/v1.19.4 (linux/amd64) kubernetes/d360454,client:192.168.99.100 (11-Jan-2021 23:13:54.992) (total time: 764ms): * Trace[162378449]: ---"Object stored in database" 763ms (23:13:00.757) * Trace[162378449]: [764.808307ms] [764.808307ms] END * I0111 23:13:55.758334 1 trace.go:205] Trace[1304810531]: "Create" url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.19.4 (linux/amd64) kubernetes/d360454,client:192.168.99.100 (11-Jan-2021 23:13:55.086) (total time: 672ms): * Trace[1304810531]: ---"Object stored in database" 671ms (23:13:00.758) * Trace[1304810531]: [672.004101ms] [672.004101ms] END * I0111 23:13:55.816039 1 trace.go:205] Trace[673176365]: "Create" url:/api/v1/nodes,user-agent:kubelet/v1.19.4 (linux/amd64) kubernetes/d360454,client:192.168.99.100 (11-Jan-2021 23:13:55.192) (total time: 623ms): * Trace[673176365]: [623.025888ms] [623.025888ms] END * I0111 23:13:56.256350 1 controller.go:606] quota admission added evaluator for: endpoints * I0111 23:13:56.261636 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io * I0111 23:14:21.654151 1 client.go:360] parsed scheme: "passthrough" * I0111 23:14:21.654266 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 23:14:21.654285 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0111 23:14:47.665129 1 controller.go:606] quota admission added evaluator for: jobs.batch * I0111 23:15:02.156038 1 client.go:360] parsed scheme: "passthrough" * I0111 23:15:02.156096 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 23:15:02.156178 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0111 23:15:40.530299 1 client.go:360] parsed scheme: "passthrough" * I0111 23:15:40.530430 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 23:15:40.530450 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0111 23:16:18.325289 1 
client.go:360] parsed scheme: "passthrough" * I0111 23:16:18.325363 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 23:16:18.325383 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * I0111 23:16:57.514518 1 client.go:360] parsed scheme: "passthrough" * I0111 23:16:57.514965 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] } * I0111 23:16:57.515103 1 clientconn.go:948] ClientConn switching balancer to "pick_first" * * ==> kube-controller-manager [2df7e8a9e2b6] <== * I0111 23:13:55.757088 1 shared_informer.go:240] Waiting for caches to sync for token_cleaner * I0111 23:13:55.757094 1 shared_informer.go:247] Caches are synced for token_cleaner * I0111 23:13:55.771233 1 controllermanager.go:549] Started "persistentvolume-expander" * I0111 23:13:55.771582 1 expand_controller.go:303] Starting expand controller * I0111 23:13:55.771606 1 shared_informer.go:240] Waiting for caches to sync for expand * I0111 23:13:55.788620 1 controllermanager.go:549] Started "serviceaccount" * I0111 23:13:55.788778 1 serviceaccounts_controller.go:117] Starting service account controller * I0111 23:13:55.788785 1 shared_informer.go:240] Waiting for caches to sync for service account * I0111 23:13:55.814338 1 controllermanager.go:549] Started "bootstrapsigner" * W0111 23:13:55.814401 1 controllermanager.go:541] Skipping "root-ca-cert-publisher" * W0111 23:13:55.814410 1 controllermanager.go:541] Skipping "ephemeral-volume" * I0111 23:13:55.814537 1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer * I0111 23:13:55.841537 1 controllermanager.go:549] Started "disruption" * I0111 23:13:55.841675 1 disruption.go:331] Starting disruption controller * I0111 23:13:55.841863 1 shared_informer.go:240] Waiting for caches to sync for disruption * I0111 23:13:55.850192 1 shared_informer.go:240] Waiting for caches to sync for resource quota * I0111 23:13:56.047796 1 shared_informer.go:247] Caches are synced for TTL * I0111 23:13:56.048507 1 shared_informer.go:247] Caches are synced for PV protection * I0111 23:13:56.048591 1 shared_informer.go:247] Caches are synced for certificate-csrapproving * I0111 23:13:56.048673 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator * I0111 23:13:56.050548 1 shared_informer.go:247] Caches are synced for service account * I0111 23:13:56.054388 1 shared_informer.go:247] Caches are synced for namespace * W0111 23:13:56.055921 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist * I0111 23:13:56.059791 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown * I0111 23:13:56.119895 1 shared_informer.go:247] Caches are synced for expand * I0111 23:13:56.059816 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving * I0111 23:13:56.059890 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client * I0111 23:13:56.120028 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client * I0111 23:13:56.121616 1 shared_informer.go:247] Caches are synced for deployment * I0111 23:13:56.158965 1 shared_informer.go:247] Caches are synced for disruption * I0111 23:13:56.158990 1 disruption.go:339] Sending events to api server. 
* I0111 23:13:56.159051 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring * I0111 23:13:56.159148 1 shared_informer.go:247] Caches are synced for ReplicaSet * I0111 23:13:56.228024 1 shared_informer.go:247] Caches are synced for bootstrap_signer * I0111 23:13:56.228114 1 shared_informer.go:247] Caches are synced for persistent volume * I0111 23:13:56.231846 1 shared_informer.go:247] Caches are synced for PVC protection * I0111 23:13:56.231865 1 shared_informer.go:247] Caches are synced for endpoint * I0111 23:13:56.231877 1 shared_informer.go:247] Caches are synced for taint * I0111 23:13:56.242985 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: * W0111 23:13:56.243030 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp. * I0111 23:13:56.243065 1 node_lifecycle_controller.go:1245] Controller detected that zone is now in state Normal. * I0111 23:13:56.245001 1 shared_informer.go:247] Caches are synced for stateful set * I0111 23:13:56.231889 1 shared_informer.go:247] Caches are synced for endpoint_slice * I0111 23:13:56.245484 1 taint_manager.go:187] Starting NoExecuteTaintManager * I0111 23:13:56.246334 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller" * I0111 23:13:56.241390 1 shared_informer.go:247] Caches are synced for GC * I0111 23:13:56.241431 1 shared_informer.go:247] Caches are synced for job * I0111 23:13:56.241440 1 shared_informer.go:247] Caches are synced for ReplicationController * I0111 23:13:56.241450 1 shared_informer.go:247] Caches are synced for HPA * I0111 23:13:56.241461 1 shared_informer.go:247] Caches are synced for attach detach * I0111 23:13:56.252825 1 shared_informer.go:240] Waiting for caches to sync for garbage collector * I0111 23:13:56.252887 1 shared_informer.go:247] Caches are synced for resource quota * I0111 23:13:56.292958 1 shared_informer.go:247] Caches are synced for daemon sets * I0111 23:13:56.317437 1 shared_informer.go:247] Caches are synced for resource quota * I0111 23:13:56.378818 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-584f46694c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-584f46694c-zmvhr" * I0111 23:13:56.385771 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-c95fcf479-8cmx8" * I0111 23:13:56.461890 1 shared_informer.go:247] Caches are synced for garbage collector * I0111 23:13:56.532572 1 shared_informer.go:247] Caches are synced for garbage collector * I0111 23:13:56.532795 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage * I0111 23:14:12.088666 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-c95fcf479-8zd2v" * * ==> kube-controller-manager [a4abc1499fee] <== * I0111 21:46:27.762149 1 controllermanager.go:549] Started "csrapproving" * I0111 21:46:27.762252 1 certificate_controller.go:118] Starting certificate controller "csrapproving" * I0111 21:46:27.762269 1 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving * I0111 21:46:27.911981 1 controllermanager.go:549] Started "persistentvolume-binder" * I0111 21:46:27.912161 1 pv_controller_base.go:303] Starting persistent volume controller * I0111 21:46:27.912332 1 shared_informer.go:240] Waiting for caches to sync for persistent volume * I0111 21:46:28.063683 1 controllermanager.go:549] Started "serviceaccount" * I0111 21:46:28.063968 1 serviceaccounts_controller.go:117] Starting service account controller * I0111 21:46:28.064523 1 shared_informer.go:240] Waiting for caches to sync for service account * I0111 21:46:28.212545 1 controllermanager.go:549] Started "csrcleaner" * I0111 21:46:28.212687 1 cleaner.go:83] Starting CSR cleaner controller * I0111 21:46:28.363668 1 controllermanager.go:549] Started "bootstrapsigner" * I0111 21:46:28.364331 1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer * I0111 21:46:28.514167 1 controllermanager.go:549] Started "attachdetach" * I0111 21:46:28.514544 1 attach_detach_controller.go:322] Starting attach detach controller * I0111 21:46:28.515139 1 shared_informer.go:240] Waiting for caches to sync for attach detach * I0111 21:46:28.541498 1 shared_informer.go:240] Waiting for caches to sync for resource quota * I0111 21:46:28.562396 1 shared_informer.go:247] Caches are synced for certificate-csrapproving * W0111 21:46:28.564147 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist * I0111 21:46:28.586498 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator * I0111 21:46:28.598169 1 shared_informer.go:247] Caches are synced for job * I0111 21:46:28.608316 1 shared_informer.go:247] Caches are synced for endpoint_slice * I0111 21:46:28.611509 1 shared_informer.go:247] Caches are synced for deployment * I0111 21:46:28.618856 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring * I0111 21:46:28.619180 1 shared_informer.go:247] Caches are synced for persistent volume * I0111 21:46:28.623281 1 shared_informer.go:247] Caches are synced for daemon sets * I0111 21:46:28.623321 1 shared_informer.go:247] Caches are synced for attach detach * I0111 21:46:28.623683 1 shared_informer.go:247] Caches are synced for ReplicationController * I0111 21:46:28.629292 1 shared_informer.go:247] Caches are synced for expand * I0111 21:46:28.636619 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client * I0111 21:46:28.636658 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client * I0111 21:46:28.636717 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown * I0111 21:46:28.636822 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving * I0111 21:46:28.646101 1 shared_informer.go:247] Caches are 
synced for PV protection * I0111 21:46:28.662158 1 shared_informer.go:247] Caches are synced for PVC protection * I0111 21:46:28.662404 1 shared_informer.go:247] Caches are synced for taint * I0111 21:46:28.662613 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: * W0111 21:46:28.662801 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp. * I0111 21:46:28.663090 1 node_lifecycle_controller.go:1245] Controller detected that zone is now in state Normal. * I0111 21:46:28.663723 1 shared_informer.go:247] Caches are synced for TTL * I0111 21:46:28.662570 1 shared_informer.go:247] Caches are synced for HPA * I0111 21:46:28.663961 1 taint_manager.go:187] Starting NoExecuteTaintManager * I0111 21:46:28.664276 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller" * I0111 21:46:28.664341 1 shared_informer.go:247] Caches are synced for ReplicaSet * I0111 21:46:28.664750 1 shared_informer.go:247] Caches are synced for service account * I0111 21:46:28.668128 1 shared_informer.go:247] Caches are synced for bootstrap_signer * I0111 21:46:28.674218 1 shared_informer.go:247] Caches are synced for endpoint * I0111 21:46:28.675915 1 shared_informer.go:247] Caches are synced for GC * I0111 21:46:28.678825 1 shared_informer.go:247] Caches are synced for namespace * I0111 21:46:28.711780 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-c95fcf479-tkzzm" * I0111 21:46:28.753244 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-584f46694c" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-584f46694c-qg2hk" * I0111 21:46:28.806194 1 shared_informer.go:247] Caches are synced for resource quota * I0111 21:46:28.816682 1 shared_informer.go:247] Caches are synced for disruption * I0111 21:46:28.817240 1 disruption.go:339] Sending events to api server. * I0111 21:46:28.816776 1 shared_informer.go:247] Caches are synced for stateful set * I0111 21:46:28.847636 1 shared_informer.go:247] Caches are synced for resource quota * I0111 21:46:28.907786 1 shared_informer.go:240] Waiting for caches to sync for garbage collector * I0111 21:46:29.205734 1 shared_informer.go:247] Caches are synced for garbage collector * I0111 21:46:29.205755 1 garbagecollector.go:137] Garbage collector: all resource monitors have synced. 
Proceeding to collect garbage * I0111 21:46:29.208169 1 shared_informer.go:247] Caches are synced for garbage collector * * ==> kube-proxy [c714d48a0760] <== * W0111 22:07:56.267423 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 22:08:19.666399 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:08:19.666512 1 proxier.go:850] Sync failed; retrying in 30s * W0111 22:08:26.265847 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 22:08:49.670792 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:08:49.670869 1 proxier.go:850] Sync failed; retrying in 30s * W0111 22:08:56.265968 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 22:09:19.675073 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:09:19.675123 1 proxier.go:850] Sync failed; retrying in 30s * W0111 22:09:26.266147 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 22:09:49.678568 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:09:49.678624 1 proxier.go:850] Sync failed; retrying in 30s * W0111 22:09:56.266514 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 22:10:19.683025 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:10:19.683139 1 proxier.go:850] Sync failed; retrying in 30s * W0111 22:10:26.266965 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 22:10:49.686268 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES 
exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:10:49.686382 1 proxier.go:850] Sync failed; retrying in 30s * W0111 22:10:56.267018 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 22:11:19.689120 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:11:19.689178 1 proxier.go:850] Sync failed; retrying in 30s * W0111 22:11:26.265466 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 22:11:49.692230 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:11:49.692281 1 proxier.go:850] Sync failed; retrying in 30s * W0111 22:11:56.266457 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 22:12:00.949403 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:12:00.949431 1 proxier.go:850] Sync failed; retrying in 30s * W0111 22:12:26.265657 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 22:12:30.791958 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:12:30.792194 1 proxier.go:850] Sync failed; retrying in 30s * W0111 22:12:56.266242 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 22:13:00.796049 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:13:00.796099 1 proxier.go:850] Sync failed; retrying in 30s * W0111 22:13:26.266447 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 
22:13:30.798372 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:13:30.798431 1 proxier.go:850] Sync failed; retrying in 30s * W0111 22:13:56.266239 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 22:14:00.801442 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:14:00.801729 1 proxier.go:850] Sync failed; retrying in 30s * W0111 22:14:26.266336 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 22:14:30.804452 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:14:30.804503 1 proxier.go:850] Sync failed; retrying in 30s * W0111 22:14:56.266765 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 22:15:00.807501 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:15:00.807575 1 proxier.go:850] Sync failed; retrying in 30s * W0111 22:15:26.265787 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 22:15:30.810740 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:15:30.810882 1 proxier.go:850] Sync failed; retrying in 30s * W0111 22:15:56.265499 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 22:16:00.814707 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:16:00.814763 1 proxier.go:850] Sync failed; retrying in 30s * W0111 22:16:26.265797 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while 
loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 22:16:30.818738 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:16:30.818766 1 proxier.go:850] Sync failed; retrying in 30s * W0111 22:16:56.266930 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 22:17:00.822121 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:17:00.822233 1 proxier.go:850] Sync failed; retrying in 30s * W0111 22:17:26.266461 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 22:17:30.825478 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 22:17:30.825910 1 proxier.go:850] Sync failed; retrying in 30s * * ==> kube-proxy [e02a7ac0eb42] <== * I0111 23:14:31.440257 1 node.go:136] Successfully retrieved node IP: 192.168.99.100 * I0111 23:14:31.441927 1 server_others.go:111] kube-proxy node IP is an IPv4 address (192.168.99.100), assume IPv4 operation * W0111 23:14:36.386371 1 iptables.go:209] Error checking iptables version, assuming version at least 1.4.11: exit status 127 * W0111 23:14:36.908991 1 proxier.go:649] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules * W0111 23:14:36.911093 1 proxier.go:649] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules * W0111 23:14:36.912561 1 proxier.go:649] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules * W0111 23:14:36.914429 1 proxier.go:649] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules * W0111 23:14:36.916119 1 proxier.go:649] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules * W0111 23:14:36.916852 1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy * I0111 23:14:36.917311 1 server_others.go:186] Using iptables Proxier. 
* W0111 23:14:36.917361 1 server_others.go:456] detect-local-mode set to ClusterCIDR, but no cluster CIDR defined * I0111 23:14:36.917373 1 server_others.go:467] detect-local-mode: ClusterCIDR , defaulting to no-op detect-local * I0111 23:14:36.918054 1 server.go:650] Version: v1.19.4 * W0111 23:14:36.920174 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 23:14:37.072078 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072 * I0111 23:14:37.072446 1 conntrack.go:52] Setting nf_conntrack_max to 131072 * I0111 23:14:37.073113 1 conntrack.go:83] Setting conntrack hashsize to 32768 * I0111 23:14:37.078836 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400 * I0111 23:14:37.079140 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600 * I0111 23:14:37.204865 1 config.go:315] Starting service config controller * I0111 23:14:37.204899 1 shared_informer.go:240] Waiting for caches to sync for service config * I0111 23:14:37.204932 1 config.go:224] Starting endpoint slice config controller * I0111 23:14:37.204937 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config * I0111 23:14:37.406811 1 shared_informer.go:247] Caches are synced for endpoint slice config * I0111 23:14:37.406904 1 shared_informer.go:247] Caches are synced for service config * E0111 23:14:37.445450 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 23:14:37.445531 1 proxier.go:850] Sync failed; retrying in 30s * E0111 23:14:38.756093 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 23:14:38.756455 1 proxier.go:850] Sync failed; retrying in 30s * W0111 23:15:06.921575 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 23:15:08.760211 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 23:15:08.760241 1 proxier.go:850] Sync failed; retrying in 30s * E0111 23:15:12.482868 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 23:15:12.482896 1 proxier.go:850] Sync failed; retrying in 30s * E0111 23:15:13.496751 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 23:15:13.496806 1 proxier.go:850] Sync failed; retrying in 
30s * W0111 23:15:36.925225 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 23:15:43.381856 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 23:15:43.381894 1 proxier.go:850] Sync failed; retrying in 30s * E0111 23:15:57.794006 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 23:15:57.794281 1 proxier.go:850] Sync failed; retrying in 30s * W0111 23:16:06.923066 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 23:16:27.389976 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 23:16:27.390009 1 proxier.go:850] Sync failed; retrying in 30s * W0111 23:16:36.924453 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * E0111 23:16:57.393821 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 23:16:57.393902 1 proxier.go:850] Sync failed; retrying in 30s * E0111 23:16:59.367980 1 proxier.go:858] Failed to ensure that filter chain KUBE-EXTERNAL-SERVICES exists: error creating chain "KUBE-EXTERNAL-SERVICES": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * I0111 23:16:59.368263 1 proxier.go:850] Sync failed; retrying in 30s * W0111 23:17:06.921970 1 iptables.go:556] Could not set up iptables canary mangle/KUBE-PROXY-CANARY: error creating chain "KUBE-PROXY-CANARY": exit status 127: iptables: error while loading shared libraries: /lib/x86_64-linux-gnu/libdl.so.2: invalid ELF header * * ==> kube-scheduler [14b738828ad8] <== * E0111 23:13:25.392688 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.99.100:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:25.530031 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.99.100:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:25.538979 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to 
watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.99.100:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:25.712126 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.99.100:8443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:25.716281 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.99.100:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:25.774851 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.99.100:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:25.811845 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.99.100:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:25.862751 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.99.100:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:25.881032 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.99.100:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:25.899699 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.99.100:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:25.918690 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.99.100:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:25.926570 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.99.100:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:27.219987 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.99.100:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:27.352222 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get 
"https://192.168.99.100:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:27.421762 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.99.100:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:27.751242 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.99.100:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:27.898863 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.99.100:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:27.910415 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.99.100:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:27.950605 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.99.100:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:28.114082 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.99.100:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:28.271413 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.99.100:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:28.508265 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.99.100:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:28.524638 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.99.100:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:28.579051 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.99.100:8443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:28.752322 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get 
"https://192.168.99.100:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:31.380573 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.99.100:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:31.411798 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.99.100:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:31.781265 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.99.100:8443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:32.171220 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.99.100:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:32.334795 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.99.100:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:32.532686 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: Get "https://192.168.99.100:8443/apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * E0111 23:13:32.829544 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.99.100:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0": dial tcp 192.168.99.100:8443: connect: connection refused * I0111 23:13:43.039986 1 trace.go:205] Trace[1241587439]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (11-Jan-2021 23:13:33.038) (total time: 10001ms): * Trace[1241587439]: [10.001354484s] [10.001354484s] END * E0111 23:13:43.040275 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.99.100:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": net/http: TLS handshake timeout * I0111 23:13:43.651241 1 trace.go:205] Trace[1211196203]: "Reflector ListAndWatch" name:k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 (11-Jan-2021 23:13:33.649) (total time: 10002ms): * Trace[1211196203]: [10.002169574s] [10.002169574s] END * E0111 23:13:43.651379 1 reflector.go:127] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.99.100:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": net/http: TLS handshake timeout * I0111 23:13:43.801775 1 trace.go:205] 
Trace[1694936360]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (11-Jan-2021 23:13:33.800) (total time: 10001ms): * Trace[1694936360]: [10.001080257s] [10.001080257s] END * E0111 23:13:43.801986 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.99.100:8443/api/v1/nodes?limit=500&resourceVersion=0": net/http: TLS handshake timeout * I0111 23:13:43.917020 1 trace.go:205] Trace[341889463]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (11-Jan-2021 23:13:33.916) (total time: 10000ms): * Trace[341889463]: [10.000672308s] [10.000672308s] END * E0111 23:13:43.917079 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.99.100:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": net/http: TLS handshake timeout * I0111 23:13:44.472103 1 trace.go:205] Trace[1381069098]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (11-Jan-2021 23:13:34.469) (total time: 10002ms): * Trace[1381069098]: [10.00231965s] [10.00231965s] END * E0111 23:13:44.472137 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.99.100:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": net/http: TLS handshake timeout * I0111 23:13:44.541109 1 trace.go:205] Trace[935513127]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (11-Jan-2021 23:13:34.539) (total time: 10001ms): * Trace[935513127]: [10.001894293s] [10.001894293s] END * E0111 23:13:44.541179 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.99.100:8443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout * I0111 23:13:48.257179 1 trace.go:205] Trace[923810264]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (11-Jan-2021 23:13:38.255) (total time: 10001ms): * Trace[923810264]: [10.001389721s] [10.001389721s] END * E0111 23:13:48.257199 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.99.100:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": net/http: TLS handshake timeout * E0111 23:13:49.302097 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope * E0111 23:13:49.302344 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope * E0111 23:13:49.302488 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope * E0111 23:13:49.302690 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to 
list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope * E0111 23:13:49.302814 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope * E0111 23:13:49.302900 1 reflector.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope * I0111 23:13:52.459122 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kube-scheduler [2286a719fc17] <== * I0111 21:46:13.789789 1 registry.go:173] Registering SelectorSpread plugin * I0111 21:46:13.789852 1 registry.go:173] Registering SelectorSpread plugin * I0111 21:46:14.726346 1 serving.go:331] Generated self-signed cert in-memory * W0111 21:46:22.067524 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA' * W0111 21:46:22.067673 1 authentication.go:294] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system" * W0111 21:46:22.067742 1 authentication.go:295] Continuing without authentication configuration. This may treat all requests as anonymous. * W0111 21:46:22.067829 1 authentication.go:296] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false * I0111 21:46:22.126111 1 registry.go:173] Registering SelectorSpread plugin * I0111 21:46:22.126179 1 registry.go:173] Registering SelectorSpread plugin * I0111 21:46:22.138800 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259 * I0111 21:46:22.139211 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file * I0111 21:46:22.139234 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * I0111 21:46:22.139250 1 tlsconfig.go:240] Starting DynamicServingCertificateController * I0111 21:46:22.239268 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file * * ==> kubelet <== * -- Logs begin at Mon 2021-01-11 23:12:17 UTC, end at Mon 2021-01-11 23:17:29 UTC. 
-- * Jan 11 23:15:38 minikube kubelet[3139]: E0111 23:15:38.777354 3139 pod_workers.go:191] Error syncing pod 92ae9da8-e9df-4aaa-85a2-7fe1aea100b9 ("storage-provisioner_kube-system(92ae9da8-e9df-4aaa-85a2-7fe1aea100b9)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(92ae9da8-e9df-4aaa-85a2-7fe1aea100b9)" * Jan 11 23:15:43 minikube kubelet[3139]: W0111 23:15:43.344382 3139 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-584f46694c-zmvhr through plugin: invalid network status for * Jan 11 23:15:43 minikube kubelet[3139]: I0111 23:15:43.352462 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 369e34081d9246b50f1bcfbf1a4bfc2b35bcd7f806baf91af43bd1a68dceef49 * Jan 11 23:15:43 minikube kubelet[3139]: I0111 23:15:43.353209 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 7a96b939d2247d27f01a562f667846bcabd3b3c8fe2e969a093c53f0e4768b35 * Jan 11 23:15:43 minikube kubelet[3139]: E0111 23:15:43.353781 3139 pod_workers.go:191] Error syncing pod c4691d6c-daa1-49cb-b8a8-0492e5021fa5 ("kubernetes-dashboard-584f46694c-zmvhr_kubernetes-dashboard(c4691d6c-daa1-49cb-b8a8-0492e5021fa5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-584f46694c-zmvhr_kubernetes-dashboard(c4691d6c-daa1-49cb-b8a8-0492e5021fa5)" * Jan 11 23:15:44 minikube kubelet[3139]: W0111 23:15:44.383571 3139 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-584f46694c-zmvhr through plugin: invalid network status for * Jan 11 23:15:44 minikube kubelet[3139]: I0111 23:15:44.577369 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 7a96b939d2247d27f01a562f667846bcabd3b3c8fe2e969a093c53f0e4768b35 * Jan 11 23:15:44 minikube kubelet[3139]: E0111 23:15:44.578773 3139 pod_workers.go:191] Error syncing pod c4691d6c-daa1-49cb-b8a8-0492e5021fa5 ("kubernetes-dashboard-584f46694c-zmvhr_kubernetes-dashboard(c4691d6c-daa1-49cb-b8a8-0492e5021fa5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-584f46694c-zmvhr_kubernetes-dashboard(c4691d6c-daa1-49cb-b8a8-0492e5021fa5)" * Jan 11 23:15:49 minikube kubelet[3139]: I0111 23:15:49.776284 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b16a1dd6c1680c19aeb288324db10b9a3805b45c27dd447f2596734570a984dc * Jan 11 23:15:49 minikube kubelet[3139]: E0111 23:15:49.776868 3139 pod_workers.go:191] Error syncing pod bffb64fd-3303-49ac-829a-60520926979b ("mongo-express-78fcf796b8-kfz2x_default(bffb64fd-3303-49ac-829a-60520926979b)"), skipping: failed to "StartContainer" for "mongo-express" with CrashLoopBackOff: "back-off 40s restarting failed container=mongo-express pod=mongo-express-78fcf796b8-kfz2x_default(bffb64fd-3303-49ac-829a-60520926979b)" * Jan 11 23:15:53 minikube kubelet[3139]: I0111 23:15:53.777214 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 52d6974654fd7e39da3b448e9cc8430b181eb23814718399b48460d80b6ca7f1 * Jan 11 23:15:53 minikube kubelet[3139]: E0111 23:15:53.780154 3139 
pod_workers.go:191] Error syncing pod 92ae9da8-e9df-4aaa-85a2-7fe1aea100b9 ("storage-provisioner_kube-system(92ae9da8-e9df-4aaa-85a2-7fe1aea100b9)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(92ae9da8-e9df-4aaa-85a2-7fe1aea100b9)" * Jan 11 23:15:54 minikube kubelet[3139]: E0111 23:15:54.877213 3139 kubelet.go:1594] Unable to attach or mount volumes for pod "ingress-nginx-controller-558664778f-qfxjw_kube-system(00e5ec92-2234-4e06-9f79-b8bc910b104c)": unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-8p9wt]: timed out waiting for the condition; skipping pod * Jan 11 23:15:54 minikube kubelet[3139]: E0111 23:15:54.877260 3139 pod_workers.go:191] Error syncing pod 00e5ec92-2234-4e06-9f79-b8bc910b104c ("ingress-nginx-controller-558664778f-qfxjw_kube-system(00e5ec92-2234-4e06-9f79-b8bc910b104c)"), skipping: unmounted volumes=[webhook-cert], unattached volumes=[webhook-cert ingress-nginx-token-8p9wt]: timed out waiting for the condition * Jan 11 23:15:56 minikube kubelet[3139]: I0111 23:15:56.777017 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 7a96b939d2247d27f01a562f667846bcabd3b3c8fe2e969a093c53f0e4768b35 * Jan 11 23:15:57 minikube kubelet[3139]: W0111 23:15:57.751934 3139 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-584f46694c-zmvhr through plugin: invalid network status for * Jan 11 23:15:58 minikube kubelet[3139]: E0111 23:15:58.835743 3139 secret.go:195] Couldn't get secret kube-system/ingress-nginx-admission: secret "ingress-nginx-admission" not found * Jan 11 23:15:58 minikube kubelet[3139]: E0111 23:15:58.836699 3139 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/00e5ec92-2234-4e06-9f79-b8bc910b104c-webhook-cert podName:00e5ec92-2234-4e06-9f79-b8bc910b104c nodeName:}" failed. No retries permitted until 2021-01-11 23:18:00.836643349 +0000 UTC m=+307.465688940 (durationBeforeRetry 2m2s). 
Error: "MountVolume.SetUp failed for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/00e5ec92-2234-4e06-9f79-b8bc910b104c-webhook-cert\") pod \"ingress-nginx-controller-558664778f-qfxjw\" (UID: \"00e5ec92-2234-4e06-9f79-b8bc910b104c\") : secret \"ingress-nginx-admission\" not found" * Jan 11 23:16:00 minikube kubelet[3139]: I0111 23:16:00.775576 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b16a1dd6c1680c19aeb288324db10b9a3805b45c27dd447f2596734570a984dc * Jan 11 23:16:04 minikube kubelet[3139]: I0111 23:16:04.776695 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 52d6974654fd7e39da3b448e9cc8430b181eb23814718399b48460d80b6ca7f1 * Jan 11 23:16:04 minikube kubelet[3139]: E0111 23:16:04.777365 3139 pod_workers.go:191] Error syncing pod 92ae9da8-e9df-4aaa-85a2-7fe1aea100b9 ("storage-provisioner_kube-system(92ae9da8-e9df-4aaa-85a2-7fe1aea100b9)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(92ae9da8-e9df-4aaa-85a2-7fe1aea100b9)" * Jan 11 23:16:05 minikube kubelet[3139]: W0111 23:16:05.900506 3139 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/mongo-express-78fcf796b8-kfz2x through plugin: invalid network status for * Jan 11 23:16:15 minikube kubelet[3139]: I0111 23:16:15.775510 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 52d6974654fd7e39da3b448e9cc8430b181eb23814718399b48460d80b6ca7f1 * Jan 11 23:16:15 minikube kubelet[3139]: E0111 23:16:15.775878 3139 pod_workers.go:191] Error syncing pod 92ae9da8-e9df-4aaa-85a2-7fe1aea100b9 ("storage-provisioner_kube-system(92ae9da8-e9df-4aaa-85a2-7fe1aea100b9)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(92ae9da8-e9df-4aaa-85a2-7fe1aea100b9)" * Jan 11 23:16:27 minikube kubelet[3139]: W0111 23:16:27.355385 3139 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-584f46694c-zmvhr through plugin: invalid network status for * Jan 11 23:16:27 minikube kubelet[3139]: I0111 23:16:27.362383 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 7a96b939d2247d27f01a562f667846bcabd3b3c8fe2e969a093c53f0e4768b35 * Jan 11 23:16:27 minikube kubelet[3139]: I0111 23:16:27.363260 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9d5c582f535d1172b4005f9efada220a64ce992b694b2ad1f9f57724f8435376 * Jan 11 23:16:27 minikube kubelet[3139]: E0111 23:16:27.363645 3139 pod_workers.go:191] Error syncing pod c4691d6c-daa1-49cb-b8a8-0492e5021fa5 ("kubernetes-dashboard-584f46694c-zmvhr_kubernetes-dashboard(c4691d6c-daa1-49cb-b8a8-0492e5021fa5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-584f46694c-zmvhr_kubernetes-dashboard(c4691d6c-daa1-49cb-b8a8-0492e5021fa5)" * Jan 11 23:16:28 minikube kubelet[3139]: W0111 23:16:28.395166 3139 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-584f46694c-zmvhr through plugin: invalid network status for * Jan 11 23:16:28 minikube kubelet[3139]: 
I0111 23:16:28.776207 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 52d6974654fd7e39da3b448e9cc8430b181eb23814718399b48460d80b6ca7f1 * Jan 11 23:16:28 minikube kubelet[3139]: E0111 23:16:28.776797 3139 pod_workers.go:191] Error syncing pod 92ae9da8-e9df-4aaa-85a2-7fe1aea100b9 ("storage-provisioner_kube-system(92ae9da8-e9df-4aaa-85a2-7fe1aea100b9)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(92ae9da8-e9df-4aaa-85a2-7fe1aea100b9)" * Jan 11 23:16:34 minikube kubelet[3139]: I0111 23:16:34.576758 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9d5c582f535d1172b4005f9efada220a64ce992b694b2ad1f9f57724f8435376 * Jan 11 23:16:34 minikube kubelet[3139]: E0111 23:16:34.579209 3139 pod_workers.go:191] Error syncing pod c4691d6c-daa1-49cb-b8a8-0492e5021fa5 ("kubernetes-dashboard-584f46694c-zmvhr_kubernetes-dashboard(c4691d6c-daa1-49cb-b8a8-0492e5021fa5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-584f46694c-zmvhr_kubernetes-dashboard(c4691d6c-daa1-49cb-b8a8-0492e5021fa5)" * Jan 11 23:16:40 minikube kubelet[3139]: W0111 23:16:40.757205 3139 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/mongo-express-78fcf796b8-kfz2x through plugin: invalid network status for * Jan 11 23:16:40 minikube kubelet[3139]: I0111 23:16:40.773981 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b16a1dd6c1680c19aeb288324db10b9a3805b45c27dd447f2596734570a984dc * Jan 11 23:16:40 minikube kubelet[3139]: I0111 23:16:40.775585 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 52d6974654fd7e39da3b448e9cc8430b181eb23814718399b48460d80b6ca7f1 * Jan 11 23:16:40 minikube kubelet[3139]: E0111 23:16:40.776649 3139 pod_workers.go:191] Error syncing pod 92ae9da8-e9df-4aaa-85a2-7fe1aea100b9 ("storage-provisioner_kube-system(92ae9da8-e9df-4aaa-85a2-7fe1aea100b9)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(92ae9da8-e9df-4aaa-85a2-7fe1aea100b9)" * Jan 11 23:16:40 minikube kubelet[3139]: I0111 23:16:40.786118 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ff2e1c61afc2f46dd9d18b90df0fba58bfe90400216e43944cb5791573bf6c43 * Jan 11 23:16:40 minikube kubelet[3139]: E0111 23:16:40.786501 3139 pod_workers.go:191] Error syncing pod bffb64fd-3303-49ac-829a-60520926979b ("mongo-express-78fcf796b8-kfz2x_default(bffb64fd-3303-49ac-829a-60520926979b)"), skipping: failed to "StartContainer" for "mongo-express" with CrashLoopBackOff: "back-off 1m20s restarting failed container=mongo-express pod=mongo-express-78fcf796b8-kfz2x_default(bffb64fd-3303-49ac-829a-60520926979b)" * Jan 11 23:16:41 minikube kubelet[3139]: W0111 23:16:41.806359 3139 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/mongo-express-78fcf796b8-kfz2x through plugin: invalid network status for * Jan 11 23:16:44 minikube kubelet[3139]: I0111 23:16:44.775545 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9d5c582f535d1172b4005f9efada220a64ce992b694b2ad1f9f57724f8435376 * Jan 11 
23:16:44 minikube kubelet[3139]: E0111 23:16:44.776896 3139 pod_workers.go:191] Error syncing pod c4691d6c-daa1-49cb-b8a8-0492e5021fa5 ("kubernetes-dashboard-584f46694c-zmvhr_kubernetes-dashboard(c4691d6c-daa1-49cb-b8a8-0492e5021fa5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 20s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-584f46694c-zmvhr_kubernetes-dashboard(c4691d6c-daa1-49cb-b8a8-0492e5021fa5)" * Jan 11 23:16:51 minikube kubelet[3139]: I0111 23:16:51.776769 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ff2e1c61afc2f46dd9d18b90df0fba58bfe90400216e43944cb5791573bf6c43 * Jan 11 23:16:51 minikube kubelet[3139]: E0111 23:16:51.777417 3139 pod_workers.go:191] Error syncing pod bffb64fd-3303-49ac-829a-60520926979b ("mongo-express-78fcf796b8-kfz2x_default(bffb64fd-3303-49ac-829a-60520926979b)"), skipping: failed to "StartContainer" for "mongo-express" with CrashLoopBackOff: "back-off 1m20s restarting failed container=mongo-express pod=mongo-express-78fcf796b8-kfz2x_default(bffb64fd-3303-49ac-829a-60520926979b)" * Jan 11 23:16:54 minikube kubelet[3139]: I0111 23:16:54.775873 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 52d6974654fd7e39da3b448e9cc8430b181eb23814718399b48460d80b6ca7f1 * Jan 11 23:16:54 minikube kubelet[3139]: E0111 23:16:54.777722 3139 pod_workers.go:191] Error syncing pod 92ae9da8-e9df-4aaa-85a2-7fe1aea100b9 ("storage-provisioner_kube-system(92ae9da8-e9df-4aaa-85a2-7fe1aea100b9)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(92ae9da8-e9df-4aaa-85a2-7fe1aea100b9)" * Jan 11 23:16:58 minikube kubelet[3139]: I0111 23:16:58.775809 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9d5c582f535d1172b4005f9efada220a64ce992b694b2ad1f9f57724f8435376 * Jan 11 23:16:59 minikube kubelet[3139]: W0111 23:16:59.331234 3139 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-584f46694c-zmvhr through plugin: invalid network status for * Jan 11 23:17:05 minikube kubelet[3139]: I0111 23:17:05.775750 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ff2e1c61afc2f46dd9d18b90df0fba58bfe90400216e43944cb5791573bf6c43 * Jan 11 23:17:05 minikube kubelet[3139]: E0111 23:17:05.777670 3139 pod_workers.go:191] Error syncing pod bffb64fd-3303-49ac-829a-60520926979b ("mongo-express-78fcf796b8-kfz2x_default(bffb64fd-3303-49ac-829a-60520926979b)"), skipping: failed to "StartContainer" for "mongo-express" with CrashLoopBackOff: "back-off 1m20s restarting failed container=mongo-express pod=mongo-express-78fcf796b8-kfz2x_default(bffb64fd-3303-49ac-829a-60520926979b)" * Jan 11 23:17:06 minikube kubelet[3139]: I0111 23:17:06.776244 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 52d6974654fd7e39da3b448e9cc8430b181eb23814718399b48460d80b6ca7f1 * Jan 11 23:17:06 minikube kubelet[3139]: E0111 23:17:06.777684 3139 pod_workers.go:191] Error syncing pod 92ae9da8-e9df-4aaa-85a2-7fe1aea100b9 ("storage-provisioner_kube-system(92ae9da8-e9df-4aaa-85a2-7fe1aea100b9)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner 
pod=storage-provisioner_kube-system(92ae9da8-e9df-4aaa-85a2-7fe1aea100b9)" * Jan 11 23:17:17 minikube kubelet[3139]: I0111 23:17:17.776026 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 52d6974654fd7e39da3b448e9cc8430b181eb23814718399b48460d80b6ca7f1 * Jan 11 23:17:17 minikube kubelet[3139]: E0111 23:17:17.776381 3139 pod_workers.go:191] Error syncing pod 92ae9da8-e9df-4aaa-85a2-7fe1aea100b9 ("storage-provisioner_kube-system(92ae9da8-e9df-4aaa-85a2-7fe1aea100b9)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 5m0s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(92ae9da8-e9df-4aaa-85a2-7fe1aea100b9)" * Jan 11 23:17:19 minikube kubelet[3139]: I0111 23:17:19.777302 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ff2e1c61afc2f46dd9d18b90df0fba58bfe90400216e43944cb5791573bf6c43 * Jan 11 23:17:19 minikube kubelet[3139]: E0111 23:17:19.779429 3139 pod_workers.go:191] Error syncing pod bffb64fd-3303-49ac-829a-60520926979b ("mongo-express-78fcf796b8-kfz2x_default(bffb64fd-3303-49ac-829a-60520926979b)"), skipping: failed to "StartContainer" for "mongo-express" with CrashLoopBackOff: "back-off 1m20s restarting failed container=mongo-express pod=mongo-express-78fcf796b8-kfz2x_default(bffb64fd-3303-49ac-829a-60520926979b)" * Jan 11 23:17:29 minikube kubelet[3139]: W0111 23:17:29.140230 3139 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-584f46694c-zmvhr through plugin: invalid network status for * Jan 11 23:17:29 minikube kubelet[3139]: I0111 23:17:29.146992 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9d5c582f535d1172b4005f9efada220a64ce992b694b2ad1f9f57724f8435376 * Jan 11 23:17:29 minikube kubelet[3139]: I0111 23:17:29.147386 3139 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: c2aa12173172d98a2e46c56b4285249f24c9af3041e86233e8d2c78ef92731c0 * Jan 11 23:17:29 minikube kubelet[3139]: E0111 23:17:29.147906 3139 pod_workers.go:191] Error syncing pod c4691d6c-daa1-49cb-b8a8-0492e5021fa5 ("kubernetes-dashboard-584f46694c-zmvhr_kubernetes-dashboard(c4691d6c-daa1-49cb-b8a8-0492e5021fa5)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "back-off 40s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-584f46694c-zmvhr_kubernetes-dashboard(c4691d6c-daa1-49cb-b8a8-0492e5021fa5)" * * ==> kubernetes-dashboard [03e42c26afc4] <== * 2021/01/11 21:40:34 Starting overwatch * panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: i/o timeout * * goroutine 1 [running]: * github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc00000d0e0) * /home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x446 * github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...) 
* /home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66 * github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc00009c100) * /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:501 +0xc6 * github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc00009c100) * /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:469 +0x47 * github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...) * /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:550 * main.main() * /home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:105 +0x20d * 2021/01/11 21:40:34 Using namespace: kubernetes-dashboard * 2021/01/11 21:40:34 Using in-cluster config to connect to apiserver * 2021/01/11 21:40:34 Using secret token for csrf signing * 2021/01/11 21:40:34 Initializing csrf token from kubernetes-dashboard-csrf secret * * ==> kubernetes-dashboard [8f9d02606607] <== * * ==> kubernetes-dashboard [9d5c582f535d] <== * * ==> storage-provisioner [52d6974654fd] <== * F0111 23:15:11.612953 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
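
Two failure signatures dominate the tail of these logs: pods cannot reach the API server's service VIP (both the kubernetes-dashboard and storage-provisioner containers exit on "dial tcp 10.96.0.1:443: i/o timeout", which is what keeps driving their CrashLoopBackOff), and the ingress-nginx controller cannot mount its webhook-cert volume because the secret "ingress-nginx-admission" does not exist. What follows is a minimal diagnostic sketch, not a verified fix: it assumes the default minikube profile and a kubectl context pointing at this cluster, and the only object names it uses are the ones already visible in the log above.

  # Confirm 10.96.0.1 really is the kubernetes service VIP the failing pods are dialing.
  $ kubectl get svc kubernetes -o wide

  # Probe that VIP from the node itself (run inside "minikube ssh"; substitute
  # curl -k https://10.96.0.1/version or wget if nc is missing or lacks -z in this VM image).
  $ nc -vz -w 2 10.96.0.1 443

  # The ingress addon's admission job normally creates the missing secret; check whether the
  # secret exists and whether the job ran in kube-system, where this addon is deployed.
  $ kubectl -n kube-system get secret ingress-nginx-admission
  $ kubectl -n kube-system get jobs

  # The kube-scheduler "forbidden" messages near startup are usually transient; once the
  # control plane is up, impersonation should show the RBAC binding is in place.
  $ kubectl auth can-i list pods --as=system:kube-scheduler

If the VIP probe times out from the node as well, the problem is cluster networking (kube-proxy / iptables) rather than the individual workloads, and recreating the cluster with minikube delete followed by minikube start is often the quickest way to confirm that.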