While testing out Kops 1.11-beta.1 with K8s 1.12.3, I noticed some data corruption after migrating to etcd-manager.
Replication process: create a new k8s cluster with Kops using the following versions.
Kops version: 1.10
Kubernetes version: 1.10
etcd version: 3.2.12
Then update the etcd and k8s versions (a rough sketch of the spec and CLI changes follows the version list):
Kubernetes version: 1.13
etcd version: 3.2.18 / 3.2.24 (Tested with both and saw the same issue)
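For reference, the upgrade itself was done through the normal Kops workflow. Roughly, the steps and spec fields involved look like the sketch below (values are illustrative and reconstructed from memory rather than the exact spec; $CLUSTER_NAME is a placeholder):

    # Bump the versions in the cluster spec
    kops edit cluster $CLUSTER_NAME
    #   spec:
    #     kubernetesVersion: "1.13.0"     # previously 1.10.x
    #     etcdClusters:
    #     - name: main
    #       version: "3.2.24"             # previously 3.2.12; 3.2.18 also tried
    #     - name: events
    #       version: "3.2.24"

    # Apply the change and roll the masters
    kops update cluster $CLUSTER_NAME --yes
    kops rolling-update cluster $CLUSTER_NAME --yes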
Below is the logs I'm seeing from the etcd-manager container when the corruption seems to happen. When this happens etcd does not start and unfortunately I have not been able to find any relevant logs as to why. Flag --insecure-bind-address has been deprecated, This flag will be removed in a future version. Flag --insecure-port has been deprecated, This flag will be removed in a future version. I1212 00:12:42.352817 7 flags.go:33] FLAG: --address="127.0.0.1" I1212 00:12:42.352874 7 flags.go:33] FLAG: --admission-control="[]" I1212 00:12:42.352885 7 flags.go:33] FLAG: --admission-control-config-file="" I1212 00:12:42.352892 7 flags.go:33] FLAG: --advertise-address="<nil>" I1212 00:12:42.352896 7 flags.go:33] FLAG: --allow-privileged="true" I1212 00:12:42.352900 7 flags.go:33] FLAG: --alsologtostderr="false" I1212 00:12:42.352904 7 flags.go:33] FLAG: --anonymous-auth="false" I1212 00:12:42.352907 7 flags.go:33] FLAG: --apiserver-count="5" I1212 00:12:42.352911 7 flags.go:33] FLAG: --audit-log-batch-buffer-size="10000" I1212 00:12:42.352915 7 flags.go:33] FLAG: --audit-log-batch-max-size="1" I1212 00:12:42.352917 7 flags.go:33] FLAG: --audit-log-batch-max-wait="0s" I1212 00:12:42.352921 7 flags.go:33] FLAG: --audit-log-batch-throttle-burst="0" I1212 00:12:42.352924 7 flags.go:33] FLAG: --audit-log-batch-throttle-enable="false" I1212 00:12:42.352927 7 flags.go:33] FLAG: --audit-log-batch-throttle-qps="0" I1212 00:12:42.352934 7 flags.go:33] FLAG: --audit-log-format="json" I1212 00:12:42.352937 7 flags.go:33] FLAG: --audit-log-maxage="10" I1212 00:12:42.352940 7 flags.go:33] FLAG: --audit-log-maxbackup="5" I1212 00:12:42.352943 7 flags.go:33] FLAG: --audit-log-maxsize="100" I1212 00:12:42.352946 7 flags.go:33] FLAG: --audit-log-mode="blocking" I1212 00:12:42.352949 7 flags.go:33] FLAG: --audit-log-path="/var/log/kube-audit.log" I1212 00:12:42.352952 7 flags.go:33] FLAG: --audit-log-truncate-enabled="false" I1212 00:12:42.352955 7 flags.go:33] FLAG: --audit-log-truncate-max-batch-size="10485760" I1212 00:12:42.352960 7 flags.go:33] FLAG: --audit-log-truncate-max-event-size="102400" I1212 00:12:42.352963 7 flags.go:33] FLAG: --audit-log-version="audit.k8s.io/v1beta1" I1212 00:12:42.352966 7 flags.go:33] FLAG: --audit-policy-file="/srv/kubernetes/audit_policy.yaml" I1212 00:12:42.352969 7 flags.go:33] FLAG: --audit-webhook-batch-buffer-size="10000" I1212 00:12:42.352972 7 flags.go:33] FLAG: --audit-webhook-batch-initial-backoff="10s" I1212 00:12:42.352975 7 flags.go:33] FLAG: --audit-webhook-batch-max-size="400" I1212 00:12:42.352978 7 flags.go:33] FLAG: --audit-webhook-batch-max-wait="30s" I1212 00:12:42.352981 7 flags.go:33] FLAG: --audit-webhook-batch-throttle-burst="15" I1212 00:12:42.352984 7 flags.go:33] FLAG: --audit-webhook-batch-throttle-enable="true" I1212 00:12:42.352987 7 flags.go:33] FLAG: --audit-webhook-batch-throttle-qps="10" I1212 00:12:42.352990 7 flags.go:33] FLAG: --audit-webhook-config-file="" I1212 00:12:42.352993 7 flags.go:33] FLAG: --audit-webhook-initial-backoff="10s" I1212 00:12:42.352996 7 flags.go:33] FLAG: --audit-webhook-mode="batch" I1212 00:12:42.352999 7 flags.go:33] FLAG: --audit-webhook-truncate-enabled="false" I1212 00:12:42.353002 7 flags.go:33] FLAG: --audit-webhook-truncate-max-batch-size="10485760" I1212 00:12:42.353005 7 flags.go:33] FLAG: --audit-webhook-truncate-max-event-size="102400" I1212 00:12:42.353008 7 flags.go:33] FLAG: --audit-webhook-version="audit.k8s.io/v1beta1" I1212 00:12:42.353011 7 flags.go:33] FLAG: 
--authentication-token-webhook-cache-ttl="2m0s" I1212 00:12:42.353014 7 flags.go:33] FLAG: --authentication-token-webhook-config-file="/etc/kubernetes/authn.config" I1212 00:12:42.353017 7 flags.go:33] FLAG: --authorization-mode="[RBAC]" I1212 00:12:42.353021 7 flags.go:33] FLAG: --authorization-policy-file="" I1212 00:12:42.353024 7 flags.go:33] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" I1212 00:12:42.353027 7 flags.go:33] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" I1212 00:12:42.353030 7 flags.go:33] FLAG: --authorization-webhook-config-file="" I1212 00:12:42.353032 7 flags.go:33] FLAG: --basic-auth-file="/srv/kubernetes/basic_auth.csv" I1212 00:12:42.353036 7 flags.go:33] FLAG: --bind-address="0.0.0.0" I1212 00:12:42.353039 7 flags.go:33] FLAG: --cert-dir="/var/run/kubernetes" I1212 00:12:42.353042 7 flags.go:33] FLAG: --client-ca-file="/srv/kubernetes/ca.crt" I1212 00:12:42.353045 7 flags.go:33] FLAG: --cloud-config="/etc/kubernetes/cloud.config" I1212 00:12:42.353048 7 flags.go:33] FLAG: --cloud-provider="aws" I1212 00:12:42.353051 7 flags.go:33] FLAG: --cloud-provider-gce-lb-src-cidrs="130.211.0.0/22,209.85.152.0/22,209.85.204.0/22,35.191.0.0/16" I1212 00:12:42.353056 7 flags.go:33] FLAG: --contention-profiling="false" I1212 00:12:42.353059 7 flags.go:33] FLAG: --cors-allowed-origins="[]" I1212 00:12:42.353065 7 flags.go:33] FLAG: --default-not-ready-toleration-seconds="300" I1212 00:12:42.353068 7 flags.go:33] FLAG: --default-unreachable-toleration-seconds="300" I1212 00:12:42.353071 7 flags.go:33] FLAG: --default-watch-cache-size="100" I1212 00:12:42.353074 7 flags.go:33] FLAG: --delete-collection-workers="1" I1212 00:12:42.353077 7 flags.go:33] FLAG: --deserialization-cache-size="0" I1212 00:12:42.353080 7 flags.go:33] FLAG: --disable-admission-plugins="[]" I1212 00:12:42.353083 7 flags.go:33] FLAG: --enable-admission-plugins="[Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,NodeRestriction,ResourceQuota]" I1212 00:12:42.353100 7 flags.go:33] FLAG: --enable-aggregator-routing="false" I1212 00:12:42.353107 7 flags.go:33] FLAG: --enable-bootstrap-token-auth="false" I1212 00:12:42.353109 7 flags.go:33] FLAG: --enable-garbage-collector="true" I1212 00:12:42.353112 7 flags.go:33] FLAG: --enable-logs-handler="true" I1212 00:12:42.353115 7 flags.go:33] FLAG: --enable-swagger-ui="false" I1212 00:12:42.353118 7 flags.go:33] FLAG: --endpoint-reconciler-type="lease" I1212 00:12:42.353121 7 flags.go:33] FLAG: --etcd-cafile="" I1212 00:12:42.353123 7 flags.go:33] FLAG: --etcd-certfile="" I1212 00:12:42.353126 7 flags.go:33] FLAG: --etcd-compaction-interval="5m0s" I1212 00:12:42.353129 7 flags.go:33] FLAG: --etcd-count-metric-poll-period="1m0s" I1212 00:12:42.353132 7 flags.go:33] FLAG: --etcd-keyfile="" I1212 00:12:42.353135 7 flags.go:33] FLAG: --etcd-prefix="/registry" I1212 00:12:42.353138 7 flags.go:33] FLAG: --etcd-quorum-read="true" I1212 00:12:42.353141 7 flags.go:33] FLAG: --etcd-servers="[http://127.0.0.1:4001]" I1212 00:12:42.353145 7 flags.go:33] FLAG: --etcd-servers-overrides="[/events#http://127.0.0.1:4002]" I1212 00:12:42.353150 7 flags.go:33] FLAG: --event-ttl="1h0m0s" I1212 00:12:42.353156 7 flags.go:33] FLAG: --experimental-encryption-provider-config="" I1212 00:12:42.353159 7 flags.go:33] FLAG: --external-hostname="" I1212 00:12:42.353162 7 flags.go:33] FLAG: --feature-gates="" I1212 00:12:42.353167 7 flags.go:33] 
FLAG: --help="false" I1212 00:12:42.353170 7 flags.go:33] FLAG: --http2-max-streams-per-connection="0" I1212 00:12:42.353172 7 flags.go:33] FLAG: --insecure-bind-address="127.0.0.1" I1212 00:12:42.353176 7 flags.go:33] FLAG: --insecure-port="8080" I1212 00:12:42.353179 7 flags.go:33] FLAG: --kubelet-certificate-authority="" I1212 00:12:42.353182 7 flags.go:33] FLAG: --kubelet-client-certificate="/srv/kubernetes/kubelet-api.pem" I1212 00:12:42.353185 7 flags.go:33] FLAG: --kubelet-client-key="/srv/kubernetes/kubelet-api-key.pem" I1212 00:12:42.353188 7 flags.go:33] FLAG: --kubelet-https="true" I1212 00:12:42.353191 7 flags.go:33] FLAG: --kubelet-port="10250" I1212 00:12:42.353199 7 flags.go:33] FLAG: --kubelet-preferred-address-types="[InternalIP,Hostname,ExternalIP]" I1212 00:12:42.353203 7 flags.go:33] FLAG: --kubelet-read-only-port="10255" I1212 00:12:42.353206 7 flags.go:33] FLAG: --kubelet-timeout="5s" I1212 00:12:42.353209 7 flags.go:33] FLAG: --kubernetes-service-node-port="0" I1212 00:12:42.353212 7 flags.go:33] FLAG: --log-backtrace-at=":0" I1212 00:12:42.353219 7 flags.go:33] FLAG: --log-dir="" I1212 00:12:42.353222 7 flags.go:33] FLAG: --log-flush-frequency="5s" I1212 00:12:42.353225 7 flags.go:33] FLAG: --logtostderr="true" I1212 00:12:42.353228 7 flags.go:33] FLAG: --master-service-namespace="default" I1212 00:12:42.353231 7 flags.go:33] FLAG: --max-connection-bytes-per-sec="0" I1212 00:12:42.353234 7 flags.go:33] FLAG: --max-mutating-requests-inflight="200" I1212 00:12:42.353237 7 flags.go:33] FLAG: --max-requests-inflight="400" I1212 00:12:42.353240 7 flags.go:33] FLAG: --min-request-timeout="1800" I1212 00:12:42.353243 7 flags.go:33] FLAG: --oidc-ca-file="" I1212 00:12:42.353246 7 flags.go:33] FLAG: --oidc-client-id="" I1212 00:12:42.353249 7 flags.go:33] FLAG: --oidc-groups-claim="" I1212 00:12:42.353251 7 flags.go:33] FLAG: --oidc-groups-prefix="" I1212 00:12:42.353254 7 flags.go:33] FLAG: --oidc-issuer-url="" I1212 00:12:42.353257 7 flags.go:33] FLAG: --oidc-required-claim="" I1212 00:12:42.353261 7 flags.go:33] FLAG: --oidc-signing-algs="[RS256]" I1212 00:12:42.353266 7 flags.go:33] FLAG: --oidc-username-claim="sub" I1212 00:12:42.353269 7 flags.go:33] FLAG: --oidc-username-prefix="" I1212 00:12:42.353271 7 flags.go:33] FLAG: --port="8080" I1212 00:12:42.353274 7 flags.go:33] FLAG: --profiling="true" I1212 00:12:42.353277 7 flags.go:33] FLAG: --proxy-client-cert-file="/srv/kubernetes/apiserver-aggregator.cert" I1212 00:12:42.353281 7 flags.go:33] FLAG: --proxy-client-key-file="/srv/kubernetes/apiserver-aggregator.key" I1212 00:12:42.353284 7 flags.go:33] FLAG: --repair-malformed-updates="false" I1212 00:12:42.353287 7 flags.go:33] FLAG: --request-timeout="1m0s" I1212 00:12:42.353290 7 flags.go:33] FLAG: --requestheader-allowed-names="[aggregator]" I1212 00:12:42.353294 7 flags.go:33] FLAG: --requestheader-client-ca-file="/srv/kubernetes/apiserver-aggregator-ca.cert" I1212 00:12:42.353299 7 flags.go:33] FLAG: --requestheader-extra-headers-prefix="[X-Remote-Extra-]" I1212 00:12:42.353304 7 flags.go:33] FLAG: --requestheader-group-headers="[X-Remote-Group]" I1212 00:12:42.353307 7 flags.go:33] FLAG: --requestheader-username-headers="[X-Remote-User]" I1212 00:12:42.353313 7 flags.go:33] FLAG: --runtime-config="admissionregistration.k8s.io/v1alpha1=true" I1212 00:12:42.353320 7 flags.go:33] FLAG: --secure-port="443" I1212 00:12:42.353323 7 flags.go:33] FLAG: --service-account-api-audiences="[]" I1212 00:12:42.353326 7 flags.go:33] FLAG: --service-account-issuer="" I1212 
00:12:42.353329 7 flags.go:33] FLAG: --service-account-key-file="[]" I1212 00:12:42.353338 7 flags.go:33] FLAG: --service-account-lookup="true" I1212 00:12:42.353341 7 flags.go:33] FLAG: --service-account-max-token-expiration="0s" I1212 00:12:42.353344 7 flags.go:33] FLAG: --service-account-signing-key-file="" I1212 00:12:42.353347 7 flags.go:33] FLAG: --service-cluster-ip-range="100.64.0.0/13" I1212 00:12:42.353352 7 flags.go:33] FLAG: --service-node-port-range="30000-32767" I1212 00:12:42.353359 7 flags.go:33] FLAG: --ssh-keyfile="" I1212 00:12:42.353362 7 flags.go:33] FLAG: --ssh-user="" I1212 00:12:42.353364 7 flags.go:33] FLAG: --stderrthreshold="2" I1212 00:12:42.353367 7 flags.go:33] FLAG: --storage-backend="etcd3" I1212 00:12:42.353370 7 flags.go:33] FLAG: --storage-media-type="application/vnd.kubernetes.protobuf" I1212 00:12:42.353374 7 flags.go:33] FLAG: --storage-versions="admission.k8s.io/v1beta1,admissionregistration.k8s.io/v1beta1,apps/v1,authentication.k8s.io/v1,authorization.k8s.io/v1,autoscaling/v1,batch/v1,certificates.k8s.io/v1beta1,coordination.k8s.io/v1beta1,events.k8s.io/v1beta1,extensions/v1beta1,imagepolicy.k8s.io/v1alpha1,networking.k8s.io/v1,policy/v1beta1,rbac.authorization.k8s.io/v1,scheduling.k8s.io/v1beta1,settings.k8s.io/v1alpha1,storage.k8s.io/v1,v1" I1212 00:12:42.353390 7 flags.go:33] FLAG: --target-ram-mb="0" I1212 00:12:42.353393 7 flags.go:33] FLAG: --tls-cert-file="/srv/kubernetes/server.cert" I1212 00:12:42.353396 7 flags.go:33] FLAG: --tls-cipher-suites="[]" I1212 00:12:42.353400 7 flags.go:33] FLAG: --tls-min-version="" I1212 00:12:42.353403 7 flags.go:33] FLAG: --tls-private-key-file="/srv/kubernetes/server.key" I1212 00:12:42.353406 7 flags.go:33] FLAG: --tls-sni-cert-key="[]" I1212 00:12:42.353410 7 flags.go:33] FLAG: --token-auth-file="/srv/kubernetes/known_tokens.csv" I1212 00:12:42.353413 7 flags.go:33] FLAG: --v="2" I1212 00:12:42.353416 7 flags.go:33] FLAG: --version="false" I1212 00:12:42.353421 7 flags.go:33] FLAG: --vmodule="" I1212 00:12:42.353424 7 flags.go:33] FLAG: --watch-cache="true" I1212 00:12:42.353427 7 flags.go:33] FLAG: --watch-cache-sizes="[]" I1212 00:12:42.353695 7 server.go:681] external host was not specified, using 10.5.0.30 I1212 00:12:42.354026 7 server.go:705] Initializing deserialization cache size based on 0MB limit I1212 00:12:42.354036 7 server.go:724] Initializing cache sizes based on 0MB limit I1212 00:12:42.354101 7 server.go:152] Version: v1.12.3 W1212 00:12:42.832684 7 admission.go:76] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts. I1212 00:12:42.832846 7 feature_gate.go:206] feature gates: &{map[Initializers:true]} I1212 00:12:42.832863 7 initialization.go:90] enabled Initializers feature as part of admission plugin setup I1212 00:12:42.833085 7 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,PersistentVolumeLabel,DefaultStorageClass,MutatingAdmissionWebhook,Initializers. I1212 00:12:42.833094 7 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota. W1212 00:12:42.833382 7 admission.go:76] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts. 
I1212 00:12:42.833654 7 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,PersistentVolumeLabel,DefaultStorageClass,MutatingAdmissionWebhook,Initializers. I1212 00:12:42.833664 7 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota. I1212 00:12:42.835749 7 store.go:1414] Monitoring customresourcedefinitions.apiextensions.k8s.io count at <storage-prefix>//apiextensions.k8s.io/customresourcedefinitions I1212 00:12:42.859202 7 master.go:240] Using reconciler: lease I1212 00:12:42.862882 7 store.go:1414] Monitoring podtemplates count at <storage-prefix>//podtemplates I1212 00:12:42.863313 7 store.go:1414] Monitoring events count at <storage-prefix>//events I1212 00:12:42.863693 7 store.go:1414] Monitoring limitranges count at <storage-prefix>//limitranges I1212 00:12:42.864078 7 store.go:1414] Monitoring resourcequotas count at <storage-prefix>//resourcequotas I1212 00:12:42.864499 7 store.go:1414] Monitoring secrets count at <storage-prefix>//secrets I1212 00:12:42.864886 7 store.go:1414] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes I1212 00:12:42.865271 7 store.go:1414] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims I1212 00:12:42.865659 7 store.go:1414] Monitoring configmaps count at <storage-prefix>//configmaps I1212 00:12:42.866063 7 store.go:1414] Monitoring namespaces count at <storage-prefix>//namespaces I1212 00:12:42.866465 7 store.go:1414] Monitoring endpoints count at <storage-prefix>//services/endpoints I1212 00:12:42.866890 7 store.go:1414] Monitoring nodes count at <storage-prefix>//minions I1212 00:12:42.867659 7 store.go:1414] Monitoring pods count at <storage-prefix>//pods I1212 00:12:42.868099 7 store.go:1414] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts I1212 00:12:42.868523 7 store.go:1414] Monitoring services count at <storage-prefix>//services/specs I1212 00:12:42.869296 7 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//controllers I1212 00:12:43.236425 7 master.go:432] Enabling API group "authentication.k8s.io". I1212 00:12:43.236452 7 master.go:432] Enabling API group "authorization.k8s.io". I1212 00:12:43.237028 7 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers I1212 00:12:43.237503 7 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers I1212 00:12:43.237908 7 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers I1212 00:12:43.237922 7 master.go:432] Enabling API group "autoscaling". I1212 00:12:43.238316 7 store.go:1414] Monitoring jobs.batch count at <storage-prefix>//jobs I1212 00:12:43.238723 7 store.go:1414] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs I1212 00:12:43.238739 7 master.go:432] Enabling API group "batch". I1212 00:12:43.239112 7 store.go:1414] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests I1212 00:12:43.239127 7 master.go:432] Enabling API group "certificates.k8s.io". 
I1212 00:12:43.239556 7 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases I1212 00:12:43.239572 7 master.go:432] Enabling API group "coordination.k8s.io". I1212 00:12:43.239956 7 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//controllers I1212 00:12:43.240365 7 store.go:1414] Monitoring daemonsets.extensions count at <storage-prefix>//daemonsets I1212 00:12:43.240731 7 store.go:1414] Monitoring deployments.extensions count at <storage-prefix>//deployments I1212 00:12:43.241123 7 store.go:1414] Monitoring ingresses.extensions count at <storage-prefix>//ingress I1212 00:12:43.241545 7 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy I1212 00:12:43.241975 7 store.go:1414] Monitoring replicasets.extensions count at <storage-prefix>//replicasets I1212 00:12:43.242372 7 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies I1212 00:12:43.242385 7 master.go:432] Enabling API group "extensions". I1212 00:12:43.242779 7 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies I1212 00:12:43.242791 7 master.go:432] Enabling API group "networking.k8s.io". I1212 00:12:43.243237 7 store.go:1414] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets I1212 00:12:43.243653 7 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy I1212 00:12:43.243666 7 master.go:432] Enabling API group "policy". I1212 00:12:43.243998 7 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles I1212 00:12:43.244431 7 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings I1212 00:12:43.244808 7 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles I1212 00:12:43.245201 7 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings I1212 00:12:43.245546 7 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles I1212 00:12:43.245916 7 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings I1212 00:12:43.246314 7 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles I1212 00:12:43.246702 7 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings I1212 00:12:43.246718 7 master.go:432] Enabling API group "rbac.authorization.k8s.io". I1212 00:12:43.247889 7 store.go:1414] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses I1212 00:12:43.247908 7 master.go:432] Enabling API group "scheduling.k8s.io". I1212 00:12:43.247920 7 master.go:424] Skipping disabled API group "settings.k8s.io". I1212 00:12:43.248329 7 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses I1212 00:12:43.248726 7 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments I1212 00:12:43.249164 7 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses I1212 00:12:43.249176 7 master.go:432] Enabling API group "storage.k8s.io". 
I1212 00:12:43.249588 7 store.go:1414] Monitoring deployments.extensions count at <storage-prefix>//deployments I1212 00:12:43.249996 7 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets I1212 00:12:43.250453 7 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions I1212 00:12:43.250895 7 store.go:1414] Monitoring deployments.extensions count at <storage-prefix>//deployments I1212 00:12:43.251298 7 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets I1212 00:12:43.251706 7 store.go:1414] Monitoring daemonsets.extensions count at <storage-prefix>//daemonsets I1212 00:12:43.252085 7 store.go:1414] Monitoring replicasets.extensions count at <storage-prefix>//replicasets I1212 00:12:43.274505 7 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions I1212 00:12:43.275002 7 store.go:1414] Monitoring deployments.extensions count at <storage-prefix>//deployments I1212 00:12:43.276141 7 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets I1212 00:12:43.277721 7 store.go:1414] Monitoring daemonsets.extensions count at <storage-prefix>//daemonsets I1212 00:12:43.279482 7 store.go:1414] Monitoring replicasets.extensions count at <storage-prefix>//replicasets I1212 00:12:43.279883 7 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions I1212 00:12:43.279895 7 master.go:432] Enabling API group "apps". I1212 00:12:43.280238 7 store.go:1414] Monitoring initializerconfigurations.admissionregistration.k8s.io count at <storage-prefix>//initializerconfigurations I1212 00:12:43.280641 7 store.go:1414] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations I1212 00:12:43.280968 7 store.go:1414] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations I1212 00:12:43.280979 7 master.go:432] Enabling API group "admissionregistration.k8s.io". I1212 00:12:43.281301 7 store.go:1414] Monitoring events count at <storage-prefix>//events I1212 00:12:43.281312 7 master.go:432] Enabling API group "events.k8s.io". W1212 00:12:43.516919 7 genericapiserver.go:325] Skipping API batch/v2alpha1 because it has no resources. W1212 00:12:43.835670 7 genericapiserver.go:325] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W1212 00:12:43.848163 7 genericapiserver.go:325] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. W1212 00:12:43.869772 7 genericapiserver.go:325] Skipping API storage.k8s.io/v1alpha1 because it has no resources. [restful] 2018/12/12 00:12:44 log.go:33: [restful/swagger] listing is available at https://10.5.0.30:443/swaggerapi [restful] 2018/12/12 00:12:44 log.go:33: [restful/swagger] https://10.5.0.30:443/swaggerui/ is mapped to folder /swagger-ui/ [restful] 2018/12/12 00:12:45 log.go:33: [restful/swagger] listing is available at https://10.5.0.30:443/swaggerapi [restful] 2018/12/12 00:12:45 log.go:33: [restful/swagger] https://10.5.0.30:443/swaggerui/ is mapped to folder /swagger-ui/ W1212 00:12:45.683798 7 admission.go:76] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts. 
I1212 00:12:45.684127 7 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,PersistentVolumeLabel,DefaultStorageClass,MutatingAdmissionWebhook,Initializers. I1212 00:12:45.684138 7 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota. I1212 00:12:45.686223 7 store.go:1414] Monitoring apiservices.apiregistration.k8s.io count at <storage-prefix>//apiregistration.k8s.io/apiservices I1212 00:12:45.686707 7 store.go:1414] Monitoring apiservices.apiregistration.k8s.io count at <storage-prefix>//apiregistration.k8s.io/apiservices I1212 00:12:48.453407 7 deprecated_insecure_serving.go:50] Serving insecurely on 127.0.0.1:8080 I1212 00:12:48.454725 7 secure_serving.go:116] Serving securely on [::]:443 I1212 00:12:48.454763 7 autoregister_controller.go:136] Starting autoregister controller I1212 00:12:48.454770 7 cache.go:32] Waiting for caches to sync for autoregister controller I1212 00:12:48.454874 7 apiservice_controller.go:90] Starting APIServiceRegistrationController I1212 00:12:48.454892 7 controller.go:84] Starting OpenAPI AggregationController I1212 00:12:48.454902 7 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I1212 00:12:48.454935 7 crdregistration_controller.go:112] Starting crd-autoregister controller I1212 00:12:48.454932 7 crd_finalizer.go:242] Starting CRDFinalizer I1212 00:12:48.454962 7 available_controller.go:278] Starting AvailableConditionController I1212 00:12:48.454967 7 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I1212 00:12:48.454969 7 naming_controller.go:284] Starting NamingConditionController I1212 00:12:48.454994 7 establishing_controller.go:73] Starting EstablishingController I1212 00:12:48.454950 7 controller_utils.go:1027] Waiting for caches to sync for crd-autoregister controller I1212 00:12:48.455033 7 customresource_discovery_controller.go:199] Starting DiscoveryController I1212 00:12:58.923688 7 trace.go:76] Trace[1029194318]: "Create /api/v1/namespaces/kube-system/serviceaccounts" (started: 2018-12-12 00:12:48.921492128 +0000 UTC m=+6.629693070) (total time: 10.002174722s): Trace[1029194318]: [10.002174722s] [10.00039192s] END I1212 00:13:08.925557 7 trace.go:76] Trace[645995136]: "Create /apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings" (started: 2018-12-12 00:12:58.924586395 +0000 UTC m=+16.632787332) (total time: 10.000946296s): Trace[645995136]: [10.000946296s] [10.00036682s] END I1212 00:13:24.847128 7 shared_informer.go:119] stop requested I1212 00:13:24.847145 7 shared_informer.go:119] stop requested I1212 00:13:24.847146 7 shared_informer.go:119] stop requested I1212 00:13:24.847144 7 secure_serving.go:156] Stopped listening on 127.0.0.1:8080 I1212 00:13:24.847158 7 shared_informer.go:119] stop requested I1212 00:13:24.847160 7 shared_informer.go:119] stop requested I1212 00:13:24.847158 7 shared_informer.go:119] stop requested E1212 00:13:24.847165 7 customresource_discovery_controller.go:202] timed out waiting for caches to sync I1212 00:13:24.847168 7 crd_finalizer.go:246] Shutting down CRDFinalizer E1212 00:13:24.847171 7 controller_utils.go:1030] Unable to sync caches for crd-autoregister controller I1212 00:13:24.847172 7 shared_informer.go:119] stop requested I1212 
00:13:24.847171 7 customresource_discovery_controller.go:203] Shutting down DiscoveryController E1212 00:13:24.847180 7 cache.go:35] Unable to sync caches for autoregister controller E1212 00:13:24.847148 7 cache.go:35] Unable to sync caches for APIServiceRegistrationController controller I1212 00:13:24.847157 7 establishing_controller.go:77] Shutting down EstablishingController I1212 00:13:24.847135 7 shared_informer.go:119] stop requested I1212 00:13:24.847215 7 secure_serving.go:156] Stopped listening on [::]:443 I1212 00:13:24.847215 7 controller.go:171] Shutting down kubernetes service endpoint reconciler E1212 00:13:24.847225 7 cache.go:35] Unable to sync caches for AvailableConditionController controller I1212 00:13:24.847152 7 naming_controller.go:288] Shutting down NamingConditionController I1212 00:13:24.847186 7 controller.go:90] Shutting down OpenAPI AggregationController I1212 00:13:24.848248 7 crdregistration_controller.go:117] Shutting down crd-autoregister controller I1212 00:13:24.849329 7 autoregister_controller.go:141] Shutting down autoregister controller I1212 00:13:24.850406 7 apiservice_controller.go:94] Shutting down APIServiceRegistrationController I1212 00:13:24.851479 7 available_controller.go:282] Shutting down AvailableConditionController E1212 00:13:34.847575 7 controller.go:173] rpc error: code = Unavailable desc = transport is closing E1212 00:13:48.464293 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.464359 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.465402 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} I1212 00:13:48.466507 7 trace.go:76] Trace[1330970842]: "List /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations" (started: 2018-12-12 00:12:48.464197188 +0000 UTC m=+6.172398126) (total time: 1m0.00229233s): Trace[1330970842]: [1m0.00229233s] [1m0.002288147s] END E1212 00:13:48.466971 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.467596 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.468649 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} I1212 00:13:48.469741 7 trace.go:76] Trace[1868745693]: "List /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations" (started: 2018-12-12 00:12:48.466884694 +0000 UTC m=+6.175085674) (total time: 1m0.002842133s): Trace[1868745693]: [1m0.002842133s] [1m0.002837372s] END E1212 00:13:48.470629 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.470821 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.470927 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.471076 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Secret: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets) E1212 00:13:48.471550 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.471675 7 reflector.go:134] 
k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:117: Failed to list *apiextensions.CustomResourceDefinition: the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io) E1212 00:13:48.471884 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} I1212 00:13:48.472979 7 trace.go:76] Trace[1800478074]: "List /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations" (started: 2018-12-12 00:12:48.470532433 +0000 UTC m=+6.178733370) (total time: 1m0.002432073s): Trace[1800478074]: [1m0.002432073s] [1m0.002427554s] END E1212 00:13:48.474023 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.475085 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.477257 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout I1212 00:13:48.479430 7 trace.go:76] Trace[1280622339]: "List /api/v1/secrets" (started: 2018-12-12 00:12:48.470911773 +0000 UTC m=+6.179112712) (total time: 1m0.008501979s): Trace[1280622339]: [1m0.008501979s] [1m0.008459411s] END E1212 00:13:48.480486 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} I1212 00:13:48.481577 7 trace.go:76] Trace[1804652784]: "List /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions" (started: 2018-12-12 00:12:48.471466491 +0000 UTC m=+6.179667429) (total time: 1m0.010100941s): Trace[1804652784]: [1m0.010100941s] [1m0.010060801s] END E1212 00:13:48.500678 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.500713 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.500806 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.500845 7 reflector.go:134] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:130: Failed to list *scheduling.PriorityClass: the server was unable to return a response in the time allotted, but may still be processing the request (get priorityclasses.scheduling.k8s.io) E1212 00:13:48.500882 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.ClusterRole: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io) E1212 00:13:48.500903 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.500953 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.500957 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.500957 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.500979 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.500987 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 
00:13:48.501007 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.501090 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.501091 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.501147 7 reflector.go:134] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:130: Failed to list *storage.StorageClass: the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io) E1212 00:13:48.501156 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.501160 7 reflector.go:134] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:130: Failed to list *core.LimitRange: the server was unable to return a response in the time allotted, but may still be processing the request (get limitranges) E1212 00:13:48.501238 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.501241 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.501275 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.501284 7 reflector.go:134] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:130: Failed to list *core.Secret: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets) E1212 00:13:48.501395 7 reflector.go:134] k8s.io/kube-aggregator/pkg/client/informers/internalversion/factory.go:117: Failed to list *apiregistration.APIService: the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io) E1212 00:13:48.501398 7 reflector.go:134] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:130: Failed to list *core.PersistentVolume: the server was unable to return a response in the time allotted, but may still be processing the request (get persistentvolumes) E1212 00:13:48.501439 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.ServiceAccount: the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts) E1212 00:13:48.501457 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1beta1.ValidatingWebhookConfiguration: the server was unable to return a response in the time allotted, but may still be processing the request (get validatingwebhookconfigurations.admissionregistration.k8s.io) E1212 00:13:48.501501 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Service: the server was unable to return a response in the time allotted, but may still be processing the request (get services) E1212 00:13:48.501524 7 reflector.go:134] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:130: Failed to list *core.ResourceQuota: the server was unable to return a response in the time allotted, but may still be processing the request (get resourcequotas) E1212 
00:13:48.501628 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Pod: the server was unable to return a response in the time allotted, but may still be processing the request (get pods) E1212 00:13:48.501687 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1beta1.MutatingWebhookConfiguration: the server was unable to return a response in the time allotted, but may still be processing the request (get mutatingwebhookconfigurations.admissionregistration.k8s.io) E1212 00:13:48.501731 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Namespace: the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces) E1212 00:13:48.501747 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.RoleBinding: the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io) E1212 00:13:48.501776 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.501974 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.502090 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Role: the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io) I1212 00:13:48.502863 7 trace.go:76] Trace[2003208653]: "List /apis/scheduling.k8s.io/v1beta1/priorityclasses" (started: 2018-12-12 00:12:48.50058919 +0000 UTC m=+6.208790128) (total time: 1m0.002260482s): Trace[2003208653]: [1m0.002260482s] [1m0.002225647s] END E1212 00:13:48.503663 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.503680 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.503783 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.StorageClass: the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io) E1212 00:13:48.503809 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.ClusterRoleBinding: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io) E1212 00:13:48.503945 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.503984 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.504107 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Endpoints: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints) E1212 00:13:48.504981 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.508235 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.509303 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.510393 7 writers.go:168] apiserver was unable to 
write a JSON response: http: Handler timeout E1212 00:13:48.511474 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.512543 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.513624 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.514705 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.515781 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.516852 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.517948 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.521205 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.522313 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.523383 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.536332 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.538482 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.539550 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.542790 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout I1212 00:13:48.544949 7 trace.go:76] Trace[1776629030]: "List /apis/rbac.authorization.k8s.io/v1/clusterroles" (started: 2018-12-12 00:12:48.500703254 +0000 UTC m=+6.208904191) (total time: 1m0.044230748s): Trace[1776629030]: [1m0.044230748s] [1m0.044191519s] END E1212 00:13:48.546005 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.547081 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.548160 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.549233 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.550326 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.551402 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.552483 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.553559 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.554642 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.555717 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.556795 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.557885 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} 
E1212 00:13:48.558957 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.560038 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.561114 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.562191 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.563270 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} I1212 00:13:48.564370 7 trace.go:76] Trace[1439133424]: "List /apis/storage.k8s.io/v1/storageclasses" (started: 2018-12-12 00:12:48.50081678 +0000 UTC m=+6.209017718) (total time: 1m0.063540529s): Trace[1439133424]: [1m0.063540529s] [1m0.063506486s] END I1212 00:13:48.565439 7 trace.go:76] Trace[1683817720]: "List /api/v1/serviceaccounts" (started: 2018-12-12 00:12:48.500940682 +0000 UTC m=+6.209141619) (total time: 1m0.064488328s): Trace[1683817720]: [1m0.064488328s] [1m0.064456016s] END I1212 00:13:48.566514 7 trace.go:76] Trace[491490319]: "List /api/v1/persistentvolumes" (started: 2018-12-12 00:12:48.500986525 +0000 UTC m=+6.209187462) (total time: 1m0.065518757s): Trace[491490319]: [1m0.065518757s] [1m0.065482619s] END I1212 00:13:48.567591 7 trace.go:76] Trace[1474503645]: "List /apis/apiregistration.k8s.io/v1/apiservices" (started: 2018-12-12 00:12:48.500875642 +0000 UTC m=+6.209076622) (total time: 1m0.066706928s): Trace[1474503645]: [1m0.066706928s] [1m0.066656322s] END I1212 00:13:48.568677 7 trace.go:76] Trace[635852309]: "List /api/v1/secrets" (started: 2018-12-12 00:12:48.500940686 +0000 UTC m=+6.209141623) (total time: 1m0.06772371s): Trace[635852309]: [1m0.06772371s] [1m0.067687683s] END I1212 00:13:48.569755 7 trace.go:76] Trace[175882069]: "List /api/v1/limitranges" (started: 2018-12-12 00:12:48.500925548 +0000 UTC m=+6.209126486) (total time: 1m0.06881921s): Trace[175882069]: [1m0.06881921s] [1m0.068781117s] END I1212 00:13:48.570836 7 trace.go:76] Trace[122202535]: "List /api/v1/pods" (started: 2018-12-12 00:12:48.500951889 +0000 UTC m=+6.209152828) (total time: 1m0.069871581s): Trace[122202535]: [1m0.069871581s] [1m0.069830326s] END I1212 00:13:48.571908 7 trace.go:76] Trace[865708000]: "List /api/v1/resourcequotas" (started: 2018-12-12 00:12:48.501056066 +0000 UTC m=+6.209257003) (total time: 1m0.070840152s): Trace[865708000]: [1m0.070840152s] [1m0.070808759s] END I1212 00:13:48.572979 7 trace.go:76] Trace[955305514]: "List /apis/rbac.authorization.k8s.io/v1/rolebindings" (started: 2018-12-12 00:12:48.501055621 +0000 UTC m=+6.209256562) (total time: 1m0.071915466s): Trace[955305514]: [1m0.071915466s] [1m0.071884923s] END I1212 00:13:48.574060 7 trace.go:76] Trace[1423473229]: "List /api/v1/namespaces" (started: 2018-12-12 00:12:48.501149822 +0000 UTC m=+6.209350759) (total time: 1m0.072900808s): Trace[1423473229]: [1m0.072900808s] [1m0.072867725s] END I1212 00:13:48.575139 7 trace.go:76] Trace[802608035]: "List /apis/admissionregistration.k8s.io/v1beta1/validatingwebhookconfigurations" (started: 2018-12-12 00:12:48.501149182 +0000 UTC m=+6.209350109) (total time: 1m0.073979725s): Trace[802608035]: [1m0.073979725s] [1m0.073948799s] END I1212 00:13:48.576217 7 trace.go:76] Trace[1021760621]: "List /apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations" 
(started: 2018-12-12 00:12:48.501154269 +0000 UTC m=+6.209355207) (total time: 1m0.075052452s): Trace[1021760621]: [1m0.075052452s] [1m0.075012296s] END I1212 00:13:48.577292 7 trace.go:76] Trace[1969470568]: "List /api/v1/services" (started: 2018-12-12 00:12:48.501258385 +0000 UTC m=+6.209459322) (total time: 1m0.076025789s): Trace[1969470568]: [1m0.076025789s] [1m0.076004504s] END I1212 00:13:48.578373 7 trace.go:76] Trace[1871147953]: "List /apis/rbac.authorization.k8s.io/v1/roles" (started: 2018-12-12 00:12:48.501860956 +0000 UTC m=+6.210061881) (total time: 1m0.076503388s): Trace[1871147953]: [1m0.076503388s] [1m0.076480245s] END I1212 00:13:48.579453 7 trace.go:76] Trace[640462565]: "List /apis/storage.k8s.io/v1/storageclasses" (started: 2018-12-12 00:12:48.503571787 +0000 UTC m=+6.211772724) (total time: 1m0.075871435s): Trace[640462565]: [1m0.075871435s] [1m0.075846101s] END I1212 00:13:48.580530 7 trace.go:76] Trace[759626822]: "List /apis/rbac.authorization.k8s.io/v1/clusterrolebindings" (started: 2018-12-12 00:12:48.50357283 +0000 UTC m=+6.211773767) (total time: 1m0.076948558s): Trace[759626822]: [1m0.076948558s] [1m0.076917912s] END I1212 00:13:48.581612 7 trace.go:76] Trace[647924664]: "List /api/v1/endpoints" (started: 2018-12-12 00:12:48.503957566 +0000 UTC m=+6.212158503) (total time: 1m0.077645739s): Trace[647924664]: [1m0.077645739s] [1m0.077595063s] END E1212 00:13:49.455364 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:49.455402 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:49.455432 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:49.455591 7 storage_rbac.go:154] unable to initialize clusterroles: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io) E1212 00:13:49.455602 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} W1212 00:13:49.455631 7 storage_scheduling.go:95] unable to get PriorityClass system-node-critical: the server was unable to return a response in the time allotted, but may still be processing the request (get priorityclasses.scheduling.k8s.io system-node-critical). Retrying... F1212 00:13:49.455641 7 hooks.go:188] PostStartHook "scheduling/bootstrap-system-priority-classes" failed: unable to add default system priority classes: the server was unable to return a response in the time allotted, but may still be processing the request (get priorityclasses.scheduling.k8s.io system-node-critical) E1212 00:13:49.489143 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:49.500198 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:49.511157 7 client_ca_hook.go:72] Post https://[::1]:443/api/v1/namespaces: dial tcp [::1]:443: connect: connection refused
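Side note: the apiserver flags above point it at http://127.0.0.1:4001 (main) and http://127.0.0.1:4002 (events), so one quick way to confirm that etcd really is not serving when this happens is to probe those ports from the affected master, e.g. (a rough sketch, assuming etcdctl v3 is available on the host):

    curl -s http://127.0.0.1:4001/health
    curl -s http://127.0.0.1:4002/health
    ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:4001 endpoint health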
I was able to replicate this consistently. The one time I did manage to complete a full upgrade successfully, I then rotated the cluster one more time with no updates, and at that point I once again saw the corruption.
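To clarify, "rotating" the cluster here means rolling the instance groups again through Kops; with no pending spec changes that is roughly a forced rolling update ($CLUSTER_NAME is a placeholder):

    kops rolling-update cluster $CLUSTER_NAME --force --yes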
To ensure that this was not an issue with the k8s and etcd versions I picked, I once again created a new Kops cluster and then updated k8s and etcd to the versions mentioned above. This time, however, I set the etcd provisioner in Kops to legacy; the cluster upgrade succeeded with no issues, and subsequent cluster rotations have not caused any visible issues.
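The "legacy" setting here refers to the etcd provider field on each etcd cluster in the Kops spec. A rough sketch of that fragment (etcd members omitted, values illustrative rather than the exact spec):

    spec:
      etcdClusters:
      - name: main
        provider: Legacy
      - name: events
        provider: Legacy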
I1212 00:12:42.353344 7 flags.go:33] FLAG: --service-account-signing-key-file="" I1212 00:12:42.353347 7 flags.go:33] FLAG: --service-cluster-ip-range="100.64.0.0/13" I1212 00:12:42.353352 7 flags.go:33] FLAG: --service-node-port-range="30000-32767" I1212 00:12:42.353359 7 flags.go:33] FLAG: --ssh-keyfile="" I1212 00:12:42.353362 7 flags.go:33] FLAG: --ssh-user="" I1212 00:12:42.353364 7 flags.go:33] FLAG: --stderrthreshold="2" I1212 00:12:42.353367 7 flags.go:33] FLAG: --storage-backend="etcd3" I1212 00:12:42.353370 7 flags.go:33] FLAG: --storage-media-type="application/vnd.kubernetes.protobuf" I1212 00:12:42.353374 7 flags.go:33] FLAG: --storage-versions="admission.k8s.io/v1beta1,admissionregistration.k8s.io/v1beta1,apps/v1,authentication.k8s.io/v1,authorization.k8s.io/v1,autoscaling/v1,batch/v1,certificates.k8s.io/v1beta1,coordination.k8s.io/v1beta1,events.k8s.io/v1beta1,extensions/v1beta1,imagepolicy.k8s.io/v1alpha1,networking.k8s.io/v1,policy/v1beta1,rbac.authorization.k8s.io/v1,scheduling.k8s.io/v1beta1,settings.k8s.io/v1alpha1,storage.k8s.io/v1,v1" I1212 00:12:42.353390 7 flags.go:33] FLAG: --target-ram-mb="0" I1212 00:12:42.353393 7 flags.go:33] FLAG: --tls-cert-file="/srv/kubernetes/server.cert" I1212 00:12:42.353396 7 flags.go:33] FLAG: --tls-cipher-suites="[]" I1212 00:12:42.353400 7 flags.go:33] FLAG: --tls-min-version="" I1212 00:12:42.353403 7 flags.go:33] FLAG: --tls-private-key-file="/srv/kubernetes/server.key" I1212 00:12:42.353406 7 flags.go:33] FLAG: --tls-sni-cert-key="[]" I1212 00:12:42.353410 7 flags.go:33] FLAG: --token-auth-file="/srv/kubernetes/known_tokens.csv" I1212 00:12:42.353413 7 flags.go:33] FLAG: --v="2" I1212 00:12:42.353416 7 flags.go:33] FLAG: --version="false" I1212 00:12:42.353421 7 flags.go:33] FLAG: --vmodule="" I1212 00:12:42.353424 7 flags.go:33] FLAG: --watch-cache="true" I1212 00:12:42.353427 7 flags.go:33] FLAG: --watch-cache-sizes="[]" I1212 00:12:42.353695 7 server.go:681] external host was not specified, using 10.5.0.30 I1212 00:12:42.354026 7 server.go:705] Initializing deserialization cache size based on 0MB limit I1212 00:12:42.354036 7 server.go:724] Initializing cache sizes based on 0MB limit I1212 00:12:42.354101 7 server.go:152] Version: v1.12.3 W1212 00:12:42.832684 7 admission.go:76] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts. I1212 00:12:42.832846 7 feature_gate.go:206] feature gates: &{map[Initializers:true]} I1212 00:12:42.832863 7 initialization.go:90] enabled Initializers feature as part of admission plugin setup I1212 00:12:42.833085 7 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,PersistentVolumeLabel,DefaultStorageClass,MutatingAdmissionWebhook,Initializers. I1212 00:12:42.833094 7 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota. W1212 00:12:42.833382 7 admission.go:76] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts. 
I1212 00:12:42.833654 7 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,PersistentVolumeLabel,DefaultStorageClass,MutatingAdmissionWebhook,Initializers. I1212 00:12:42.833664 7 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota. I1212 00:12:42.835749 7 store.go:1414] Monitoring customresourcedefinitions.apiextensions.k8s.io count at <storage-prefix>//apiextensions.k8s.io/customresourcedefinitions I1212 00:12:42.859202 7 master.go:240] Using reconciler: lease I1212 00:12:42.862882 7 store.go:1414] Monitoring podtemplates count at <storage-prefix>//podtemplates I1212 00:12:42.863313 7 store.go:1414] Monitoring events count at <storage-prefix>//events I1212 00:12:42.863693 7 store.go:1414] Monitoring limitranges count at <storage-prefix>//limitranges I1212 00:12:42.864078 7 store.go:1414] Monitoring resourcequotas count at <storage-prefix>//resourcequotas I1212 00:12:42.864499 7 store.go:1414] Monitoring secrets count at <storage-prefix>//secrets I1212 00:12:42.864886 7 store.go:1414] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes I1212 00:12:42.865271 7 store.go:1414] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims I1212 00:12:42.865659 7 store.go:1414] Monitoring configmaps count at <storage-prefix>//configmaps I1212 00:12:42.866063 7 store.go:1414] Monitoring namespaces count at <storage-prefix>//namespaces I1212 00:12:42.866465 7 store.go:1414] Monitoring endpoints count at <storage-prefix>//services/endpoints I1212 00:12:42.866890 7 store.go:1414] Monitoring nodes count at <storage-prefix>//minions I1212 00:12:42.867659 7 store.go:1414] Monitoring pods count at <storage-prefix>//pods I1212 00:12:42.868099 7 store.go:1414] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts I1212 00:12:42.868523 7 store.go:1414] Monitoring services count at <storage-prefix>//services/specs I1212 00:12:42.869296 7 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//controllers I1212 00:12:43.236425 7 master.go:432] Enabling API group "authentication.k8s.io". I1212 00:12:43.236452 7 master.go:432] Enabling API group "authorization.k8s.io". I1212 00:12:43.237028 7 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers I1212 00:12:43.237503 7 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers I1212 00:12:43.237908 7 store.go:1414] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers I1212 00:12:43.237922 7 master.go:432] Enabling API group "autoscaling". I1212 00:12:43.238316 7 store.go:1414] Monitoring jobs.batch count at <storage-prefix>//jobs I1212 00:12:43.238723 7 store.go:1414] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs I1212 00:12:43.238739 7 master.go:432] Enabling API group "batch". I1212 00:12:43.239112 7 store.go:1414] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests I1212 00:12:43.239127 7 master.go:432] Enabling API group "certificates.k8s.io". 
I1212 00:12:43.239556 7 store.go:1414] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases I1212 00:12:43.239572 7 master.go:432] Enabling API group "coordination.k8s.io". I1212 00:12:43.239956 7 store.go:1414] Monitoring replicationcontrollers count at <storage-prefix>//controllers I1212 00:12:43.240365 7 store.go:1414] Monitoring daemonsets.extensions count at <storage-prefix>//daemonsets I1212 00:12:43.240731 7 store.go:1414] Monitoring deployments.extensions count at <storage-prefix>//deployments I1212 00:12:43.241123 7 store.go:1414] Monitoring ingresses.extensions count at <storage-prefix>//ingress I1212 00:12:43.241545 7 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy I1212 00:12:43.241975 7 store.go:1414] Monitoring replicasets.extensions count at <storage-prefix>//replicasets I1212 00:12:43.242372 7 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies I1212 00:12:43.242385 7 master.go:432] Enabling API group "extensions". I1212 00:12:43.242779 7 store.go:1414] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies I1212 00:12:43.242791 7 master.go:432] Enabling API group "networking.k8s.io". I1212 00:12:43.243237 7 store.go:1414] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets I1212 00:12:43.243653 7 store.go:1414] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy I1212 00:12:43.243666 7 master.go:432] Enabling API group "policy". I1212 00:12:43.243998 7 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles I1212 00:12:43.244431 7 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings I1212 00:12:43.244808 7 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles I1212 00:12:43.245201 7 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings I1212 00:12:43.245546 7 store.go:1414] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles I1212 00:12:43.245916 7 store.go:1414] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings I1212 00:12:43.246314 7 store.go:1414] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles I1212 00:12:43.246702 7 store.go:1414] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings I1212 00:12:43.246718 7 master.go:432] Enabling API group "rbac.authorization.k8s.io". I1212 00:12:43.247889 7 store.go:1414] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses I1212 00:12:43.247908 7 master.go:432] Enabling API group "scheduling.k8s.io". I1212 00:12:43.247920 7 master.go:424] Skipping disabled API group "settings.k8s.io". I1212 00:12:43.248329 7 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses I1212 00:12:43.248726 7 store.go:1414] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments I1212 00:12:43.249164 7 store.go:1414] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses I1212 00:12:43.249176 7 master.go:432] Enabling API group "storage.k8s.io". 
I1212 00:12:43.249588 7 store.go:1414] Monitoring deployments.extensions count at <storage-prefix>//deployments I1212 00:12:43.249996 7 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets I1212 00:12:43.250453 7 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions I1212 00:12:43.250895 7 store.go:1414] Monitoring deployments.extensions count at <storage-prefix>//deployments I1212 00:12:43.251298 7 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets I1212 00:12:43.251706 7 store.go:1414] Monitoring daemonsets.extensions count at <storage-prefix>//daemonsets I1212 00:12:43.252085 7 store.go:1414] Monitoring replicasets.extensions count at <storage-prefix>//replicasets I1212 00:12:43.274505 7 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions I1212 00:12:43.275002 7 store.go:1414] Monitoring deployments.extensions count at <storage-prefix>//deployments I1212 00:12:43.276141 7 store.go:1414] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets I1212 00:12:43.277721 7 store.go:1414] Monitoring daemonsets.extensions count at <storage-prefix>//daemonsets I1212 00:12:43.279482 7 store.go:1414] Monitoring replicasets.extensions count at <storage-prefix>//replicasets I1212 00:12:43.279883 7 store.go:1414] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions I1212 00:12:43.279895 7 master.go:432] Enabling API group "apps". I1212 00:12:43.280238 7 store.go:1414] Monitoring initializerconfigurations.admissionregistration.k8s.io count at <storage-prefix>//initializerconfigurations I1212 00:12:43.280641 7 store.go:1414] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations I1212 00:12:43.280968 7 store.go:1414] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations I1212 00:12:43.280979 7 master.go:432] Enabling API group "admissionregistration.k8s.io". I1212 00:12:43.281301 7 store.go:1414] Monitoring events count at <storage-prefix>//events I1212 00:12:43.281312 7 master.go:432] Enabling API group "events.k8s.io". W1212 00:12:43.516919 7 genericapiserver.go:325] Skipping API batch/v2alpha1 because it has no resources. W1212 00:12:43.835670 7 genericapiserver.go:325] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W1212 00:12:43.848163 7 genericapiserver.go:325] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. W1212 00:12:43.869772 7 genericapiserver.go:325] Skipping API storage.k8s.io/v1alpha1 because it has no resources. [restful] 2018/12/12 00:12:44 log.go:33: [restful/swagger] listing is available at https://10.5.0.30:443/swaggerapi [restful] 2018/12/12 00:12:44 log.go:33: [restful/swagger] https://10.5.0.30:443/swaggerui/ is mapped to folder /swagger-ui/ [restful] 2018/12/12 00:12:45 log.go:33: [restful/swagger] listing is available at https://10.5.0.30:443/swaggerapi [restful] 2018/12/12 00:12:45 log.go:33: [restful/swagger] https://10.5.0.30:443/swaggerui/ is mapped to folder /swagger-ui/ W1212 00:12:45.683798 7 admission.go:76] PersistentVolumeLabel admission controller is deprecated. Please remove this controller from your configuration files and scripts. 
I1212 00:12:45.684127 7 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,PersistentVolumeLabel,DefaultStorageClass,MutatingAdmissionWebhook,Initializers. I1212 00:12:45.684138 7 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota. I1212 00:12:45.686223 7 store.go:1414] Monitoring apiservices.apiregistration.k8s.io count at <storage-prefix>//apiregistration.k8s.io/apiservices I1212 00:12:45.686707 7 store.go:1414] Monitoring apiservices.apiregistration.k8s.io count at <storage-prefix>//apiregistration.k8s.io/apiservices I1212 00:12:48.453407 7 deprecated_insecure_serving.go:50] Serving insecurely on 127.0.0.1:8080 I1212 00:12:48.454725 7 secure_serving.go:116] Serving securely on [::]:443 I1212 00:12:48.454763 7 autoregister_controller.go:136] Starting autoregister controller I1212 00:12:48.454770 7 cache.go:32] Waiting for caches to sync for autoregister controller I1212 00:12:48.454874 7 apiservice_controller.go:90] Starting APIServiceRegistrationController I1212 00:12:48.454892 7 controller.go:84] Starting OpenAPI AggregationController I1212 00:12:48.454902 7 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I1212 00:12:48.454935 7 crdregistration_controller.go:112] Starting crd-autoregister controller I1212 00:12:48.454932 7 crd_finalizer.go:242] Starting CRDFinalizer I1212 00:12:48.454962 7 available_controller.go:278] Starting AvailableConditionController I1212 00:12:48.454967 7 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I1212 00:12:48.454969 7 naming_controller.go:284] Starting NamingConditionController I1212 00:12:48.454994 7 establishing_controller.go:73] Starting EstablishingController I1212 00:12:48.454950 7 controller_utils.go:1027] Waiting for caches to sync for crd-autoregister controller I1212 00:12:48.455033 7 customresource_discovery_controller.go:199] Starting DiscoveryController I1212 00:12:58.923688 7 trace.go:76] Trace[1029194318]: "Create /api/v1/namespaces/kube-system/serviceaccounts" (started: 2018-12-12 00:12:48.921492128 +0000 UTC m=+6.629693070) (total time: 10.002174722s): Trace[1029194318]: [10.002174722s] [10.00039192s] END I1212 00:13:08.925557 7 trace.go:76] Trace[645995136]: "Create /apis/rbac.authorization.k8s.io/v1beta1/clusterrolebindings" (started: 2018-12-12 00:12:58.924586395 +0000 UTC m=+16.632787332) (total time: 10.000946296s): Trace[645995136]: [10.000946296s] [10.00036682s] END I1212 00:13:24.847128 7 shared_informer.go:119] stop requested I1212 00:13:24.847145 7 shared_informer.go:119] stop requested I1212 00:13:24.847146 7 shared_informer.go:119] stop requested I1212 00:13:24.847144 7 secure_serving.go:156] Stopped listening on 127.0.0.1:8080 I1212 00:13:24.847158 7 shared_informer.go:119] stop requested I1212 00:13:24.847160 7 shared_informer.go:119] stop requested I1212 00:13:24.847158 7 shared_informer.go:119] stop requested E1212 00:13:24.847165 7 customresource_discovery_controller.go:202] timed out waiting for caches to sync I1212 00:13:24.847168 7 crd_finalizer.go:246] Shutting down CRDFinalizer E1212 00:13:24.847171 7 controller_utils.go:1030] Unable to sync caches for crd-autoregister controller I1212 00:13:24.847172 7 shared_informer.go:119] stop requested I1212 
00:13:24.847171 7 customresource_discovery_controller.go:203] Shutting down DiscoveryController E1212 00:13:24.847180 7 cache.go:35] Unable to sync caches for autoregister controller E1212 00:13:24.847148 7 cache.go:35] Unable to sync caches for APIServiceRegistrationController controller I1212 00:13:24.847157 7 establishing_controller.go:77] Shutting down EstablishingController I1212 00:13:24.847135 7 shared_informer.go:119] stop requested I1212 00:13:24.847215 7 secure_serving.go:156] Stopped listening on [::]:443 I1212 00:13:24.847215 7 controller.go:171] Shutting down kubernetes service endpoint reconciler E1212 00:13:24.847225 7 cache.go:35] Unable to sync caches for AvailableConditionController controller I1212 00:13:24.847152 7 naming_controller.go:288] Shutting down NamingConditionController I1212 00:13:24.847186 7 controller.go:90] Shutting down OpenAPI AggregationController I1212 00:13:24.848248 7 crdregistration_controller.go:117] Shutting down crd-autoregister controller I1212 00:13:24.849329 7 autoregister_controller.go:141] Shutting down autoregister controller I1212 00:13:24.850406 7 apiservice_controller.go:94] Shutting down APIServiceRegistrationController I1212 00:13:24.851479 7 available_controller.go:282] Shutting down AvailableConditionController E1212 00:13:34.847575 7 controller.go:173] rpc error: code = Unavailable desc = transport is closing E1212 00:13:48.464293 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.464359 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.465402 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} I1212 00:13:48.466507 7 trace.go:76] Trace[1330970842]: "List /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations" (started: 2018-12-12 00:12:48.464197188 +0000 UTC m=+6.172398126) (total time: 1m0.00229233s): Trace[1330970842]: [1m0.00229233s] [1m0.002288147s] END E1212 00:13:48.466971 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.467596 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.468649 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} I1212 00:13:48.469741 7 trace.go:76] Trace[1868745693]: "List /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations" (started: 2018-12-12 00:12:48.466884694 +0000 UTC m=+6.175085674) (total time: 1m0.002842133s): Trace[1868745693]: [1m0.002842133s] [1m0.002837372s] END E1212 00:13:48.470629 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.470821 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.470927 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.471076 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Secret: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets) E1212 00:13:48.471550 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.471675 7 reflector.go:134] 
k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:117: Failed to list *apiextensions.CustomResourceDefinition: the server was unable to return a response in the time allotted, but may still be processing the request (get customresourcedefinitions.apiextensions.k8s.io) E1212 00:13:48.471884 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} I1212 00:13:48.472979 7 trace.go:76] Trace[1800478074]: "List /apis/admissionregistration.k8s.io/v1alpha1/initializerconfigurations" (started: 2018-12-12 00:12:48.470532433 +0000 UTC m=+6.178733370) (total time: 1m0.002432073s): Trace[1800478074]: [1m0.002432073s] [1m0.002427554s] END E1212 00:13:48.474023 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.475085 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.477257 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout I1212 00:13:48.479430 7 trace.go:76] Trace[1280622339]: "List /api/v1/secrets" (started: 2018-12-12 00:12:48.470911773 +0000 UTC m=+6.179112712) (total time: 1m0.008501979s): Trace[1280622339]: [1m0.008501979s] [1m0.008459411s] END E1212 00:13:48.480486 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} I1212 00:13:48.481577 7 trace.go:76] Trace[1804652784]: "List /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions" (started: 2018-12-12 00:12:48.471466491 +0000 UTC m=+6.179667429) (total time: 1m0.010100941s): Trace[1804652784]: [1m0.010100941s] [1m0.010060801s] END E1212 00:13:48.500678 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.500713 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.500806 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.500845 7 reflector.go:134] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:130: Failed to list *scheduling.PriorityClass: the server was unable to return a response in the time allotted, but may still be processing the request (get priorityclasses.scheduling.k8s.io) E1212 00:13:48.500882 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.ClusterRole: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io) E1212 00:13:48.500903 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.500953 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.500957 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.500957 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.500979 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.500987 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 
00:13:48.501007 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.501090 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.501091 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.501147 7 reflector.go:134] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:130: Failed to list *storage.StorageClass: the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io) E1212 00:13:48.501156 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.501160 7 reflector.go:134] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:130: Failed to list *core.LimitRange: the server was unable to return a response in the time allotted, but may still be processing the request (get limitranges) E1212 00:13:48.501238 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.501241 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.501275 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.501284 7 reflector.go:134] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:130: Failed to list *core.Secret: the server was unable to return a response in the time allotted, but may still be processing the request (get secrets) E1212 00:13:48.501395 7 reflector.go:134] k8s.io/kube-aggregator/pkg/client/informers/internalversion/factory.go:117: Failed to list *apiregistration.APIService: the server was unable to return a response in the time allotted, but may still be processing the request (get apiservices.apiregistration.k8s.io) E1212 00:13:48.501398 7 reflector.go:134] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:130: Failed to list *core.PersistentVolume: the server was unable to return a response in the time allotted, but may still be processing the request (get persistentvolumes) E1212 00:13:48.501439 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.ServiceAccount: the server was unable to return a response in the time allotted, but may still be processing the request (get serviceaccounts) E1212 00:13:48.501457 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1beta1.ValidatingWebhookConfiguration: the server was unable to return a response in the time allotted, but may still be processing the request (get validatingwebhookconfigurations.admissionregistration.k8s.io) E1212 00:13:48.501501 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Service: the server was unable to return a response in the time allotted, but may still be processing the request (get services) E1212 00:13:48.501524 7 reflector.go:134] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:130: Failed to list *core.ResourceQuota: the server was unable to return a response in the time allotted, but may still be processing the request (get resourcequotas) E1212 
00:13:48.501628 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Pod: the server was unable to return a response in the time allotted, but may still be processing the request (get pods) E1212 00:13:48.501687 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1beta1.MutatingWebhookConfiguration: the server was unable to return a response in the time allotted, but may still be processing the request (get mutatingwebhookconfigurations.admissionregistration.k8s.io) E1212 00:13:48.501731 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Namespace: the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces) E1212 00:13:48.501747 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.RoleBinding: the server was unable to return a response in the time allotted, but may still be processing the request (get rolebindings.rbac.authorization.k8s.io) E1212 00:13:48.501776 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.501974 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.502090 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Role: the server was unable to return a response in the time allotted, but may still be processing the request (get roles.rbac.authorization.k8s.io) I1212 00:13:48.502863 7 trace.go:76] Trace[2003208653]: "List /apis/scheduling.k8s.io/v1beta1/priorityclasses" (started: 2018-12-12 00:12:48.50058919 +0000 UTC m=+6.208790128) (total time: 1m0.002260482s): Trace[2003208653]: [1m0.002260482s] [1m0.002225647s] END E1212 00:13:48.503663 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.503680 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.503783 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.StorageClass: the server was unable to return a response in the time allotted, but may still be processing the request (get storageclasses.storage.k8s.io) E1212 00:13:48.503809 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.ClusterRoleBinding: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterrolebindings.rbac.authorization.k8s.io) E1212 00:13:48.503945 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.503984 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:48.504107 7 reflector.go:134] k8s.io/client-go/informers/factory.go:131: Failed to list *v1.Endpoints: the server was unable to return a response in the time allotted, but may still be processing the request (get endpoints) E1212 00:13:48.504981 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.508235 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.509303 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.510393 7 writers.go:168] apiserver was unable to 
write a JSON response: http: Handler timeout E1212 00:13:48.511474 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.512543 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.513624 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.514705 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.515781 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.516852 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.517948 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.521205 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.522313 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.523383 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.536332 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.538482 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.539550 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:48.542790 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout I1212 00:13:48.544949 7 trace.go:76] Trace[1776629030]: "List /apis/rbac.authorization.k8s.io/v1/clusterroles" (started: 2018-12-12 00:12:48.500703254 +0000 UTC m=+6.208904191) (total time: 1m0.044230748s): Trace[1776629030]: [1m0.044230748s] [1m0.044191519s] END E1212 00:13:48.546005 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.547081 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.548160 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.549233 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.550326 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.551402 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.552483 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.553559 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.554642 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.555717 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.556795 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.557885 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} 
E1212 00:13:48.558957 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.560038 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.561114 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.562191 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:48.563270 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} I1212 00:13:48.564370 7 trace.go:76] Trace[1439133424]: "List /apis/storage.k8s.io/v1/storageclasses" (started: 2018-12-12 00:12:48.50081678 +0000 UTC m=+6.209017718) (total time: 1m0.063540529s): Trace[1439133424]: [1m0.063540529s] [1m0.063506486s] END I1212 00:13:48.565439 7 trace.go:76] Trace[1683817720]: "List /api/v1/serviceaccounts" (started: 2018-12-12 00:12:48.500940682 +0000 UTC m=+6.209141619) (total time: 1m0.064488328s): Trace[1683817720]: [1m0.064488328s] [1m0.064456016s] END I1212 00:13:48.566514 7 trace.go:76] Trace[491490319]: "List /api/v1/persistentvolumes" (started: 2018-12-12 00:12:48.500986525 +0000 UTC m=+6.209187462) (total time: 1m0.065518757s): Trace[491490319]: [1m0.065518757s] [1m0.065482619s] END I1212 00:13:48.567591 7 trace.go:76] Trace[1474503645]: "List /apis/apiregistration.k8s.io/v1/apiservices" (started: 2018-12-12 00:12:48.500875642 +0000 UTC m=+6.209076622) (total time: 1m0.066706928s): Trace[1474503645]: [1m0.066706928s] [1m0.066656322s] END I1212 00:13:48.568677 7 trace.go:76] Trace[635852309]: "List /api/v1/secrets" (started: 2018-12-12 00:12:48.500940686 +0000 UTC m=+6.209141623) (total time: 1m0.06772371s): Trace[635852309]: [1m0.06772371s] [1m0.067687683s] END I1212 00:13:48.569755 7 trace.go:76] Trace[175882069]: "List /api/v1/limitranges" (started: 2018-12-12 00:12:48.500925548 +0000 UTC m=+6.209126486) (total time: 1m0.06881921s): Trace[175882069]: [1m0.06881921s] [1m0.068781117s] END I1212 00:13:48.570836 7 trace.go:76] Trace[122202535]: "List /api/v1/pods" (started: 2018-12-12 00:12:48.500951889 +0000 UTC m=+6.209152828) (total time: 1m0.069871581s): Trace[122202535]: [1m0.069871581s] [1m0.069830326s] END I1212 00:13:48.571908 7 trace.go:76] Trace[865708000]: "List /api/v1/resourcequotas" (started: 2018-12-12 00:12:48.501056066 +0000 UTC m=+6.209257003) (total time: 1m0.070840152s): Trace[865708000]: [1m0.070840152s] [1m0.070808759s] END I1212 00:13:48.572979 7 trace.go:76] Trace[955305514]: "List /apis/rbac.authorization.k8s.io/v1/rolebindings" (started: 2018-12-12 00:12:48.501055621 +0000 UTC m=+6.209256562) (total time: 1m0.071915466s): Trace[955305514]: [1m0.071915466s] [1m0.071884923s] END I1212 00:13:48.574060 7 trace.go:76] Trace[1423473229]: "List /api/v1/namespaces" (started: 2018-12-12 00:12:48.501149822 +0000 UTC m=+6.209350759) (total time: 1m0.072900808s): Trace[1423473229]: [1m0.072900808s] [1m0.072867725s] END I1212 00:13:48.575139 7 trace.go:76] Trace[802608035]: "List /apis/admissionregistration.k8s.io/v1beta1/validatingwebhookconfigurations" (started: 2018-12-12 00:12:48.501149182 +0000 UTC m=+6.209350109) (total time: 1m0.073979725s): Trace[802608035]: [1m0.073979725s] [1m0.073948799s] END I1212 00:13:48.576217 7 trace.go:76] Trace[1021760621]: "List /apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations" 
(started: 2018-12-12 00:12:48.501154269 +0000 UTC m=+6.209355207) (total time: 1m0.075052452s): Trace[1021760621]: [1m0.075052452s] [1m0.075012296s] END I1212 00:13:48.577292 7 trace.go:76] Trace[1969470568]: "List /api/v1/services" (started: 2018-12-12 00:12:48.501258385 +0000 UTC m=+6.209459322) (total time: 1m0.076025789s): Trace[1969470568]: [1m0.076025789s] [1m0.076004504s] END I1212 00:13:48.578373 7 trace.go:76] Trace[1871147953]: "List /apis/rbac.authorization.k8s.io/v1/roles" (started: 2018-12-12 00:12:48.501860956 +0000 UTC m=+6.210061881) (total time: 1m0.076503388s): Trace[1871147953]: [1m0.076503388s] [1m0.076480245s] END I1212 00:13:48.579453 7 trace.go:76] Trace[640462565]: "List /apis/storage.k8s.io/v1/storageclasses" (started: 2018-12-12 00:12:48.503571787 +0000 UTC m=+6.211772724) (total time: 1m0.075871435s): Trace[640462565]: [1m0.075871435s] [1m0.075846101s] END I1212 00:13:48.580530 7 trace.go:76] Trace[759626822]: "List /apis/rbac.authorization.k8s.io/v1/clusterrolebindings" (started: 2018-12-12 00:12:48.50357283 +0000 UTC m=+6.211773767) (total time: 1m0.076948558s): Trace[759626822]: [1m0.076948558s] [1m0.076917912s] END I1212 00:13:48.581612 7 trace.go:76] Trace[647924664]: "List /api/v1/endpoints" (started: 2018-12-12 00:12:48.503957566 +0000 UTC m=+6.212158503) (total time: 1m0.077645739s): Trace[647924664]: [1m0.077645739s] [1m0.077595063s] END E1212 00:13:49.455364 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:49.455402 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:49.455432 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} E1212 00:13:49.455591 7 storage_rbac.go:154] unable to initialize clusterroles: the server was unable to return a response in the time allotted, but may still be processing the request (get clusterroles.rbac.authorization.k8s.io) E1212 00:13:49.455602 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"} W1212 00:13:49.455631 7 storage_scheduling.go:95] unable to get PriorityClass system-node-critical: the server was unable to return a response in the time allotted, but may still be processing the request (get priorityclasses.scheduling.k8s.io system-node-critical). Retrying... F1212 00:13:49.455641 7 hooks.go:188] PostStartHook "scheduling/bootstrap-system-priority-classes" failed: unable to add default system priority classes: the server was unable to return a response in the time allotted, but may still be processing the request (get priorityclasses.scheduling.k8s.io system-node-critical) E1212 00:13:49.489143 7 status.go:64] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"} E1212 00:13:49.500198 7 writers.go:168] apiserver was unable to write a JSON response: http: Handler timeout E1212 00:13:49.511157 7 client_ca_hook.go:72] Post https://[::1]:443/api/v1/namespaces: dial tcp [::1]:443: connect: connection refused
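For anyone trying to dig further, a few checks on an affected master might surface the underlying etcd state. This is only a hedged sketch: the endpoints (127.0.0.1:4001 for the main cluster, 127.0.0.1:4002 for events) are taken from the --etcd-servers flags in the log above, and the container-lookup step assumes a Docker runtime and an "etcd-manager" container name, which may differ in other setups.

# Check whether etcd is answering on the ports the apiserver expects
ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:4001 endpoint health
ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:4002 endpoint health

# Tail the etcd-manager container logs directly on the master
docker ps | grep etcd-manager
docker logs --tail 200 <etcd-manager-container-id>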
I was able to replicate this consistently. The one time a full upgrade did succeed, I rotated the cluster one more time with no configuration changes, and at that point I once again saw the corruption.
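A minimal sketch of that rotation step, assuming a standard kops workflow ($CLUSTER_NAME is a placeholder; --force rolls the instance groups even when kops reports no pending changes):

kops rolling-update cluster $CLUSTER_NAME --yes --force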
To rule out the specific k8s and etcd versions I picked, I once again created a new Kops cluster and then updated k8s and etcd to the versions mentioned above. This time, however, I set the etcd provisioner in Kops to legacy. The cluster upgrade succeeded with no issues, and subsequent cluster rotations have not caused any visible problems.
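For reference, a hedged sketch of how the provisioner can be pinned to legacy; the provider field name is assumed from the kops cluster spec of that era, and $CLUSTER_NAME is a placeholder:

# 'kops edit cluster' opens the cluster spec; the etcdClusters entries would
# look roughly like this with the legacy provisioner selected:
#
#   etcdClusters:
#   - etcdMembers: [...]
#     name: main
#     provider: Legacy
#   - etcdMembers: [...]
#     name: events
#     provider: Legacy
#
kops edit cluster $CLUSTER_NAME
kops update cluster $CLUSTER_NAME --yes
kops rolling-update cluster $CLUSTER_NAME --yes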