[local-preview] Not working with fedora #12084

Closed · Pothulapati opened this issue Aug 12, 2022 · 2 comments

@Pothulapati (Contributor)
Bug description

For users who are on cgroups v2, there seems to be an issue during the startup of
k3s that prevents local-preview from working.
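To check whether a host is on cgroups v2 before digging into the logs, a couple of standard probes (nothing Gitpod-specific):

```sh
# On a cgroups v2 (unified hierarchy) host this prints "cgroup2fs";
# on cgroups v1 it prints "tmpfs".
stat -fc %T /sys/fs/cgroup

# Equivalently, only the unified hierarchy exposes cgroup.controllers
# at the root -- the same test the startup script in the log performs.
[ -f /sys/fs/cgroup/cgroup.controllers ] && echo "cgroups v2"
```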

The logs are below; the decisive errors are near the end (`stat failed on /dev/mapper/luks-...` and `Failed to start ContainerManager ... could not find device with major: 0, minor: 35`).
+ REQUIRED_CORES=4
+ nproc
+ total_cores=24
+ '[' 24 -lt 4 ]
+ echo 'Gitpod Domain: preview.gitpod-self-hosted.com'
Gitpod Domain: preview.gitpod-self-hosted.com
+ '[' -f /sys/fs/cgroup/cgroup.controllers ]
+ date -Iseconds
+ echo '[2022-08-12T08:16:03+00:00] [CgroupV2 Fix] Evacuating Root Cgroup ...'
[2022-08-12T08:16:03+00:00] [CgroupV2 Fix] Evacuating Root Cgroup ...
+ mkdir -p /sys/fs/cgroup/init
+ busybox xargs -rn1
+ sed -e 's/ / +/g' -e s/^/+/
+ date -Iseconds
+ echo '[2022-08-12T08:16:03+00:00] [CgroupV2 Fix] Done'
[2022-08-12T08:16:03+00:00] [CgroupV2 Fix] Done
+ mount --make-shared /sys/fs/cgroup
+ mount --make-shared /proc
+ mount --make-shared /var/gitpod
+ mkcert -install
Created a new local CA 💥
Installing to the system store is not yet supported on this Linux 😣 but Firefox and/or Chrome/Chromium will still work.
You can also manually install the root certificate at "/.local/share/mkcert/rootCA.pem".

+ cat //.local/share/mkcert/rootCA.pem
+ cat //.local/share/mkcert/rootCA.pem
+ FN_CACERT=./ca.pem
+ FN_SSLCERT=./ssl.crt
+ FN_SSLKEY=./ssl.key
+ cat //.local/share/mkcert/rootCA.pem
+ mkcert -cert-file ./ssl.crt -key-file ./ssl.key '*.ws.preview.gitpod-self-hosted.com' '*.preview.gitpod-self-hosted.com' preview.gitpod-self-hosted.com reg.preview.gitpod-self-hosted.com registry.default.svc.cluster.local gitpod.default ws-manager.default.svc ws-manager ws-manager-dev registry-facade server ws-manager-bridge ws-proxy ws-manager ws-daemon.default.svc ws-daemon wsdaemon

Created a new certificate valid for the following names 📜
 - "*.ws.preview.gitpod-self-hosted.com"
 - "*.preview.gitpod-self-hosted.com"
 - "preview.gitpod-self-hosted.com"
 - "reg.preview.gitpod-self-hosted.com"
 - "registry.default.svc.cluster.local"
 - "gitpod.default"
 - "ws-manager.default.svc"
 - "ws-manager"
 - "ws-manager-dev"
 - "registry-facade"
 - "server"
 - "ws-manager-bridge"
 - "ws-proxy"
 - "ws-manager"
 - "ws-daemon.default.svc"
 - "ws-daemon"
 - "wsdaemon"

Reminder: X.509 wildcards only go one level deep, so this won't match a.b.ws.preview.gitpod-self-hosted.com ℹ️

The certificate is at "./ssl.crt" and the key at "./ssl.key" ✅

It will expire on 12 November 2024 🗓

+ base64 -w0
+ CACERT='LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVlVENDQXVHZ0F3SUJBZ0lRQmptcFJzTFllZXVFRXJod3FiNDFJVEFOQmdrcWhraUc5dzBCQVFzRkFEQlYKTVI0d0hBWURWUVFLRXhWdGEyTmxjblFnWkdWMlpXeHZjRzFsYm5RZ1EwRXhGVEFUQmdOVkJBc1REREJsWldNNQpNbU15Wm1ZME1ERWNNQm9HQTFVRUF4TVRiV3RqWlhKMElEQmxaV001TW1NeVptWTBNREFlRncweU1qQTRNVEl3Ck9ERTJNRE5hRncwek1qQTRNVEl3T0RFMk1ETmFNRlV4SGpBY0JnTlZCQW9URlcxclkyVnlkQ0JrWlhabGJHOXcKYldWdWRDQkRRVEVWTUJNR0ExVUVDeE1NTUdWbFl6a3lZekptWmpRd01Sd3dHZ1lEVlFRREV4TnRhMk5sY25RZwpNR1ZsWXpreVl6Sm1aalF3TUlJQm9qQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FZOEFNSUlCaWdLQ0FZRUE1bUJKClJFeVV2cm1XbTJRNmlKclQ5KzQ2YXh4ZUxuRFBReWtPVktoTVZLZDBmdGR0emNzVVNtOFVRYjJWN2JmelovQzYKajJUVXJsbVVuc0ptTFpKSjZtOHN2YUw1c1NDODRXOS9ybCtrOG9Kb01Ta3R5dnoyekNxeWtEZGFiSWppQ2wrQQpnQk1GUFlEMyt3K3d4TzcrVUNRVDA3VFBiaFBIK09yU1JmQkt0MVVyU2NLTE5iM1ZnRW9ScCsxT28vYkpWcThMClZ5dG5rMnhoV2JWQ3FKMzlkSUxVOXYzV2QzMUxLdzFNckNTc01mNGhBeUxCT3JnZTVDQ1FGTGRpMmZXZ0ZDVisKQUxwUi9VZ1BRbU51aWtVb21ZVTl4ekttcEVPbG9vYUlVb2N0VUl4Tzg1Uk1CSFk4eGFhQkNXK3R2STJlYjdyagpCOEpDaERLZVNneCtPNzhCSHBSZ3ZuT2QvTkFoZ3ZURmlkUHgzZTVkY3h6S3VNRGtrM2RJcitLNFdWaWh1TkJSCmNpY09ER21lSUlURGlHMEJ3UTdkRFRaM2xCZXR4eGFMNkJMMHVndkthWllLMlU2TFg0TnpLdmJ5N0dVN0hFR3AKOHR1eCtzaWYrSGVjUVNHK2VKZGFpdFY0Vy9xN09vcmlFWkpxSUJpZ0ZiZnFkY2tTOWhnVDQra0U5ZTl6QWdNQgpBQUdqUlRCRE1BNEdBMVVkRHdFQi93UUVBd0lDQkRBU0JnTlZIUk1CQWY4RUNEQUdBUUgvQWdFQU1CMEdBMVVkCkRnUVdCQlRPQk8rUW9KUHcyckViRG1lTHNiYUpsR1R5bGpBTkJna3Foa2lHOXcwQkFRc0ZBQU9DQVlFQVFzT1kKdUtEcXdscWdhamczSGFjTlVWYkVmU2N1T3NoZU5PZjNWRmxvTVB5SVVONVNNbXRyYzcrU3RvNGh4dk9ja2Ezdgp6WWxqeVZocE5CWEdmUWRyK1hHWFBteEVpVWVlMHREbGYvaXU3TFI3akpGY3k1dzZhbDdRMEJiQ0hWaEI1b0Z2CmFqSkNzNllOOCtGVm5RRkxndXVZWkVXVllSem1kUXdINmliV3lPTDk3Q3BTVUZ0dzlJcGZTcjl4c2t6RmthaW4KVTUva1dTcjNWRzRkblZ0SmVZa3ZKSUZ3VlNtTVkvVTM4dXJ2VTMzTFVkOXBhTVpRQ0duR1hEY0UvVkZqcDBndApLQnkwRUw3K25tdHE3cWlzckVyYzlzVnQvZ3N3Mzhua1VoUkVoakJPQTVVK2JraHZuRDZzNGMxM2g1dUQrMVMxClcreFNFR0FMUnl1U0s3OTZyNG8zWlFURkptWlpXUEphdEhBcWgwbFZBajBhVVRQNjZrUHd2ZHVKTWE5R1lwcVIKTCtjaGdydks5dFlLbUVFVjROZnBQSlJQbzhLLzFSMXJOaUFTeWp6YWduT3hSb0xQdk1qZXNraTR0em9XNnNZMwpncng4L0pYWU1hYmYxaHR0WWZpbjJ3bTNZM3FvNWNYdjllWkF3cjhRbk8zak5nRzJ2TkxhUXpBL29hUTAKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo='
+ base64 -w0
+ SSLCERT=LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZYakNDQThhZ0F3SUJBZ0lSQU9BSU9sTlVTR2JOd2FuQVdhZnY1ekV3RFFZSktvWklodmNOQVFFTEJRQXcKVlRFZU1Cd0dBMVVFQ2hNVmJXdGpaWEowSUdSbGRtVnNiM0J0Wlc1MElFTkJNUlV3RXdZRFZRUUxFd3d3WldWagpPVEpqTW1abU5EQXhIREFhQmdOVkJBTVRFMjFyWTJWeWRDQXdaV1ZqT1RKak1tWm1OREF3SGhjTk1qSXdPREV5Ck1EZ3hOakF6V2hjTk1qUXhNVEV5TURneE5qQXpXakJBTVNjd0pRWURWUVFLRXg1dGEyTmxjblFnWkdWMlpXeHYKY0cxbGJuUWdZMlZ5ZEdsbWFXTmhkR1V4RlRBVEJnTlZCQXNURERCbFpXTTVNbU15Wm1ZME1EQ0NBU0l3RFFZSgpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFMbCt0NDgzdEZadVhUZWZiU2JSbGlBVWFuSk1zUEpPCjBCMXc2bnJiSm84NVRSQXhWWUo1UTlGZ3gwb3N4aFhPdWoxNzZSLzN0WGhzRkZLcXRPQ24rQ3E5T21nOFVLeUgKYXRETEZLcG9DczZEcks5amtUWUhHSGxuZzY3dFQydzdTNlljcHJER0w2ZXZLWkI5NmVibHVYbW84Sk9oQ1VqYwpPdUVFdUxjcFR4eUpmN1ZPeVUvY2Vtb3Nvd1I3MFRwN3ZXOG9QMmRLK20vbzkyODFZWnBJcFFTVmNKMXNrN0xRClAxakR2M3pmb3JKUFNydXdBTjN3N2VueEdoYjBmVmk5Tisxdmd5NDlBdUhkWDhETUgzemtldzE0Y09IcjFTU3QKRzRNRmVVd2xQWmxGRUU2b09SNzhRQ0RRbHBnOGFScGtnUHgzTlFscXl2NmtvVFhUbEpLeUhza0NBd0VBQWFPQwpBYnd3Z2dHNE1BNEdBMVVkRHdFQi93UUVBd0lGb0RBVEJnTlZIU1VFRERBS0JnZ3JCZ0VGQlFjREFUQWZCZ05WCkhTTUVHREFXZ0JUT0JPK1FvSlB3MnJFYkRtZUxzYmFKbEdUeWxqQ0NBVzRHQTFVZEVRU0NBV1V3Z2dGaGdpTXEKTG5kekxuQnlaWFpwWlhjdVoybDBjRzlrTFhObGJHWXRhRzl6ZEdWa0xtTnZiWUlnS2k1d2NtVjJhV1YzTG1kcApkSEJ2WkMxelpXeG1MV2h2YzNSbFpDNWpiMjJDSG5CeVpYWnBaWGN1WjJsMGNHOWtMWE5sYkdZdGFHOXpkR1ZrCkxtTnZiWUlpY21WbkxuQnlaWFpwWlhjdVoybDBjRzlrTFhObGJHWXRhRzl6ZEdWa0xtTnZiWUlpY21WbmFYTjAKY25rdVpHVm1ZWFZzZEM1emRtTXVZMngxYzNSbGNpNXNiMk5oYklJT1oybDBjRzlrTG1SbFptRjFiSFNDRm5kegpMVzFoYm1GblpYSXVaR1ZtWVhWc2RDNXpkbU9DQ25kekxXMWhibUZuWlhLQ0RuZHpMVzFoYm1GblpYSXRaR1YyCmdnOXlaV2RwYzNSeWVTMW1ZV05oWkdXQ0JuTmxjblpsY29JUmQzTXRiV0Z1WVdkbGNpMWljbWxrWjJXQ0NIZHoKTFhCeWIzaDVnZ3AzY3kxdFlXNWhaMlZ5Z2hWM2N5MWtZV1Z0YjI0dVpHVm1ZWFZzZEM1emRtT0NDWGR6TFdSaApaVzF2Ym9JSWQzTmtZV1Z0YjI0d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dHQkFGM282L05oSmtvNXNJR3BTYVQ4ClVCS1ZtR3JPcVVWeExIaDVTd3F3MUlCeDYydE1mTENTOE1OM3BCbENaeWM4UHc4aTBPaTJCVGQweFlFYjVOZ1kKNGQyZnRzbStzZ2pUcVYwbWRHU3cvTXNSNG1UQzFPZTJyemtrTS9NOTVHODZiWDRsVFBBNnFSSVpQa1MwZVo4ZApER2lrckJpaEhoRFJwOHBTeWhVUzRxZUhqdVVnQ3ZWZ3hBMkx3ZW10SDUyNUZTeFBPc0xuRE9uQXNaMG1sN1VzCmp0bDNSV05ZWXNGeFJnYmdENVNWcEs5OHM0MTdzZEVyQXJnZTRqV1Njb3NubjhWbUhVSXNGdG5wZmhUZ2w1YXoKNFo2SzlOSWVZM1VydUF3bjdsWWdxTVBQOXpuTCthamNwSHNITkNwZWJkV1VhTC8zZXdCVFNaQ3ZNTURmeTFQZAo4Nm83UUJ4OHhpbWRTOHZCTURZV0RnSHBHVU5xS2JaWDZ2UDRNQzMrUzZ6NkJZTkczVU1QUUFwd1MrMDJSMURlCnFJNGJjcUpvSGozR0hJUmVsODdsYUN5clVUOVgwaUl0SmcyRFlpcTdrTmtJQWlGU3h6d2gvRE12WWZpL1dOUzAKZlpYMnEyMUpod2xNeElzbU1SL2xaR2w2QmUrNHpacGRnd296UmpEa2oxS2ZvUT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
+ base64 -w0
+ SSLKEY=LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2Z0lCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktnd2dnU2tBZ0VBQW9JQkFRQzVmcmVQTjdSV2JsMDMKbjIwbTBaWWdGR3B5VExEeVR0QWRjT3A2MnlhUE9VMFFNVldDZVVQUllNZEtMTVlWenJvOWUra2Y5N1Y0YkJSUwpxclRncC9ncXZUcG9QRkNzaDJyUXl4U3FhQXJPZzZ5dlk1RTJCeGg1WjRPdTdVOXNPMHVtSEthd3hpK25yeW1RCmZlbm01Ymw1cVBDVG9RbEkzRHJoQkxpM0tVOGNpWCsxVHNsUDNIcHFMS01FZTlFNmU3MXZLRDluU3ZwdjZQZHYKTldHYVNLVUVsWENkYkpPeTBEOVl3Nzk4MzZLeVQwcTdzQURkOE8zcDhSb1c5SDFZdlRmdGI0TXVQUUxoM1YvQQp6Qjk4NUhzTmVIRGg2OVVrclJ1REJYbE1KVDJaUlJCT3FEa2UvRUFnMEphWVBHa2FaSUQ4ZHpVSmFzcitwS0UxCjA1U1NzaDdKQWdNQkFBRUNnZ0VBSy9CY1FzeUxKejRWVHF1eEMxVHlIcjgzUjhQcTFqcmRDVnhKN3JnaXRpSjQKb3JGTTlBOE5oWGRMUGNMRldUMFMyS1dWWDBFcDkxQ0NyK0pIM2o5cmhaUTFWYU9UNklwYlB3SWI3eEdlSGJVTApIckNUSVIwbEt2emVNSDErSnNFVTlsQXJIQXlXRlQ1a3RobGRZcGhnQ3ZWOXB6cXFIRnd1aGthOENvYjZlbU9nClYwOGl5WC9SQk9aY1dzc0ttTmtlbUdQSCtVZnh3dlJMWHkzdzFNdHlEU05nTFVHVlhjL3hZTjZWekVIVjg0YUUKOVBEcXhLTjRFT29BNHhXYW5ENzZJNlcwV0tHVXE4eUc2V1NndnZqVmRacjZSdFpZYTVDYVZsemoxaFhEMHNJKwoyNFFaMjgrKzFLcWZHcEQyWnZCZ0hxTjZISVc2aXhsQ1dzN1hqdUxxUVFLQmdRRGdzNlBRSjBOMUpqeS9STDViCkV4RVF2YXJvNVJ0QzNXZUhoOXVxMmxuTmp6bXczaHpubkFSUEZxMG4xdGZZL2JIMGlGQjdMUXNiemRHc0lFeHoKL2h0a0hlYVBuWHNEQnRSVGovUU16TThEWlVjNURQMVpMaC9RMHRvclVzenR0VVIxQ3NHWklhV3E1UlM2MXJXRQpJRXgwWFQ5UTEyb3V1NG5UQmtCN3BzdXhUUUtCZ1FEVFZRMk5ibm1VdU1SRWtBOHFmemVnL2pzS0xDdzhvdlpaCm5mcUhyR2RPMXprWTJpRCtXcVo4TjRqdFdrL0lURUVkQlNzbkpDbkFHSWJVTCsvR2tkRUVZYlhPVXZ3NUJiRjQKaVFzWHRTTU1iWXRqL3lNaWE2OHFRSEVPZUJrbmxEaEFpYlRNK2pZbTN4bEQwelFwWWF6UFcvRi9SZHdWS0ZQRgozNXZEN3ZXbGJRS0JnRFQ0NUpON3ppRmVCRkFyQ3AwNTMzb00zSy9PNHlCZVJidmp3VnVENGt2ZGlnSXlPcW8zClU2UzVlZFM4aDJJMlhLK0RPMFh1bG9IVmdhcU1hcm1sbkJ0OEdSQ2VWWk9mRm9za2txbzUxa3U4b28vR2lpdHQKL2o0aWx5QkRndUEvTFlaU0pOWE80dGxvNi93b0JkN0NKb1FBUDU3MVNhait1VDB3YWg4OGNTUzVBb0dCQUk1SAp0ZzhoY05PekxkaW5VTDZnMWZnYkVlN0FYS3dhWDFkb3FCS04vU08wZlNtQk9qTmxIcStFeURoYzFGZ2JGcitPCkNrYVk3MDc0ZEZZSlRCcFpjK3JLU2hmMkFQLzNHRXY1b0RFKzc3RGZVN2hvUHVSZXNaajF0K2d3N1dhYlFPQWEKbGxKbXB1eTJ5WkREY2x2bCtlM0ZqaXJOQXVadnR5OENaQ0dmRVYxbEFvR0JBTFpGalZhMTNyMFEwV0tydDhPeQpGZVVLOTBvWExydEZYMDRNWGNqWE4rTUJBbHJzQnVVaVRCQndUSE9zMUNCWXo4M2tienFydDRZZ3p4MnV6SmtGClU3a1F0MTdOM0YwMUNTYVAyWWo2MHZwNGpyanZjVHhUM0N1eWpFTHNrUjlaUWFyV2wycDRwdEt6TDJpSHpVNGwKUEVLV0F3YVdUQklDdmtBNk5DeHJmL0VzCi0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
+ mkdir -p /var/lib/rancher/k3s/server/manifests/gitpod
+ cat
+ cat
+ cat
+ cat
+ cat
+ cat
+ cat
+ /gitpod-installer init
+ yq e -i '.domain = "preview.gitpod-self-hosted.com"' config.yaml
+ yq e -i '.certificate.name = "https-certificates"' config.yaml
+ yq e -i '.certificate.kind = "secret"' config.yaml
+ yq e -i '.customCACert.name = "ca-key-pair"' config.yaml
+ yq e -i '.customCACert.kind = "secret"' config.yaml
+ yq e -i '.observability.logLevel = "debug"' config.yaml
+ yq e -i '.workspace.runtime.containerdSocket = "/run/k3s/containerd/containerd.sock"' config.yaml
+ yq e -i '.workspace.runtime.containerdRuntimeDir = "/var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v2.task/k8s.io/"' config.yaml
+ yq e -i '.experimental.telemetry.data.platform = "local-preview"' config.yaml
+ echo 'extracting images to download ahead...'
extracting images to download ahead...
+ /gitpod-installer render --use-experimental-config --config config.yaml
+ grep image:
+ sed 's/ *//g'
+ sed s/image://g
+ sed 's/\"//g'
+ sed s/^-//g
+ sort
+ uniq
+ rm -rf /var/lib/rancher/k3s/server/manifests/gitpod
+ /bin/k3s server --disable traefik --node-label 'gitpod.io/workload_meta=true' --node-label 'gitpod.io/workload_ide=true' --node-label 'gitpod.io/workload_workspace_services=true' --node-label 'gitpod.io/workload_workspace_regular=true' --node-label 'gitpod.io/workload_workspace_headless=true'
+ run_telemetry
+ sleep 100
time="2022-08-12T08:16:05.816831079Z" level=info msg="Starting k3s v1.21.12+k3s1 (1db3ab57)"
time="2022-08-12T08:16:05.820069160Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2022-08-12T08:16:05.820095179Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2022-08-12T08:16:05.823604719Z" level=info msg="Database tables and indexes are up to date"
time="2022-08-12T08:16:05.825212439Z" level=info msg="Kine listening on unix://kine.sock"
time="2022-08-12T08:16:05.832049589Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1660292165: notBefore=2022-08-12 08:16:05 +0000 UTC notAfter=2023-08-12 08:16:05 +0000 UTC"
time="2022-08-12T08:16:05.832422958Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1660292165: notBefore=2022-08-12 08:16:05 +0000 UTC notAfter=2023-08-12 08:16:05 +0000 UTC"
time="2022-08-12T08:16:05.832786488Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1660292165: notBefore=2022-08-12 08:16:05 +0000 UTC notAfter=2023-08-12 08:16:05 +0000 UTC"
time="2022-08-12T08:16:05.833154687Z" level=info msg="certificate CN=system:apiserver,O=system:masters signed by CN=k3s-client-ca@1660292165: notBefore=2022-08-12 08:16:05 +0000 UTC notAfter=2023-08-12 08:16:05 +0000 UTC"
time="2022-08-12T08:16:05.833515132Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1660292165: notBefore=2022-08-12 08:16:05 +0000 UTC notAfter=2023-08-12 08:16:05 +0000 UTC"
time="2022-08-12T08:16:05.833824882Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1660292165: notBefore=2022-08-12 08:16:05 +0000 UTC notAfter=2023-08-12 08:16:05 +0000 UTC"
time="2022-08-12T08:16:05.834199373Z" level=info msg="certificate CN=k3s-cloud-controller-manager signed by CN=k3s-client-ca@1660292165: notBefore=2022-08-12 08:16:05 +0000 UTC notAfter=2023-08-12 08:16:05 +0000 UTC"
time="2022-08-12T08:16:05.834834682Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1660292165: notBefore=2022-08-12 08:16:05 +0000 UTC notAfter=2023-08-12 08:16:05 +0000 UTC"
time="2022-08-12T08:16:05.835439835Z" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1660292165: notBefore=2022-08-12 08:16:05 +0000 UTC notAfter=2023-08-12 08:16:05 +0000 UTC"
time="2022-08-12T08:16:05.836021905Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1660292165: notBefore=2022-08-12 08:16:05 +0000 UTC notAfter=2023-08-12 08:16:05 +0000 UTC"
time="2022-08-12T08:16:05.836345942Z" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1660292165: notBefore=2022-08-12 08:16:05 +0000 UTC notAfter=2023-08-12 08:16:05 +0000 UTC"
time="2022-08-12T08:16:05.836947858Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1660292165: notBefore=2022-08-12 08:16:05 +0000 UTC notAfter=2023-08-12 08:16:05 +0000 UTC"
time="2022-08-12T08:16:06.038885186Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1660292165: notBefore=2022-08-12 08:16:05 +0000 UTC notAfter=2023-08-12 08:16:06 +0000 UTC"
time="2022-08-12T08:16:06.039235231Z" level=info msg="Active TLS secret  (ver=) (count 9): map[listener.cattle.io/cn-0eec92c2ff40:0eec92c2ff40 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.17.0.2:172.17.0.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=3A8D99E27B0DD569E0E54209560EBAD6A15370C2]"
time="2022-08-12T08:16:06.043934218Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I0812 08:16:06.045241     982 server.go:656] external host was not specified, using 172.17.0.2
I0812 08:16:06.045489     982 server.go:195] Version: v1.21.12+k3s1
time="2022-08-12T08:16:06.047613716Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
time="2022-08-12T08:16:06.047696511Z" level=info msg="Waiting for API server to become available"
time="2022-08-12T08:16:06.048079188Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2022-08-12T08:16:06.048564706Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --port=0 --profiling=false"
time="2022-08-12T08:16:06.049409197Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2022-08-12T08:16:06.049471724Z" level=info msg="To join node to cluster: k3s agent -s https://172.17.0.2:6443 -t ${NODE_TOKEN}"
time="2022-08-12T08:16:06.050472979Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
time="2022-08-12T08:16:06.050495761Z" level=info msg="Run: k3s kubectl"
time="2022-08-12T08:16:06.081109692Z" level=info msg="certificate CN=0eec92c2ff40 signed by CN=k3s-server-ca@1660292165: notBefore=2022-08-12 08:16:05 +0000 UTC notAfter=2023-08-12 08:16:06 +0000 UTC"
time="2022-08-12T08:16:06.083549359Z" level=info msg="certificate CN=system:node:0eec92c2ff40,O=system:nodes signed by CN=k3s-client-ca@1660292165: notBefore=2022-08-12 08:16:05 +0000 UTC notAfter=2023-08-12 08:16:06 +0000 UTC"
time="2022-08-12T08:16:06.115071300Z" level=info msg="Module overlay was already loaded"
time="2022-08-12T08:16:06.115118759Z" level=info msg="Module nf_conntrack was already loaded"
time="2022-08-12T08:16:06.115133657Z" level=info msg="Module br_netfilter was already loaded"
time="2022-08-12T08:16:06.115772162Z" level=warning msg="Failed to load kernel module iptable_nat with modprobe"
time="2022-08-12T08:16:06.120262158Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_max' to 786432"
time="2022-08-12T08:16:06.120282025Z" level=error msg="Failed to set sysctl: open /proc/sys/net/netfilter/nf_conntrack_max: permission denied"
time="2022-08-12T08:16:06.120304958Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400"
time="2022-08-12T08:16:06.120342278Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600"
time="2022-08-12T08:16:06.121456674Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2022-08-12T08:16:06.121543336Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
I0812 08:16:06.297130     982 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I0812 08:16:06.297713     982 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0812 08:16:06.297725     982 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0812 08:16:06.298286     982 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0812 08:16:06.298293     982 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0812 08:16:06.316827     982 instance.go:283] Using reconciler: lease
I0812 08:16:06.339782     982 rest.go:130] the default service ipfamily for this cluster is: IPv4
W0812 08:16:06.560698     982 genericapiserver.go:425] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0812 08:16:06.567690     982 genericapiserver.go:425] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0812 08:16:06.570225     982 genericapiserver.go:425] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0812 08:16:06.574576     982 genericapiserver.go:425] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0812 08:16:06.576312     982 genericapiserver.go:425] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W0812 08:16:06.580634     982 genericapiserver.go:425] Skipping API apps/v1beta2 because it has no resources.
W0812 08:16:06.580647     982 genericapiserver.go:425] Skipping API apps/v1beta1 because it has no resources.
I0812 08:16:06.587943     982 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0812 08:16:06.587963     982 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
time="2022-08-12T08:16:07.123372414Z" level=info msg="Containerd is now running"
time="2022-08-12T08:16:07.128440351Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2022-08-12T08:16:07.134845191Z" level=info msg="Handling backend connection request [0eec92c2ff40]"
time="2022-08-12T08:16:07.135543047Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=0eec92c2ff40 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels=gitpod.io/workload_meta=true,gitpod.io/workload_ide=true,gitpod.io/workload_workspace_services=true,gitpod.io/workload_workspace_regular=true,gitpod.io/workload_workspace_headless=true --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
I0812 08:16:07.136994     982 server.go:436] "Kubelet version" kubeletVersion="v1.21.12+k3s1"
time="2022-08-12T08:16:07.141740169Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
W0812 08:16:07.159096     982 manager.go:159] Cannot detect current cgroup on cgroup v2
I0812 08:16:07.159114     982 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
I0812 08:16:07.426947     982 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
I0812 08:16:07.426977     982 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I0812 08:16:07.427071     982 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key
I0812 08:16:07.427243     982 secure_serving.go:202] Serving securely on 127.0.0.1:6444
I0812 08:16:07.427275     982 controller.go:83] Starting OpenAPI AggregationController
I0812 08:16:07.427292     982 tlsconfig.go:240] Starting DynamicServingCertificateController
I0812 08:16:07.427511     982 available_controller.go:475] Starting AvailableConditionController
I0812 08:16:07.427530     982 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0812 08:16:07.427598     982 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0812 08:16:07.427606     982 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0812 08:16:07.427598     982 apf_controller.go:307] Starting API Priority and Fairness config controller
I0812 08:16:07.427696     982 autoregister_controller.go:141] Starting autoregister controller
I0812 08:16:07.427713     982 cache.go:32] Waiting for caches to sync for autoregister controller
I0812 08:16:07.427752     982 crdregistration_controller.go:111] Starting crd-autoregister controller
I0812 08:16:07.427761     982 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I0812 08:16:07.427813     982 customresource_discovery_controller.go:209] Starting DiscoveryController
I0812 08:16:07.427862     982 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key
I0812 08:16:07.427904     982 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt
I0812 08:16:07.427933     982 dynamic_cafile_content.go:167] Starting request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt
I0812 08:16:07.427982     982 controller.go:86] Starting OpenAPI controller
I0812 08:16:07.428004     982 naming_controller.go:291] Starting NamingConditionController
I0812 08:16:07.428024     982 establishing_controller.go:76] Starting EstablishingController
I0812 08:16:07.428047     982 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0812 08:16:07.428066     982 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0812 08:16:07.428092     982 crd_finalizer.go:266] Starting CRDFinalizer
I0812 08:16:07.427879     982 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0812 08:16:07.428113     982 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0812 08:16:07.436811     982 controller.go:611] quota admission added evaluator for: namespaces
E0812 08:16:07.442856     982 controller.go:151] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.43.0.1"}: failed to allocated ip:10.43.0.1 with error:cannot allocate resources of type serviceipallocations at this time
E0812 08:16:07.443914     982 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg: 
I0812 08:16:07.497582     982 shared_informer.go:247] Caches are synced for node_authorizer 
I0812 08:16:07.528585     982 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0812 08:16:07.528593     982 cache.go:39] Caches are synced for AvailableConditionController controller
I0812 08:16:07.528617     982 shared_informer.go:247] Caches are synced for crd-autoregister 
I0812 08:16:07.528627     982 cache.go:39] Caches are synced for autoregister controller
I0812 08:16:07.528631     982 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
I0812 08:16:07.528681     982 apf_controller.go:312] Running API Priority and Fairness config worker
I0812 08:16:08.427145     982 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0812 08:16:08.433206     982 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0812 08:16:08.436475     982 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0812 08:16:08.436494     982 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0812 08:16:08.440889     982 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0812 08:16:08.735792     982 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0812 08:16:08.762067     982 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0812 08:16:08.868399     982 lease.go:233] Resetting endpoints for master service "kubernetes" to [172.17.0.2]
I0812 08:16:08.869072     982 controller.go:611] quota admission added evaluator for: endpoints
I0812 08:16:08.871999     982 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
time="2022-08-12T08:16:09.438574406Z" level=info msg="Kube API server is now running"
time="2022-08-12T08:16:09.438588242Z" level=info msg="Waiting for cloud-controller-manager privileges to become available"
time="2022-08-12T08:16:09.438607428Z" level=info msg="k3s is up and running"
Flag --address has been deprecated, see --bind-address instead.
I0812 08:16:09.440778     982 controllermanager.go:175] Version: v1.21.12+k3s1
I0812 08:16:09.441056     982 deprecated_insecure_serving.go:56] Serving insecurely on 127.0.0.1:10252
time="2022-08-12T08:16:09.445728980Z" level=info msg="Creating CRD addons.k3s.cattle.io"
time="2022-08-12T08:16:09.448354755Z" level=info msg="Creating CRD helmcharts.helm.cattle.io"
time="2022-08-12T08:16:09.450539475Z" level=info msg="Creating CRD helmchartconfigs.helm.cattle.io"
time="2022-08-12T08:16:09.456079627Z" level=info msg="Waiting for CRD addons.k3s.cattle.io to become available"
time="2022-08-12T08:16:09.958336729Z" level=info msg="Done waiting for CRD addons.k3s.cattle.io to become available"
time="2022-08-12T08:16:09.958359381Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available"
time="2022-08-12T08:16:10.461081554Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available"
time="2022-08-12T08:16:10.461110187Z" level=info msg="Waiting for CRD helmchartconfigs.helm.cattle.io to become available"
time="2022-08-12T08:16:10.963896560Z" level=info msg="Done waiting for CRD helmchartconfigs.helm.cattle.io to become available"
time="2022-08-12T08:16:10.970242681Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-crd-10.14.100.tgz"
time="2022-08-12T08:16:10.970464105Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-10.14.100.tgz"
time="2022-08-12T08:16:10.970571647Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
time="2022-08-12T08:16:10.970708513Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"
time="2022-08-12T08:16:10.970821985Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
time="2022-08-12T08:16:10.970899490Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
time="2022-08-12T08:16:10.970984710Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
time="2022-08-12T08:16:10.971059410Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
time="2022-08-12T08:16:10.971173022Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
time="2022-08-12T08:16:10.971264383Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
time="2022-08-12T08:16:10.971354913Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
time="2022-08-12T08:16:10.971507439Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
time="2022-08-12T08:16:10.971604410Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
time="2022-08-12T08:16:11.072515742Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
time="2022-08-12T08:16:11.072581535Z" level=info msg="Creating deploy event broadcaster"
time="2022-08-12T08:16:11.072588378Z" level=info msg="Starting /v1, Kind=Secret controller"
I0812 08:16:11.073943     982 controller.go:611] quota admission added evaluator for: addons.k3s.cattle.io
time="2022-08-12T08:16:11.074234499Z" level=info msg="Waiting for control-plane node 0eec92c2ff40 startup: nodes \"0eec92c2ff40\" not found"
time="2022-08-12T08:16:11.075007015Z" level=info msg="Active TLS secret k3s-serving (ver=217) (count 9): map[listener.cattle.io/cn-0eec92c2ff40:0eec92c2ff40 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.17.0.2:172.17.0.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=3A8D99E27B0DD569E0E54209560EBAD6A15370C2]"
time="2022-08-12T08:16:11.077015435Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"ccm\", UID:\"c9c03f71-5a16-45fb-9a6e-004d4c9b06f5\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"218\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""
time="2022-08-12T08:16:11.089420458Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"ccm\", UID:\"c9c03f71-5a16-45fb-9a6e-004d4c9b06f5\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"218\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""
time="2022-08-12T08:16:11.095992882Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"custom-coredns\", UID:\"c3aebc38-3dec-4eaf-9e08-b2bc6401a00e\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"224\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/custom-coredns.yaml\""
I0812 08:16:11.100249     982 controller.go:611] quota admission added evaluator for: serviceaccounts
I0812 08:16:11.131772     982 controller.go:611] quota admission added evaluator for: deployments.apps
time="2022-08-12T08:16:11.141691402Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"custom-coredns\", UID:\"c3aebc38-3dec-4eaf-9e08-b2bc6401a00e\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"224\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/custom-coredns.yaml\""
I0812 08:16:11.197515     982 request.go:668] Waited for 1.048042351s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:6444/apis/networking.k8s.io/v1beta1?timeout=32s
time="2022-08-12T08:16:11.203364037Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"gitpod\", UID:\"ebfe0f6c-8d5d-4718-ba84-f027e31a0bf8\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"235\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/gitpod.yaml\""
I0812 08:16:11.247607     982 controller.go:611] quota admission added evaluator for: poddisruptionbudgets.policy
time="2022-08-12T08:16:11.450678162Z" level=info msg="Starting /v1, Kind=Pod controller"
time="2022-08-12T08:16:11.450673663Z" level=info msg="Starting /v1, Kind=Node controller"
time="2022-08-12T08:16:11.450686517Z" level=info msg="Starting /v1, Kind=Service controller"
time="2022-08-12T08:16:11.450692619Z" level=info msg="Starting /v1, Kind=Endpoints controller"
time="2022-08-12T08:16:11.474233579Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller"
time="2022-08-12T08:16:11.474270348Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
time="2022-08-12T08:16:11.474279916Z" level=info msg="Starting batch/v1, Kind=Job controller"
time="2022-08-12T08:16:11.646083340Z" level=info msg="Cluster dns configmap has been set successfully"
I0812 08:16:11.766730     982 serving.go:354] Generated self-signed cert in-memory
W0812 08:16:12.097454     982 authentication.go:308] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0812 08:16:12.097464     982 authentication.go:332] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0812 08:16:12.097471     982 authorization.go:184] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0812 08:16:12.099334     982 controllermanager.go:142] Version: v1.21.12+k3s1
I0812 08:16:12.100032     982 secure_serving.go:202] Serving securely on 127.0.0.1:10258
I0812 08:16:12.100091     982 tlsconfig.go:240] Starting DynamicServingCertificateController
time="2022-08-12T08:16:12.142875071Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=0eec92c2ff40 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
W0812 08:16:12.143151     982 server.go:224] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
W0812 08:16:12.143654     982 proxier.go:659] Failed to read file /lib/modules/5.18.13-200.fc36.x86_64/modules.builtin with error open /lib/modules/5.18.13-200.fc36.x86_64/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0812 08:16:12.144174     982 proxier.go:669] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0812 08:16:12.144642     982 proxier.go:669] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0812 08:16:12.145163     982 proxier.go:669] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0812 08:16:12.145407     982 proxier.go:669] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0812 08:16:12.145669     982 proxier.go:669] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
time="2022-08-12T08:16:12.147410011Z" level=warning msg="Running modprobe ip_vs failed with message: `modprobe: can't change directory to '5.18.13-200.fc36.x86_64': No such file or directory`, error: exit status 1"
E0812 08:16:12.154777     982 node.go:161] Failed to retrieve node info: nodes "0eec92c2ff40" not found
W0812 08:16:12.163245     982 fs.go:214] stat failed on /dev/mapper/luks-b627714b-8701-4a2a-bcbb-2bf0fb1d0957 with error: no such file or directory
W0812 08:16:12.186700     982 info.go:53] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I0812 08:16:12.187152     982 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
I0812 08:16:12.187303     982 container_manager_linux.go:291] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I0812 08:16:12.187389     982 container_manager_linux.go:296] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName:/k3s SystemCgroupsName: KubeletCgroupsName:/k3s ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
I0812 08:16:12.187418     982 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I0812 08:16:12.187431     982 container_manager_linux.go:327] "Initializing Topology Manager" policy="none" scope="container"
I0812 08:16:12.187438     982 container_manager_linux.go:332] "Creating device plugin manager" devicePluginEnabled=true
I0812 08:16:12.187622     982 kubelet.go:404] "Attempting to sync node with API server"
I0812 08:16:12.187639     982 kubelet.go:272] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
I0812 08:16:12.187658     982 kubelet.go:283] "Adding apiserver pod source"
I0812 08:16:12.187670     982 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
I0812 08:16:12.188293     982 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="containerd" version="v1.4.13-k3s1" apiVersion="v1alpha2"
W0812 08:16:12.188443     982 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I0812 08:16:12.188919     982 server.go:1191] "Started kubelet"
I0812 08:16:12.188974     982 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
W0812 08:16:12.189732     982 fs.go:588] stat failed on /dev/mapper/luks-b627714b-8701-4a2a-bcbb-2bf0fb1d0957 with error: no such file or directory
E0812 08:16:12.189783     982 cri_stats_provider.go:369] "Failed to get the info of the filesystem with mountpoint" err="failed to get device for dir \"/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs\": could not find device with major: 0, minor: 35 in cached partitions map" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
E0812 08:16:12.189826     982 kubelet.go:1306] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
I0812 08:16:12.189983     982 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
I0812 08:16:12.190046     982 volume_manager.go:279] "Starting Kubelet Volume Manager"
I0812 08:16:12.190086     982 desired_state_of_world_populator.go:141] "Desired state populator starts to run"
I0812 08:16:12.190420     982 server.go:409] "Adding debug handlers to kubelet server"
I0812 08:16:12.199830     982 shared_informer.go:240] Waiting for caches to sync for tokens
I0812 08:16:12.200784     982 cpu_manager.go:199] "Starting CPU manager" policy="none"
I0812 08:16:12.200794     982 cpu_manager.go:200] "Reconciling" reconcilePeriod="10s"
I0812 08:16:12.200804     982 state_mem.go:36] "Initialized new in-memory state store"
I0812 08:16:12.201617     982 policy_none.go:44] "None policy: Start"
W0812 08:16:12.201639     982 fs.go:588] stat failed on /dev/mapper/luks-b627714b-8701-4a2a-bcbb-2bf0fb1d0957 with error: no such file or directory
E0812 08:16:12.201658     982 kubelet.go:1384] "Failed to start ContainerManager" err="failed to get rootfs info: failed to get device for dir \"/var/lib/kubelet\": could not find device with major: 0, minor: 35 in cached partitions map"
+ kubectl wait '--timeout=-1s' '--for=condition=ready' pod -l 'app=gitpod,component!=migrations'
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?

Steps to reproduce

Run local-preview on a Linux distro with cgroups v2 (e.g. Fedora).
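For reference, launching local-preview looks roughly like the following (a sketch from the local-preview guide at the time; treat the image reference as an assumption that may have changed):

```sh
# Run the self-contained preview image; --privileged is required because
# it runs k3s (and nested containers) inside the container.
docker run --privileged --name gitpod --rm -it \
  -v /tmp/gitpod:/var/gitpod \
  eu.gcr.io/gitpod-core-dev/build/preview-install
```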

Workspace affected

No response

Expected behavior

No response

Example repository

No response

Anything else?

No response

@Pothulapati (Contributor, Author)

So, digging more into the specific logs: this seems to be an issue with Fedora and filesystem mounts inside Kubernetes rather than with Gitpod itself. Similar issues have been reported with other minimal Kubernetes tools; see kubernetes-sigs/kind#2411.

The workaround seems to be manually mounting the relevant host paths into the container. I'll update the specific user on the same, and see from there!
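For anyone who wants to try the kind-style workaround before it is documented, it amounts to bind-mounting the host's device-mapper nodes (and kernel modules) into the preview container. A sketch, assuming the same invocation as in the steps to reproduce, untested with local-preview:

```sh
# /dev/mapper lets cadvisor stat the LUKS-backed root device that the
# "stat failed on /dev/mapper/luks-..." errors in the log complain about;
# /lib/modules quiets the modprobe/ip_vs warnings from kube-proxy.
docker run --privileged --name gitpod --rm -it \
  -v /tmp/gitpod:/var/gitpod \
  -v /dev/mapper:/dev/mapper \
  -v /lib/modules:/lib/modules:ro \
  eu.gcr.io/gitpod-core-dev/build/preview-install
```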

Pothulapati changed the title from "[local-preview] Not working with cgroups v2" to "[local-preview] Not working with fedora" on Sep 7, 2022
@Pothulapati (Contributor, Author)

Closing this, as it's not a Gitpod-specific issue; it only happens with Fedora and a filesystem configuration for which a fix already exists.

Feel free to re-open if more users run into this specific issue, so that we can document it or take further action here.

Repository owner moved this from ⚒In Progress to ✨Done in 🚚 Security, Infrastructure, and Delivery Team (SID) Sep 19, 2022