Environmental Info:
K3s Version:
k3s version v1.27.2+k3s-b66a1183
Node(s) CPU architecture, OS, and Version:
SLES15 SP3
Cluster Configuration:
Single-node or multi-node
Describe the bug:
After installing k3s with the tailscale integration, pods enter the CrashLoopBackOff state because obsolete routes from a previous installation remain in routing table 52.
Steps To Reproduce:
Install multiple clusters using the default route configuration
Expected behavior:
Routes in table 52 should be removed during uninstall to avoid network issues.
Actual behavior:
Routes in table 52 are still present after uninstalling.
Additional context / logs:
After uninstalling k3s
ip route show table 52
10.42.0.0/24 dev tailscale0
10.42.1.0/24 dev tailscale0
10.42.2.0/24 dev tailscale0
10.42.3.0/24 dev tailscale0
10.50.0.0/24 dev tailscale0
1.2.3.4 dev tailscale0
1.2.3.5 dev tailscale0
1.2.3.6 dev tailscale0
1.2.3.7 dev tailscale0
1.2.3.8 dev tailscale0
1.2.3.9 dev tailscale0
1.2.3.10 dev tailscale0
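As a stopgap until the uninstall script cleans this up itself, the leftover state can be removed by hand. This is only a sketch: it prints the commands by default (`DRY_RUN=1`) rather than executing them, and it assumes the routes live in table 52 and the interface is named tailscale0, as in the logs above.

```shell
#!/bin/sh
# Dry-run by default: set DRY_RUN=0 to actually execute the commands.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"        # show what would be run
  else
    "$@"               # really run it (needs root)
  fi
}

# Remove every route left behind in routing table 52
run ip route flush table 52
# Log the node out of the tailnet so it stops advertising its subnets
run tailscale logout
# Drop the stale interface and its leftover IP address
run ip link delete tailscale0
```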
After uninstalling k3s version v1.27.3-rc1+k3s1, the routes in table 52 persist and the tailscale interface still holds an IP address:
$ k3s -v
-bash: /usr/local/bin/k3s: No such file or directory
$ ip route show table 52
10.42.0.0/24 dev tailscale0
1.2.3.4 dev tailscale0
1.2.3.5 dev tailscale0
1.2.3.6 dev tailscale0
1.2.3.7 dev tailscale0
$ ip a|grep tailscale
3: tailscale0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1280 qdisc fq_codel state UNKNOWN group default qlen 500
inet <REDACTED>/32 scope global tailscale0
$ tailscale status --json
{
"Version": "1.42.0-t3a83d61ec-g6702f39bf",
...
This is caused by an incorrect scope of the tailscale key, which makes all nodes join the same tailscale network even when they belong to different k3s clusters. As a consequence, their subnets are pushed into table 52 as soon as the tailscale client logs in.
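A quick way to confirm this on an affected node is to check whether peers from other clusters are visible on the tailnet. The sketch below parses a hard-coded sample with the same shape as the `Peer` map in `tailscale status --json` output; on a real node you would pipe the live command output instead (the cluster hostnames here are made up for illustration):

```shell
#!/bin/sh
# Sample mimicking the Peer map of `tailscale status --json`.
sample='{"Peer":{"k1":{"HostName":"cluster-a-node1"},"k2":{"HostName":"cluster-b-node1"}}}'

# Extract every HostName value with plain grep/cut (no jq needed);
# on a live node, replace $sample with the real status output.
peers=$(printf '%s' "$sample" | grep -o '"HostName":"[^"]*"' | cut -d'"' -f4)
echo "$peers"
```

If node names from a different k3s cluster appear in the list, both clusters are logging in with the same key scope and will push their subnets into each other's table 52.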