Headless endpoint mirrors are incorrectly cleaned up as part of GC #12499
adleong pushed a commit that referenced this issue · Apr 26, 2024

…12500)

Subject

Fixes a bug where headless endpoint mirrors get cleaned up during GC.

Problem

When GC is triggered (which also happens at startup or when the link watch disconnects), the service mirror controller looks for services that can be GC'ed. It does this by looping through the local mirrored services on the cluster and extracting the name of the original remote service (by dropping the target-name suffix). However, this check doesn't account for the headless endpoint service mirrors (the per-pod cluster-IP services). For example, if you have nginx-svc in the west cluster with two replicas, the source cluster will end up with nginx-svc-west, nginx-set-0-west, and nginx-set-1-west. The logic would then parse the resource names for the latter two services as nginx-set-0 and nginx-set-1, which don't exist on the remote, and delete them as part of GC. The next sync recreates those mirrors, but you end up with downtime.

Solution

For those cases, instead of parsing the remote resource name from the local service name, retrieve it from the `mirror.linkerd.io/headless-mirror-svc-name` label.

Validation

Unit tests

Fixes #12499

Signed-off-by: Marwan Ahmed <[email protected]>
the-wondersmith pushed a commit to the-wondersmith/linkerd2 that referenced this issue · Apr 27, 2024

…inkerd#12500) (same commit message as above; additionally Signed-off-by: Mark S <[email protected]>)

the-wondersmith pushed a commit to the-wondersmith/linkerd2 that referenced this issue · Apr 29, 2024

…inkerd#12500) (same commit message as above)
What is the issue?
When GC is triggered (which also happens at startup or when the link watch disconnects), the service mirror controller looks for services that can be GC'ed. It does this by looping through the local mirrored services on the cluster and extracting the name of the original remote service (by dropping the target-name suffix).
linkerd2/multicluster/service-mirror/cluster_watcher.go
Lines 311 to 321 in 5760ed2
However, this check doesn't account for the headless endpoint service mirrors (the per-pod cluster-IP services). For example, if you have `nginx-svc` in the `west` cluster with two replicas, the source cluster will end up with `nginx-svc-west`, `nginx-set-0-west`, and `nginx-set-1-west`. The logic would then parse the resource names for the latter two services as `nginx-set-0` and `nginx-set-1`, which don't exist on the remote, and end up deleting them as part of GC. The next sync would recreate those mirrors, but you end up with downtime.
How can it be reproduced?
Logs, error output, etc
output of `linkerd check -o short`
N/A
Environment
Possible solution
Parsing the `mirror.linkerd.io/headless-mirror-svc-name` label from the mirrored headless endpoints could be a solution, because it will contain the root headless service name; we can then drop the remote suffix and keep the same logic.

Additional context
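The proposed fix can be sketched as below. The label key is the real one linkerd sets on headless endpoint mirrors; the function and types around it are an illustrative sketch, not the actual linkerd2 implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// headlessMirrorLabel is the label linkerd sets on per-pod mirrors of
// headless services; it holds the local root headless mirror's name.
const headlessMirrorLabel = "mirror.linkerd.io/headless-mirror-svc-name"

// remoteParentName resolves the remote service a local mirror came from.
// For headless endpoint mirrors it first substitutes the root headless
// service name from the label, then drops the cluster suffix, keeping
// the same parsing logic for everything else. Illustrative sketch only.
func remoteParentName(name, targetCluster string, labels map[string]string) string {
	if parent, ok := labels[headlessMirrorLabel]; ok {
		name = parent
	}
	return strings.TrimSuffix(name, "-"+targetCluster)
}

func main() {
	// Plain mirror: suffix stripping works as before.
	fmt.Println(remoteParentName("nginx-svc-west", "west", nil)) // nginx-svc
	// Headless endpoint mirror: the label points at the root headless
	// mirror, which resolves to a name that exists on the remote, so
	// GC no longer deletes the per-pod mirror.
	labels := map[string]string{headlessMirrorLabel: "nginx-svc-west"}
	fmt.Println(remoteParentName("nginx-set-0-west", "west", labels)) // nginx-svc
}
```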
No response
Would you like to work on fixing this bug?
yes