Describe the bug
We ran into a Deployment resource error while migrating jaeger-operator from version v1.18.0 to v1.24.0:
time="2021-09-29T14:57:01Z" level=error msg="failed to apply the changes" error="Deployment.apps \"tracingstack-2a1696c6-sg-cce2e81c\" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{\"app\":\"jaeger\", \"app.kubernetes.io/component\":\"all-in-one\", \"app.kubernetes.io/instance\":\"tracingstack-2a1696c6-sg-cce2e81c\", \"app.kubernetes.io/managed-by\":\"jaeger-operator\", \"app.kubernetes.io/name\":\"tracingstack-2a1696c6-sg-cce2e81c\", \"app.kubernetes.io/part-of\":\"jaeger\", \"tracing.fleet.ubisoft.com/stack-hash\":\"2a1696c6\", \"tracing.fleet.ubisoft.com/stack-name\":\"tracingstack-2a1696c6\"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable" execution="2021-09-29 14:57:01.8305477 +0000 UTC" instance=tracingstack-2a1696c6-sg-cce2e81c namespace=fleet-system
This is related to fix #1153, which was itself a fix for issue #629.
Issue #1531 is requesting a similar fix.
The migration breaks because deployment.spec.selector is an immutable field.
Therefore, as soon as jaeger.spec.allInOne.labels is not identical to the labels that were previously hard-coded by the jaeger-operator (see here),
the migration of the Deployment resource from v1.18.1 to v1.19.0 fails.
I can see that @abstulo in #629 proposed a solution that wouldn't have caused any migration errors, since he proposed to:
Keep the deployment selector labels built from the default labels only.
Merge the jaeger.spec.allInOne.labels with the default labels into deployment.spec.template.objectMeta.labels.
Example:
return &appsv1.Deployment{
	...
	Spec: appsv1.DeploymentSpec{
		Selector: &metav1.LabelSelector{
			MatchLabels: a.labels(), // <--- same as before
		},
		Template: corev1.PodTemplateSpec{
			ObjectMeta: metav1.ObjectMeta{
				Labels: mergeLabels(commonSpec.Labels, a.labels()), // <--- merge of the default selector labels and the user-defined labels
			},
		},
	},
}
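The mergeLabels helper referenced above is not shown in the proposal; a minimal sketch of what it could look like (the function name and precedence rule are assumptions, not the operator's actual implementation) is:

```go
package main

import "fmt"

// mergeLabels is a hypothetical sketch: it merges two label maps into a new
// map, with entries from the second map (the operator defaults) taking
// precedence on key conflicts, so the selector labels remain a subset of
// the pod template labels.
func mergeLabels(userLabels, defaultLabels map[string]string) map[string]string {
	merged := make(map[string]string, len(userLabels)+len(defaultLabels))
	for k, v := range userLabels {
		merged[k] = v
	}
	for k, v := range defaultLabels {
		merged[k] = v
	}
	return merged
}

func main() {
	user := map[string]string{"team": "fleet", "app": "custom"}
	defaults := map[string]string{"app": "jaeger", "app.kubernetes.io/part-of": "jaeger"}
	fmt.Println(mergeLabels(user, defaults))
}
```

Giving the defaults precedence guarantees the Deployment's selector still matches the merged pod labels.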
That being said, changing back to what it was, or implementing the present suggestion, will also cause the same migration error, since either change modifies the LabelSelector.
Therefore, a solution that would always guarantee no migration error would be to:
1. Fetch the current Deployment from Kubernetes.
2. If it exists:
2.1 Reuse the labels already present in deployment.spec.selector.matchLabels.
3. If it doesn't exist:
3.1 Use your standard selector labels a.labels().
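The steps above can be sketched as follows. This is a self-contained illustration, not the operator's code: deploymentGetter stands in for the Kubernetes API lookup, and chooseSelectorLabels is a hypothetical name.

```go
package main

import "fmt"

// deploymentGetter abstracts "fetch the current Deployment" (step 1); in the
// real operator this would be a Kubernetes API call returning the existing
// selector labels, if any.
type deploymentGetter func(name string) (selectorLabels map[string]string, found bool)

// chooseSelectorLabels implements the proposed rule: reuse the existing
// (immutable) selector labels when the Deployment exists (step 2.1),
// otherwise fall back to the default labels, i.e. a.labels() (step 3.1).
func chooseSelectorLabels(get deploymentGetter, name string, defaultLabels map[string]string) map[string]string {
	if existing, found := get(name); found {
		return existing // 2.1: keep the immutable selector as-is
	}
	return defaultLabels // 3.1: fresh Deployment, standard labels are safe
}

func main() {
	oldSelector := map[string]string{"app": "jaeger", "app.kubernetes.io/name": "my-jaeger"}
	get := func(name string) (map[string]string, bool) {
		if name == "my-jaeger" {
			return oldSelector, true
		}
		return nil, false
	}
	// Existing Deployment: the old selector labels are reused untouched.
	fmt.Println(chooseSelectorLabels(get, "my-jaeger", map[string]string{"app": "jaeger"}))
	// New Deployment: the default labels are used.
	fmt.Println(chooseSelectorLabels(get, "new-jaeger", map[string]string{"app": "jaeger"}))
}
```

Because the selector is never rewritten for an existing Deployment, the API server's "field is immutable" validation can never be triggered by an upgrade.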
This is not a big issue for us, since the workaround is as simple as deleting the faulty Deployment and letting the jaeger-operator re-create it.
For now it only affects the AllInOne strategy, which we only use for local development.
However, if #1531 is implemented the same way, it might cause some headaches for our team 😁.
To Reproduce
Steps to reproduce the behavior:
1. Deploy the jaeger-operator version 1.18.1.
2. Deploy a Jaeger custom resource with valid, additional jaeger.spec.allInOne.labels.
3. Wait for it to stabilize.
4. Update the jaeger-operator to version 1.19.0. You should see a similar error:
time="2021-09-29T14:57:01Z" level=error msg="failed to apply the changes" error="Deployment.apps \"tracingstack-2a1696c6-sg-cce2e81c\" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{\"app\":\"jaeger\", \"app.kubernetes.io/component\":\"all-in-one\", \"app.kubernetes.io/instance\":\"tracingstack-2a1696c6-sg-cce2e81c\", \"app.kubernetes.io/managed-by\":\"jaeger-operator\", \"app.kubernetes.io/name\":\"tracingstack-2a1696c6-sg-cce2e81c\", \"app.kubernetes.io/part-of\":\"jaeger\", \"tracing.fleet.ubisoft.com/stack-hash\":\"2a1696c6\", \"tracing.fleet.ubisoft.com/stack-name\":\"tracingstack-2a1696c6\"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable" execution="2021-09-29 14:57:01.8305477 +0000 UTC" instance=tracingstack-2a1696c6-sg-cce2e81c namespace=fleet-system
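A minimal Jaeger custom resource of the kind used in step 2 above might look like this (the metadata name and the extra team label are illustrative assumptions):

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: my-jaeger
spec:
  strategy: allInOne
  allInOne:
    labels:
      team: fleet   # extra user-defined label, illustrative
```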
Expected behavior
Updating from one version to another should ideally not require any external modification to the operator-owned resources.
In this case, the only ways we can fix it are to delete the whole Jaeger custom resource or to delete the faulty Deployment resource.
Screenshots
N/A
Version (please complete the following information):
OS: Linux
Jaeger version: 1.18
Deployment: Kubernetes
What troubleshooting steps did you try?
I looked at the diff of the deployment selector labels between versions v1.18.0 and v1.24.0 in order to understand why they differed.
Additional context
No additional context