Jaeger instance is not getting upgraded automatically #1242
Could you please share the operator logs upon start? The operator should indeed take care of upgrading all instances at previous versions. I also need the output for
Operator logs:
That's just the first line of the log. Is that all you see? There should be a lot more :-)
I could see the log below:
Are you sure your Kubernetes cluster is in a sane state?
Yes, it is.
@chandu9333, the example you are using references a very old version of Jaeger:
Could you either try with a more recent version (1.19), or remove the image line from the CR?
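For reference, a minimal sketch of what that change could look like in the CR; the instance name is a placeholder and the surrounding fields are assumptions, not the exact content of the example file:

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: my-jaeger                          # placeholder instance name
spec:
  allInOne:
    # Either remove the image line entirely so the operator deploys the
    # default image matching its own version, or bump it explicitly:
    image: jaegertracing/all-in-one:1.19
```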
I changed `image: jaegertracing/all-in-one:1.13` to `image: jaegertracing/all-in-one:1.19` and observed a few things:
If you see the above output, first the jaeger-operator is upgraded, then the instance got updated automatically.
Do we need to make any changes to show the upgraded version under the instance? Thanks
Could you run the operator with `--log-level=debug`? The debug-level log entries will help us understand if the Jaeger Operator is even finding the Jaeger instances.
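For anyone following along, a minimal sketch of where that flag would go in the operator Deployment; the container name, image tag, and `start` argument are assumptions based on the stock operator.yaml, not an exact excerpt:

```yaml
# Relevant container spec inside the operator Deployment (sketch, not the full file)
containers:
  - name: jaeger-operator
    image: jaegertracing/jaeger-operator:1.20.0
    args: ["start", "--log-level=debug"]   # debug flag requested above
```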
I have enabled `--log-level=debug` in the operator.yaml file and deployed the operator. Logs during the deployment using jaeger-operator version 1.19 (the CR points to the 1.19 image):
**Logs during the upgrade (using the 1.20 image, which again created a new operator pod):**
Once the operator is up and running, I can see the jaeger-operator and Jaeger versions as 1.20 on the pod (by exec'ing into the pod), as below. The jaeger-operator version displays as below for the old deployment using the 1.19 image:
When I start the deployment again using the 1.20 image (via operator.yaml), the pod gets terminated and a new pod is re-created with the new version.
After the upgrade, it still shows the Jaeger version as 1.19.2.
One more question (sorry for my layman terms, as I am new to k8s): why do pods get terminated and re-created when we update the image? Is this expected behavior? Let's assume I have an operator and an all-in-one deployment ready with 1.19 and am able to see the traces; will those traces still be there after the upgrade?
Looks like the upgrade is indeed somehow broken; I was able to reproduce your case. @rubenvp8510, is it perhaps because of the semantic versioning changes? Could you investigate it?
This is the expected Kubernetes behavior: whenever a deployment changes, new pods are created to use the new configuration and the old pods are killed.
If you are using the in-memory storage, then yes. Otherwise, the collector should gracefully shut down and Kubernetes will only shift traffic to the new pod once it's determined to be healthy.
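To illustrate that last point, a purely illustrative sketch of the kind of Deployment settings involved; the container name, image tag, and probe port are assumptions and are not taken from the operator's generated manifests:

```yaml
# Illustrative rolling-update settings for a collector Deployment
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # keep the old pod serving until the new one is ready
      maxSurge: 1
  template:
    spec:
      containers:
        - name: jaeger-collector                       # hypothetical container name
          image: jaegertracing/jaeger-collector:1.20   # hypothetical image tag
          readinessProbe:                              # traffic shifts only after this passes
            httpGet:
              path: /
              port: 14269
```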
Okay. So if I use the production/streaming strategy with Elasticsearch as the backend storage, we don't lose any data after the upgrade, right?
You won't lose any data that is already in the storage. You should also not lose any in-flight data while the old pod is shutting down and the new one is starting, but I wouldn't be surprised if a few spans were lost during this process.
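For reference, a minimal sketch of such a production setup; the instance name and Elasticsearch URL are placeholders:

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: my-jaeger-prod                             # placeholder name
spec:
  strategy: production                             # separate collector and query deployments
  storage:
    type: elasticsearch
    options:
      es:
        server-urls: http://elasticsearch:9200     # placeholder Elasticsearch URL
```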
Thanks.
We might have it ready for the next release (1.21.0), which should be due in a month or so. But no promises.
The Jaeger instance is not getting upgraded automatically when I upgrade the Jaeger Operator from 1.19.0 to 1.20.0.
My environment: GKE Cluster
BTW, I am using https://github.com/jaegertracing/jaeger-operator/blob/master/deploy/operator.yaml for the Operator installation
and https://github.com/jaegertracing/jaeger-operator/blob/master/deploy/examples/all-in-one-with-options.yaml for the Jaeger instance (a rough sketch of that example is below).
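As noted in the comments above, that example pins an old all-in-one image, which is what keeps the instance from following the operator's version. A rough sketch of its shape; see the linked file for the exact content, as the name and options here are illustrative:

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: my-jaeger                          # the name in the actual file may differ
spec:
  strategy: allInOne
  allInOne:
    image: jaegertracing/all-in-one:1.13   # the old pinned image discussed above
    options:
      log-level: debug                     # illustrative option
  storage:
    type: memory
```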
Is my understanding wrong that we need to upgrade the Jaeger instance independently once the Operator is upgraded?
Thanks