
Investigate e2e failure: Upgrading a multi-DC cluster, with a random pod deleted during the staging phase Upgrade #1605

Closed
johscheuer opened this issue May 2, 2023 · 1 comment
Labels
bug Something isn't working

Comments

johscheuer (Member) commented:

What happened?

Test failure: #1603 (comment)

What did you expect to happen?

The test should have passed.

How can we reproduce it (as minimally and precisely as possible)?

We have to investigate this further.
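For reference, the scenario in the test name amounts to deleting one randomly chosen pod of the cluster while the upgrade is in its staging phase. The sketch below is illustrative only (it is not the operator's e2e code) and assumes client-go; the namespace and label selector are placeholders.

package main

import (
	"context"
	"fmt"
	"math/rand"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// deleteRandomPod removes one randomly chosen Pod matching the label selector,
// mimicking "a random pod deleted during the staging phase" from the test name.
func deleteRandomPod(ctx context.Context, client kubernetes.Interface, namespace string, labelSelector string) error {
	pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: labelSelector})
	if err != nil {
		return err
	}
	if len(pods.Items) == 0 {
		return fmt.Errorf("no pods found for selector %q", labelSelector)
	}
	victim := pods.Items[rand.Intn(len(pods.Items))]
	return client.CoreV1().Pods(namespace).Delete(ctx, victim.Name, metav1.DeleteOptions{})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	// "app=fdb-test-cluster" is a placeholder selector, not the operator's real label.
	if err := deleteRandomPod(context.Background(), client, "default", "app=fdb-test-cluster"); err != nil {
		panic(err)
	}
}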

Anything else we need to know?

No response

FDB Kubernetes operator

$ kubectl fdb version
# paste output here

Kubernetes version

$ kubectl version
# paste output here

Cloud provider

johscheuer added the bug (Something isn't working) label on May 2, 2023
johscheuer (Member, Author) commented:

Fixed in the latest changes. Was a timing issue in the test framework.
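For context, a timing issue of this kind is typically addressed by replacing a fixed wait in the test with a poll-until-timeout check. The sketch below shows that generic pattern in Go; it is illustrative only and does not reflect the actual change.

package main

import (
	"context"
	"errors"
	"time"
)

// waitForCondition polls check at the given interval until it returns true,
// returns an error, or the context deadline expires.
func waitForCondition(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		done, err := check()
		if err != nil {
			return err
		}
		if done {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for condition")
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	// Hypothetical check, e.g. "all pods report the new FDB version".
	_ = waitForCondition(ctx, 5*time.Second, func() (bool, error) {
		return true, nil // replace with the real readiness check
	})
}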
