Update Kafka PDB to use live Pods instead of the CR Spec #770
Conversation
Force-pushed from 3a3f558 to 8855602
Thank you for the PR!
@alungu relying on the number of Kafka pods might not be the best approach. For example if you have a 6 broker cluster and all 6 broker pods are up and running thus the PDB:min available will be 5. When a broker pod is deleted for whatever reason outside of If the PDB handling https://github.com/banzaicloud/koperator/blob/master/pkg/resources/kafka/kafka.go#L162 is moved as a last step after https://github.com/banzaicloud/koperator/blob/master/pkg/resources/kafka/kafka.go#L287 than it not might be an issue. What I'm thinking of is to determine the number of brokers for PDB handling from Status as that contains the list of running brokers and brokers pending graceful upscale/downscale. In this case the PDB handling could stay where it is now: https://github.com/banzaicloud/koperator/blob/master/pkg/resources/kafka/kafka.go#L162 What do you think? |
@stoader Thank you for mentioning this issue!
What's in this PR?
The Kafka PDB logic is updated to consider the total number of healthy Kafka brokers to be the number of Running Kafka pods (instead of the number of Kafka brokers specified in the Kafka CR).
Why?
The previous implementation led to a faulty PDB during scaling (both in and out), since adding or removing brokers takes some time.
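The change described above can be sketched as a small pure function. This is a hedged illustration, not the actual koperator code: the `minAvailable` helper and the "allow at most one disrupted broker" policy shown here are assumptions made for the example.

```go
package main

import "fmt"

// minAvailable computes a PDB minAvailable value from the number of
// currently Running broker pods, allowing at most one voluntary
// disruption. This mirrors the idea in the PR description: base the
// budget on live pods, not on the broker count in the CR spec.
func minAvailable(runningBrokers int) int {
	if runningBrokers <= 1 {
		// A zero- or one-broker cluster has no disruption headroom.
		return 0
	}
	return runningBrokers - 1
}

func main() {
	// With all 6 brokers running, minAvailable is 5; if only 4 pods
	// are Running mid-scale, the budget shrinks to 3 instead of
	// staying pinned to the spec-derived value.
	fmt.Println(minAvailable(6))
	fmt.Println(minAvailable(4))
}
```

Basing the count on Running pods keeps the budget consistent while brokers are being added or removed, at the cost of the timing concern raised in the review comment above.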
Additional context
Checklist