
Issue 94: Fix for zk scale up problem #96

Merged
merged 5 commits into master on Nov 13, 2019

Conversation

pbelgundi
Contributor

@pbelgundi pbelgundi commented Oct 31, 2019

Signed-off-by: pbelgundi [email protected]

Change log description

This issue occurs because, on scale down, the PVCs attached to the removed servers are not deleted.
The fix deletes those orphaned PVCs once the cluster has been scaled down.
For the fix to take effect, VolumeReclaimPolicy in zookeeperSpec needs to be set to Delete.
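A cluster spec opting into this behavior might look like the following sketch (the field layout and values shown here are assumptions based on the description above; check the CRD for the exact schema of your operator version):

```yaml
apiVersion: zookeeper.pravega.io/v1beta1
kind: ZookeeperCluster
metadata:
  name: zookeeper
spec:
  replicas: 3
  persistence:
    # Assumed field: with Delete, PVCs left behind by a scale down
    # are removed; the default (Retain) keeps them, matching
    # StatefulSet behavior.
    reclaimPolicy: Delete
```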

Purpose of the change

Fixes #94

How to verify it

Verified by scaling the cluster up and down 5 times after this fix; there were no issues with cluster scaling or pod restarts.
The PVCs attached to the removed pods were also deleted after each scale down.
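The core of the orphan detection described above can be sketched as follows. This is a self-contained illustration, not the operator's actual code: the PVC naming convention (`data-<cluster>-<ordinal>`) and the function name `isOrphan` are assumptions for the example.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// isOrphan reports whether a PVC named like "data-<cluster>-<ordinal>"
// belongs to a pod ordinal at or beyond the current replica count,
// i.e. a PVC left behind by a scale down.
func isOrphan(pvcName, cluster string, replicas int) bool {
	prefix := "data-" + cluster + "-"
	if !strings.HasPrefix(pvcName, prefix) {
		return false
	}
	ordinal, err := strconv.Atoi(strings.TrimPrefix(pvcName, prefix))
	if err != nil {
		return false
	}
	return ordinal >= replicas
}

func main() {
	// After scaling from 5 replicas down to 3, the PVCs for
	// ordinals 3 and 4 are orphaned.
	for _, name := range []string{"data-zk-0", "data-zk-3", "data-zk-4"} {
		fmt.Println(name, isOrphan(name, "zk", 3))
	}
}
```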

@pbelgundi pbelgundi requested review from spiegela and fpj October 31, 2019 12:48
@codecov-io

codecov-io commented Oct 31, 2019

Codecov Report

Merging #96 into master will decrease coverage by 1.45%.
The diff coverage is 0%.

Impacted file tree graph

@@            Coverage Diff            @@
##           master     #96      +/-   ##
=========================================
- Coverage   49.26%   47.8%   -1.46%     
=========================================
  Files           5       5              
  Lines         885     912      +27     
=========================================
  Hits          436     436              
- Misses        421     448      +27     
  Partials       28      28
Impacted Files Coverage Δ
...kg/apis/zookeeper/v1beta1/zz_generated.deepcopy.go 0% <ø> (ø) ⬆️
...er/zookeepercluster/zookeepercluster_controller.go 33.5% <0%> (-2.51%) ⬇️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update fb0a2c4...0609a23. Read the comment docs.

Contributor

@spiegela spiegela left a comment


As mentioned inline, I'm of the opinion that the PVCs should be retained by default, and deleted explicitly. This matches the behavior of a StatefulSet, and, IMO, makes better sense.

Other than that small change, and a couple of small comments, this looks good!

Thanks!

pkg/apis/zookeeper/v1beta1/zookeepercluster_types.go Outdated Show resolved Hide resolved
pkg/utils/finalizer_utils.go Outdated Show resolved Hide resolved
@pbelgundi pbelgundi requested a review from Tristan1900 November 8, 2019 05:07
Signed-off-by: pbelgundi <[email protected]>
@pbelgundi pbelgundi requested review from fpj and removed request for fpj November 12, 2019 07:24
err = r.client.Delete(context.TODO(), pvcDelete)
if err != nil {
	return err
}
if all {
Contributor


A conditional switch statement can be used to clean up this nested if / else block.
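The suggestion above, an expression-less `switch` flattening a nested if/else, might look like this sketch. The function and branch names are illustrative, not the operator's actual identifiers:

```go
package main

import "fmt"

// describeCleanup shows how a conditional (tagless) switch keeps the
// PVC-cleanup branches flat instead of nesting if/else blocks.
// The cases are evaluated top to bottom; the first true case wins.
func describeCleanup(all, reclaimDelete bool) string {
	switch {
	case !reclaimDelete:
		return "retain PVCs" // reclaim policy is not Delete
	case all:
		return "delete all PVCs" // cluster teardown
	default:
		return "delete orphaned PVCs" // scale down
	}
}

func main() {
	fmt.Println(describeCleanup(true, true))
	fmt.Println(describeCleanup(false, true))
	fmt.Println(describeCleanup(false, false))
}
```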


@fpj fpj Nov 13, 2019


Could we instead have separate functions for clean all and clean orphan? I find the separation of functionality based on a flag error prone. We can avoid the duplication of code by having a common support function.
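The refactor proposed here, two entry points instead of a boolean flag, with the duplication factored into one shared helper, can be sketched as follows. All names are hypothetical; deletion is simulated by collecting names so the example stays self-contained:

```go
package main

import (
	"fmt"
	"strings"
)

// deletePVCs is the common support function: it "deletes" (here,
// collects) every PVC matching the predicate.
func deletePVCs(pvcs []string, match func(string) bool) []string {
	var deleted []string
	for _, p := range pvcs {
		if match(p) {
			deleted = append(deleted, p)
		}
	}
	return deleted
}

// cleanupAllPVCs removes every PVC (cluster teardown path).
func cleanupAllPVCs(pvcs []string) []string {
	return deletePVCs(pvcs, func(string) bool { return true })
}

// cleanupOrphanPVCs removes only PVCs whose ordinal suffix marks them
// as beyond the current replica count (the suffix check stands in for
// real orphan detection).
func cleanupOrphanPVCs(pvcs []string, orphanSuffixes []string) []string {
	return deletePVCs(pvcs, func(p string) bool {
		for _, s := range orphanSuffixes {
			if strings.HasSuffix(p, s) {
				return true
			}
		}
		return false
	})
}

func main() {
	pvcs := []string{"data-zk-0", "data-zk-1", "data-zk-3"}
	fmt.Println(cleanupAllPVCs(pvcs))
	fmt.Println(cleanupOrphanPVCs(pvcs, []string{"-3"}))
}
```

Callers pick the intent by function name rather than by flag, which is what makes the flag-based version error prone to begin with.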

Contributor Author


Sure. Will do this.

Contributor Author


Fixed.

Contributor

@spiegela spiegela left a comment


I’ve made a comment regarding the style: using a switch statement in place of a nested if/else block. Please make that change, but otherwise, this looks great!

Thanks!


@fpj fpj left a comment


I left one comment, otherwise it looks good.

@pbelgundi pbelgundi removed the request for review from Tristan1900 November 13, 2019 09:44

@fpj fpj left a comment


+1, thanks @pbelgundi

@pbelgundi pbelgundi merged commit cea93ee into master Nov 13, 2019
amuraru pushed a commit to amuraru/zookeeper-operator that referenced this pull request Mar 4, 2020
* Issue 94: Fix for zk scale up problem

Signed-off-by: pbelgundi <[email protected]>

* code review comments

Signed-off-by: pbelgundi <[email protected]>

* code refactor for review comments

Signed-off-by: pbelgundi <[email protected]>

* Copyright notice fix

Signed-off-by: pbelgundi <[email protected]>

* fixed formatting

Signed-off-by: pbelgundi <[email protected]>
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
Labels
None yet
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Zookeeper instances fail to start on second scale up
4 participants