
ISSUE-147 - Upgrade k8s dependencies #156

Merged: 1 commit merged into pravega:master on Jun 15, 2020

Conversation

@amuraru (Contributor) commented Mar 30, 2020

  • upgrade to k8s 1.17.5 deps
  • upgrade to operator-sdk 0.17
  • upgrade controller-runtime to 0.5.2

Fixes #147
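For context, the three bumps above would land in go.mod roughly as below. This is an illustrative sketch, not the PR's actual diff: the module path matches the pravega/zookeeper-operator repository, the Kubernetes 1.17.5 libraries map to v0.17.5 module versions by the usual k8s.io versioning convention, and the Go directive version is an assumption.

```
// go.mod (sketch) - illustrative only; the exact pinned set is in the PR diff.
module github.com/pravega/zookeeper-operator

go 1.13 // assumed toolchain version

require (
	github.com/operator-framework/operator-sdk v0.17.0
	k8s.io/api v0.17.5
	k8s.io/apimachinery v0.17.5
	k8s.io/client-go v0.17.5
	sigs.k8s.io/controller-runtime v0.5.2
)
```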

@codecov-io commented Mar 30, 2020

Codecov Report

Merging #156 into master will not change coverage.
The diff coverage is 50.00%.


@@           Coverage Diff           @@
##           master     #156   +/-   ##
=======================================
  Coverage   47.89%   47.89%           
=======================================
  Files           6        6           
  Lines         952      952           
=======================================
  Hits          456      456           
  Misses        467      467           
  Partials       29       29           
Impacted Files Coverage Δ
...er/zookeepercluster/zookeepercluster_controller.go 35.91% <50.00%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 6fd0216...7bcaaed.

@anishakj (Contributor) left a comment

Where is the script update-kube-version.sh called?

@amuraru (Contributor, Author) commented Mar 31, 2020

> Where is the script update-kube-version.sh called?

Good call @anishakj - added it as a Makefile target.
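The wiring might look like the following Makefile fragment. This is a hypothetical sketch: the target name, the script's path, and the KUBE_VERSION variable are assumptions for illustration, not taken from the PR diff.

```
# Hypothetical sketch - target name, script path, and variable are assumed,
# not copied from the PR diff.
.PHONY: update-kube-version
update-kube-version:
	./scripts/update-kube-version.sh $(KUBE_VERSION)
```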

@amuraru changed the title from "ISSUE-147 - Upgrade controller-runtime library to 0.5" to "ISSUE-147 - Upgrade k8s dependencies" on May 3, 2020
@codecov-commenter commented Jun 3, 2020

Codecov Report

Merging #156 into master will not change coverage.
The diff coverage is 100.00%.


@@           Coverage Diff           @@
##           master     #156   +/-   ##
=======================================
  Coverage   82.39%   82.39%           
=======================================
  Files          11       11           
  Lines        1193     1193           
=======================================
  Hits          983      983           
  Misses        142      142           
  Partials       68       68           
Impacted Files Coverage Δ
...er/zookeepercluster/zookeepercluster_controller.go 59.37% <100.00%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 273a51c...ce60485.

@anishakj (Contributor) commented

Currently, I am testing the changes. Will keep you posted.

@anishakj (Contributor) commented

Could you please resolve the conflicts and rebase onto master?

- upgrade to k8s 1.17.5 deps
- upgrade to operator-sdk 0.17
- upgrade controller-runtime to 0.5.2
@amuraru (Contributor, Author) commented Jun 10, 2020

Rebased.

@anishakj (Contributor) commented

Currently, I am testing the changes. Will keep you posted.

In my testing I am seeing the issue below:

  1. Install zookeeper-operator 0.2.7
  2. Create a zookeeper cluster
  3. Upgrade the zookeeper-operator image from 0.2.7 to the newly built image
  4. The operator is upgraded, but the zookeeper pods are also restarted
NAME                                  READY   STATUS        RESTARTS   AGE
zookeeper-0                           1/1     Running       0          15m
zookeeper-1                           1/1     Terminating   0          18m
zookeeper-2                           1/1     Running       0          48s
zookeeper-3                           1/1     Running       0          4m20s
zookeeper-4                           1/1     Running       0          7m44s
zookeeper-operator-6c9f8f5654-sd496   1/1     Running       0          11m

Doing further analysis

@anishakj (Contributor) commented

The rolling upgrade was happening because the termination grace period was changed from 30s to 180s as part of another PR.
I have tested the below scenarios:

  1. Zookeeper operator upgrade
  2. Leader Election by creating operator with 3 replicas
  3. Creation and Deletion of zookeeper cluster
  4. Scale up/Down of zookeeper cluster
  5. Upgrade of zookeeper cluster
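The root cause above is worth spelling out: terminationGracePeriodSeconds lives inside the StatefulSet's pod template, and any change to the pod template causes the StatefulSet controller to roll all pods. A minimal YAML sketch of the field that changed (field placement follows the Kubernetes StatefulSet API; the surrounding structure is shown only for orientation):

```
# StatefulSet pod-template fragment (sketch). Changing any field here,
# including the grace period, makes the StatefulSet controller roll all pods.
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 180  # was 30 before the other PR
```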

@anishakj (Contributor) left a comment

LGTM

@anishakj anishakj merged commit ded6a82 into pravega:master Jun 15, 2020
@amuraru amuraru deleted the issue-147 branch June 15, 2020 18:00
Labels: none
Projects: none

Successfully merging this pull request may close this issue: Upgrade controller-runtime library to 0.5

4 participants