
Unable to launch downgrade if label version already set to dest #2551

Closed
TeddyAndrieux opened this issue May 15, 2020 · 0 comments
Assignees
Labels
complexity:easy (Something that requires less than a day to fix)
kind:bug (Something isn't working)
priority:medium (Medium priority issues, should only be postponed if no other option)
topic:lifecycle (Issues related to upgrade or downgrade of MetalK8s)

Comments


TeddyAndrieux commented May 15, 2020

Component:

'lifecycle'

What happened:

If, for whatever reason (a downgrade already failed, or ...), all the nodes already have the version label set to the destination version, then the downgrade fails.

NOTE: The version label is set before the node is downgraded, so it does not necessarily represent the running version but rather the desired version!

[root@test-solution ~]# kubectl get node -o go-template='{{range .items}}{{index .metadata.labels "metalk8s.scality.com/version"}}{{end}}{{printf "\n"}}'
2.4.2
[root@test-solution ~]# /srv/scality/metalk8s-2.4.3/downgrade.sh --destination-version 2.4.2
> Performing Pre-Downgrade checks... fail [45s]

Failure while running step 'Performing Pre-Downgrade checks'

Command: precheck_downgrade

Output:

<< BEGIN >>
[ERROR   ] Failure!
bootstrap_master:
----------
          ID: Invalid saltenv "metalk8s-2.4.3" consider using "metalk8s-2.4.2"
    Function: test.fail_without_changes
      Result: False
     Comment: Failure!
     Started: 08:44:54.030307
    Duration: 2.577 ms
     Changes:   
----------
          ID: Node bootstrap already in version 2.4.2
    Function: test.succeed_without_changes
      Result: True
     Comment: Success!
     Started: 08:44:54.033391
    Duration: 0.728 ms
     Changes:   

Summary for bootstrap_master
------------
Succeeded: 1
Failed:    1
------------
Total states run:     2
Total run time:   3.305 ms
time="2020-05-15T08:44:54Z" level=fatal msg="execing command in container failed: command terminated with exit code 1"
<< END >>

This script will now exit

What was expected:

This pre-check shouldn't fail if the script version is higher than the versions of ALL nodes.

Steps to reproduce

Set the metalk8s.scality.com/version label of all nodes to the destination version and try to downgrade.

Resolution proposal (optional):

Check that the saltenv version is higher than or equal to all node versions, rather than strictly equal.
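A minimal sketch of the proposed comparison, assuming versions compare correctly under `sort -V` (the variable names here are illustrative, not the actual names used by the MetalK8s pre-check):

```shell
# Hypothetical inputs: the saltenv version and the version labels of all nodes
SALTENV_VERSION="2.4.3"
NODE_VERSIONS="2.4.2
2.4.2
2.4.2"

# Highest version currently labeled on any node
highest_node=$(printf '%s\n' $NODE_VERSIONS | sort -V | tail -n 1)

# Pre-check passes when saltenv >= highest node version (not strictly equal)
if [ "$(printf '%s\n%s\n' "$highest_node" "$SALTENV_VERSION" | sort -V | tail -n 1)" = "$SALTENV_VERSION" ]; then
    echo "precheck OK: saltenv $SALTENV_VERSION >= highest node version $highest_node"
else
    echo "Invalid saltenv \"metalk8s-$SALTENV_VERSION\" consider using \"metalk8s-$highest_node\"" >&2
fi
```

With this comparison, a saltenv of 2.4.3 passes even when every node is already labeled 2.4.2, which is exactly the retry-after-failure scenario described above.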


Workaround:

Set the metalk8s.scality.com/version label of at least one node to the current running version.

E.g. when downgrading from 2.4.3 to 2.4.2, set the version label of one node back to 2.4.3, and then you can run the downgrade script again.
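Concretely, the workaround could be applied with a command along these lines (the node name `bootstrap` is illustrative; `--overwrite` is needed because the label is already set):

```shell
# Reset one node's version label to the currently running version (2.4.3),
# so the equality pre-check finds at least one node above the destination
kubectl label node bootstrap \
    metalk8s.scality.com/version=2.4.3 --overwrite
```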

@TeddyAndrieux TeddyAndrieux added kind:bug Something isn't working topic:lifecycle Issues related to upgrade or downgrade of MetalK8s complexity:easy Something that requires less than a day to fix priority:medium Medium priority issues, should only be postponed if no other option labels May 15, 2020
@TeddyAndrieux TeddyAndrieux self-assigned this May 15, 2020
@TeddyAndrieux TeddyAndrieux changed the title Unable to launch downgrade if label already set Unable to launch downgrade if label version already set May 15, 2020
@TeddyAndrieux TeddyAndrieux changed the title Unable to launch downgrade if label version already set Unable to launch downgrade if label version already set to dest May 15, 2020
TeddyAndrieux added a commit that referenced this issue May 15, 2020
Node version represents the version desired on a specific node, not the actual deployed version, so if for any reason a downgrade failed after all node versions were set to the destination one, we want to be able to run the downgrade again; the saltenv specified may therefore be higher than all the node versions.

Fixes: #2551
bert-e added a commit that referenced this issue May 15, 2020
TeddyAndrieux added a commit that referenced this issue May 15, 2020
bert-e added a commit that referenced this issue May 15, 2020
bert-e added a commit that referenced this issue May 18, 2020
@bert-e bert-e closed this as completed in 41a3583 May 18, 2020
TeddyAndrieux added a commit that referenced this issue May 20, 2020
In 41a3583 we added a check to allow `saltenv` to be greater than or equal to the highest node version, but we implemented the opposite.
It's fixed now in this commit.

Fixes: #2551