Behaviour upgrading cluster using --kubernetes-version with version that cannot be found #5809
Comments
/assign @nanikjava
cc: @tstromberg
@nanikjava I noticed your two commands have one difference. I wonder if that could be the source of the problem, since .20 doesn't exist and .2 exists?
Yes, it is done intentionally to test the behaviour. What I thought should happen was: when v1.15.2 is specified and it is not available and cannot be installed, it should not stop the user from using the correct version, in this case v1.15.20. What is happening is that minikube detects that the previous version used is v1.15.2, but it does not verify that the installation actually succeeded. I think minikube should be able to detect this and allow the user to install the correct version. In my opinion this will need to be fixed, but I want to understand first whether this is the correct behaviour?
Ah sorry, I missed that part you wrote, @nanikjava. Because of some issues we have had, we cannot downgrade a minikube cluster to a lower Kubernetes version, and you are right: if the Kubernetes version they entered is invalid, we should not store it as something that is on the VM! It is worth noting that we still do not plan on supporting downgrades. Thank you! You found a bug! It would be wonderful to fix it!
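As a rough illustration of that idea (not minikube's actual code): only persist the requested version into the profile once a release artifact is known to exist for it, so a bogus version never "sticks". The `profileConfig` type, the helper names, and the download URL pattern below are assumptions made for this sketch.

```go
package main

import (
	"fmt"
	"net/http"
)

// profileConfig is a stand-in for the profile's stored settings.
type profileConfig struct {
	KubernetesVersion string
}

// releaseExists checks whether a kubeadm binary is published for the
// requested version (URL pattern assumed; adjust to the real mirror).
func releaseExists(version string) bool {
	url := fmt.Sprintf(
		"https://storage.googleapis.com/kubernetes-release/release/%s/bin/linux/amd64/kubeadm",
		version)
	resp, err := http.Head(url)
	if err != nil {
		return false
	}
	resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

// setKubernetesVersion (hypothetical helper) only records the requested
// version once the release is known to exist.
func setKubernetesVersion(cfg *profileConfig, version string) error {
	if !releaseExists(version) {
		return fmt.Errorf("kubernetes version %s is not available; keeping %s",
			version, cfg.KubernetesVersion)
	}
	cfg.KubernetesVersion = version
	return nil
}

func main() {
	cfg := &profileConfig{KubernetesVersion: "v1.15.2"}
	if err := setKubernetesVersion(cfg, "v1.15.20"); err != nil {
		fmt.Println(err)
	}
	fmt.Println("profile still at", cfg.KubernetesVersion)
}
```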
Yes... I found a bug 👍 Will assign this to me.
/assign @nanikjava
On further testing, the issue can be resolved by stopping minikube from going forward with the process if there is an error in the download step.

To make things work faster, the process of downloading images inside minikube runs in a separate goroutine, which allows the other process of initializing the VM to continue. There is a checkpoint that checks the success/failure state of the images inside waitCacheImages(..). A sketch of that pattern follows below.
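A minimal sketch of that pattern, assuming an errgroup-style background download with a later checkpoint; `beginCacheImages` and `downloadImage` are illustrative stand-ins, not the real minikube functions:

```go
package main

import (
	"fmt"

	"golang.org/x/sync/errgroup"
)

// beginCacheImages (hypothetical name) starts the image downloads in the
// background so VM initialization can proceed in parallel.
func beginCacheImages(images []string) *errgroup.Group {
	var g errgroup.Group
	for _, img := range images {
		img := img // capture loop variable for the goroutine
		g.Go(func() error {
			return downloadImage(img) // placeholder for the real download
		})
	}
	return &g
}

// waitCacheImages is the checkpoint: it blocks until every download has
// finished and returns the first error that occurred, if any.
func waitCacheImages(g *errgroup.Group) error {
	return g.Wait()
}

// downloadImage stands in for the actual image caching logic.
func downloadImage(img string) error {
	fmt.Println("downloading", img)
	return nil
}

func main() {
	g := beginCacheImages([]string{"kube-apiserver:v1.15.2", "kube-scheduler:v1.15.2"})
	// ... VM initialization would continue here ...
	if err := waitCacheImages(g); err != nil {
		fmt.Println("image caching failed:", err)
	}
}
```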
Doing the above creates an issue, because it abruptly terminates the VM while it is still being provisioned.

Exiting the app during waitCacheImages(..) will only report the last recorded error. To make it useful to the user, the error should print out all the different errors. Something like this:
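A minimal sketch of the idea: collect every failed image and fold them into a single error instead of surfacing only the last one. `cacheResult` and `reportCacheErrors` are hypothetical names, not minikube's actual API.

```go
package main

import (
	"errors"
	"fmt"
)

// cacheResult records the outcome of a single image download attempt.
type cacheResult struct {
	image string
	err   error
}

// reportCacheErrors builds one error listing every failed image,
// instead of surfacing only the last recorded failure.
func reportCacheErrors(results []cacheResult) error {
	var msg string
	for _, r := range results {
		if r.err != nil {
			msg += fmt.Sprintf("  %s: %v\n", r.image, r.err)
		}
	}
	if msg == "" {
		return nil
	}
	return errors.New("errors caching images:\n" + msg)
}

func main() {
	results := []cacheResult{
		{"k8s.gcr.io/kube-apiserver:v1.15.20", errors.New("404 Not Found")},
		{"k8s.gcr.io/kube-scheduler:v1.15.20", errors.New("404 Not Found")},
	}
	if err := reportCacheErrors(results); err != nil {
		fmt.Println(err)
	}
}
```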
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@nanikjava are you still interested in this issue?
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
While working on fixing #2570 I noticed a very strange behaviour. Here is how to reproduce it (this is using the master branch; no changes for #2570 have been applied).

The first start command will throw an error, which is the right behaviour.

Running the command again with the correct version will also throw an error, which means the user will not be able to use the current profile VM unless it is deleted. Is this the correct behaviour? (A reproduction sketch follows below.)
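A minimal reproduction sketch, assuming the versions discussed above (v1.15.20 does not exist, v1.15.2 does); the exact commands and messages from the original report were condensed, so treat these as illustrative:

```shell
# first start with a version that does not exist -> fails, as expected
minikube start --kubernetes-version=v1.15.20

# retry with a valid version -> minikube refuses, because it has already
# recorded v1.15.20 in the profile and treats v1.15.2 as a downgrade
minikube start --kubernetes-version=v1.15.2

# the only way forward is to throw the profile away
minikube delete
```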