Fix cluster version upgrades #577
Conversation
I'm seeing this still fail the updateVersion test. I'm going to look into why/what it takes to fix it.
This appears to be happening because
Switching from the latest and 2nd-latest master/node versions in the updateVersion test to the 2nd-latest and 3rd-latest resolves the issue, which makes me suspect it really is a problem with the (mistakenly?) reported latest node version. @danawillow do we want to:
I'm updating the docs for this now, but I don't think I can push to the branch this is on. I'll push to the paddy_577 branch on this repo.
google/resource_container_cluster.go
Outdated
if d.HasChange("master_version") { | ||
desiredMasterVersion := d.Get("master_version").(string) | ||
if d.HasChange("min_master_version") { | ||
desiredMasterVersion := d.Get("min_master_version").(string) |
This doesn't take into consideration if the current master_version is already greater than min_master_version. I think we want to ignore the version in that case, instead of downgrading?
Good point, fixed.
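For illustration, a minimal sketch of the kind of guard being discussed, using the hashicorp/go-version package. The helper name and wiring here are hypothetical, not necessarily what the PR implements:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/go-version"
)

// shouldUpgradeMaster (hypothetical helper) reports whether the master should
// be upgraded to meet desiredMin. It returns false when the cluster is already
// at or above the requested minimum, so we never trigger a downgrade.
func shouldUpgradeMaster(currentMaster, desiredMin string) (bool, error) {
	cur, err := version.NewVersion(currentMaster)
	if err != nil {
		return false, fmt.Errorf("parsing current master version %q: %s", currentMaster, err)
	}
	want, err := version.NewVersion(desiredMin)
	if err != nil {
		return false, fmt.Errorf("parsing min_master_version %q: %s", desiredMin, err)
	}
	// Only upgrade when the requested minimum is strictly newer than the
	// version the master is already running.
	return cur.LessThan(want), nil
}

func main() {
	ok, _ := shouldUpgradeMaster("1.7.8", "1.7.6")
	fmt.Println(ok) // false: already newer than the requested minimum, so no downgrade
}
```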
Sorry for the lots-of-comments on this one. :(
I'm in favor of doing it in read. That also ensures that other things that may depend on the cluster being in a RUNNING state also get the benefit of this.
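Roughly the shape a wait-in-Read could take (a sketch only; the function name, timeout, and service wiring are assumptions, not the provider's actual code):

```go
package sketch

import (
	"fmt"
	"time"

	"github.com/hashicorp/terraform/helper/resource"
	"google.golang.org/api/container/v1"
)

// waitForClusterRunning (hypothetical helper) polls the cluster until it
// reports a RUNNING status, so Read doesn't populate state mid-upgrade.
func waitForClusterRunning(svc *container.Service, project, zone, name string) error {
	return resource.Retry(10*time.Minute, func() *resource.RetryError {
		cluster, err := svc.Projects.Zones.Clusters.Get(project, zone, name).Do()
		if err != nil {
			return resource.NonRetryableError(err)
		}
		if cluster.Status != "RUNNING" {
			// Still provisioning or upgrading; keep polling until the timeout.
			return resource.RetryableError(fmt.Errorf("cluster %q has status %q, waiting for RUNNING", name, cluster.Status))
		}
		return nil
	})
}
```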
google/resource_container_cluster.go
Outdated
return fmt.Errorf("cannot set node_version on create, use master_version instead") | ||
if v, ok := d.GetOk("node_version"); ok { | ||
// ignore -gke.X suffix for now. if it becomes a problem later, we can fix it. | ||
mv := strings.Split(cluster.InitialClusterVersion, "-")[0] |
If min_master_version is not specified, then cluster.InitialClusterVersion will be empty, right? This means mv will be an empty string. Is that what we want? Does that mean that if we don't set min_master_version, we should also leave node_version empty?
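To make the concern concrete, a small standalone sketch of the pitfall (assuming, as the snippet suggests, that mv is later used as a prefix to validate node_version; that comparison is not shown in the diff above):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// If min_master_version isn't set, InitialClusterVersion on the request is empty.
	initialClusterVersion := ""

	// Stripping the -gke.X suffix from an empty version still yields "".
	mv := strings.Split(initialClusterVersion, "-")[0]

	// Every node_version has the empty string as a prefix, so a prefix-style
	// check against mv would silently pass instead of catching a mismatch.
	fmt.Printf("%q %v\n", mv, strings.HasPrefix("1.7.11-gke.1", mv)) // "" true
}
```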
That's what @paddycarver and I ended up deciding on yesterday but now I can't remember for sure why. I went and changed it because it seems reasonable to just specify the node_version (since that's what the nodes will be set to anyway) and let GKE handle the master.
@paddycarver yeah, that's the bug I was talking about; I spoke to the person and it'll be fixed soon. In the meantime I just went ahead and changed it to the 2nd/3rd latest.
Added docs based on @paddycarver's commit; I changed the wording around a little but it's the same basic idea.
Ok, I remembered why we weren't going to allow setting the
LGTM
* wait for running status on a cluster on read
* add min_master_version field
* respond to comments
* add docs
* no node_version on create
Signed-off-by: Modular Magician <[email protected]>
This PR does two things (each in a separate commit):
Adds a retry when reading the cluster to make sure it's in the RUNNING state. This (sort of) solves the issue of trying to upgrade a cluster while it's already upgrading. Though it would make sense to have the retry be on the upgrade, I don't want users to get confused when they run plan, see a certain version, and then run upgrade and find out they were at a different version. Let me know your thoughts on whether this should actually be in read or in update (or in both).
Adds a new field, min_master_version, and sets master_version to computed. As per discussions with @rosbo and @paddycarver, having just one master_version field doesn't work when the master gets upgraded automatically. This allows the users to set the minimum master version while also using the current master version for interpolation in other resources.

@paddycarver is in charge of running tests and adding documentation because I have to go catch my bus :)
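For reference, a sketch of roughly how those two fields could be declared with helper/schema (field definitions only; the surrounding resource schema is omitted and the variable name is just for illustration):

```go
package sketch

import "github.com/hashicorp/terraform/helper/schema"

// clusterVersionFields illustrates the split described above: users set the
// minimum master version they want, while the version GKE is actually running
// stays readable (and interpolatable) even after an automatic master upgrade.
var clusterVersionFields = map[string]*schema.Schema{
	"min_master_version": {
		Type:     schema.TypeString,
		Optional: true,
	},
	"master_version": {
		Type:     schema.TypeString,
		Computed: true,
	},
}
```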