
#1300 Supporting regional clusters for node pools #1320

Merged
merged 28 commits into hashicorp:master from darrenhaken:regional_node_pools on Apr 25, 2018

Conversation

darrenhaken
Contributor

This is a WIP for creating node pools with regional clusters.

@darrenhaken
Contributor Author

darrenhaken commented Apr 11, 2018

See issue #1300 for further context.

@danawillow
Contributor

Hey @darrenhaken, are you looking for preliminary feedback or did you want us to wait until it's no longer WIP?

@kri5

kri5 commented Apr 12, 2018

It seems that you are assuming people are using the v1beta1 API.

Why remove the code handling the stable API calls?

@darrenhaken
Contributor Author

darrenhaken commented Apr 12, 2018

@kri5 I tested using the v1beta1 API to create zonal clusters and it seemed to work fine (the tests passed). Given that, I felt the choice of API version wouldn't affect TF users, and using a single version simplifies the codebase.

Is there some context I may have missed?

@kri5

kri5 commented Apr 12, 2018

If you look at the container_cluster resource, they chose to maintain both versions of the API by adding a "region" field and handling location by generating it from region or zone, for the beta API only.

As the beta APIs are provided as-is, with no SLA, I think we should always keep a way to call the stable API, which is covered by an SLA.
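
Roughly, the pattern looks like this (a hedged sketch of the shape based on the description above, not the actual cluster code):

func generateLocation(d TerraformResourceData, config *Config) (string, error) {
	// A region implies a regional cluster, which only the beta API supports.
	if res, ok := d.GetOk("region"); ok {
		return GetResourceNameFromSelfLink(res.(string)), nil
	}
	// Otherwise fall back to the zone, which the stable API can handle.
	return getZone(d, config)
}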

@darrenhaken
Contributor Author

darrenhaken commented Apr 12, 2018

@kri5 The justification around the SLA makes sense; I can look at adding back the logic to use the different API versions when a zone is provided.

I'm not completely sold on deriving location from zone in the code; I'd prefer to add location as a new param in the TF. location is part of the API and zone is in fact deprecated, so encouraging TF users to migrate to location felt sensible.

Example:

resource "google_container_node_pool" "np" {
	name = "%s"
	cluster = "regional-test-2"
	location = "us-central1"
	initial_node_count = 1
}

Thoughts?

Also, is there a way in TF to mark a field as deprecated?
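
(For what it's worth, helper/schema does seem to support this via a Deprecated message on the field; a minimal sketch with an illustrative field, not code from this PR:)

"zone": &schema.Schema{
	Type:     schema.TypeString,
	Optional: true,
	// Deprecated only emits a warning to users; the field keeps working.
	Deprecated: "use location instead",
},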

@kri5

kri5 commented Apr 12, 2018

My point here is that until this API is GA, zone is the proper way to provision it; location should be marked beta, not the other way around (i.e. marking zone as deprecated, which is not yet the case).

@darrenhaken
Contributor Author

Sure, so how about I add location, and in the docs I note that it's a beta param for regional clusters?

@kri5

kri5 commented Apr 12, 2018

I'm not against adding location to the node pool resource, but the main issue for me is the non-uniformity of having location in the node pool resource and region in the container cluster resource.

That might confuse users, as these two resources are usually used together.

What do you think?

@darrenhaken
Contributor Author

darrenhaken commented Apr 12, 2018

@kri5 do you think region for uniformity, then? The downside is that the underlying SDK uses the location terminology, so it's a deviation.

@kri5

kri5 commented Apr 12, 2018

@darrenhaken I don't know which would be more appropriate.

If you really want to reflect the API changes in the beta, maybe the right thing would be to change region to location in the cluster resource?

@darrenhaken
Contributor Author

darrenhaken commented Apr 12, 2018 via email

@danawillow
Contributor

Hey all, just piping in on the conversation (without looking at the code) - I'd prefer that node pool match cluster by having region and zone separately instead of location. Both approaches (region/zone vs location) have their merits, but after talking to my team and the original PR author for regional clusters, we decided to go with region/zone. Whether it's really the best way is still up for debate, but for consistency the two resources should definitely match.

As for the beta/v1 APIs thing, it actually doesn't matter which endpoint of the API you call (this is new information my team and I learned not too long ago), but for code review purposes it's much easier if a given PR does exactly one thing, and so I'd recommend keeping the logic as-is. We have some changes coming down the road with how we handle beta features (part of our codegen efforts), so stay tuned for that as well.

@darrenhaken
Contributor Author

darrenhaken commented Apr 12, 2018 via email

@danawillow
Contributor

For now let's say tentatively yes

@darrenhaken
Contributor Author

darrenhaken commented Apr 12, 2018 via email

Note: there seems to be an issue when setting the initial node count; this is the next thing to investigate.
- Had to remove retrieving the node pool info (which was added in when
    refactoring the reusable code) as it causes an error to occur.
- Change the regional test case to create the test cluster
- Migrate to using region instead of location.
@darrenhaken
Contributor Author

darrenhaken commented Apr 13, 2018

@danawillow I believe this PR is code complete now.
I have also updated the documentation.

I'm seeing a couple of flaky tests, so I'm investigating whether they pre-date my changes.

@Stono is going to test this as well with the existing TF he uses.

Contributor

@danawillow left a comment


Thanks @darrenhaken, this looks great! Here's some feedback for you :)

Also as an FYI, I'm going to be on vacation for two weeks starting Monday evening (pacific time), so if this isn't merged by then, feel free to ping rosbo/ndmckinley/paddycarver to have one of them finish the review.

return GetResourceNameFromSelfLink(res.(string)), nil
}

func generateLocation(d TerraformResourceData, config *Config) (string, error) {
Contributor

It probably makes sense to use the same logic here and in resource_container_cluster

Contributor Author

Yep makes sense.

containerApiVersion := getContainerApiVersion(d, ContainerNodePoolBaseApiVersion, ContainerNodePoolVersionedFeatures)
config := meta.(*Config)
func getNodePoolRegion(d TerraformResourceData, config *Config) (string, error) {
res, ok := d.GetOk("region")
Contributor

What if the node pool is defined inside a cluster resource? (this will affect the call in nodePoolUpdate)

Contributor Author

@darrenhaken commented Apr 14, 2018

@danawillow Could you give me a bit more info, please? I'm not sure what you mean. Are you saying an error would occur when the node pool is nested?

Contributor

Sure, happy to elaborate. You can define a cluster with a node pool resource inside of it:

resource "google_container_cluster" "with_node_pool" {
	name = "cluster"
	zone = "us-central1-a"

	node_pool {
		name               = "np"
		initial_node_count = 2
	}
}

If you check out the code for google_container_cluster, you can see where it calls into some of the functions inside resource_container_node_pool.go. If you do a d.Get() from there, you're still doing it on the cluster resource, and so it'll use the cluster schema rather than the node pool schema. That's why a lot of the code inside of the node pool resource looks like d.Get(prefix + "something"): that lets the cluster and node pool resource share the code.
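
A minimal illustration of that pattern (a hypothetical helper, not the PR's exact code):

// When called from the node pool resource, prefix is ""; when called from
// the cluster resource for a nested pool, it's something like "node_pool.0.",
// so the same function can read either schema.
func nodePoolName(d *schema.ResourceData, prefix string) string {
	return d.Get(prefix + "name").(string)
}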

d.SetId(fmt.Sprintf("%s/%s/%s", zone, cluster, nodePool.Name))
waitErr := containerBetaOperationWait(config,
operation, nodePoolInfo.project,
nodePoolInfo.location, "timeout creating GKE NodePool", timeoutInMinutes, 3)
Contributor

can you leave this as "creating GKE NodePool"? That way the full sentence that gets returned in case of an error is "Error waiting for creating GKE NodePool"

Contributor Author

Sure, changed this.

if waitErr != nil {
// The resource didn't actually create
d.SetId("")
return waitErr
}

d.SetId(fmt.Sprintf("%s/%s/%s", nodePoolInfo.location, nodePoolInfo.cluster, nodePool.Name))
Contributor

Any reason to move this? Leaving it before the wait means that if the Terraform process ended before we were done waiting, the resource would still be stored in state, and so the next time Terraform is run it would know to do a read() and fill in the resource in the state, instead of trying to create it again.
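
Concretely, the safer ordering is the sketch below, reusing the names from this diff (and the "creating GKE NodePool" message from the earlier comment):

// Persist the ID first: if the process dies mid-wait, the next run will
// read and reconcile the half-created pool instead of creating a duplicate.
d.SetId(fmt.Sprintf("%s/%s/%s", nodePoolInfo.location, nodePoolInfo.cluster, nodePool.Name))

waitErr := containerBetaOperationWait(config, operation, nodePoolInfo.project,
	nodePoolInfo.location, "creating GKE NodePool", timeoutInMinutes, 3)
if waitErr != nil {
	// The resource didn't actually create, so clear it from state.
	d.SetId("")
	return waitErr
}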

Contributor Author

Moved it back up; I didn't realise it could have such an impact.

@@ -327,6 +351,11 @@ func resourceContainerNodePoolDelete(d *schema.ResourceData, meta interface{}) e
return nil
}

func getNodePoolName(id string) string {
Contributor

just curious, why move this?

Contributor Author

I've moved it back to the bottom of the file. I had originally deleted the function as it wasn't needed, but of course it was in the end :)

@@ -435,7 +481,7 @@ func flattenNodePool(d *schema.ResourceData, config *Config, np *containerBeta.N
if len(matches) < 4 {
return nil, fmt.Errorf("Error reading instance group manage URL '%q'", url)
}
igm, err := config.clientCompute.InstanceGroupManagers.Get(matches[1], matches[2], matches[3]).Do()
igm, err := config.clientComputeBeta.InstanceGroupManagers.Get(matches[1], matches[2], matches[3]).Do()
Contributor

Either API is fine here; we're not trying to get any beta features out of the IGM. Feel free to leave it or change it back.

Contributor Author

Thanks

if err != nil {
return err
}

npName := d.Get(prefix + "name").(string)
lockKey := containerClusterMutexKey(project, zone, clusterName)

name := getNodePoolName(d.Id())
Contributor

do we need two separate name variables? if so, can you comment on the difference between them? Also, this won't work if the node pool is defined inside the cluster resource.

google/utils.go Outdated
@@ -48,6 +48,17 @@ func getZone(d TerraformResourceData, config *Config) (string, error) {
return GetResourceNameFromSelfLink(res.(string)), nil
}

func getParent(d TerraformResourceData, config *Config) (string, error) {
Contributor

I don't think this gets used anywhere


resource "google_container_node_pool" "regional-np" {
name = "my-node-pool"
region = "us-central1"
Contributor

nit: = alignment

Contributor Author

I actually appreciate all your attention to detail. This TF codebase is vast! 👍

@@ -59,7 +50,12 @@ func resourceContainerNodePool() *schema.Resource {
},
"cluster": &schema.Schema{
Type: schema.TypeString,
Required: true,
Optional: true,
Contributor

Why is Cluster optional now?

Contributor Author

I can revert this; my original design was to expose the parent parameter and expect the TF user to construct the matching string. As we've gone with region, this can definitely be reverted.

- Fix alignment of `=` in docs
- Share functions for regional logic in both cluster and node pool.
The extra functions have been moved to the regional_utils.
- Removed unused getParent
- Moved getRegion back to bottom of node_pool
- Change cluster back to required for the node pool
- Move setting node pool ID during creation back to above the wait
- Used the node pool info within the exists function for node pool
@darrenhaken
Contributor Author

@danawillow I have resolved most of the things you asked me to look at. There are a couple of things I wasn't clear on; can you get back to me? I'll try to fix them today.

As an aside, do you know if there's any appetite to explore unit tests or writing some tests against a stubbed interface? It feels like we could shorten the feedback loop on errors.

I also have a couple of failing tests (one of them fails on master too, because of a K8s version that is no longer offered). Got any time to give some input if I share the test log?

Contributor Author

@darrenhaken left a comment

This is worth reviewing again; there were a couple of things I was unsure about. I've left comments on them and will try to turn this around ASAP.

@danawillow you're in Seattle right? I'll try to be around on your timezone

@darrenhaken
Contributor Author

darrenhaken commented Apr 14, 2018

Thought I'd mention I'm seeing intermittent race conditions; an example is deleting a cluster while another operation is being applied to it.

Got any ideas from looking at the code? I'm not sure how reliable the tests generally are; I noticed some of them randomly failing on master when I tried that too.

An example failure I am seeing:

* google_container_cluster.cluster: google_container_cluster.cluster: Error reading Container Cluster "tf-nodepool-test-end4cflprw": Cluster "tf-nodepool-test-end4cflprw" has status "RECONCILING" with message ""

@darrenhaken
Contributor Author

Wow OK, I suspect I would never have gotten to the bottom of that one!

Suppressing the diff sounds fine to me; how do I actually do that?

@nat-henderson
Contributor

You'll want to look at the DiffSuppressFunc here: https://github.com/hashicorp/terraform/blob/master/helper/schema/schema.go and you'll want to apply that to the taint field in node_config.go, I think. I'm only pretty sure that's a good solution... Mind giving it a try?

@darrenhaken
Contributor Author

darrenhaken commented Apr 24, 2018

@ndmckinley looking now. Would the string comparison be on the old or new string value?

I have something like this on taint:

DiffSuppressFunc: func(k, old, new string, d *schema.ResourceData) bool {
	return strings.ToLower(new) == "nvidia.com/gpu"
},

@nat-henderson
Contributor

You may also want to try CustomDiff, depending - I admit, I am not 100% sure which solution will work. I think DiffSuppressFunc might be limited to single-field diffs and won't let you suppress a diff in taint conditional on the GPU setting.
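
(A hypothetical sketch of that CustomizeDiff route; whether Clear works on a nested key like this is exactly the open question, so treat it as a shape to try rather than a known-good fix. The field paths are illustrative.)

CustomizeDiff: func(diff *schema.ResourceDiff, meta interface{}) error {
	// GKE injects an nvidia.com/gpu taint when GPUs are attached; if the
	// config requests accelerators, drop the computed taint diff entirely.
	if _, ok := diff.GetOk("node_config.0.guest_accelerator"); ok {
		return diff.Clear("node_config.0.taint")
	}
	return nil
},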

@darrenhaken
Contributor Author

Trying a run-through of the test with DiffSuppressFunc.

@darrenhaken
Contributor Author

darrenhaken commented Apr 24, 2018

@ndmckinley The above function didn't work with the DiffSuppressFunc.
I might struggle to do much more tonight (it's getting late here); would you be able to carry on?
Feel free to commit anything to the branch.

One other idea I had is adding back the old v1 API calls for zonal clusters; would that fix the issue?
This would mean reintroducing the switch based on region.

@nat-henderson
Contributor

Sure, I'll give it a shot.

@darrenhaken
Contributor Author

darrenhaken commented Apr 24, 2018 via email

@nat-henderson
Contributor

:) I've proven to myself that this is definitely 100% the issue, and I have a fix that works but is too broad. I'll get it down to the minimum possible interference and send it to your branch.

Assuming it's all right with you if I merge the PR once that's done?

@darrenhaken
Contributor Author

darrenhaken commented Apr 24, 2018 via email

Note included here for posterity:
=====
The problem is, we add a GPU, and as per the docs, GKE adds a taint to
the node pool saying "don't schedule here unless you tolerate GPUs",
which is pretty sensible.

Terraform doesn't know about that, because it didn't ask for the taint
to be added. So after apply, on refresh, it sees the state of the world
(1 taint) and the state of the config (0 taints) and wants to set the
world equal to the config. This introduces a diff, which makes the test
fail - tests fail if there's a diff after they run.

Taints are a beta feature, though. :) And since the config doesn't
contain any taints, terraform didn't see any beta features in that node
pool ... so it used to send the request to the v1 API. And since the v1
API didn't return anything about taints (since they're a beta feature),
terraform happily checked the state of the world (0 taints I know about)
vs the config (0 taints), and all was well.

This PR makes every node pool refresh request hit the beta API. So now
terraform finds out about the taints (which were always there) and the
test fails (which it always should have done).

The solution is probably to write a little bit of code which suppresses
the report of the diff of any taint with value 'nvidia.com/gpu', but
only if GPUs are enabled. I think that's something that can be done.
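
A hedged sketch of that idea (not necessarily the fix that was merged): filter the auto-added GPU taint out of the flattened API response when accelerators are configured, so it never reaches state and never produces a diff. The helper name is illustrative; the taint GKE adds uses the key nvidia.com/gpu.

// filterAutoGpuTaint drops the taint GKE injects on GPU node pools so
// Terraform doesn't see a perpetual diff against a config with no taints.
func filterAutoGpuTaint(taints []*containerBeta.NodeTaint, hasGpu bool) []*containerBeta.NodeTaint {
	if !hasGpu {
		return taints
	}
	kept := make([]*containerBeta.NodeTaint, 0, len(taints))
	for _, t := range taints {
		if t.Key == "nvidia.com/gpu" {
			continue // auto-added by GKE, not present in the user's config
		}
		kept = append(kept, t)
	}
	return kept
}
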
@nat-henderson
Contributor

Tests running now. GPU test passes locally ... fingers crossed!

@nat-henderson
Contributor

All right, I've got one thing which I think is a spurious test failure ... I'm re-running it.

@nat-henderson
Contributor

Good news - the failed test passes on a retry, as do all the others. It's going in! Happy birthday, @Stono. 😆

@nat-henderson nat-henderson merged commit 2b1b668 into hashicorp:master Apr 25, 2018
@morgante

Yay! So happy to see this merged—it's not my birthday, but it might as well be. 🎆

@darrenhaken darrenhaken deleted the regional_node_pools branch April 25, 2018 08:46
@darrenhaken
Contributor Author

@ndmckinley awesome work! Thanks so much for helping me get over the final line with it. When can we expect it to be released?

@Stono

Stono commented Apr 25, 2018

IM SO HAPPY I COULD CRY.
@ndmckinley we had brownies to celebrate. I ate yours for you

@darrenhaken
Contributor Author

Brownie celebrations should be when it's released 😆

@herrewig

Thank you!!!!!! ♥️ 🌟 ♥️ 🌟 ♥️ 🌟 ♥️ 🌟 ♥️ 🌟 ♥️ 🌟 ♥️ 🌟 ♥️ 🌟 ♥️ 🌟 ♥️ 🌟 ♥️ 🌟

@nat-henderson
Contributor

@darrenhaken I see the questions around release! Our releases are usually on a biweekly schedule, and the last release was last Friday.

@AndrewFarley

Heck yes! Just built the provider myself to test, I really needed this, and it works great. Thanks!!! +1 +1 +1

chrisst pushed a commit to chrisst/terraform-provider-google that referenced this pull request Nov 9, 2018
…#1320)

This PR also switched us to using the beta API in all cases; the side effect is described in the posterity note above.
@ghost

ghost commented Nov 18, 2018

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Nov 18, 2018