
#1300 Supporting regional clusters for node pools #1320

Merged: 28 commits, Apr 25, 2018
Commits
b450384
WIP for supporting regional clusters
darrenhaken Apr 11, 2018
b312a0c
Check if location is zone and set fields as needed
darrenhaken Apr 11, 2018
8df3682
Fix update and delete functions to support regionals
darrenhaken Apr 12, 2018
6e8b290
Create regional node pools.
darrenhaken Apr 13, 2018
6dd0ae3
Finished regional clusters.
darrenhaken Apr 13, 2018
6b6c9ab
Updated the docs for regional node pools
darrenhaken Apr 13, 2018
2a8c44a
Merge branch 'master' of github.com:terraform-providers/terraform-pro…
darrenhaken Apr 13, 2018
70d4b3a
Fix fmt issue
darrenhaken Apr 13, 2018
4d3e0be
Resolve some of the points raised during review:
darrenhaken Apr 14, 2018
9ffd2d8
Extract lock key logic into struct
darrenhaken Apr 14, 2018
b8f4c1c
Merge remote-tracking branch 'upstream/master' into regional_node_pools
darrenhaken Apr 16, 2018
c50b7ad
Add retry handlers to create/delete node pools
darrenhaken Apr 17, 2018
ef4b6e4
Merge remote-tracking branch 'upstream/master' into regional_node_pools
darrenhaken Apr 17, 2018
5de8bf1
Change node pool update to use the cluster name passed in and the ID now
darrenhaken Apr 17, 2018
c30cdb8
Increase number of retries for deleting node pools
darrenhaken Apr 18, 2018
0d18c77
Add cluster test for regional cluster with nested node pool
darrenhaken Apr 18, 2018
84e29b0
Fix panic conditions on TestAccContainerCluster_withAddons and other
darrenhaken Apr 18, 2018
80f4690
Fix `TestAccContainerNodePool_autoscaling` on node pool
darrenhaken Apr 19, 2018
7693a28
Add test for regional node pool with an update
darrenhaken Apr 19, 2018
18bb8d1
Merge remote-tracking branch 'upstream/master' into regional_node_pools
darrenhaken Apr 19, 2018
ac4098d
Merge remote-tracking branch 'upstream/master' into regional_node_pools
darrenhaken Apr 20, 2018
855fec4
Fix TestAccContainerNodePool_version
darrenhaken Apr 21, 2018
1aba204
Merge remote-tracking branch 'upstream/master' into regional_node_pools
darrenhaken Apr 21, 2018
7436f8c
Fix FMT
darrenhaken Apr 21, 2018
fd447e2
Add retry to cluster deletion
darrenhaken Apr 21, 2018
512f49f
Merge remote-tracking branch 'upstream/master' into regional_node_pools
darrenhaken Apr 24, 2018
96e5385
Add diff suppress to taint so that GPU test passes.
nat-henderson Apr 24, 2018
b8138ea
remove log statements.
nat-henderson Apr 24, 2018
26 changes: 23 additions & 3 deletions google/node_config.go
@@ -4,6 +4,8 @@ import (
"github.com/hashicorp/terraform/helper/schema"
"github.com/hashicorp/terraform/helper/validation"
containerBeta "google.golang.org/api/container/v1beta1"
"strconv"
"strings"
)

// Matches gke-default scope from https://cloud.google.com/sdk/gcloud/reference/container/clusters/create
@@ -132,9 +134,10 @@ var schemaNodeConfig = &schema.Schema{
},

"taint": {
Type: schema.TypeList,
Optional: true,
ForceNew: true,
Type: schema.TypeList,
Optional: true,
ForceNew: true,
DiffSuppressFunc: taintDiffSuppress,
Elem: &schema.Resource{
Schema: map[string]*schema.Schema{
"key": {
@@ -353,3 +356,20 @@ func flattenWorkloadMetadataConfig(c *containerBeta.WorkloadMetadataConfig) []map[string]interface{} {
}
return result
}

func taintDiffSuppress(k, old, new string, d *schema.ResourceData) bool {
if strings.HasSuffix(k, "#") {
oldCount, oldErr := strconv.Atoi(old)
newCount, newErr := strconv.Atoi(new)
// If either of them isn't a number somehow, or if there's one that we didn't have before.
return oldErr != nil || newErr != nil || oldCount == newCount+1
} else {
lastDot := strings.LastIndex(k, ".")
taintKey := d.Get(k[:lastDot] + ".key").(string)
if taintKey == "nvidia.com/gpu" {
return true
} else {
return false
}
}
}
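
For context on the suppression logic above (explanatory, not part of the diff): Terraform invokes a DiffSuppressFunc once per attribute key, so for the taint list it sees both the count key ending in "#" and per-element keys such as taint.0.key. The count branch suppresses the case where the API reports exactly one more taint than the configuration declares, and the element branch suppresses any entry whose key is nvidia.com/gpu, i.e. the taint GKE injects on GPU node pools. A minimal standalone sketch of the count branch, using hypothetical key names and values:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// suppressTaintCount mirrors only the count-key branch of taintDiffSuppress.
// The keys and counts passed below are made-up examples, not provider output.
func suppressTaintCount(k, old, new string) bool {
	if !strings.HasSuffix(k, "#") {
		return false
	}
	oldCount, oldErr := strconv.Atoi(old)
	newCount, newErr := strconv.Atoi(new)
	// Suppress when the API reports exactly one more taint than the config,
	// which is what happens when GKE adds the nvidia.com/gpu taint itself.
	return oldErr != nil || newErr != nil || oldCount == newCount+1
}

func main() {
	fmt.Println(suppressTaintCount("node_config.0.taint.#", "2", "1")) // true: one extra server-side taint
	fmt.Println(suppressTaintCount("node_config.0.taint.#", "3", "1")) // false: real drift, keep the diff
}
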
37 changes: 37 additions & 0 deletions google/regional_utils.go
@@ -0,0 +1,37 @@
package google

import (
"fmt"
"github.com/hashicorp/terraform/helper/schema"
"strings"
)

//These functions are used by both the `resource_container_node_pool` and `resource_container_cluster` for handling regional clusters

func isZone(location string) bool {
return len(strings.Split(location, "-")) == 3
Contributor commented:

I'd prefer the more explicit "matches the regex for 'contains two dashes with alphanumeric (lowercase) text between and after them'."

edit: I see that this isn't your code and you just extracted it - I still prefer this, but if you would rather not then I'll make the change later.

Contributor Author replied:

I didn't think some of these functions were particularly robust, if I get time I can look to use Regex instead. If I do that I'll add unit tests for the function too.

}

func getLocation(d *schema.ResourceData, config *Config) (string, error) {
if v, isRegionalCluster := d.GetOk("region"); isRegionalCluster {
return v.(string), nil
} else {
// If region is not explicitly set, use "zone" (or fall back to the provider-level zone).
// For now, to avoid confusion, we require region to be set in the config to create a regional
// cluster rather than falling back to the provider-level region.
return getZone(d, config)
}
}

// getZone reads the "zone" value from the given resource data and falls back
// to provider's value if not given. If neither is provided, returns an error.
func getZone(d TerraformResourceData, config *Config) (string, error) {
res, ok := d.GetOk("zone")
if !ok {
if config.Zone != "" {
return config.Zone, nil
}
return "", fmt.Errorf("Cannot determine zone: set in this resource, or set provider-level zone.")
}
return GetResourceNameFromSelfLink(res.(string)), nil
}
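
Picking up the review thread above: a stricter, regex-based isZone along the lines the reviewer suggests might look like the sketch below. This is illustrative only (it is not what this PR merges) and it assumes GCP zone names of the form "<region>-<single letter>", e.g. us-central1-a, so a bare region like us-central1 would not match.

package google

import "regexp"

// Assumed zone shape: two dash-separated lowercase/numeric tokens followed by
// a single-letter zone suffix, e.g. "us-central1-a" or "europe-west1-b".
var zoneRegex = regexp.MustCompile(`^[a-z]+-[a-z0-9]+-[a-z]$`)

// isZoneStrict is a hypothetical replacement for isZone; the merged code keeps
// the simpler dash-count check.
func isZoneStrict(location string) bool {
	return zoneRegex.MatchString(location)
}
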
56 changes: 37 additions & 19 deletions google/resource_container_cluster.go
@@ -1159,7 +1159,12 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) error {

if n, ok := d.GetOk("node_pool.#"); ok {
for i := 0; i < n.(int); i++ {
if err := nodePoolUpdate(d, meta, clusterName, fmt.Sprintf("node_pool.%d.", i), timeoutInMinutes); err != nil {
nodePoolInfo, err := extractNodePoolInformationFromCluster(d, config, clusterName)
if err != nil {
return err
}

if err := nodePoolUpdate(d, meta, nodePoolInfo, fmt.Sprintf("node_pool.%d.", i), timeoutInMinutes); err != nil {
return err
}
}
@@ -1254,7 +1259,6 @@ func resourceContainerClusterUpdate(d *schema.ResourceData, meta interface{}) error {
}

func resourceContainerClusterDelete(d *schema.ResourceData, meta interface{}) error {
containerAPIVersion := getContainerApiVersion(d, ContainerClusterBaseApiVersion, ContainerClusterVersionedFeatures)
config := meta.(*Config)

project, err := getProject(d, config)
@@ -1282,15 +1286,26 @@ func resourceContainerClusterDelete(d *schema.ResourceData, meta interface{}) error {
defer mutexKV.Unlock(containerClusterMutexKey(project, location, clusterName))

var op interface{}
switch containerAPIVersion {
case v1:
op, err = config.clientContainer.Projects.Zones.Clusters.Delete(project, location, clusterName).Do()
case v1beta1:
var count = 0
err = resource.Retry(30*time.Second, func() *resource.RetryError {
count++

name := containerClusterFullName(project, location, clusterName)
op, err = config.clientContainerBeta.Projects.Locations.Clusters.Delete(name).Do()
}

if err != nil {
log.Printf("[WARNING] Cluster is still not ready to delete, retrying %s", clusterName)
return resource.RetryableError(err)
}

if count == 15 {
return resource.NonRetryableError(fmt.Errorf("Error retrying to delete cluster %s", clusterName))
}
return nil
})

if err != nil {
return err
return fmt.Errorf("Error deleting Cluster: %s", err)
}

// Wait until it's deleted
@@ -1597,17 +1612,20 @@ func containerClusterFullName(project, location, cluster string) string {
return fmt.Sprintf("projects/%s/locations/%s/clusters/%s", project, location, cluster)
}

func getLocation(d *schema.ResourceData, config *Config) (string, error) {
if v, isRegionalCluster := d.GetOk("region"); isRegionalCluster {
return v.(string), nil
} else {
// If region is not explicitly set, use "zone" (or fall back to the provider-level zone).
// For now, to avoid confusion, we require region to be set in the config to create a regional
// cluster rather than falling back to the provider-level region.
return getZone(d, config)
func extractNodePoolInformationFromCluster(d *schema.ResourceData, config *Config, clusterName string) (*NodePoolInformation, error) {
project, err := getProject(d, config)
if err != nil {
return nil, err
}

location, err := getLocation(d, config)
if err != nil {
return nil, err
}
}

func isZone(location string) bool {
return len(strings.Split(location, "-")) == 3
return &NodePoolInformation{
project: project,
location: location,
cluster: d.Get("name").(string),
}, nil
}
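
The delete path above wraps the Locations.Clusters.Delete call in helper/resource's Retry so that "cluster is still not ready to delete" errors are retried within a 30-second window, with a manual cap of 15 attempts on top. A minimal, self-contained sketch of that retry pattern; doDelete here is a placeholder that fails twice and then succeeds, not code from this PR:

package main

import (
	"errors"
	"log"
	"time"

	"github.com/hashicorp/terraform/helper/resource"
)

// attempts counts calls to the stand-in delete; doDelete is hypothetical and
// exists only to show how RetryableError drives the retry loop.
var attempts int

func doDelete() error {
	attempts++
	if attempts < 3 {
		return errors.New("cluster is not ready to delete")
	}
	return nil
}

func main() {
	err := resource.Retry(30*time.Second, func() *resource.RetryError {
		if err := doDelete(); err != nil {
			log.Printf("[WARNING] not ready yet, retrying: %s", err)
			return resource.RetryableError(err) // retried until the timeout elapses
		}
		return nil // success ends the retry loop
	})
	if err != nil {
		log.Printf("delete failed: %s", err)
		return
	}
	log.Printf("delete succeeded after %d attempts", attempts)
}
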
36 changes: 36 additions & 0 deletions google/resource_container_cluster_test.go
@@ -272,6 +272,30 @@ func TestAccContainerCluster_regional(t *testing.T) {
})
}

func TestAccContainerCluster_regionalWithNodePool(t *testing.T) {
t.Parallel()

clusterName := fmt.Sprintf("cluster-test-regional-%s", acctest.RandString(10))
npName := fmt.Sprintf("tf-cluster-nodepool-test-%s", acctest.RandString(10))

resource.Test(t, resource.TestCase{
PreCheck: func() { testAccPreCheck(t) },
Providers: testAccProviders,
CheckDestroy: testAccCheckContainerClusterDestroy,
Steps: []resource.TestStep{
{
Config: testAccContainerCluster_regionalWithNodePool(clusterName, npName),
},
{
ResourceName: "google_container_cluster.regional",
ImportStateIdPrefix: "us-central1/",
ImportState: true,
ImportStateVerify: true,
},
},
})
}

func TestAccContainerCluster_withAdditionalZones(t *testing.T) {
t.Parallel()

@@ -1373,6 +1397,18 @@ resource "google_container_cluster" "regional" {
}`, clusterName)
}

func testAccContainerCluster_regionalWithNodePool(cluster, nodePool string) string {
return fmt.Sprintf(`
resource "google_container_cluster" "regional" {
name = "%s"
region = "us-central1"

node_pool {
name = "%s"
}
}`, cluster, nodePool)
}

func testAccContainerCluster_withAdditionalZones(clusterName string) string {
return fmt.Sprintf(`
resource "google_container_cluster" "with_additional_zones" {