
Commit

Document update variants
kopachevsky committed Dec 23, 2019
1 parent d04c926 commit 65e76f6
Showing 5 changed files with 110 additions and 0 deletions.
22 changes: 22 additions & 0 deletions autogen/README.md
@@ -25,6 +25,28 @@ If you are using these features with a private cluster, you will need to either:
3. Include the external IP of your Terraform deployer in the `master_authorized_networks` configuration. Note that only IP addresses reserved in Google Cloud (such as in other VPCs) can be whitelisted.
4. Deploy a [bastion host](https://github.com/terraform-google-modules/terraform-google-bastion-host) or [proxy](https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies) in the same VPC as your GKE cluster.

## Private Cluster Update

In [#256], update variants added support for node pools to be created before they are destroyed.

Previously, whenever a node pool had to be recreated for any reason, the existing node pool was
deleted first and a new one created afterwards. This is a problem when it is the only node pool in the GKE
cluster and the new node pool cannot be provisioned: with no node pool left, pods cannot be scheduled.
[#256] creates the new node pool before the old one is deleted, so any issues with node pool creation
or provisioning are discovered while the existing node pool is still serving workloads. This behavior is
controlled by the variable `node_pools_create_before_destroy`. To avoid node pool name collisions,
a random 4-character alphanumeric suffix is appended to each node pool name.
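
A minimal sketch of what enabling this could look like from a calling configuration is shown below. The module source, version, and every input value are placeholder assumptions rather than a tested setup; only `node_pools_create_before_destroy` comes from the description above.

```hcl
module "gke" {
  # Placeholder source and version: use the module path and release you actually depend on.
  source  = "terraform-google-modules/kubernetes-engine/google"
  version = "~> 6.0"

  project_id        = "my-project-id"     # placeholder
  name              = "example-cluster"   # placeholder
  region            = "us-central1"       # placeholder
  network           = "my-vpc"            # placeholder
  subnetwork        = "my-subnet"         # placeholder
  ip_range_pods     = "pods-range"        # placeholder secondary range name
  ip_range_services = "services-range"    # placeholder secondary range name

  # Create the replacement node pool before the existing one is destroyed.
  node_pools_create_before_destroy = true

  node_pools = [
    {
      name         = "default-node-pool"
      machine_type = "n1-standard-2"
      min_count    = 1
      max_count    = 3
    },
  ]
}
```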

The benefit is that you always have at least one active node pool.
We do not cordon or drain traffic beyond what the GKE API itself does,
but we do make sure the new node pool is created before the old one is destroyed.

The implications of this are that:

- We append a random ID to the node pool names, since two node pools cannot share a name (see the sketch below)
- For a brief period, you will have twice as many node pools and their associated resources
- You will need sufficient IP space (and compute capacity) to run both node pools at the same time
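
The sketch below only illustrates the Terraform pattern described above (a short random suffix plus `create_before_destroy`); it is not the module's actual implementation, and every resource name and value in it is made up.

```hcl
# Illustrative pattern only, not the module's real code.
# A short random suffix keeps the old and new pool names distinct, and
# create_before_destroy makes Terraform build the replacement pool before
# removing the original one.

variable "machine_type" {
  type    = string
  default = "n1-standard-2"
}

resource "random_id" "np_suffix" {
  byte_length = 2 # 2 bytes -> 4 hex characters in the suffix

  # Changing a keeper forces a new suffix, and therefore a new node pool name,
  # which in turn triggers the create-before-destroy replacement.
  keepers = {
    machine_type = var.machine_type
  }
}

resource "google_container_node_pool" "pool" {
  name       = "example-pool-${random_id.np_suffix.hex}"
  cluster    = "example-cluster" # placeholder: the GKE cluster this pool belongs to
  location   = "us-central1"     # placeholder
  node_count = 1

  node_config {
    machine_type = var.machine_type
  }

  lifecycle {
    create_before_destroy = true
  }
}
```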

{% endif %}

## Compatibility
22 changes: 22 additions & 0 deletions modules/beta-private-cluster-update-variant/README.md
@@ -23,6 +23,28 @@ If you are using these features with a private cluster, you will need to either:
3. Include the external IP of your Terraform deployer in the `master_authorized_networks` configuration. Note that only IP addresses reserved in Google Cloud (such as in other VPCs) can be whitelisted.
4. Deploy a [bastion host](https://github.com/terraform-google-modules/terraform-google-bastion-host) or [proxy](https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies) in the same VPC as your GKE cluster.

## Private Cluster Update

In [#256], update variants added support for node pools to be created before they are destroyed.

Previously, whenever a node pool had to be recreated for any reason, the existing node pool was
deleted first and a new one created afterwards. This is a problem when it is the only node pool in the GKE
cluster and the new node pool cannot be provisioned: with no node pool left, pods cannot be scheduled.
[#256] creates the new node pool before the old one is deleted, so any issues with node pool creation
or provisioning are discovered while the existing node pool is still serving workloads. This behavior is
controlled by the variable `node_pools_create_before_destroy`. To avoid node pool name collisions,
a random 4-character alphanumeric suffix is appended to each node pool name.
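
A minimal sketch of what enabling this could look like from a calling configuration is shown below. The module source, version, and every input value are placeholder assumptions rather than a tested setup; only `node_pools_create_before_destroy` comes from the description above.

```hcl
module "gke" {
  # Placeholder source and version: use the module path and release you actually depend on.
  source  = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster-update-variant"
  version = "~> 6.0"

  project_id        = "my-project-id"     # placeholder
  name              = "example-cluster"   # placeholder
  region            = "us-central1"       # placeholder
  network           = "my-vpc"            # placeholder
  subnetwork        = "my-subnet"         # placeholder
  ip_range_pods     = "pods-range"        # placeholder secondary range name
  ip_range_services = "services-range"    # placeholder secondary range name

  # Private-cluster-specific inputs (e.g. enable_private_nodes, master_ipv4_cidr_block)
  # are omitted here for brevity.

  # Create the replacement node pool before the existing one is destroyed.
  node_pools_create_before_destroy = true

  node_pools = [
    {
      name         = "default-node-pool"
      machine_type = "n1-standard-2"
      min_count    = 1
      max_count    = 3
    },
  ]
}
```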

The benefit is that you always have at least one active node pool.
We do not cordon or drain traffic beyond what the GKE API itself does,
but we do make sure the new node pool is created before the old one is destroyed.

The implications of this are that:

- We append a random ID to the node pool names, since two node pools cannot share a name (see the sketch below)
- For a brief period, you will have twice as many node pools and their associated resources
- You will need sufficient IP space (and compute capacity) to run both node pools at the same time
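
The sketch below only illustrates the Terraform pattern described above (a short random suffix plus `create_before_destroy`); it is not the module's actual implementation, and every resource name and value in it is made up.

```hcl
# Illustrative pattern only, not the module's real code.
# A short random suffix keeps the old and new pool names distinct, and
# create_before_destroy makes Terraform build the replacement pool before
# removing the original one.

variable "machine_type" {
  type    = string
  default = "n1-standard-2"
}

resource "random_id" "np_suffix" {
  byte_length = 2 # 2 bytes -> 4 hex characters in the suffix

  # Changing a keeper forces a new suffix, and therefore a new node pool name,
  # which in turn triggers the create-before-destroy replacement.
  keepers = {
    machine_type = var.machine_type
  }
}

resource "google_container_node_pool" "pool" {
  name       = "example-pool-${random_id.np_suffix.hex}"
  cluster    = "example-cluster" # placeholder: the GKE cluster this pool belongs to
  location   = "us-central1"     # placeholder
  node_count = 1

  node_config {
    machine_type = var.machine_type
  }

  lifecycle {
    create_before_destroy = true
  }
}
```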


## Compatibility

22 changes: 22 additions & 0 deletions modules/beta-private-cluster/README.md
@@ -23,6 +23,28 @@ If you are using these features with a private cluster, you will need to either:
3. Include the external IP of your Terraform deployer in the `master_authorized_networks` configuration. Note that only IP addresses reserved in Google Cloud (such as in other VPCs) can be whitelisted.
4. Deploy a [bastion host](https://github.com/terraform-google-modules/terraform-google-bastion-host) or [proxy](https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies) in the same VPC as your GKE cluster.

## Private Cluster Update

In [#256], update variants added support for node pools to be created before they are destroyed.

Previously, whenever a node pool had to be recreated for any reason, the existing node pool was
deleted first and a new one created afterwards. This is a problem when it is the only node pool in the GKE
cluster and the new node pool cannot be provisioned: with no node pool left, pods cannot be scheduled.
[#256] creates the new node pool before the old one is deleted, so any issues with node pool creation
or provisioning are discovered while the existing node pool is still serving workloads. This behavior is
controlled by the variable `node_pools_create_before_destroy`. To avoid node pool name collisions,
a random 4-character alphanumeric suffix is appended to each node pool name.
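
A minimal sketch of what enabling this could look like from a calling configuration is shown below. The module source, version, and every input value are placeholder assumptions rather than a tested setup; only `node_pools_create_before_destroy` comes from the description above.

```hcl
module "gke" {
  # Placeholder source and version: use the module path and release you actually depend on.
  source  = "terraform-google-modules/kubernetes-engine/google//modules/beta-private-cluster"
  version = "~> 6.0"

  project_id        = "my-project-id"     # placeholder
  name              = "example-cluster"   # placeholder
  region            = "us-central1"       # placeholder
  network           = "my-vpc"            # placeholder
  subnetwork        = "my-subnet"         # placeholder
  ip_range_pods     = "pods-range"        # placeholder secondary range name
  ip_range_services = "services-range"    # placeholder secondary range name

  # Private-cluster-specific inputs (e.g. enable_private_nodes, master_ipv4_cidr_block)
  # are omitted here for brevity.

  # Create the replacement node pool before the existing one is destroyed.
  node_pools_create_before_destroy = true

  node_pools = [
    {
      name         = "default-node-pool"
      machine_type = "n1-standard-2"
      min_count    = 1
      max_count    = 3
    },
  ]
}
```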

The benefit is that you always have at least one active node pool.
We do not cordon or drain traffic beyond what the GKE API itself does,
but we do make sure the new node pool is created before the old one is destroyed.

The implications of this are that:

- We append a random ID to the node pool names, since two node pools cannot share a name (see the sketch below)
- For a brief period, you will have twice as many node pools and their associated resources
- You will need sufficient IP space (and compute capacity) to run both node pools at the same time
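
The sketch below only illustrates the Terraform pattern described above (a short random suffix plus `create_before_destroy`); it is not the module's actual implementation, and every resource name and value in it is made up.

```hcl
# Illustrative pattern only, not the module's real code.
# A short random suffix keeps the old and new pool names distinct, and
# create_before_destroy makes Terraform build the replacement pool before
# removing the original one.

variable "machine_type" {
  type    = string
  default = "n1-standard-2"
}

resource "random_id" "np_suffix" {
  byte_length = 2 # 2 bytes -> 4 hex characters in the suffix

  # Changing a keeper forces a new suffix, and therefore a new node pool name,
  # which in turn triggers the create-before-destroy replacement.
  keepers = {
    machine_type = var.machine_type
  }
}

resource "google_container_node_pool" "pool" {
  name       = "example-pool-${random_id.np_suffix.hex}"
  cluster    = "example-cluster" # placeholder: the GKE cluster this pool belongs to
  location   = "us-central1"     # placeholder
  node_count = 1

  node_config {
    machine_type = var.machine_type
  }

  lifecycle {
    create_before_destroy = true
  }
}
```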


## Compatibility

22 changes: 22 additions & 0 deletions modules/private-cluster-update-variant/README.md
@@ -23,6 +23,28 @@ If you are using these features with a private cluster, you will need to either:
3. Include the external IP of your Terraform deployer in the `master_authorized_networks` configuration. Note that only IP addresses reserved in Google Cloud (such as in other VPCs) can be whitelisted.
4. Deploy a [bastion host](https://github.com/terraform-google-modules/terraform-google-bastion-host) or [proxy](https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies) in the same VPC as your GKE cluster.

## Private Cluster Update

In [#256], update variants added support for node pools to be created before they are destroyed.

Previously, whenever a node pool had to be recreated for any reason, the existing node pool was
deleted first and a new one created afterwards. This is a problem when it is the only node pool in the GKE
cluster and the new node pool cannot be provisioned: with no node pool left, pods cannot be scheduled.
[#256] creates the new node pool before the old one is deleted, so any issues with node pool creation
or provisioning are discovered while the existing node pool is still serving workloads. This behavior is
controlled by the variable `node_pools_create_before_destroy`. To avoid node pool name collisions,
a random 4-character alphanumeric suffix is appended to each node pool name.
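
A minimal sketch of what enabling this could look like from a calling configuration is shown below. The module source, version, and every input value are placeholder assumptions rather than a tested setup; only `node_pools_create_before_destroy` comes from the description above.

```hcl
module "gke" {
  # Placeholder source and version: use the module path and release you actually depend on.
  source  = "terraform-google-modules/kubernetes-engine/google//modules/private-cluster-update-variant"
  version = "~> 6.0"

  project_id        = "my-project-id"     # placeholder
  name              = "example-cluster"   # placeholder
  region            = "us-central1"       # placeholder
  network           = "my-vpc"            # placeholder
  subnetwork        = "my-subnet"         # placeholder
  ip_range_pods     = "pods-range"        # placeholder secondary range name
  ip_range_services = "services-range"    # placeholder secondary range name

  # Private-cluster-specific inputs (e.g. enable_private_nodes, master_ipv4_cidr_block)
  # are omitted here for brevity.

  # Create the replacement node pool before the existing one is destroyed.
  node_pools_create_before_destroy = true

  node_pools = [
    {
      name         = "default-node-pool"
      machine_type = "n1-standard-2"
      min_count    = 1
      max_count    = 3
    },
  ]
}
```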

The benefit is that you always have at least one active node pool.
We do not cordon or drain traffic beyond what the GKE API itself does,
but we do make sure the new node pool is created before the old one is destroyed.

The implications of this are that:

- We append a random ID to the node pool names, since two node pools cannot share a name (see the sketch below)
- For a brief period, you will have twice as many node pools and their associated resources
- You will need sufficient IP space (and compute capacity) to run both node pools at the same time
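
The sketch below only illustrates the Terraform pattern described above (a short random suffix plus `create_before_destroy`); it is not the module's actual implementation, and every resource name and value in it is made up.

```hcl
# Illustrative pattern only, not the module's real code.
# A short random suffix keeps the old and new pool names distinct, and
# create_before_destroy makes Terraform build the replacement pool before
# removing the original one.

variable "machine_type" {
  type    = string
  default = "n1-standard-2"
}

resource "random_id" "np_suffix" {
  byte_length = 2 # 2 bytes -> 4 hex characters in the suffix

  # Changing a keeper forces a new suffix, and therefore a new node pool name,
  # which in turn triggers the create-before-destroy replacement.
  keepers = {
    machine_type = var.machine_type
  }
}

resource "google_container_node_pool" "pool" {
  name       = "example-pool-${random_id.np_suffix.hex}"
  cluster    = "example-cluster" # placeholder: the GKE cluster this pool belongs to
  location   = "us-central1"     # placeholder
  node_count = 1

  node_config {
    machine_type = var.machine_type
  }

  lifecycle {
    create_before_destroy = true
  }
}
```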


## Compatibility

22 changes: 22 additions & 0 deletions modules/private-cluster/README.md
@@ -23,6 +23,28 @@ If you are using these features with a private cluster, you will need to either:
3. Include the external IP of your Terraform deployer in the `master_authorized_networks` configuration. Note that only IP addresses reserved in Google Cloud (such as in other VPCs) can be whitelisted.
4. Deploy a [bastion host](https://github.com/terraform-google-modules/terraform-google-bastion-host) or [proxy](https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies) in the same VPC as your GKE cluster.

## Private Cluster Update

In [#256], update variants added support for node pools to be created before they are destroyed.

Previously, whenever a node pool had to be recreated for any reason, the existing node pool was
deleted first and a new one created afterwards. This is a problem when it is the only node pool in the GKE
cluster and the new node pool cannot be provisioned: with no node pool left, pods cannot be scheduled.
[#256] creates the new node pool before the old one is deleted, so any issues with node pool creation
or provisioning are discovered while the existing node pool is still serving workloads. This behavior is
controlled by the variable `node_pools_create_before_destroy`. To avoid node pool name collisions,
a random 4-character alphanumeric suffix is appended to each node pool name.
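
A minimal sketch of what enabling this could look like from a calling configuration is shown below. The module source, version, and every input value are placeholder assumptions rather than a tested setup; only `node_pools_create_before_destroy` comes from the description above.

```hcl
module "gke" {
  # Placeholder source and version: use the module path and release you actually depend on.
  source  = "terraform-google-modules/kubernetes-engine/google//modules/private-cluster"
  version = "~> 6.0"

  project_id        = "my-project-id"     # placeholder
  name              = "example-cluster"   # placeholder
  region            = "us-central1"       # placeholder
  network           = "my-vpc"            # placeholder
  subnetwork        = "my-subnet"         # placeholder
  ip_range_pods     = "pods-range"        # placeholder secondary range name
  ip_range_services = "services-range"    # placeholder secondary range name

  # Private-cluster-specific inputs (e.g. enable_private_nodes, master_ipv4_cidr_block)
  # are omitted here for brevity.

  # Create the replacement node pool before the existing one is destroyed.
  node_pools_create_before_destroy = true

  node_pools = [
    {
      name         = "default-node-pool"
      machine_type = "n1-standard-2"
      min_count    = 1
      max_count    = 3
    },
  ]
}
```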

The benefit is that you always have at least one active node pool.
We do not cordon or drain traffic beyond what the GKE API itself does,
but we do make sure the new node pool is created before the old one is destroyed.

The implications of this are that:

- We append a random ID to the node pool names, since two node pools cannot share a name (see the sketch below)
- For a brief period, you will have twice as many node pools and their associated resources
- You will need sufficient IP space (and compute capacity) to run both node pools at the same time
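
The sketch below only illustrates the Terraform pattern described above (a short random suffix plus `create_before_destroy`); it is not the module's actual implementation, and every resource name and value in it is made up.

```hcl
# Illustrative pattern only, not the module's real code.
# A short random suffix keeps the old and new pool names distinct, and
# create_before_destroy makes Terraform build the replacement pool before
# removing the original one.

variable "machine_type" {
  type    = string
  default = "n1-standard-2"
}

resource "random_id" "np_suffix" {
  byte_length = 2 # 2 bytes -> 4 hex characters in the suffix

  # Changing a keeper forces a new suffix, and therefore a new node pool name,
  # which in turn triggers the create-before-destroy replacement.
  keepers = {
    machine_type = var.machine_type
  }
}

resource "google_container_node_pool" "pool" {
  name       = "example-pool-${random_id.np_suffix.hex}"
  cluster    = "example-cluster" # placeholder: the GKE cluster this pool belongs to
  location   = "us-central1"     # placeholder
  node_count = 1

  node_config {
    machine_type = var.machine_type
  }

  lifecycle {
    create_before_destroy = true
  }
}
```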


## Compatibility

