
[azurerm_kubernetes_cluster] working with autoscaler #2502

Closed
raphaelquati opened this issue Dec 12, 2018 · 14 comments

Comments

@raphaelquati
Contributor

In our AKS cluster (deployed using Terraform), we've set up the autoscaler (https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/aks/autoscaler.md) manually, not with Terraform (though we plan to).

At creation time, we set it up with only 1 agent.

resource "azurerm_kubernetes_cluster" "k8s" {
    name                = "${var.cluster_name}"
    location            = "${azurerm_resource_group.test.location}"
    resource_group_name = "${azurerm_resource_group.test.name}"
    dns_prefix          = "${var.dns_prefix}"
    kubernetes_version  = "1.11.4"

    agent_pool_profile {
        name            = "agentpool"
        count           = 1
        vm_size         = "Standard_F4s_v2"
        os_type         = "Linux"
        os_disk_size_gb = 30
        vnet_subnet_id  = "${azurerm_subnet.default.id}"
    }
}

After a while, the autoscaler increases the number of agents.
Terraform still has only 1 agent in its state, so any terraform apply tries to force an AKS update.




@pawelpabich

Would it make sense to rename this property from count to initial_count and change the behaviour so this value is only used once?

@tombuildsstuff
Contributor

Hi @raphaelquati @pawelpabich

Thanks for opening this issue - apologies for the delayed response here!

If you're managing the count for the AKS cluster externally, it's possible to use the ignore_changes field within the lifecycle block to ignore changes to that value, as shown here:

resource "azurerm_kubernetes_cluster" "test" {
  lifecycle {
    ignore_changes = [ "agent_pool_profile.0.count" ]
  }
}

Whilst that will ignore differences in the count field, if another field changes, Terraform will still try to update the number of nodes in the cluster to match the configured value. Would you be able to take a look and see if this approach works for you for the moment?

We could investigate obtaining the current cluster count and re-submitting it when users explicitly ignore the value of the count field during an update; however, this isn't something we support at this time.
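Putting the workaround together with the original configuration from this issue, a complete sketch might look like the following (values are illustrative, taken from the report above):

```hcl
resource "azurerm_kubernetes_cluster" "k8s" {
  name                = "${var.cluster_name}"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  dns_prefix          = "${var.dns_prefix}"
  kubernetes_version  = "1.11.4"

  agent_pool_profile {
    name            = "agentpool"
    count           = 1
    vm_size         = "Standard_F4s_v2"
    os_type         = "Linux"
    os_disk_size_gb = 30
    vnet_subnet_id  = "${azurerm_subnet.default.id}"
  }

  # Ignore drift in the node count caused by the externally-managed autoscaler,
  # so terraform apply no longer tries to scale the pool back to 1.
  lifecycle {
    ignore_changes = ["agent_pool_profile.0.count"]
  }
}
```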

Thanks!

@gree-gorey

Didn't find any related issue, so I'll ask here.
Can you tell me if there are plans to include the cluster autoscaler in the azurerm_kubernetes_cluster resource? Maybe in 2.0? It would be great to reproduce this with Terraform:

$ az aks create \
  ...
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 3

Thanks

@ghost ghost removed the waiting-response label Feb 12, 2019
@MattMencel
Contributor

Autoscaling for AKS is in preview.

I don't think there's anything for it in azure-sdk-for-go yet.

It's enabled using this aks-preview extension. There's an SDK there.

Assuming this eventually goes GA, would it be possible for the azurerm_kubernetes_cluster resource to use min_count and max_count instead of count when enable_cluster_autoscaler is set to true, and then ignore changes to the node count as long as it stays above the min threshold?

resource "azurerm_kubernetes_cluster" "test" {
  name                = "acctestaks1"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  dns_prefix          = "acctestagent1"

  agent_pool_profile {
    name            = "default"
    enable_cluster_autoscaler = true
    min_count       = 1
    max_count       = 3
    vm_size         = "Standard_D1_v2"
    os_type         = "Linux"
    os_disk_size_gb = 30
  }

  service_principal {
    client_id     = "00000000-0000-0000-0000-000000000000"
    client_secret = "00000000000000000000000000000000"
  }

  tags {
    Environment = "Production"
  }
}

@bq1756

bq1756 commented Apr 8, 2019

I am requesting the same feature. It would be great to add "enable_cluster_autoscaler = true" to agent_pool_profile, or to expand the "addon_profile" block to accept "cluster_autoscaler" settings.
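To make the second suggestion concrete, a hypothetical addon_profile variant might look like the sketch below. Note this block name and its settings are purely illustrative - the provider ultimately exposed autoscaling on the agent pool instead (see the PR referenced later in this thread):

```hcl
resource "azurerm_kubernetes_cluster" "example" {
  # ... other arguments as in the examples above ...

  addon_profile {
    # Hypothetical block, shown only to illustrate the suggestion;
    # this never shipped in the azurerm provider.
    cluster_autoscaler {
      enabled   = true
      min_count = 1
      max_count = 3
    }
  }
}
```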

@cwebbtw
Contributor

cwebbtw commented Apr 20, 2019

Would be great to see support for this

@landro

landro commented May 3, 2019

Autoscaling has been supported in the Go SDK since the 2019-02-01 API version:
github.com/Azure/azure-sdk-for-go/services/containerservice/mgmt/2019-02-01/containerservice
See models.go for details

@zloeber

zloeber commented May 3, 2019

So who's going to hack it into the provider? The preview feature for the ACI connector is already available.

@DeanPH

DeanPH commented May 24, 2019

Any progress on this?

@jamesbibby

Looks like there is a PR for it in the works #3361

@KIRY4

KIRY4 commented Jun 6, 2019

Any updates? We really need this feature in terraform!

@invidian
Contributor

The PR is merged now, so it's available for testing. I guess this issue can be closed with the next release.

@tombuildsstuff
Contributor

Fixed via #3361
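For readers landing here later: after #3361, the agent pool block gained autoscaling arguments. The sketch below shows roughly what the resulting configuration looks like - the exact argument names (enable_auto_scaling, min_count, max_count, and the VirtualMachineScaleSets pool type) should be confirmed against the provider documentation for your release:

```hcl
resource "azurerm_kubernetes_cluster" "test" {
  name                = "acctestaks1"
  location            = "${azurerm_resource_group.test.location}"
  resource_group_name = "${azurerm_resource_group.test.name}"
  dns_prefix          = "acctestagent1"

  agent_pool_profile {
    name    = "default"
    # Autoscaling requires a VM scale set backed pool rather than
    # the older AvailabilitySet type.
    type    = "VirtualMachineScaleSets"
    vm_size = "Standard_D1_v2"

    enable_auto_scaling = true
    count               = 1
    min_count           = 1
    max_count           = 3
  }

  service_principal {
    client_id     = "00000000-0000-0000-0000-000000000000"
    client_secret = "00000000000000000000000000000000"
  }
}
```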

@ghost

ghost commented Oct 3, 2019

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

@ghost ghost locked and limited conversation to collaborators Oct 3, 2019
14 participants