
Adding a healthProbe to a load balancer when a virtual machine scale set is attached to the backendPool fails. #7802

Closed
ghost opened this issue Jul 18, 2020 · 3 comments

Comments


ghost commented Jul 18, 2020

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

terraform -v
Terraform v0.12.28

  • provider.azurerm v2.18.0

azurerm provider version 2.18.0 is the first version in which this starts to fail.
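
Pinning the provider back to 2.17 should avoid the error for now. A minimal sketch of the pin, assuming the rest of the configuration is left unchanged (the repro configuration below intentionally pins 2.18 to demonstrate the failure):

provider "azurerm" {
  # Temporary pin to the last provider version where adding probes/rules
  # to a backend pool with an attached VM scale set still works for this config.
  version         = "=2.17"
  subscription_id = "<redacted>"
  features {}
}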

Affected Resource(s)

  • azurerm_linux_virtual_machine_scale_set
  • azurerm_lb_backend_address_pool
  • azurerm_lb_probe

Terraform Configuration Files

# This is the minimal configuration that breaks.
# Version 2.17 of the azurerm provider works fine, but 2.18+ breaks.

provider "azurerm" {
  version         = "=2.18"
  subscription_id = "<redacted>"
  features {}
}

variable "public-applications" {
  default = {
    "http" = {
      frontendPort = "80",
      backendPort  = "80",
      protocol     = "Tcp",
    },
    "https" = {
      frontendPort = "443",
      backendPort  = "443",
      protocol     = "Tcp",
    },
    # "breaks" = {
    #   frontendPort = "445",
    #   backendPort  = "445",
    #   protocol     = "Tcp",
    # },
  }
}

locals {
  resourcenames = "test"
}

data "azurerm_resource_group" "vnet-rg" {
  name = "d1-vnet"
}

data "azurerm_virtual_network" "vnet" {
  name                = "d1-vnet"
  resource_group_name = "d1-vnet"
}

data "azurerm_subnet" "subnet" {
  name                 = "d06"
  virtual_network_name = data.azurerm_virtual_network.vnet.name
  resource_group_name  = data.azurerm_resource_group.vnet-rg.name
}

resource "azurerm_resource_group" "k8s-rg" {
  name     = "d1-d06-k8s"
  location = data.azurerm_virtual_network.vnet.location
}

resource "azurerm_public_ip" "lb-public-ip" {
  name                = "${local.resourcenames}-lb-public-IP"
  resource_group_name = azurerm_resource_group.k8s-rg.name
  location            = azurerm_resource_group.k8s-rg.location
  allocation_method   = "Static"
  sku                 = "Standard"
  domain_name_label   = "lk-${local.resourcenames}-lb"
}

resource "azurerm_lb" "lb-public" {
  name                = "${local.resourcenames}-lb-public"
  resource_group_name = azurerm_resource_group.k8s-rg.name
  location            = azurerm_resource_group.k8s-rg.location
  sku                 = "Standard"

  frontend_ip_configuration {
    name                 = "PublicFrontend"
    public_ip_address_id = azurerm_public_ip.lb-public-ip.id
  }
}

resource "azurerm_lb_backend_address_pool" "lb-public-pool" {
  resource_group_name = azurerm_resource_group.k8s-rg.name
  loadbalancer_id     = azurerm_lb.lb-public.id
  name                = "publicWorkerPool"
}

resource "azurerm_lb_rule" "lb-public-rules" {
  for_each = var.public-applications

  resource_group_name            = azurerm_resource_group.k8s-rg.name
  loadbalancer_id                = azurerm_lb.lb-public.id
  name                           = "loadBalancingRule-${each.key}"
  protocol                       = each.value.protocol
  frontend_port                  = each.value.frontendPort
  backend_port                   = each.value.backendPort
  frontend_ip_configuration_name = "PublicFrontend"
  backend_address_pool_id        = azurerm_lb_backend_address_pool.lb-public-pool.id
  probe_id                       = azurerm_lb_probe.lb-public-probes[each.key].id
}

resource "azurerm_lb_probe" "lb-public-probes" {
  for_each = var.public-applications

  name                = "healthProbe-${each.key}"
  resource_group_name = azurerm_resource_group.k8s-rg.name
  loadbalancer_id     = azurerm_lb.lb-public.id
  protocol            = each.value.protocol
  port                = each.value.backendPort
}

resource "azurerm_linux_virtual_machine_scale_set" "k8s-vm-set" {
  sku                 = "Standard_D2s_v3"
  instances           = 1
  resource_group_name = azurerm_resource_group.k8s-rg.name
  location            = azurerm_resource_group.k8s-rg.location
  name                = "test-vmss"
  admin_username      = "azure"

  admin_ssh_key {
    username   = "azure"
    public_key = "<redacted>"
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "StandardSSD_LRS"
    disk_size_gb         = 64
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "UbuntuServer"
    sku       = "18.04-LTS"
    version   = "latest"
  }

  network_interface {
    name    = "nic"
    primary = true
    ip_configuration {
      name      = "ipconfig1"
      primary   = true
      subnet_id = data.azurerm_subnet.subnet.id

      load_balancer_backend_address_pool_ids = [
        azurerm_lb_backend_address_pool.lb-public-pool.id,
      ]
    }
  }
}

Debug Output

https://gist.github.com/LeankitJSmith/7be03f85250a18d7c47814b492c9abf7

Expected Behavior

Updating the public-applications variable should allow the load balancer to be modified to attach new health probes and rules. This works with azurerm provider versions earlier than 2.18.0.

Actual Behavior

The initial apply works fine; however, once the VM scale set has attached its NICs to the load balancer's backend pool, attempting to add new health probes or rules returns an error.

Steps to Reproduce

  1. Run terraform apply to deploy the load balancer, health probes, backend pool, and VM scale set.
  2. Uncomment the "breaks" entry in the public-applications variable (shown below) and run terraform apply again; adding the new health probe fails.
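
For reference, this is the entry that gets uncommented in step 2 (copied from the public-applications variable above):

    "breaks" = {
      frontendPort = "445",
      backendPort  = "445",
      protocol     = "Tcp",
    },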

Important Factoids

Standard Azure

References

  • #0000
@ArcturusZhang
Contributor

Hi @LeankitJSmith, thanks for this issue!

Taking a closer look at the output log, the error is actually thrown by the load balancer. One big change between 2.17 and 2.18 for the load balancer is that we updated the api-version of the network service.

@magodo do you have any insight on this?

ArcturusZhang removed the service/vmss (Virtual Machine Scale Sets) label on Jul 24, 2020
Collaborator

magodo commented Jul 24, 2020

@LeankitJSmith thank you for submitting this and I'm sorry you are experiencing it.

This is a duplicate of #7691, so I am going to close this issue in favor of that one. You might want to subscribe to that issue to track the progress. Thanks!

magodo closed this as completed on Jul 24, 2020

ghost commented Aug 23, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

ghost locked and limited conversation to collaborators on Aug 23, 2020