Support for additionalPortMappings in azurerm_container_app #23442

Open · kf6kjg opened this issue Oct 3, 2023 · 15 comments

@kf6kjg commented Oct 3, 2023

Is there an existing issue for this?

  • I have searched the existing issues

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment and review the contribution guide to help.

Description

This feature is still in preview, but I figured it was worth adding to the queue so that support can be planned for and eventually land in a timely manner.

In API version 2023-05-02-preview, Azure added support for a new block inside the ingress block: additionalPortMappings.

This feature request is to track adding support for that feature once it is released in a stable (non-preview) API version.

New or Affected Resource(s)/Data Source(s)

azurerm_container_app

Potential Terraform Configuration

resource "azurerm_container_app" "example" {
  name                         = "example-app"
  container_app_environment_id = azurerm_container_app_environment.example.id
  resource_group_name          = azurerm_resource_group.example.name
  revision_mode                = "Single"

  ingress {
    target_port = 1234
    # The new item:
    additional_port_mapping {
      external_enabled = true
      target_port      = 4321
    }
    additional_port_mapping {
      target_port = 2345
    }
  }

  template {
    container {
      name   = "examplecontainerapp"
      image  = "mcr.microsoft.com/azuredocs/containerapps-helloworld:latest"
      cpu    = 0.25
      memory = "0.5Gi"
    }
  }
}

References

microsoft/azure-container-apps#763

@rcskosir (Contributor) commented Oct 4, 2023

@kf6kjg Thank you for taking the time to open this feature request!

@ZuitAMB commented Feb 2, 2024

According to https://azure.microsoft.com/de-de/updates/generally-available-azure-container-apps-supports-additional-tcp-ports/ it should be generally available now. (However, so far it is only supported by the newest CLI extension.)

@rcskosir removed the preview label Feb 2, 2024

@youyinnn

> According to https://azure.microsoft.com/de-de/updates/generally-available-azure-container-apps-supports-additional-tcp-ports/ it should be generally available now. (However, so far it is only supported by the newest CLI extension.)

I am sorry, but how? How do you configure it with Terraform?

@ZuitAMB commented Apr 19, 2024

@youyinnn As far as I know, it is not possible using Terraform yet. To enable Terraform deployments with this new feature, we probably need Azure to publish a new Container Apps API version first: https://learn.microsoft.com/en-us/azure/templates/microsoft.app/change-log/summary

@jsheetzmt

additionalPortMappings is available in the latest Azure API: https://learn.microsoft.com/en-us/azure/templates/microsoft.app/containerapps?pivots=deployment-language-terraform#ingress-2

@ZuitAMB commented Apr 19, 2024

> additionalPortMappings is available in the latest Azure API.

Unfortunately, the latest version is a preview version:
2023-11-02-preview <- latest
2023-08-01-preview
2023-05-02-preview <- introduction of additionalPortMappings
2023-05-01 <- latest non preview version

Hopefully, we get a 2024-0X-XX non-preview version soon

@aellwein

I've stumbled upon this issue; unfortunately, we need this urgently. Can someone help with this?

@roisanchezriveira

I haven't seen any progress on it. I personally worked around it by using the azapi provider to update the resource's JSON definition directly (and also for the probes, which are defined one way in the portal and differently in the API). And yes, I've been using the preview version of the API:

resource "azurerm_container_app" "container_app" {
  name                         = "ca-example"
  container_app_environment_id = var.container_app_environment_id
  resource_group_name          = var.container_apps_rg
  revision_mode                = "Single"

  identity {
    type         = "UserAssigned"
    identity_ids = [azurerm_user_assigned_identity.container_app_user.id]
  }

  registry {
    server   = var.registry
    identity = azurerm_user_assigned_identity.container_app_user.id
  }

  dynamic "secret" {
    for_each = local.ca_secrets
    content {
      identity            = azurerm_user_assigned_identity.container_app_user.id
      key_vault_secret_id = secret.value
      name                = secret.key
      value               = null
    }
  }

  template {
    revision_suffix = null
    container {
      name   = "example"
      image  = "${var.registry}/${var.image}"
      cpu    = var.cpu
      memory = var.memory
      dynamic "volume_mounts" {
        for_each = var.storage_mounts
        content {
          name = volume_mounts.key
          path = volume_mounts.value
        }
      }
      dynamic "env" {
        for_each = local.app_env_variables
        content {
          name        = env.key
          value       = env.value
          secret_name = env.key
        }
      }
      liveness_probe {
        failure_count_threshold = 2
        path                    = var.probes["liveness_probe"].path
        initial_delay           = var.probes["liveness_probe"].initial_delay
        interval_seconds        = var.probes["liveness_probe"].period
        port                    = var.probes["liveness_probe"].port
        timeout                 = 1
        transport               = upper(var.probes["liveness_probe"].transport)
      }
      readiness_probe {
        failure_count_threshold = 2
        success_count_threshold = 3
        path                    = var.probes["readiness_probe"].path
        interval_seconds        = var.probes["readiness_probe"].period
        port                    = var.probes["readiness_probe"].port
        timeout                 = 1
        transport               = upper(var.probes["readiness_probe"].transport)
      }
      startup_probe {
        failure_count_threshold = 2
        path                    = var.probes["startup_probe"].path
        interval_seconds        = var.probes["startup_probe"].period
        port                    = var.probes["startup_probe"].port
        timeout                 = 1
        transport               = upper(var.probes["startup_probe"].transport)
      }
    }
    http_scale_rule {
      name                = "http"
      concurrent_requests = 100
    }
    max_replicas = 2
    min_replicas = 1
  }

  ingress {
    allow_insecure_connections = false
    external_enabled           = true
    target_port                = 8080
    traffic_weight {
      percentage      = 100
      latest_revision = true
    }
  }

  lifecycle {
    ignore_changes = [
      template[0].container[0].liveness_probe,
      template[0].container[0].readiness_probe,
      template[0].container[0].startup_probe,
      template[0].container[0].image
    ]
  }
}

# update the container app with extra additionalPortMappings, as this is not supported by the existing TF provider
resource "azapi_update_resource" "container_app_api" {
  type        = "Microsoft.App/containerApps@2023-11-02-preview"
  resource_id = azurerm_container_app.container_app.id

  body = jsonencode({
    properties = {
      configuration = {
        ingress = {
          clientCertificateMode = "Ignore"
          stickySessions = {
            affinity = "none"
          }
          additionalPortMappings = var.additional_ports
        }
      }
      template = {
        containers = [{
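          # The probes are also set here because, as noted above, they are
          # defined one way in the portal and differently in the API.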
          probes = [
            {
              httpGet = {
                path   = var.probes["liveness_probe"].path
                port   = var.probes["liveness_probe"].port
                scheme = upper(var.probes["liveness_probe"].transport)
              }
              initialDelaySeconds = var.probes["liveness_probe"].initial_delay
              periodSeconds       = var.probes["liveness_probe"].period
              type                = "Liveness"
            },
            {
              httpGet = {
                path   = var.probes["readiness_probe"].path
                port   = var.probes["readiness_probe"].port
                scheme = upper(var.probes["readiness_probe"].transport)
              }
              initialDelaySeconds = var.probes["readiness_probe"].initial_delay
              periodSeconds       = var.probes["readiness_probe"].period
              type                = "Readiness"
            },
            {
              httpGet = {
                path   = var.probes["startup_probe"].path
                port   = var.probes["startup_probe"].port
                scheme = upper(var.probes["startup_probe"].transport)
              }
              initialDelaySeconds = var.probes["startup_probe"].initial_delay
              periodSeconds       = var.probes["startup_probe"].period
              type                = "Startup"
            }
          ]
        }]
      }
    }
  })

  depends_on = [
    azurerm_container_app.container_app,
  ]
  lifecycle {
    replace_triggered_by = [azurerm_container_app.container_app]
  }
}
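
For reference, the body above passes var.additional_ports straight into the API's additionalPortMappings array, so the variable needs to use the API's camelCase field names. A minimal sketch of what that variable could look like (the variable name, default ports, and the optional exposedPort field are illustrative assumptions, not taken from the original comment):

variable "additional_ports" {
  description = "Extra TCP ports, in the shape expected by additionalPortMappings."
  type = list(object({
    external    = bool             # whether the port is reachable from outside the environment
    targetPort  = number           # port the container listens on
    exposedPort = optional(number) # port exposed on the ingress; often the same as targetPort
  }))
  default = [
    {
      external    = false
      targetPort  = 5701
      exposedPort = 5701
    }
  ]
}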

@aellwein

Thanks, @roisanchezriveira, that definitely helps!

@fblampe commented Jun 20, 2024

> additionalPortMappings is available in the latest Azure API.
> Hopefully, we get a 2024-0X-XX non-preview version soon

There's a non-preview version 2024-03-01 that supports this feature: https://learn.microsoft.com/en-us/rest/api/containerapps/container-apps/create-or-update?view=rest-containerapps-2024-03-01&tabs=HTTP#create-or-update-container-app

So, is there a chance that this could be added to Terraform?

@aellwein

I also found another unsupported attribute there: template.volumes[].mountOptions. This can be set via the portal but not in Terraform.
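
In case it helps, the same azapi pattern shown above could be extended to cover that attribute too. This is a rough, illustrative sketch only (the volume name, storage name, and mount options string are placeholders; it reuses the container app resource from the earlier workaround and assumes the volume is an Azure Files share already attached to the app):

resource "azapi_update_resource" "container_app_mount_options" {
  type        = "Microsoft.App/containerApps@2023-11-02-preview"
  resource_id = azurerm_container_app.container_app.id

  body = jsonencode({
    properties = {
      template = {
        # Note: the whole volumes list is replaced, so every volume must be declared here.
        volumes = [
          {
            name         = "data"                         # placeholder volume name
            storageType  = "AzureFile"
            storageName  = "example-storage"              # placeholder environment storage name
            mountOptions = "dir_mode=0777,file_mode=0777" # the attribute azurerm does not expose
          }
        ]
      }
    }
  })
}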

@kawahara-titan

@roisanchezriveira - I have been doing something similar, except using azapi_resource_action. We were using secrets on our container app, and because azapi_update_resource uses a PUT, it apparently does a GET first to retrieve all of the attributes missing from the body. Because the secrets are not returned in the GET, you end up getting a "ContainerAppSecretInvalid" error.

At any rate, what I wanted to ask you was whether you experience an issue where the additional ports become blank after successive applies? I have been observing that behavior and am trying to make sure I'm not crazy.
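
For illustration, a minimal sketch of that azapi_resource_action variant (assuming the Container Apps API accepts a PATCH carrying only the ingress port mappings; the resource name and var.additional_ports are placeholders following the earlier example):

resource "azapi_resource_action" "container_app_ports" {
  type        = "Microsoft.App/containerApps@2023-11-02-preview"
  resource_id = azurerm_container_app.container_app.id
  method      = "PATCH"

  # Only the additional port mappings are sent, so write-only fields such as
  # secrets are never read back and re-submitted.
  body = jsonencode({
    properties = {
      configuration = {
        ingress = {
          additionalPortMappings = var.additional_ports
        }
      }
    }
  })
}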

@roisanchezriveira

> At any rate, what I wanted to ask you was whether you experience an issue where the additional ports become blank after successive applies?

I had the same issue; that's why I added this to the container app resource:

  lifecycle {
    ignore_changes = [
      template[0].container[0].liveness_probe,
      template[0].container[0].readiness_probe,
      template[0].container[0].startup_probe,
    ]
  }

And this to the azapi one:

  lifecycle {
    replace_triggered_by = [azurerm_container_app.container_app]
  }

My guess is that the azapi resource was modifying the container app and triggering a modification on subsequent applies (so I added the ignore_changes on the probes to avoid that), and that any change on the azurerm resource wipes out the azapi changes, so I added the replace trigger to it to ensure the ports are always mapped again after any other change to the container app.

@kawahara-titan

Thanks for confirming. And your solution to just use the replace_triggered_by on the container app is a lot more elegant than what I was considering!
