
Sensitive properties of nested blocks are shown during a plan in v12.6 #201

Open
katbyte opened this issue Oct 18, 2019 · 23 comments · May be fixed by #819
Labels
bug: Something isn't working
subsystem/types: Issues and feature requests related to the type system of Terraform and our shims around it.
upstream-protocol: Requires change of protocol specification, i.e. can't be done under the current protocol
upstream-terraform

Comments

@katbyte
Contributor

katbyte commented Oct 18, 2019

This is an extension of:

Terraform Version

terraform: 0.12.6
azurerm: 1.32.1

Terraform Configuration Files

    ssh_key {
      key_data = var.aks_public_key_data
    }
  }

  network_profile {
    network_plugin     = var.aks_network_plugin
    network_policy     = var.aks_network_policy
    dns_service_ip     = var.aks_dnsServiceIP
    docker_bridge_cidr = var.aks_dockerBridgeCidr
    service_cidr       = var.aks_serviceCidr
  }

  service_principal {
    client_id     = data.azurerm_key_vault_secret.cluster_sp_id.value
    client_secret = data.azurerm_key_vault_secret.cluster_sp_secret.value
  }

  role_based_access_control {
    enabled = true
    azure_active_directory {
      client_app_id     = var.aks_aad_clientapp_id
      server_app_id     = var.aks_aad_serverapp_id
      server_app_secret = var.aks_aad_serverapp_secret
      tenant_id         = data.azurerm_client_config.current.tenant_id
    }
  }
}

Debug Output

Crash Output

Expected Behavior

Sensitive values should be masked.

Actual Behavior

When a plan includes a destroy and the resource has nested sub-objects with only some properties marked as sensitive, some of those values are shown in the plan.

The client_key and password fields in the kube_admin_config block are shown as literal strings instead of being masked. We're running terraform version 0.12.6, and you can see the provider versions in the plan output below.

Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "external" (terraform-providers/external) 1.2.0...
- Downloading plugin for provider "template" (terraform-providers/template) 2.1.2...
- Downloading plugin for provider "azurerm" (terraform-providers/azurerm) 1.32.1...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.azurerm: version = "~> 1.32"
* provider.external: version = "~> 1.2"
* provider.template: version = "~> 2.1"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Success! The configuration is valid.

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

azurerm_resource_group.sandboxuks1: Refreshing state... [id=/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_NAME>].
module.aks-cluster.data.azurerm_client_config.current: Refreshing state...
module.aks-cluster.data.azurerm_subscription.current: Refreshing state...
module.aks-cluster.azurerm_kubernetes_cluster.aks_with_aad_parameters[0]: Refreshing state... [id=/subscriptions/<SUBSCRIPTION_ID>/resourcegroups/<RESOURCEGROUP_NAME>/providers/Microsoft.ContainerService/managedClusters/<RESOURCEGROUP_NAME>]
module.aks-cluster.azurerm_public_ip.aks_pip: Refreshing state... [id=/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_NAME>_PIP/providers/Microsoft.Network/publicIPAddresses/<RESOURCEGROUP_NAME>_PIP]
module.aks-cluster.azurerm_monitor_diagnostic_setting.aks_cluster_diagnostics[0]: Refreshing state... [id=/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_NAME>/providers/Microsoft.ContainerService/managedClusters/<RESOURCEGROUP_NAME>|aks-cluster-to-eventhub]

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
-/+ destroy and then create replacement
<= read (data resources)

Terraform will perform the following actions:

# module.aks-cluster.data.external.aks_agentpool_name will be read during apply
# (config refers to values not yet known)
<= data "external" "aks_agentpool_name"  {
    + id      = (known after apply)
    + program = [
        + "bash",
        + ".terraform/modules/aks-cluster/scripts/agentpool.sh",
        ]
    + query   = {
        + "rg_name" = "MC_<RESOURCEGROUP_NAME>_<RESOURCEGROUP_NAME>_uksouth"
        }
    + result  = (known after apply)
    }

# module.aks-cluster.azurerm_kubernetes_cluster.aks_with_aad_parameters[0] must be replaced
-/+ resource "azurerm_kubernetes_cluster" "aks_with_aad_parameters" {
    - api_server_authorized_ip_ranges = [] -> null
        dns_prefix                      = "<DNS_PREFIX>"
    ~ fqdn                            = "<CLUSTER_FQDN>" -> (known after apply)
    ~ id                              = "/subscriptions/<SUBSCRIPTION_ID>/resourcegroups/<RESOURCEGROUP_NAME>/providers/Microsoft.ContainerService/managedClusters/<RESOURCEGROUP_NAME>" -> (known after apply)
    ~ kube_admin_config               = [
        - {
            - client_certificate     = "<CLIENT_CERT>"
            - client_key             = "LITERAL_CLIENT_KEY_STRING" <--this is not obscured in any way, the client key is shown as a literal string
            - cluster_ca_certificate = "<CLUSTER_CA_CERT>"
            - host                   = "<HOSTNAME>"
            - password               = "LITERAL_PASSWORD_STRING" <--this is not obscured in any way, this is the current password for the kube admin config
            - username               = "clusterAdmin_<RESOURCEGROUP_NAME>_<RESOURCEGROUP_NAME>"
            },
        ] -> (known after apply)
    ~ kube_admin_config_raw           = (sensitive value)
    ~ kube_config                     = [
        - {
            - client_certificate     = ""
            - client_key             = ""
            - cluster_ca_certificate = "<CLUSTER_CA_CERT>"
            - host                   = "<HOSTNAME>"
            - password               = ""
            - username               = "clusterUser_<RESOURCEGROUP_NAME>_<RESOURCEGROUP_NAME>"
            },
        ] -> (known after apply)
    ~ kube_config_raw                 = (sensitive value)
    ~ kubernetes_version              = "1.13.9" -> "1.13.7"
        location                        = "uksouth"
        name                            = "<RESOURCEGROUP_NAME>"
    ~ node_resource_group             = "MC_<RESOURCEGROUP_NAME>_<RESOURCEGROUP_NAME>_uksouth" -> (known after apply)
        resource_group_name             = "<RESOURCEGROUP_NAME>"
    ~ tags                            = {} -> (known after apply)

    ~ addon_profile {
        + aci_connector_linux {
            + enabled     = (known after apply)
            + subnet_name = (known after apply)
            }

        + http_application_routing {
            + enabled                            = (known after apply)
            + http_application_routing_zone_name = (known after apply)
            }

        + oms_agent {
            + enabled                    = (known after apply)
            + log_analytics_workspace_id = (known after apply)
            }
        }

    ~ agent_pool_profile {
        - availability_zones  = [] -> null
            count               = 2
        + dns_prefix          = (known after apply)
        - enable_auto_scaling = false -> null
        ~ fqdn                = "<AGENTPOOL_PROFILE_FQDN>" -> (known after apply)
        - max_count           = 0 -> null
            max_pods            = 80
        - min_count           = 0 -> null
            name                = "agentpool"
        - node_taints         = [] -> null
            os_disk_size_gb     = 64
            os_type             = "Linux"
            type                = "AvailabilitySet"
            vm_size             = "Standard_DS4_v2"
            vnet_subnet_id      = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_NAME>/providers/Microsoft.Network/virtualNetworks/<VNET_NAME>/subnets/<SUBNET_NAME>"
        }

        linux_profile {
            admin_username = "<ADMIN_USERNAME>"

            ssh_key {
                key_data = "<SSH_KEY>"
            }
        }

    ~ network_profile {
            dns_service_ip     = "10.254.0.10"
            docker_bridge_cidr = "172.17.0.1/16"
            load_balancer_sku  = "basic"
            network_plugin     = "azure"
        ~ network_policy     = "calico" -> "azure" # forces replacement
        + pod_cidr           = (known after apply)
            service_cidr       = "10.254.0.0/20"
        }

        role_based_access_control {
            enabled = true

            azure_active_directory {
                client_app_id     = "<AAD_CLIENT_ID>"
                server_app_id     = "<AAD_SERVER_ID>"
                server_app_secret = (sensitive value)
                tenant_id         = "*******"
            }
        }

    - service_principal {
        - client_id = "<SP_CLIENT_ID>" -> null
        }
    + service_principal {
        + client_id     = "<SP_CLIENT_ID>"
        + client_secret = (sensitive value)
        }
    }

# module.aks-cluster.azurerm_monitor_diagnostic_setting.aks_nsg_diagnostics[0] must be replaced
-/+ resource "azurerm_monitor_diagnostic_setting" "aks_nsg_diagnostics" {
        eventhub_authorization_rule_id = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_ID>/providers/Microsoft.EventHub/namespaces/<NS_NAME>/AuthorizationRules/RootManageSharedAccessKey"
        eventhub_name                  = "aks-nsg-diagnostics"
    ~ id                             = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_ID>/providers/Microsoft.Network/networkSecurityGroups/<AGENTPOOL_NAME>" -> (known after apply)
        name                           = "aks-nsg-to-eventhub"
    ~ target_resource_id             = "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCEGROUP_ID>/providers/Microsoft.Network/networkSecurityGroups/<AGENTPOOL_NAME>" -> (known after apply) # forces replacement

    - log {
        - category = "NetworkSecurityGroupEvent" -> null
        - enabled  = true -> null

        - retention_policy {
            - days    = 0 -> null
            - enabled = true -> null
            }
        }
    + log {
        + category = "NetworkSecurityGroupEvent"
        + enabled  = true

        + retention_policy {
            + enabled = true
            }
        }
    - log {
        - category = "NetworkSecurityGroupRuleCounter" -> null
        - enabled  = true -> null

        - retention_policy {
            - days    = 0 -> null
            - enabled = true -> null
            }
        }
    + log {
        + category = "NetworkSecurityGroupRuleCounter"
        + enabled  = true

        + retention_policy {
            + enabled = true
            }
        }
    }

Plan: 2 to add, 0 to change, 2 to destroy.

------------------------------------------------------------------------

This plan was saved to: terraform.tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "terraform.tfplan"

Steps to Reproduce

Additional Context

References

@rahmanyusmadi

0.12.9 has this problem too.

@hashibot hashibot transferred this issue from hashicorp/terraform Oct 18, 2019
@apparentlymart
Contributor

I've transferred this to the plugin SDK repository because it appears to be caused by how the SDK is presenting this schema to Terraform Core.

Because this schema is marked as Computed: true, the SDK is presenting it to Terraform Core as an attribute with a list-of-objects type rather than as a block. "Sensitive" is an attribute-level idea, but it only applies to whole attributes and cannot apply to sub-paths within an attribute.


Within the provider protocol's schema model as currently defined, the best behavior we could support here is for the SDK to recursively check inside any schema.Schema it plans to represent to Terraform Core as an attribute to see if any of the nested schemas are marked as Sensitive. If any are marked, the SDK must then mark that entire attribute (the whole list of objects) as Sensitive.

I believe that currently the nested Sensitive flag is being discarded altogether (because the type system doesn't have a way to say "object type where nested attribute foo is sensitive") and so Terraform Core is getting no signal from the SDK that this attribute ought to be treated as sensitive.
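A minimal sketch of that recursive check, using simplified stand-in types rather than the real helper/schema structs (in the real SDK, Schema.Elem may hold either another *Schema or a *Resource, which is mirrored here):

```go
package main

import "fmt"

// Simplified stand-ins for the SDK's helper/schema types (not the real
// terraform-plugin-sdk structs; fields trimmed for illustration).
type Schema struct {
	Sensitive bool
	Elem      interface{} // may hold *Schema or *Resource, as in the SDK
}

type Resource struct {
	Schema map[string]*Schema
}

// hasNestedSensitive reports whether s, or anything nested beneath it,
// is marked sensitive. If it returns true, the SDK would have to mark
// the entire attribute (the whole list of objects) as sensitive.
func hasNestedSensitive(s *Schema) bool {
	if s.Sensitive {
		return true
	}
	switch elem := s.Elem.(type) {
	case *Schema:
		return hasNestedSensitive(elem)
	case *Resource:
		for _, nested := range elem.Schema {
			if hasNestedSensitive(nested) {
				return true
			}
		}
	}
	return false
}

func main() {
	// Mirrors the kube_admin_config shape: a computed list of objects
	// where only some fields (client_key, password) are sensitive.
	attr := &Schema{
		Elem: &Resource{Schema: map[string]*Schema{
			"host":       {},
			"client_key": {Sensitive: true},
			"password":   {Sensitive: true},
		}},
	}
	fmt.Println(hasNestedSensitive(attr)) // prints: true
}
```

The trade-off described above follows directly: once any nested field is sensitive, the whole list of objects, including non-secret fields like host and username, would be masked.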


Alternatively, the SDK could elect to present this construct to Terraform Core as being a block. Using a block to represent something that is fully computed has some implications around how the SDK must represent unknown values (it must be a known list of known objects with unknown values inside) but I assume the SDK must already be handling those edges in order to support Optional+Computed blocks that don't appear in the configuration at all, so 🤞 it would "just work" with the existing shims.

If we take that second path, I think we should check whether it causes any of the "but we are tolerating it because the provider is using the legacy SDK"-class warnings in TF_LOG=trace. Introducing more of those will create additional blockers for moving away from the legacy SDK model, which may be the right tradeoff to make in order to fix this issue promptly but I would still like to account for that potential technical debt so we can be aware of it moving into future work.

@katbyte
Contributor Author

katbyte commented Oct 21, 2019

Thanks for looking into this @apparentlymart,

I personally don't think marking the entire block as sensitive is a great UX from the user's POV, and would put my weight behind the second option.

@alastairtree

This issue is affecting us deploying app_service onto Azure because the site_credential.password property is displayed in plain text in Terraform plans.

This issue is causing us to leak App Service deployment credentials into our deployment logs (we use Octopus Deploy), which are not considered safe storage for secrets by our infosec team. I know Octopus Deploy had to publish a CVE when they had a similar bug last year.

This is an important security issue, please fix ASAP.

@paultyng paultyng self-assigned this Feb 11, 2020
@paultyng paultyng added the upstream-protocol (Requires change of protocol specification, i.e. can't be done under the current protocol) and upstream-terraform labels Feb 11, 2020
@paultyng
Contributor

paultyng commented Feb 11, 2020

As mentioned above, this is ultimately a limitation of Terraform core's data model. Only certain types of data actually track a Sensitive flag even though all "Attributes" in the SDK can have a Sensitive flag set. There is upstream work going on in Terraform core on sensitive values that needs to complete before we can address this in the provider SDK.

The core data model has two fundamental concepts: blocks and attributes. Blocks can hold blocks and other attributes. Attributes are typed. The type of an attribute though can be an "object" or "map" that has additional nested keys. Only attributes track this sensitive flag, blocks and fields on objects do not. The SDK's legacy model does not surface this nuance, nor does it make it easy for the developer to influence the translation to core's model directly. So even though you can set Sensitive in the SDK, the SDK doesn't always have a place to put that value (at least yet, until upstream adds additional handling for it).
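The distinction can be sketched with simplified stand-in types (not core's real schema structs): only attributes carry a Sensitive flag, while blocks merely aggregate attributes and other blocks with no place to hang sensitivity:

```go
package main

import "fmt"

// Simplified stand-ins for core's schema model (illustrative only).
// Note there is no Sensitive field on Block: sensitivity exists solely
// at the whole-attribute level.
type Attribute struct {
	Name      string
	Sensitive bool
}

type Block struct {
	Attributes []Attribute
	BlockTypes map[string]*Block // nested blocks; no Sensitive flag here
}

func main() {
	root := &Block{
		Attributes: []Attribute{
			// A computed list-of-objects shim ends up here as one typed
			// attribute: marking only one object field inside it as
			// sensitive is simply not expressible in this model.
			{Name: "kube_admin_config", Sensitive: false},
		},
		BlockTypes: map[string]*Block{
			"service_principal": {Attributes: []Attribute{
				{Name: "client_secret", Sensitive: true},
			}},
		},
	}
	fmt.Println(len(root.Attributes), len(root.BlockTypes)) // prints: 1 1
}
```

This is why the SDK's per-field Sensitive flag inside a computed block has nowhere to go once the block is shimmed into a single typed attribute.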

All that being said, Sensitive has not historically been transitive, so use of a password downstream in another resource, for instance, would still be exposed in a plan. There is no way, as Terraform works today, to ensure full protection of sensitive values, and as such you should take care to protect your logs as well as your state and plan files.

@paultyng
Contributor

Linking this to a tracking issue for more directly surfacing Terraform core concepts in the SDK: #321

@subesokun

subesokun commented Mar 2, 2020

Any progress on this issue? If we, for example, change any attribute on our Kubernetes cluster (in our case via azurerm), then terraform plan leaks the whole kubecfg into the build logs. There is no way to filter it out manually, short of piping the whole STDOUT of terraform plan to /dev/null... but then we won't see what's happening in the planning stage, which is also bad. Please raise the priority of this issue.

@invidian

If anyone is interested in a workaround, I've implemented one in flexkube/libflexkube#66. My Terraform state structure there is rather complicated, so for simpler cases I'm sure it can be simplified, but basically I store all secret values in a separate computed field which has Sensitive: true set at the top level, so the user won't see any changes underneath (https://github.com/flexkube/libflexkube/blob/master/cmd/terraform-provider-flexkube/flexkube/resource_containers.go#L27).

Then, in user configuration, I use a different method when flattening the Go structure into a Terraform-state-compatible struct: https://github.com/flexkube/libflexkube/blob/master/cmd/terraform-provider-flexkube/flexkube/ssh.go#L16.

Finally, when calculating the diff, you need to either inject the secrets into the user configuration or just always copy the user configuration into the "globally" sensitive field.

While definitely a workaround, it makes it possible right now to give the user a nice UX without compromising security. It also works in all of the plan, apply, and destroy scenarios.
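The shape of this workaround can be sketched as follows. Since sensitivity is only tracked per whole attribute, grouping every secret under one attribute that is sensitive at the top level masks them all in plan output (simplified stand-in types, not the real SDK structs; the "sensitive_state" name is made up for illustration):

```go
package main

import "fmt"

// Minimal stand-in for a plan attribute (illustrative, not the SDK's type).
type Attr struct {
	Sensitive bool
	Value     string
}

// render mimics how a plan line is printed: whole attributes marked
// sensitive are masked; there is no way to mask only part of a value.
func render(name string, a Attr) string {
	if a.Sensitive {
		return fmt.Sprintf("%s = (sensitive value)", name)
	}
	return fmt.Sprintf("%s = %q", name, a.Value)
}

func main() {
	// The workaround: keep all secret material under one attribute that
	// is sensitive at the top level, so everything inside it is masked.
	attrs := []struct {
		name string
		attr Attr
	}{
		{"host", Attr{Value: "example.com"}},
		{"sensitive_state", Attr{Sensitive: true, Value: `{"password":"hunter2"}`}},
	}
	for _, a := range attrs {
		fmt.Println(render(a.name, a.attr))
	}
}
```

Running this prints the non-secret attribute in full and "(sensitive value)" for the grouped secrets, which is exactly the masking behavior the workaround relies on.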

@simongottschlag

Any updates here? This is a big issue since secrets are leaking.

@paultyng
Contributor

Per my comment above, this is something that has to happen in Terraform core first; you can track progress on upstream issues covering transitive sensitivity, like this one: hashicorp/terraform#16114

@hbuckle

hbuckle commented Mar 11, 2020

@paultyng - fair enough, but seeing as this is a fairly severe security issue I think we need a more definite response than "it will be fixed in core at some point".
Until that happens should the recommendation be that providers simply mark the whole block as sensitive?
Although the UX would be worse I think that's better than the security risk of printing secrets in logs.

@paultyng
Contributor

paultyng commented Mar 12, 2020

The definitive response is treat it all as sensitive.

New features may alleviate the concerns, but currently no amount of sensitive flagging will prevent downstream interpolation or variable usage that could expose secrets in your logs.

@Magicloud

Magicloud commented May 20, 2021

I think there should be a high-level design for this: not the provider or the plugin SDK deciding, or offering options, to make certain attributes sensitive, but a common way for the user to hide any part of the plan/apply output.

This is not just about security: the long cert/key lines also break some CI/CD tools.

@nimmyarjith

Is there any update on this? I keep seeing lots of related threads but no fix. Can anyone please share a solution?

bflad added a commit to hashicorp/terraform-docs-common that referenced this issue Jul 25, 2022
Reference: hashicorp/terraform-plugin-framework#214
Reference: hashicorp/terraform-plugin-sdk#819
Reference: hashicorp/terraform-plugin-sdk#201

As part of investigating some differences between sdk/v2 and framework, the individual sensitivity configuration offered by nested attributes in protocol version 6 is important to call out explicitly.
bflad added a commit to hashicorp/terraform-docs-common that referenced this issue Jul 26, 2022

Reference: hashicorp/terraform-plugin-framework#214
Reference: hashicorp/terraform-plugin-sdk#819
Reference: hashicorp/terraform-plugin-sdk#201

As part of investigating some differences between sdk/v2 and framework, the individual sensitivity configuration offered by nested attributes in protocol version 6 is important to call out explicitly.
@tjrobinson

I can confirm this issue still exists with Terraform 1.3.4 and hashicorp/azurerm 2.99.

@gabrielwallin

Has anyone got a good suggestion for working around/with the issue? Do you just accept that the password is exposed in the logs, and therefore protect the logs extra hard?

@tjrobinson

Has anyone got a good suggestion for working around/with the issue? Do you just accept that the password is exposed in the logs, and therefore protect the logs extra hard?

If you're using Azure DevOps Pipelines and you make sure the secret is set as a variable (it doesn't matter what it's called) in a variable group used by the pipeline then Azure DevOps will **** out the value in the logs.

@marcindulak

Has anyone got a good suggestion for working around/with the issue? Do you just accept that the password is exposed in the logs, and therefore protect the logs extra hard?

If you're using Azure DevOps Pipelines and you make sure the secret is set as a variable (it doesn't matter what it's called) in a variable group used by the pipeline then Azure DevOps will **** out the value in the logs.

The problem with printing nested sensitive values was (is still?) present also when using data sources hashicorp/terraform-provider-azurerm#4105 (comment). There the sensitive value is not known in advance and won't be redacted by CI providers.

@seamonster-wolf

@austinvalle Sorry you are the last person I have seen from the Hashicorp DevEx team to commit to this repository today. Just want to make sure this issue is not lost and remains visible because it remains an extreme security concern for logs and it has been over 3 years. Are there any updates from a team at Hashicorp that may plan to work on this? It seems some work started on this but was never mainlined. #819

@bflad
Contributor

bflad commented Mar 8, 2023

Hi folks 👋 This issue appears to have collected a few threads around value sensitivity. Some are related to the terraform-plugin-sdk Go module that provider developers can use to write their providers, and some are related to broader feature requests that would affect provider SDKs, Terraform core, and the protocol between them.

The original description of this issue is that, when declaring a terraform-plugin-sdk based schema, the abstractions allow creating a technically invalid schema that does not mark nested attributes as sensitive, without raising any errors for provider developers about the errant situation. The reasons why this is not valid were described previously: #201 (comment)

In this example, the top level Sensitive: true has no effect:

&schema.Schema{
  // ... other fields ...
  Sensitive: true,
  Elem: &schema.Resource{ // This typically designates a block
    Schema: map[string]*schema.Schema{
      "nested_attr": {
        // ... other fields ...
        // no Sensitive: true defined
      },
    },
  },
}

The current fix for provider developers in this case is to instead mark all nested attributes individually as Sensitive: true, e.g.

&schema.Schema{
  // ... other fields ...
  Sensitive: true,
  Elem: &schema.Resource{ // This typically designates a block
    Schema: map[string]*schema.Schema{
      "nested_attr": {
        // ... other fields ...
        Sensitive: true, // now enabled
      },
    },
  },
}

The change proposed in #819 is one possible solution, where terraform-plugin-sdk would try to approximate the developer's likely intention rather than strictly following what is a valid schema. That change is, however, missing any testing to verify it would not cause regressions in existing providers. Incorrectly marking attributes as sensitive can cause breaking changes in practitioner configurations on Terraform 0.14+, where outputs must be marked as sensitive when a sensitive value is received or an error is raised. Either way, adjusting the currently ignored behavior to become supported without an opt-in would likely require a major version release of this Go module to signal the practitioner-affecting change. There are no plans for another major release of this Go module at this time.

Provider developers also have the option of migrating to terraform-plugin-framework, which correctly surfaces the nuances between attributes, blocks, and the behavioral differences between the two concepts. One additional benefit of building on terraform-plugin-framework is that blocks can be migrated to nested attributes, which allows sensitivity to be declared directly on the attribute or individually on nested attributes, and gives practitioners the benefit of using expressions instead of dynamic blocks (if being configured).


In terms of provider logging showing sensitive values, there are two parts to this.

If the value is known to be sensitive and the provider is using terraform-plugin-log for logging, there are various filtering capabilities available. Other logging libraries will require their own solutions to be implemented in the provider code, but those are outside the scope of what the maintainers here can assist with.

If the sensitive value is coming from another resource or is explicitly marked as sensitive in the Terraform configuration (e.g. via the sensitive() function), providers today have no visibility into that sensitivity -- they only get passed the value. Supporting that ability within providers would require changes to Terraform core, the protocol, and likely the provider SDKs. I'm not aware of an existing Terraform core issue that tracks "support value sensitivity across the plugin protocol", so it might be best to create an issue in the core repository with use cases highlighting where those sensitive values are exposed today.
