AzureRM prints secrets during plan phase in plain text #5083

Closed

hajdukd opened this issue Dec 5, 2019 · 6 comments
Labels
bug · service/kubernetes-cluster · upstream/terraform (this issue is blocked on an upstream issue within Terraform: Terraform Core/CLI, the Plugin SDK, etc.)

Comments

@hajdukd

hajdukd commented Dec 5, 2019

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

Terraform (and AzureRM Provider) Version

version = "=1.36.1"

Affected Resource(s)

azurerm_kubernetes_cluster

Terraform Configuration Files

N/A; any configuration that creates an azurerm_kubernetes_cluster reproduces this (a minimal sketch follows).
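For illustration only, a minimal configuration along the following lines is enough to hit the problem. This is a sketch rather than the reporter's actual configuration: all names and values are placeholders, and the block layout follows the provider 1.x schema referenced in this report.

# Hypothetical minimal reproduction - names and values are placeholders.
provider "azurerm" {
  version = "=1.36.1"
}

resource "azurerm_resource_group" "example" {
  name     = "example-rg"
  location = "West Europe"
}

resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "exampleaks"

  # 1.x-style node pool block
  agent_pool_profile {
    name    = "default"
    count   = 1
    vm_size = "Standard_D2_v2"
  }

  linux_profile {
    admin_username = "azureuser"

    ssh_key {
      key_data = file("~/.ssh/id_rsa.pub")
    }
  }

  service_principal {
    client_id     = var.client_id
    client_secret = var.client_secret
  }
}

Once a cluster like this exists, any plan that contains a change to it renders the exported kube_config attributes in the diff.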

Debug Output

N/A

Panic Output

N/A

Expected Behavior

All sensitive data should be hidden during the plan step, such as:

  • client_certificate
  • client_key
  • cluster_ca_certificate
  • password
  • agent_pool_profile.linux_profile.ssh_key.key_data

It works correctly for "kube_config_raw" (listed as (sensitive value)).

Actual Behavior

All sensitive data is printed in plain text.

Steps to Reproduce

terraform plan

Important Factoids

N/A

References

N/A

@brennerm
Contributor

brennerm commented Dec 5, 2019

@hajdukd This issue comes from a bug in the Terraform Plugin SDK. For this reason it is currently not possible to mask parts of nested blocks. One option would be to mark the whole kube_config block as sensitive. But to me this sounds like a workaround for the actual problem.

Additionally attributes like client_certificate, cluster_ca_certificate and agent_pool_profile.linux_profile.ssh_key.key_data do not need to be sensitive imo. These are public certificates that shouldn't lead to any damage when being exposed.

katbyte added the bug label Dec 8, 2019
@rahmancloud

A quick workaround is to omit the kube_config sensitive information using egrep:

terraform plan | egrep -v 'client_key|password|client_certificate|cluster_ca_certificate'

@lodejard

lodejard commented Jan 23, 2020

One option would be to mark the whole kube_config block as sensitive. But to me this sounds like a workaround for the actual problem.

Additionally attributes like client_certificate, cluster_ca_certificate and agent_pool_profile.linux_profile.ssh_key.key_data do not need to be sensitive imo. These are public certificates that shouldn't lead to any damage when being exposed.

@brennerm In the short term could we mark the whole kube_admin_config as sensitive, until terraform gains the ability to mask parts of nested blocks?

The parts that are needed can still be referenced normally, passed around as outputs, and fetched via terraform_remote_state, so it's no inconvenience there; but having admin credentials show up in build logs is genuinely catastrophic, especially because the leak appears when the plan contains an AKS update, not when the cluster is created or unchanged. It's very easy to start using this resource as it stands without realising you're leaking admin credentials until later.

Fortunately we noticed this on clusters which don't have production workloads; otherwise it would have been a major incident. As it is, we still need to rotate all of the credentials.

In a nutshell, I can't emphasize enough how strongly the possibility of leaking cluster-admin credentials should outweigh the inconvenience of redacting a few other public properties until the related terraform bug is fixed.
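(Illustrative aside, not code from this thread: the consumption pattern described above would still work if the whole block were redacted in plan output. A rough sketch follows, assuming the placeholder resource address azurerm_kubernetes_cluster.example from the example configuration earlier and placeholder backend settings.)

# Export the admin kubeconfig block; marking the output sensitive redacts it
# from CLI output while keeping the value available in state.
output "kube_admin_config" {
  value     = azurerm_kubernetes_cluster.example.kube_admin_config
  sensitive = true
}

# A downstream configuration can still read it via terraform_remote_state
# (all backend settings below are placeholders).
data "terraform_remote_state" "aks" {
  backend = "azurerm"

  config = {
    resource_group_name  = "example-rg"
    storage_account_name = "examplestorage"
    container_name       = "tfstate"
    key                  = "aks.terraform.tfstate"
  }
}

# referenced elsewhere as data.terraform_remote_state.aks.outputs.kube_admin_config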

@alastairtree
Contributor

This also affects app_service and its site_credential.password property.

This issue is causing us to leak app service deployment credentials into our deployment logs (we use Octopus Deploy), which our infosec team does not consider safe storage for secrets. I know Octopus Deploy had to publish a CVE when they had a similar bug last year.

As this is an important security issue, please fix it ASAP.

@tombuildsstuff
Contributor

👋

Since this issue needs to be fixed in the Terraform Plugin SDK, rather than tracking it in multiple places I'm going to close this issue in favour of the upstream one. Once that's been fixed we'll update the version of the Plugin SDK being used and this should get resolved - as such, please subscribe to the upstream issue for updates.

Thanks!

tombuildsstuff added the upstream/terraform label Feb 13, 2020
@ghost

ghost commented Mar 28, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!

ghost locked and limited conversation to collaborators Mar 28, 2020