Allow defining triggers that are sensitive information #38

Closed
reegnz opened this issue Feb 7, 2020 · 21 comments

Comments

@reegnz

reegnz commented Feb 7, 2020

Terraform Version

Terraform v0.12.20

  • provider.null v2.1.2

Affected Resource(s)

  • null_resource

Terraform Configuration Files

variable "mysecret" {
  type = string
}

resource "null_resource" "example" {
  triggers = {
    secret = var.mysecret
  }

  provisioner "local-exec" {
    command = "echo Create"
    environment = {
      SECRET = self.triggers.secret
    }
  }
  
  provisioner "local-exec" {
    command = "echo Destroy"
    environment = {
      SECRET = self.triggers.secret
    }
  }
}

Debug Output

❯ terraform apply
var.mysecret
  Enter a value: supersecret


An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # null_resource.example will be created
  + resource "null_resource" "example" {
      + id       = (known after apply)
      + triggers = {
          + "secret" = "supersecret"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions in workspace "play"?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: no

Apply cancelled.

Expected Behavior

A way to provide sensitive values as triggers, so that the plan only prints (sensitive) and not the actual value of a sensitive trigger.

Actual Behavior

No way to define sensitive triggers.

Steps to Reproduce

  1. terraform apply

Important Factoids

The null_resource should also take a sensitive_triggers map whose values are redacted in the plan output.

Other providers, like the local provider, use a similar approach:
https://www.terraform.io/docs/providers/local/r/file.html
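
For illustration, a rough sketch of that interface: the local_file resource takes a sensitive_content argument whose value is hidden in plan output (the variable name is taken from the example above).

resource "local_file" "example" {
  filename          = "${path.module}/secret.txt"
  # behaves like content, but the value is redacted in plan output
  sensitive_content = var.mysecret
}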

@jferris

jferris commented Sep 2, 2020

I've been using sha1 in triggers to work around this issue.

  triggers = {
    secret = sha1(var.mysecret)
  }
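
In context, a rough sketch of that workaround (note that provisioners then only see the hash, so this suits change detection rather than passing the secret itself):

variable "mysecret" {
  type = string
}

resource "null_resource" "example" {
  triggers = {
    # only the hash ends up in state and in plan output;
    # provisioners can no longer recover the original value from self.triggers
    secret = sha1(var.mysecret)
  }
}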

@reegnz
Author

reegnz commented Sep 4, 2020

@jferris That's a usable workaround, thanks for the tip.

Still, it would be nice to have a built-in way of providing sensitive values, just as we have with https://registry.terraform.io/providers/hashicorp/local/latest/docs/resources/file

@keiranmraine

The need for destroy-time triggers also means (for us) that we need to load files into variables. This has the same issue, in that we don't want the file contents dumped to logs.

@reegnz
Author

reegnz commented Sep 8, 2020

@keiranmraine as explained earlier by @jferris, for your use case the hash of the file would be a perfectly valid workaround. I'd even argue that in that case it's not even a workaround; hashing files to detect changes is a well-established industry practice.

@keiranmraine

Hi @reegnz, I should have included more context.

Unless I'm misunderstanding the use of sha1() in a Terraform context, that doesn't solve the problem for our use case, which stems from the removal of the ability to use variables in when = destroy.

We have destroy actions that require variables/files for cleanup on services that Terraform doesn't interact with natively. Now that variables aren't allowed in destroy provisioners, self.triggers.* is the only way to pass values into the local-exec.

In these instances we need the actual value, not to know it was different.

@reegnz
Author

reegnz commented Sep 9, 2020

Given that you already start with a file in the first place, couldn't you just set file hash plus file name as a trigger? Then the destroy provisioner can work with the file name directly instead of getting passed the contents.
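
Roughly like this, as a sketch (cleanup.json, cleanup-tool and the trigger names are placeholders):

resource "null_resource" "cleanup" {
  triggers = {
    # the hash forces replacement when the file content changes,
    # without ever putting the content itself into state
    config_hash = filesha256("${path.module}/cleanup.json")
    config_file = "${path.module}/cleanup.json"
  }

  provisioner "local-exec" {
    when    = destroy
    # the destroy provisioner reads the file by name instead of via its contents
    command = "cleanup-tool --config '${self.triggers.config_file}'"
  }
}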

@keiranmraine

I hadn't thought of that for files. Thanks!

@reegnz
Author

reegnz commented Sep 9, 2020

@keiranmraine I modified the ticket description based on your input to include a destroy provisioner, because the fact that destroy provisioners can only reference 'self' is a really good argument for why hashing is not a complete solution when you need the sensitive value while destroying the resource.

@reegnz
Author

reegnz commented Sep 9, 2020

One specific use case where the hashing solution wouldn't work is when an API key needs to be used to run the create/destroy provisioning, so the API key is sensitive and also needs to be a trigger.

On the other hand, that use case has a bunch of other issues as well, e.g. the key should be an environment variable for the entire process, so destroy isn't attempted with an old API key. :)

I'm still trying to construct a more realistic use case.

Anyway, I'll give it a go and prepare a PR for a sensitive_triggers field for Hacktoberfest.
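
For the record, here's a rough sketch of that environment-variable approach (API_KEY, the endpoint and the curl command are all placeholders):

variable "api_key" {
  type = string
}

resource "null_resource" "registration" {
  triggers = {
    # only a hash of the key is stored; rotating the key replaces the resource
    api_key_hash = sha1(var.api_key)
  }

  provisioner "local-exec" {
    when = destroy
    # API_KEY is expected to be exported in the shell running terraform,
    # so destroy uses the current key rather than one captured in state
    command = "curl -s -X DELETE -H \"Authorization: Bearer $API_KEY\" https://api.example.com/registration"
  }
}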

@dekimsey

dekimsey commented Oct 9, 2020

Right now this behavior is blocking our ability to migrate to terraform 0.13 with the error:

Error: Invalid reference from destroy provisioner
...
Destroy-time provisioners and their connection configurations may only
reference attributes of the related resource, via 'self', 'count.index', or
'each.key'.

Since we use some secrets to fill in the information in our connection{} block, it is by nature sensitive. Putting the values in the triggers allows us to upgrade but then dumps private keys into our diffs :/.

What I would like to see happen is something like this:

resource "null_resource" "register-pa-core-rds" {
  triggers = {
    rds_cluster_id         = module.pa-core-rds.rds_cluster_id
    host                   = aws_route53_record.bastion.fqdn
    user                   = var.provisioning_user
  }
  sensitive_triggers  = {  # Values are redacted in the diff
    private_key            = tls_private_key.provisioning.private_key_pem
    management_token       = random.management-token.result
  }

  provisioner "remote-exec" {
    environment = {
      MANAGEMENT_TOKEN = self.sensitive_triggers.management_token
    }

    inline = [
      "echo register-rds ${self.triggers.rds_cluster_id}",
    ]
  }

  provisioner "remote-exec" {
    environment = {
      CONSUL_MANAGEMENT_TOKEN = self.sensitive_triggers.management_token
    }
    inline = [
      "echo deregister-rds ${self.triggers.rds_cluster_id}",
    ]
    when = destroy
  }

  connection {
    type        = "ssh"
    user        = self.triggers.user
    host        = self.triggers.host
    private_key = self.sensitive_triggers.private_key
  }
}

@jgiannuzzi

jgiannuzzi commented Oct 29, 2020

FYI I opened PR #48 to implement this.

If anyone is interested, can you please give it a spin and report whether it works for you?

Build instructions

You need Go 1.15 installed to build the provider. It's also possible to use Docker if you don't want to install Go.

With Go 1.15 installed

git clone -b sensitive https://github.com/jgiannuzzi/terraform-provider-null
cd terraform-provider-null
make build

The provider can then be found in $GOPATH/bin.

With Docker installed

Linux

First spin up the Go 1.15 container:

docker run --rm -ti -v $PWD:/go/bin -w /root golang:1.15

Then within that container, do the following:

git clone -b sensitive https://github.com/jgiannuzzi/terraform-provider-null
cd terraform-provider-null
make build

You can then exit the container and the plugin will be in your current working directory.

macOS

First spin up the Go 1.15 container:

docker run --rm -ti -v $PWD:/go/bin/darwin_amd64 -w /root golang:1.15

Then within that container, do the following:

git clone -b sensitive https://github.com/jgiannuzzi/terraform-provider-null
cd terraform-provider-null
make build GOOS=darwin

You can then exit the container and the plugin will be in your current working directory.

Windows

First spin up the Go 1.15 container from a PowerShell terminal:

docker run --rm -ti -v ${pwd}:/go/bin/windows_amd64 -w /root golang:1.15

Then within that container, do the following:

git clone -b sensitive https://github.com/jgiannuzzi/terraform-provider-null
cd terraform-provider-null
make build GOOS=windows

You can then exit the container and the plugin will be in your current working directory.

Install instructions

Linux

mkdir -p ~/.local/share/terraform/plugins/registry.terraform.io/hashicorp/null/3.0.0/linux_amd64
mv terraform-provider-null ~/.local/share/terraform/plugins/registry.terraform.io/hashicorp/null/3.0.0/linux_amd64/

macOS

mkdir -p "~/Library/Application Support/io.terraform/plugins/registry.terraform.io/hashicorp/null/3.0.0/darwin_amd64"
mv terraform-provider-null "~/Library/Application Support/io.terraform/plugins/registry.terraform.io/hashicorp/null/3.0.0/darwin_amd64/"

Windows

New-Item "$env:APPDATA/HashiCorp/Terraform/plugins/registry.terraform.io/hashicorp/null/3.0.0/windows_amd64" -ItemType Directory -ea 0
Move-Item terraform-provider-null "$env:APPDATA/HashiCorp/Terraform/plugins/registry.terraform.io/hashicorp/null/3.0.0/windows_amd64/"

Usage instructions

Upgrade your project to use the custom version of the plugin:

terraform init -upgrade
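
If terraform init -upgrade doesn't pick up the locally installed build, it should also be possible (Terraform 0.13.2+) to point the CLI at an explicit filesystem mirror in ~/.terraformrc. A sketch for Linux, assuming the install path used above (adjust the path and username for your system):

provider_installation {
  # serve hashicorp/null from the local mirror populated above
  filesystem_mirror {
    path    = "/home/youruser/.local/share/terraform/plugins"
    include = ["hashicorp/null"]
  }
  # everything else still comes from the registry
  direct {
    exclude = ["hashicorp/null"]
  }
}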

@dekimsey

I'd like to try this, but I wasn't able to get TF to load the provider.


Error: Unsupported argument

  on rds.tf line 28, in resource "null_resource" "register-rds":
  28:   sensitive_triggers = {

An argument named "sensitive_triggers" is not expected here.

The given instructions seem to assume terraform is installed in $GOPATH (based on the plugins docs), but my terraform binary is installed by brew elsewhere. So I moved the plugin into ~/.terraform.d/plugins, which appears to be valid according to the docs.

$ which terraform-provider-null
/Users/dkimsey/go/bin/terraform-provider-null 
$ mv /Users/dkimsey/go/bin/terraform-provider-null ~/.terraform.d/plugins
$ ls ~/.terraform.d/plugins
terraform-provider-null
$ cat ~/.terraformrc
plugin_cache_dir   = "$HOME/.terraform.d/plugin-cache"

Then I tried a TF_LOG=trace run, but it doesn't look like terraform is seeing anything at all (it's unclear whether it would log anything if the plugin were found; I couldn't find the source for this log message in terraform's codebase):

2020/10/30 12:38:55 [TRACE] providercache.fillMetaCache: using cached result from previous scan of .terraform/plugins
2020/10/30 12:38:55 [DEBUG] checking for provisioner in "."
2020/10/30 12:38:55 [DEBUG] checking for provisioner in "/usr/local/bin"
2020/10/30 12:38:55 [DEBUG] checking for provisioner in "/Users/dkimsey/.terraform.d/plugins"
2020/10/30 12:38:55 [DEBUG] checking for provisioner in "/Users/dkimsey/.terraform.d/plugins/darwin_amd64"
2020/10/30 12:38:55 [INFO] Failed to read plugin lock file .terraform/plugins/darwin_amd64/lock.json: open .terraform/plugins/darwin_amd64/lock.json: no such file or directory
2020/10/30 12:38:55 [TRACE] Meta.Backend: backend *remote.Remote supports operations

I'm sure this is a PEBKAC issue, but I'm just not seeing it. My only dev terraform work was in my own provider, and I don't recall if I ever tested it outside of my local dev testing. So I don't have much experience with side-loading a provider.

@jgiannuzzi

Sorry @dekimsey, I should have also explained how to install the plugin 😅

You seem to be using macOS, so you should copy/move it to ~/Library/Application Support/io.terraform/plugins/registry.terraform.io/hashicorp/null/3.0.0/darwin_amd64/terraform-provider-null, and then run terraform init -upgrade in your workspace.

I'll update my post above with installation instructions for the other operating systems.

@dekimsey

dekimsey commented Oct 30, 2020

Okay, I just re-read your instructions now that it's been a few hours. I installed it incorrectly; I'll try again on Monday now that I've read them correctly :)

Thank you @jgiannuzzi!

@dekimsey

dekimsey commented Nov 2, 2020

Okay, well I was unable to get my live state to try these changes. I gave up. I ended up creating a simple single-resource test from scratch and that did work. shrug

And the change is pretty trivial, so I'm sure it'll Just Work ;). Thank you for doing this @jgiannuzzi. I think this was our only blocker for our 0.13 upgrade.

@neilswinton

neilswinton commented Nov 9, 2020

I'm really happy to see @jgiannuzzi's PR. When you have different people working on different platforms, it's helpful to avoid pathnames in state, since they can change between users and platforms. Normally this is no big deal, but when Let's Encrypt gives you 5 updates per month, you really can't afford to burn renewals just because a file path changed. We have destroy actions for Kubernetes CRDs that use process substitution instead of files to get around changing files, but the destroy actions were putting our kubeconfig files in plaintext. Here's an example destroy provisioner whose triggers will benefit:

  provisioner "local-exec" {
    when        = destroy
    interpreter = ["/bin/bash", "-c"]
    command = "kubectl --kubeconfig=<(cat <<<'${self.triggers.kubeconfig_data}') delete -f <(cat <<<'${self.triggers.certificate_request_yml}') "

zhimsel added a commit to grnhse/terraform-provider-null that referenced this issue Dec 11, 2020
These are functionally identical, but masks the inputs/outputs.

Closes hashicorp#38
zhimsel added a commit to grnhse/terraform-provider-null that referenced this issue Apr 1, 2021
These are functionally identical, but masks the inputs/outputs.

Closes hashicorp#38
@dekimsey

Just checking in, did this go anywhere? Seems to have died :/

@tmatilai

Maybe I'm missing something, but isn't it sufficient to mark the mysecret variable as sensitive in the original example?

@reegnz
Author

reegnz commented Jul 13, 2022

@tmatilai Marking variables as sensitive was not a Terraform feature when the ticket was opened; it was introduced in 0.14. But yes, it seems to solve the issue.
Another alternative is the sensitive function (which came with terraform 0.15): https://www.terraform.io/language/functions/sensitive

Checking with the following code:

variable "mysecret" {
  type = string
  sensitive = true
}

resource "null_resource" "example" {
  triggers = {
    secret = var.mysecret
  }

  provisioner "local-exec" {
    command = "echo Create"
    environment = {
      SECRET = self.triggers.secret
    }
  }
  
  provisioner "local-exec" {
    command = "echo Destroy"
    environment = {
      SECRET = self.triggers.secret
    }
  }
}

The cli output of a plan looks like this:

❯ terraform plan
var.mysecret
  Enter a value: supersecret


Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # null_resource.example will be created
  + resource "null_resource" "example" {
      + id       = (known after apply)
      + triggers = {
          + "secret" = (sensitive)
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
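
For completeness, the sensitive() alternative (Terraform 0.15+) would be just a one-line change at the trigger definition, roughly:

  triggers = {
    # sensitive() hides the value in plan output even if the variable itself isn't marked sensitive
    secret = sensitive(var.mysecret)
  }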

Given that both of the above solve the issue in a straightforward way, I'm going to close the ticket, since it can now be solved with a Terraform-native feature without any modification to the provider.

@reegnz reegnz closed this as completed Jul 13, 2022
@vl-shopback

no more sensitive_triggers again?


I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 23, 2024