Enhance vault_generic_secret for more general API calls #244

Closed
jberkenbilt wants to merge 2 commits from the flexible-generic-secret branch

Conversation

Contributor

@jberkenbilt commented Nov 20, 2018

NOTE: the description here has been superseded by later comments.

I'm opening this pull request as a starting point for discussion. It adds some functionality to vault_generic_secret that makes it much more broadly usable for vault configuration control. I'm interested to see what you think. This is my first time in the code, so every detail of this is just my first stab at it.

In version 1.3.1 of the terraform vault provider, vault_generic_secret
has limitations that make it hard to use for the following two cases:

  • Calling a generic API where a read on the resource returns more keys
    than a write, as is common for many configuration endpoints where
    only a subset of keys has to be specified
  • Writing to an API where the write operation returns additional
    information, such as when using write to create a new resource and
    needing to retrieve its ID.

This change adds three new fields:

  • ignore_absent_fields: when specified, data_json is populated by
    copying the originally supplied data_json and overwriting individual
    fields with values returned by the read operation
  • write_data_json: a computed value containing the data returned by
    the write operation, if any
  • read_data_json: a computed value containing the data returned by the
    read operation; identical to data_json if ignore_absent_fields is
    false; otherwise, contains what the read actually returned

Here is an example of using this. You can start a fresh vault server in dev mode and apply this terraform project against it. This uses vault_generic_secret to write a subset of fields to auth/userpass/users/u1, including an initial dummy password, without caring that the read returns a bunch of other fields and does not include password, and it writes identity/lookup/entity and pulls out the resulting lookup information. You can't get this from a vault_generic_secret data source because it's a write operation, not a read operation, that looks up the information.

provider "vault" {
  # Prevent this from accidentally being applied to a production vault
  address = "http://127.0.0.1:8200"
}

resource "vault_policy" "p1" {
  name = "p1"

  policy = <<EOF
path "secret/data/p1" {
  capabilities = ["read"]
}
EOF
}

resource "vault_auth_backend" "userpass" {
  type = "userpass"
  path = "userpass" # optional
}

resource "vault_generic_secret" "test1" {
  path = "secret/potato"

  data_json = <<EOF
{
  "value": "salad"
}
EOF
}

resource "vault_generic_secret" "u1" {
  depends_on           = ["vault_auth_backend.userpass"]
  path                 = "auth/userpass/users/u1"
  ignore_absent_fields = true

  data_json = <<EOF
{
  "policies": ["p1"],
  "password": "something"
}
EOF
}

resource "vault_generic_secret" "u1_token" {
  depends_on   = ["vault_generic_secret.u1"]
  path         = "auth/userpass/login/u1"
  disable_read = true

  data_json = <<EOF
{
  "password": "something"
}
EOF
}

output "up_accessor" {
  value = "${vault_auth_backend.userpass.accessor}"
}

output "value" {
  value = "${vault_generic_secret.u1.data_json}"
}

output "read_u1" {
  value = "${vault_generic_secret.u1.read_data_json}"
}

output "write_u1" {
  value = "${vault_generic_secret.u1.write_data_json}"
}

resource "vault_generic_secret" "u1_entity" {
  depends_on           = ["vault_generic_secret.u1_token"]
  disable_read         = true
  path                 = "identity/lookup/entity"
  ignore_absent_fields = true

  data_json = <<EOF
{
  "alias_name": "u1",
  "alias_mount_accessor": "${vault_auth_backend.userpass.accessor}"
}
EOF
}

output "u1_id" {
  value = "${vault_generic_secret.u1_entity.write_data_json}"
}

@ghost added the size/S label Nov 20, 2018
@jberkenbilt
Contributor Author

I thought I'd also point out that this will be even more useful after we have jsondecode in terraform 0.12.

@jberkenbilt force-pushed the flexible-generic-secret branch 2 times, most recently from c953c8c to a1d2165 on November 21, 2018 at 19:05
@jberkenbilt
Contributor Author

Got tests to pass. I think I'm right to ignore write_data_json in the import test, but someone should scrutinize this. I won't attempt doc updates or anything until I hear whether this is even worth considering or whether a completely different solution would be better.

@jberkenbilt
Contributor Author

The new functionality is not exercised in any automated test at this time.

@jberkenbilt
Contributor Author

Hello again! How do I get someone to look at this? Sorry if I overlooked something. I notice that @tyrannosaurus-becks has reviewed and merged several recent pull requests. I don't want to nag...I just want to make sure this doesn't languish and that I am following correct procedures. Thanks. :-)

@tyrannosaurus-becks self-assigned this Dec 10, 2018
if err != nil {
	return fmt.Errorf("error marshaling JSON for %q: %s", path, err)
}
d.Set("write_data_json", string(jsonData))
Contributor

Writing the entire secret out gives me pause, though I'm undecided what I think of it yet.

Contributor Author

Yeah, it makes me a bit nervous too, actually. I wonder whether there should be an input parameter that specifically indicates which fields should be written.

if err != nil {
	return fmt.Errorf("data_json %#v syntax error: %s", d.Get("data_json"), err)
}
relevantData = suppliedData
Contributor

@tyrannosaurus-becks Dec 11, 2018

Is this line needed?

Contributor Author

Yes, because sometimes fields that are written are not returned by the read operation. Initializing relevantData with suppliedData ensures that those fields' absence from the read operation will not cause terraform to consider the resource to be out of date. In my example, the password field is an example of this. If you first write to auth/userpass/users/u1 with the password field and then read back auth/userpass/users/u1, you get back some fields you didn't write and you don't get back password. This causes password to be persisted in the state, but any user who is using the vault terraform provider must always be mindful of what's being written to state. The use case here would be to use a dummy password that is changed out of band. Let me know if this is clear or if I should clarify further.

	}
	d.Set("write_data_json", string(jsonData))
} else {
	d.Set("write_data_json", "null")
Contributor

Is this line needed?

Contributor Author

Some write operations don't return anything, and I wanted to make sure that a reference to write_data_json didn't fail in that case. Basically I was trying to follow the pattern I observed, where all things that can be read from the resource are explicitly initialized. Please let me know if I have misunderstood how this is supposed to work. I can probably produce an example that would error out if this line were absent. Since terraform doesn't have any way to ask whether a particular field is there, it's easier to work with if the same fields are always there but just have explicitly null values. Lazy evaluation of conditionals in 0.12 may make this less problematic.

Contributor

@tyrannosaurus-becks left a comment

@jberkenbilt thank you for working on this! I appreciate it!

I'm not certain what I think of this approach yet. I need to play more with the example you've given, comparing how it works today to how it would work with the code. I don't fully grok the use case and how this solves it, though your explanation is excellent.

@jberkenbilt
Contributor Author

@tyrannosaurus-becks Thanks for taking a look! What can I do to help you grok the use case? If you think this is worth pursuing, I can find some time to write tests that cover this better, including tests that would fail if some of the lines you had questions about were removed. Also, if you think it's better, I can change it so that instead of recording write_data_json, we require the caller to explicitly mention which fields they will be interested in querying.

In 0.12, terraform will have proper support for nested data structures, which makes it unnecessary to bother with json strings. Is there an intention to enhance the interface to store results in proper nested data structures instead of json strings? That would obviously impact this resource. I notice that the generic_secret data source already pulls out top-level keys whose values are strings and puts them in data. I guess in 0.12, the entire complex structure could be returned in data, and we could also have data, read_data, and write_data here.

I'll try to find some time to add tests, though I will have to familiarize myself with how the tests are set up. Let me know what you think about restricting which parts of the write response are stored, and if you can shed any light on whether interfaces will change for 0.12, that would be helpful. Thanks!

@cvbarros
Contributor

Adding my 2 cents to this discussion:
I believe provider resource design and API capabilities should not mirror each other. In other words, Terraform can be seen as a user interface and abstraction over the underlying API, which may require some translation and/or adapting to the general higher-level usage.

The overloading of vault_generic_secret instead of specific resources gives power to configure more and more in Vault "as code". However, there's a big tradeoff with that decision - the provider resources appear more and more as "property bags" and API wrappers, rather than a suitable abstraction in the Terraform "idiom".

I'd be more keen to see the vault_generic_secret resource evolve to just provision secret data on the KV/KV2 secret engines, with full support for delete, patching keys, etc., and to focus effort on implementing/closing the gap on missing resources and other configuration.

@jberkenbilt
Contributor Author

@cvbarros You make a very good point. This change is, admittedly, a workaround for places where the terraform provider is not yet rich enough to handle several cases. In my own work, I find myself facing similar dilemmas frequently. I usually fall on the side of forgoing the workaround in favor of the right solution, though sometimes the benefit of a simple solution that can be used "now" outweighs the potential downside, especially when the right solution is significantly harder or resources are not available soon.

The documentation already touts generic secret as usable for general API calls, but it doesn't really work, for the reasons that made me do this. I can think of three options:

  1. Create some kind of catch-all that implements the functionality I've put here without knowing anything about v2 secrets. It would be solely for making API calls for which there is no specific resource/data source and could be documented as such. Such a resource would closely resemble generic secret with this patch. I started down this path, but switched course when I realized that the functionality I wanted could be added very easily to this resource.
  2. We could write the resources or data sources that we need and submit those as pull requests (or whoever owns this provider could work to close some of the gaps). For our particular use cases, this would only go a small distance toward closing the gap as there's a lot of vault API that is not covered. Also I don't know that we would actually make the time for this.
  3. This change could be taken with full awareness that it is a workaround, and this fact could be documented more clearly. If, at some point, the need for the workaround goes away, the added functionality could be deprecated.

I would also point out that something like this would be potentially useful for working with third-party plugins or new vault features that are not yet in the terraform vault provider.

@cvbarros
Contributor

Indeed @jberkenbilt, it's not a simple dilemma, so I totally understand the pragmatic view. This provider is maintained by HashiCorp, but has been under low maintenance for quite some time. Thus, there are some rough edges caused by debt that, if tackled, can put the codebase in a better direction. The point about 3rd-party plugins is also a great one, which I hadn't considered.

vault_generic_secret being used for KV/KV2 and for the mentioned use can lead to other implications, such as described in #258. Should we treat input/output data from this resource as sensitive when dealing with such "generic api usage"?
In that regard, putting more responsibility into the resource makes it way harder to change and cope with all the use cases.
In addition, there's the consideration of the KV/KV2 secret engines - they deal with different APIs and have disparate features. Every "generic API usage" has to assume it is dealing with KV1, whereas for the secret KV store there's the flexibility to use both.

Due to backward compatibility, there's no easy solution I can suggest to address the previous points, but maybe these can be considered:

  • Create a new resource resource_vault_generic_data and data_vault_generic_data, for generalized read/write operations to Logical paths in Vault. The shape/design of this resource can be geared towards a property bag with k/v within a set that is always Computed, meaning the state/configuration may not always be in sync and Terraform will be fine.
  • Open feature requests for missing resources, take a stab at implementing them, or wait for a contributor to come forward - this option is viable when the project is maintained and releases are frequent, which I believe will be the case from now on. The more resources are written, the easier it is to extend them, and the project gains momentum from more contributors.
  • Create a new resource resource_vault_json_data that just reads/writes json to/from a path.

@jberkenbilt
Contributor Author

One more thought in favor of a resource_vault_json_data or similar method, an option we both mentioned: while I appreciate the desire to not have an escape hatch and to instead want a terraform abstraction around vault, the argument for this with vault is not as strong as with other providers, such as AWS. Most providers do not have an open-ended API like vault does. With AWS, GCP, and numerous other kinds of things, there is a fairly set API, and there would be no way to do what you want except by going through a specific API for that thing. Vault is different by design. With vault, it is specifically designed so that everything can be done through normal read and write operations to API endpoints using a uniform interface. This is actually central to vault's design because the whole way you configure permissions for operations is through ACLs that control access to their endpoints in much the same way they would provide access to secrets. As such, a generic catch-all method for talking to vault is actually supportive of vault's intended design and not "just" a workaround for the terraform vault provider's gaps in implementing a good abstraction. This isn't to say that the gap shouldn't be closed, but I think there's a case to be made that some kind of general get/post mechanism actually makes sense in this provider.
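
To make that concrete, here is a minimal sketch (the policy name and paths are only examples) showing that a Vault ACL authorizes a configuration endpoint with the same mechanism it uses for a secret:

resource "vault_policy" "configure_userpass" {
  name = "configure-userpass-example"

  policy = <<EOF
# Managing userpass users is granted the same way as reading a secret
path "auth/userpass/users/*" {
  capabilities = ["create", "read", "update", "delete", "list"]
}

path "secret/data/team/*" {
  capabilities = ["read"]
}
EOF
}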

@cvbarros
Contributor

This is a strong point, totally agree!
However, it would be nice to segregate this "generic interface" from the KV/KV2 secret engine - that's the gist of my comments :)

@jberkenbilt
Contributor Author

@cvbarros Okay, I'll take what I did, peel it out of generic secret, disentangle it from the v1/v2 stuff, etc., and open a different pull request. When that's done, I'll reference it from here and close this one. Thanks for all the engagement on this. I completely agree with your position.

@daveadams

I like where this is going. A "logical endpoint" Vault resource is sorely needed for encoding configuration outside of Vault itself.

@jberkenbilt
Contributor Author

I'm back at work starting this week and hope to have time to work on this soon. Thanks for all the comments.

@jberkenbilt force-pushed the flexible-generic-secret branch 2 times, most recently from c8980ca to dd2af36 on January 21, 2019 at 19:36
@ghost added size/L and removed size/S labels Jan 21, 2019
@jberkenbilt
Contributor Author

Hi everyone. I've pushed a new version of this up. This commit does not touch vault_generic_secret but instead adds a new resource called vault_generic_endpoint with the following behavior:

  • Allows writing to a generic endpoint. There is nothing in here about v2 secrets. It's just simple read and write calls
  • The full read data is returned in read_data_json, and data_json can be modified by ignore_absent_fields as in the original change. We could get rid of read_data_json entirely and just have people use the vault_generic_secret data source for that.
  • Write data is returned in two ways: write_data_json and write_data. The latter is a map of strings analogous to read_data vs. read_data_json in the generic secret data source. Hopefully the need for this goes away in terraform 0.12.
  • Write data is stored in state only for fields present in write_fields
  • In addition to disable_read, there is disable_delete

I haven't added any tests yet since I wanted to get a feel for whether this attempt is more on the mark.

As I write this, I think I should drop read_data_json since it might contain sensitive information. I'm going to push that up as a separate commit which we can drop if people think read_data_json should be kept.

@jberkenbilt
Contributor Author

jberkenbilt commented Jan 21, 2019

I removed read_data_json. I think it's better not to have it. I'll squash the commits if people agree.

Here's a new version of the terraform sample I included originally that works with the new resource.

provider "vault" {
  # Prevent this from accidentally being applied to a production vault
  address = "http://127.0.0.1:8200"
}

resource "vault_policy" "p1" {
  name = "p1"

  policy = <<EOF
path "secret/data/p1" {
  capabilities = ["read"]
}
EOF
}

resource "vault_auth_backend" "userpass" {
  type = "userpass"
  path = "userpass" # optional
}

resource "vault_generic_endpoint" "u1" {
  depends_on           = ["vault_auth_backend.userpass"]
  path                 = "auth/userpass/users/u1"
  ignore_absent_fields = true

  data_json = <<EOF
{
  "policies": ["p1"],
  "password": "something"
}
EOF
}

resource "vault_generic_endpoint" "u1_token" {
  depends_on   = ["vault_generic_endpoint.u1"]
  path         = "auth/userpass/login/u1"
  disable_read = true
  disable_delete = true

  data_json = <<EOF
{
  "password": "something"
}
EOF
}

output "up_accessor" {
  value = "${vault_auth_backend.userpass.accessor}"
}

output "u1_data_json" {
  value = "${vault_generic_endpoint.u1.data_json}"
}

output "write_u1" {
  value = "${vault_generic_endpoint.u1.write_data_json}"
}

resource "vault_generic_endpoint" "u1_entity" {
  depends_on           = ["vault_generic_endpoint.u1_token"]
  disable_read         = true
  disable_delete       = true
  path                 = "identity/lookup/entity"
  ignore_absent_fields = true
  write_fields         = ["id"]

  data_json = <<EOF
{
  "alias_name": "u1",
  "alias_mount_accessor": "${vault_auth_backend.userpass.accessor}"
}
EOF
}

# Terraform 0.12 will include a jsondecode function that would allow
# us to pull the actual ID out of write_data_json.
output "u1_id_json" {
  value = "${vault_generic_endpoint.u1_entity.write_data_json}"
}

output "u1_id" {
  value = "${vault_generic_endpoint.u1_entity.write_data["id"]}"
}
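
# A hypothetical sketch of what the u1_id output could look like once
# Terraform 0.12's jsondecode is available (this is 0.12 syntax and is
# not valid in 0.11; the output name is made up):
output "u1_id_decoded" {
  value = jsondecode(vault_generic_endpoint.u1_entity.write_data_json)["id"]
}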

@jberkenbilt
Contributor Author

I'm working on tests, and there are a few problems I need to fix.

@ghost added size/XL and removed size/L labels Jan 22, 2019
@jberkenbilt
Contributor Author

I fixed a minor problem and added an acceptance test, which now passes. This is ready for review.

@jberkenbilt
Contributor Author

I also squashed the commit that removes read_data_json. It's possible that it could contain sensitive data. I think it's very clear that it shouldn't be there. As coded now, there are no surprises about what's persisted in state. If ignore_absent_fields is true, only fields provided will be persisted for read, and only fields specified in write_fields will be persisted with write.

@jberkenbilt
Contributor Author

I see there are some conflicts here that I will resolve. Is there a chance we can move on this? I'd like to start using it if we agree on the approach, names, etc. I'll try to find some time in the next few days to resolve the merge conflicts.

@jberkenbilt
Contributor Author

Updated to resolve the merge conflict.

@jberkenbilt
Contributor Author

@cvbarros @daveadams @tyrannosaurus-becks Are any of you able to review or comment on my updated vault generic endpoint resource? Thanks.

@cvbarros
Contributor

cvbarros commented Feb 2, 2019

Since you changed to the generic_endpoint approach, the implementation LGTM!
Only non-blocking minor improvement suggestions IMO:

  • I'd strive for more test coverage in some other cases such as disable_read, disable_delete etc.
  • It would be great if the new resource PR also came with updated documentation on how to use it properly.

@jberkenbilt
Contributor Author

I can add more test cases and update documentation. When done, I will ask for final review. In my doc update, I will describe how to use this resource, update the generic secret resource to recommend use of this resource for generic endpoint access, and also mention in this resource that the generic secret data source is still applicable for just reading arbitrary endpoints. I should be able to get to this early next week. When I'm done, I'll ping again asking for final review and possible merge. Thanks!

@ghost added the documentation label Feb 2, 2019
@jberkenbilt
Contributor Author

I have pushed up documentation changes. When I update the tests, I will squash that with the original code change commit and indicate that I'm ready for review. If you'd like, you can get a jump on reviewing the doc changes.

This resource does general writes to vault endpoints. It is preferable
to using vault_generic_secret for this purpose for the following
reasons:

* It allows calling a generic API where a read on the resource returns
  more or different keys than a write, as is common for many
  configuration endpoints where only a subset of keys has to be
  specified
* It provides access to data returned by the write operation, such as
  when using write to create a new resource and needing to retrieve
  its ID
* It enables calling endpoints that can't be deleted
@jberkenbilt
Contributor Author

@cvbarros (cc @tyrannosaurus-becks) I've updated the docs and updated the tests. The tests were already testing positive and negative disable_read. Although there were both positive and negative disable_delete cases, all the disable_delete cases were for resources inside the userpass backend, so they weren't really getting exercised. I added cases to exercise that and also to verify that the test logic exercising it is correct. Yes, tests of tests. I also manually verified the correctness of the test cases by temporarily breaking stuff and making sure the tests failed. Looking through other test classes to find examples of how to do things, I think I can say this particular resource is tested at least as well as many of the other ones and well enough to catch the most obvious ways it could break.

Barring any further suggestions or comments, I think it's ready to go.

Once merged, how long does it take before a regular terraform init will grab this version? I ask because I have to decide whether to find a way to use a local build in the interim. Thanks!

@jberkenbilt
Contributor Author

jberkenbilt commented Feb 4, 2019

There's a small problem... when disable_read is not true, the plan is out of date. I'm going to make a fix for this and update the tests.

Actually, no, it's good. There was a typo in my terraform. :-/

It's working for my use case. Still ready to go. :-)

@jberkenbilt
Contributor Author

@cvbarros Do you know what the next step is for this?

@cvbarros
Contributor

Hey @jberkenbilt ! I'm just a community member - I provided feedback on your PR in order to help. For me it looks amazing and really helpful, so I'm also keen on having this available 🤞 !
I believe this has to be reviewed/merged by maintainers, that's the next step.

@jberkenbilt
Contributor Author

@cvbarros Okay, thanks! I'll just sit tight then. I appreciate your review and comments.

@tyrannosaurus-becks
Contributor

Hi @jberkenbilt! Just as an update, this PR has been stalled because we've been discussing what direction we want to take with the Terraform Vault Provider security-wise. Since this is somewhat of a departure from its code up to this point, it's not as much of a shoo-in to approve and merge as other PRs that have been brought in since this one was opened. This PR is a cleverly flexible approach, and I appreciate you submitting it - it has not been forgotten.

@jberkenbilt
Contributor Author

@tyrannosaurus-becks Thanks for the update. I really appreciate it. Would it help if I share a little more insight as to what we're doing and how this helps? Maybe this will convince you to accept it, or maybe it will help you come up with a better approach. Either way would be great for us. :-)

I'm happy to provide as much detail as you want, but very briefly, our approach is to use terraform to manage vault configuration but not secrets. For secrets, vault is authoritative, and our approach to configuration control is more along the lines of good backups and using versioned secrets engines where possible. Terraform is good for some of this, but not all of it. A few more details are below.

Our basic approach is to use a stand-alone tool (still to be written) that queries vault to get an inventory of everything that's there and ensures that whatever's there, short of the actual secrets themselves, is represented in terraform. Examples of things we would control in terraform are policies, roles, auth backends, and secret backends. The reason for this approach is that, in the case of vault, the stuff that's outside of configuration control is a great liability, so we want to make sure that everything is under configuration control and drift is quickly detected. Once we have done this, it's easier to ensure that what is under configuration control conforms to our policies.

We have developed a terraform generator that allows us to make our terraform data-driven. This makes it easier for us to enforce policy and separate the policy from the data in a way that reduces errors. We also use a two-stage authorization method where the first stage is to acquire a token in an environment-specific way (user login, service account login through kubernetes, IAM instance profile, app role, etc.) and then to use that token to get a role-specific token by creating a token against a role in the token backend. We want policies attached to stage-one tokens to do nothing other than grant access to create stage-two tokens. It's much easier to validate and enforce this kind of policy if you can guarantee that all policies are in terraform, and all terraform code that generates policies conforms to certain rules.
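
As a minimal sketch of the kind of stage-one policy I mean (the policy and role names here are made up), the only thing it grants is permission to create a stage-two token against a specific role in the token backend:

resource "vault_policy" "stage_one_example" {
  name = "stage-one-example"

  policy = <<EOF
# Stage-one tokens can do nothing except mint a stage-two token for this role
path "auth/token/create/stage-two-example-role" {
  capabilities = ["update"]
}
EOF
}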

This approach allows us to use terraform for what it's good at: detecting drift, managing state, and generally synchronizing the state of the system with the code that describes the system. However, some of the configuration endpoints have properties that make code such as the code in this pull request necessary; alternatively, we could write specific resources for the small handful of cases we have run into. Right now, we have a lot of terraform code that uses the vault_generic_secret resource and has to hard-code defaults to keep terraform from reporting drift. This is also not forward-compatible because if vault adds new fields to configuration endpoints, or defaults change either in vault itself or as a result of configuration changes we make at the mount level, it can cause a cascade of things to be reported as outdated. We can mitigate a lot of this with our terraform generator, though.
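
As an illustration of that workaround (the path and field names here are made up), with vault_generic_secret we end up spelling out every key the read returns, even ones we would otherwise leave at their server-side defaults:

resource "vault_generic_secret" "example_config" {
  path = "example-backend/config"

  data_json = <<EOF
{
  "setting_we_care_about": "custom-value",
  "default_we_must_repeat": "server-default",
  "another_default_we_must_repeat": "server-default"
}
EOF
}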

In spite of having authored this pull request, I also am not sure that it's really the right approach. There are a lot of ways in which terraform is not a great fit for controlling vault configuration, and the risk of persisting sensitive information in terraform state by mistake is pretty high. While this change makes it possible for us to use terraform to handle all the configuration cases we currently have, we may end up rolling our own solution for vault configuration and stepping away from the terraform vault provider entirely, or we may end up using it only for a few basic things, such as configuring policies and roles and mapping users and groups in the various auth backends to the policies that allow them to obtain second-stage tokens. So far, there are only a small handful of cases we can't handle with vault_generic_secret, and those are done manually and logged in a journal since we don't want to start using this code in case it is not accepted.

I am happy to provide more details about any of what I mentioned above. I'm planning on blogging about our data-driven terraform approach in the context of a post I'm planning to write about declarative and imperative systems. I also expect to blog about our two-stage authorization approach.

Thanks again for the update. I think it's always worth waiting for the right solution, and I would not want a solution to be accepted that would push people in a wrong strategic direction. I have often been in the position of not accepting changes to systems I own for this reason.

@cvbarros
Contributor

Great reasoning, @jberkenbilt! We also use terraform-vault in almost the same fashion as you do and for the same reasons. Just my 2c

@tyrannosaurus-becks
Contributor

Hi @jberkenbilt ! Thanks for your patience on this one.

I think this is a really good approach. The long pause was because we were discussing whether we'd like to do an approach like this, or maybe start generating resources using OpenAPI, which was recently introduced in Vault, or maybe we'd like to wait until further potential language features arrive in Terraform.

After all this, I don't see any reason not to move forward with this PR. Thanks to semantic versioning, if for some reason we decide to go the other way and not use this in the future, we can always deprecate it. Also, as you've pointed out, it does provide an avenue for people to use the Terraform Vault Provider even when a resource is not yet available, and I think that's really cool, especially since the vast majority of this provider's code is currently community-sourced.

I also like how you've explicitly noted in the docs that it's best to avoid placing secrets in config or Terraform state. Plus, both the code and the test coverage are superb.

The code has developed a conflict in the tremendous amount of time it took to decide, and I don't want to burden you with dealing with it. I'm going to move these changes to a native branch so I can add a merge commit easily, but don't worry, you'll still retain credit for your commits. I'll do a bit of testing and comment there if there are any further issues, though I don't anticipate any.

Thanks for sticking this one out!

@tyrannosaurus-becks
Contributor

Closing in favor of #374 .

@jberkenbilt deleted the flexible-generic-secret branch March 30, 2019 00:05