Multi-get for secrets? #272
Just to chime in here: while I think this is an interesting enhancement idea, we also need to ensure we can implement it in a way that prevents someone from causing horrendous performance problems by requesting 10k secrets at once. Maybe an arbitrary (or operator-tunable) hard limit would suffice.
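A guard like that is easy to sketch. The snippet below is purely hypothetical and not part of Vault: maxBatchSize and validateBatch are invented names standing in for an operator-tunable cap on how many secrets a single multi-get may request.

```go
package main

import (
	"errors"
	"fmt"
)

// maxBatchSize is a hypothetical operator-tunable limit; the name and
// the default value are assumptions, not anything Vault actually has.
var maxBatchSize = 64

var errBatchTooLarge = errors.New("batch exceeds configured limit")

// validateBatch rejects multi-get requests that ask for too many
// secrets at once, capping the worst-case load a single request can
// place on the server.
func validateBatch(paths []string) error {
	if len(paths) > maxBatchSize {
		return fmt.Errorf("%w: %d > %d", errBatchTooLarge, len(paths), maxBatchSize)
	}
	return nil
}

func main() {
	fmt.Println(validateBatch([]string{"secret/a", "secret/b"}))
	fmt.Println(validateBatch(make([]string, 100)) != nil)
}
```

Rejecting oversized batches up front keeps one misbehaving client from turning a convenience feature into a denial-of-service vector.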
This would be useful for applications that request multiple secret details.
Obtaining multiple secrets (the passwords for MySQL, Cassandra, and AWS, say) in one shot would help with batching and request synchronization, which is usually a difficult exercise for a client. I'm currently triggering multiple async requests and synchronizing them at the end. This is especially useful when Vault needs to perform work that involves remote services. I'm not sure how Vault handles concurrency internally, but parallel requests help reduce the total time the client has to wait for secrets.
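The "trigger multiple async requests, synchronize at the end" pattern described above can be sketched with goroutines and a WaitGroup. fetchSecret below is a stub standing in for a real Vault read, so this illustrates the client-side pattern only, not Vault's API:

```go
package main

import (
	"fmt"
	"sync"
)

// fetchSecret stands in for a real Vault read (e.g. an HTTP GET of
// /v1/secret/<path>); it is a stub so the sketch is self-contained.
func fetchSecret(path string) string {
	return "value-for-" + path
}

// fetchAll fires one goroutine per secret and joins them at the end,
// mirroring the "multiple async requests, synchronize once" pattern.
func fetchAll(paths []string) map[string]string {
	var (
		mu      sync.Mutex
		wg      sync.WaitGroup
		results = make(map[string]string, len(paths))
	)
	for _, p := range paths {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			v := fetchSecret(p) // the network calls run concurrently
			mu.Lock()
			results[p] = v
			mu.Unlock()
		}(p)
	}
	wg.Wait() // the single synchronization point
	return results
}

func main() {
	secrets := fetchAll([]string{"mysql/password", "cassandra/password", "aws/creds"})
	fmt.Println(len(secrets))
}
```

The total wait is roughly the slowest single fetch rather than the sum of all of them, which is the performance win the comment is after.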
All requests are handled concurrently!
Any update on this issue?
None currently.
It would indeed be useful to have this. Any larger application will usually require a number of secrets to run (database credentials, API keys, other credentials), so I think this is quite a common use case. Is this on the roadmap?
Not at this time.
Hi guys, I've ended up with something that allows us to request multiple secrets at one time, but it introduces a new custom HTTP method, MULTIPLE. I don't think that is a good way to do it; maybe you have some ideas about this, and we can implement them with the community?
Yes? |
I would instead consider extending the LIST method on /v1/secret/:path to also return values. That could be further extended with a filter option in the query params. If it's not planned for the roadmap, is it because the feature is not compelling? Should we be thinking about fetching multiple secrets in a different way? This is particularly painful in the Jenkins integration, because the LIST methods aren't even exposed there; you need to maintain a separate list and keep it in sync with the actual committed secrets.
Having LIST also return values would mean that every list command would then have to read every key it returns, which could result in huge load on the Vault server. There are a number of reasons we haven't supported this so far: for each get request you'd have to check the token (so that you'd decrement the use count if it's use-limited), run an ACL check, and handle that request separately. It may well be more efficient to just make multiple requests over an existing connection (e.g. using HTTP/2), especially if those requests are run in parallel; the OP's claim that it would be a performance improvement is merely a supposition and depends strongly on whether you're considering clock time or overall processing time within Vault.
This behavior could be controlled with a query param as well, like ?keys=true
That's a key insight into the complexities that seem to block this, thanks. I guess there is no other way to think about bulk secrets except to accept it for what it is. If I weren't using a Jenkinsfile I wouldn't feel that strongly about it given reasonable alternatives, so maybe an issue/PR on the plugin is a better way to go.
Doesn't that assume the OP meant Vault performance? My bootstrapping container making 100+ HTTP requests could be characterized as bad performance; my time to availability is now constrained by the round trip of each secret fetch.
It does seem like that plugin has some specific challenges that might best be solved by the plugin itself.
Parallelize them! Vault deals with parallel requests very well. In addition, sending serial requests over a single HTTP connection will net you huge performance gains (a 10-50x improvement is totally possible vs. opening a new connection per request).
Certainly, I don't want to belabor the point too much; I just wanted to point out that there are performance considerations on both sides. On its face, sending 1 request is better than 100, but opening n sockets and sending 100 requests is also not exactly a mammoth task. About serialization, though: do you mean pipelining, where I just blast out requests on a single socket without waiting on any responses between requests? Yes, this is a good suggestion if supported. Otherwise wouldn't we really be constraining ourselves to 100*RTT?
@jacob-salassi we wrote a small tool called vaultenv which fetches secrets from Vault in parallel and puts them into the environment as environment variables.
Basically you have two options. One is to use a single HTTP session but send requests serially over the same session (re-use), where you save the (significant) overhead of HTTP and TLS session establishment between them. The other is to use HTTP/2, which allows multiple streams to be opened within the same session, each carrying its own request, so you can parallelize natively.
@rkrzr that's cool! Feel free to send a PR to get it added to https://www.vaultproject.io/api/relatedtools.html ...you can do it right from that page via the "Edit this page" link at the bottom!
Thanks for pointing that out! I opened a PR here: #3945 |
@jefferai I did quite a bit of googling here; are there any examples anywhere of the Vault Go API calling Read() in a goroutine?
Sure. See https://github.com/hashicorp/vault/blob/master/command/approle_concurrency_integ_test.go for a bunch of concurrent calls that happen in goroutines. Should work fine.
Thanks, you are right. Totally my mistake :) The example solved it.
Closing this for now as there are no plans currently to implement this, in favor of e.g. HTTP pipelining and concurrency. |
Given that requesting secrets is now rate-limited, can this please be reopened? |
I do not see a way to GET multiple secrets with a single HTTP request. It would be a nice performance improvement to have, certainly if someone has to fetch 10s or 100s of secrets at a given time.