Support for AWS Dynamic Credentials #333
Do you have time to submit a PR to monstache for this? I noticed that the shared credentials provider you linked does not actually refresh after reading the file.
I might have time to work on this if we come up with a plan of attack. You're right, I might have read that provider wrong. Hmm, I'm not sure that matches our use case. What you're suggesting would use an AWS service to refresh the token, right? In our case, our service on Kubernetes has a sidecar for Vault. Vault assumes an AWS IAM role (which gives creds that last 1 hour), and these creds are written to the ~/.aws/credentials file. An hour later, Vault refreshes them and writes the file again. So the actual refreshing of credentials is already handled; we just need monstache to keep using the latest values (i.e. if that file changes, use the new creds), not to refresh the creds itself. If that makes sense.
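For reference, the file the Vault sidecar rewrites each hour would look something like the following; the profile name and values here are made up, and the aws_session_token line is what marks these as temporary STS credentials rather than static keys:

```ini
# ~/.aws/credentials -- rewritten by the Vault sidecar roughly every hour (illustrative values)
[dev-profile]
aws_access_key_id     = ASIAEXAMPLEACCESSKEY
aws_secret_access_key = exampleSecretAccessKeyExampleSecret
aws_session_token     = FwoGZXIvYXdzEXAMPLETOKEN...
```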
@kush-patel-hs can you please try the issue-333 branch and let me know if that helps? Switch to that branch and use a config along these lines:

```toml
[aws-connect]
# choose a strategy that looks in environment vars and then ~/.aws/credentials
strategy = 1
# set the profile to use
profile = "dev-profile"
# force expire the credentials every 5 minutes, forcing a re-read
force-expire = "5m"
# add AWS region
region = "us-east-1"
```
Hello @rwynn, sorry for the late reply, I'm on vacation! Your branch is headed in the right direction! A few things to iron out: with a fixed force-expire duration (say I set it to 5m and our system updates the creds at 6m), our creds will be wrong for 4m. I could set it to something very low like 15s, and I think that would work well for reading env vars, but it might be too IO-intensive for reading the file. We could look into using something from https://github.com/fsnotify to watch the file for changes, on top of the force expire. We could also split the env var reading and file reading into two strategies, so that with env vars we can refresh very frequently (10-15s) and with file reading we can watch for changes. Thanks for the fast turnaround! My teammate is continuing our tech evaluation while I'm on vacation.
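A minimal sketch of the fsnotify idea, watching the credentials file itself as suggested here (the function name and wiring are assumptions, not monstache code):

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/fsnotify/fsnotify"
)

// watchAndExpire marks creds stale whenever the credentials file changes,
// instead of (or in addition to) expiring them on a fixed interval.
func watchAndExpire(creds *credentials.Credentials, path string) error {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	if err := watcher.Add(path); err != nil {
		return err
	}
	go func() {
		for event := range watcher.Events {
			if event.Op&(fsnotify.Write|fsnotify.Create) != 0 {
				creds.Expire()
			}
		}
	}()
	return nil
}
```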
@kush-patel-hs updated the branch based on your feedback. Thanks.
@kush-patel-hs I've merged this code into the rel6 branch. The configuration now looks like this:

```toml
[aws-connect]
# choose a strategy that reads from ~/.aws/credentials
strategy = 1
# set the AWS credential profile to use if not `default`
profile = "dev-profile"
# set AWS region
region = "us-east-1"
# enable file watcher
watch-credentials = true
```
Small update: I'm working for the next 4 days, then offline for 7 days. Thanks for adding the file watcher @rwynn! We can switch to using it. If you open a PR to merge to master, I can review it for you.
Quick question: is there still a need for force-expire? I think we can have just watch-credentials. Also: we're pulling the rel6 docker image and it's giving us an error. Has it not been updated?
I initially tried simply putting the watch on the credentials file itself, but that approach had problems (for example, the watch is lost when the file is replaced via a rename, which is how many tools write files atomically, and it fails if the file doesn't exist yet at startup). Watching the parent directory (e.g. ~/.aws) did not have these problems. I assume that those using this feature would expect it to work in a wide variety of situations; the only requirement would be having ~/.aws present at monstache startup. The docker file has not been updated, as this has not been released yet.
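A sketch of that directory-level approach (again assumed wiring, not necessarily monstache's code): watch ~/.aws itself and only react to events that touch the credentials file, so the watch survives rename-style rewrites.

```go
package main

import (
	"path/filepath"

	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/fsnotify/fsnotify"
)

// watchCredsDir watches the directory containing the credentials file,
// which keeps working even if the file is removed and recreated.
func watchCredsDir(creds *credentials.Credentials, credsFile string) error {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	if err := watcher.Add(filepath.Dir(credsFile)); err != nil {
		return err
	}
	go func() {
		for event := range watcher.Events {
			// Ignore events for other files in ~/.aws (e.g. config).
			if filepath.Clean(event.Name) == filepath.Clean(credsFile) {
				creds.Expire()
			}
		}
	}()
	return nil
}
```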
A few things: since the watch is now on the whole ~/.aws directory, a single credential rotation could fire several events and invalidate the credentials multiple times in a row; hopefully that's not a problem. Furthermore, if we start this feature with just the one variable watch-credentials, we could always add force-expire back later if it turns out to be needed. We built your branch! We think the credentials are being fed correctly, but we're having different, unknown problems trying to communicate with ES from our staging k8s. We're going to try to figure that out today.
Disregard my last comment. After consulting with someone who knows more about our vault/k8s setup, I have learned that we do something a bit different from what I described.
@kush-patel-hs thanks for the feedback! I would agree that the single watch-credentials option should be enough. It shouldn't be a major issue if the credentials are invalidated 3 times, because the act of invalidation just sets a flag; the actual reloading (in this case, reading of the file) happens lazily before the next request is made. I've pushed out a new release with this feature included. Thanks for taking the time to report and test it out.
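The flag-then-lazy-reload behavior described above is visible in aws-sdk-go's credentials API; a small illustration (using the same imports as the sketches above, plus log):

```go
func demoLazyReload(profile string) {
	creds := credentials.NewSharedCredentials("", profile)

	// Expire is cheap: it only flags the credentials as stale.
	creds.Expire()
	creds.Expire() // repeated invalidations are harmless no-ops
	creds.Expire()

	// The file is actually (re-)read here, on the first use after expiry.
	if v, err := creds.Get(); err == nil {
		log.Printf("loaded access key id %s", v.AccessKeyID)
	}
}
```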
Thanks for your work on this! @rwynn
We're currently doing a spike and some research into potentially using Elasticsearch with Monstache. The tool is working great so far locally, and we're trying to figure out how to deploy it. We have self-hosted Mongo with basic auth, but we use an IAM role for Elasticsearch. AWS IAM role credentials last for 1 hour and then refresh, so we can't just launch monstache and let it run; at the moment we would have to somehow restart monstache every hour.
aws-sdk-go has support for reading from the ~/.aws/credentials file (https://github.com/aws/aws-sdk-go/blob/4f042170d30a74a7b1333268a83154c32347f990/aws/credentials/shared_credentials_provider.go#L29), and it can also expire these credentials and read the file again when they do expire.
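For context, here's roughly how those shared credentials can be combined with aws-sdk-go's v4 signer to sign Elasticsearch requests (a sketch, not monstache's actual code; the "es" service name and the region are assumptions):

```go
package main

import (
	"net/http"
	"time"

	"github.com/aws/aws-sdk-go/aws/credentials"
	v4 "github.com/aws/aws-sdk-go/aws/signer/v4"
)

// signESRequest signs an outgoing Elasticsearch request with credentials
// read from ~/.aws/credentials (empty filename means the default path).
func signESRequest(req *http.Request) error {
	creds := credentials.NewSharedCredentials("", "default")
	signer := v4.NewSigner(creds)
	_, err := signer.Sign(req, nil, "es", "us-east-1", time.Now())
	return err
}
```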
It would be great to have similar support in monstache: both reading from ~/.aws/credentials and dynamically expiring and re-reading it. We could potentially update the aws-connect configuration to fall back on ~/.aws/credentials if access-key/secret are missing, or add in a filename and profile.