provider/aws: Add aws_s3_bucket_object data source #6946
Conversation
```diff
@@ -113,6 +113,7 @@ func Provider() terraform.ResourceProvider {
 		DataSourcesMap: map[string]*schema.Resource{
 			"aws_ami":                dataSourceAwsAmi(),
 			"aws_availability_zones": dataSourceAwsAvailabilityZones(),
+			"aws_s3_object":          dataSourceAwsS3Object(),
```
Should we instead call this `aws_s3_bucket_object` so it matches the resource of the same name? So far we've been trying to make the data sources match the corresponding resources as much as possible (in terms of attributes too), so it's relatively easy to switch between them.
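To illustrate the parity being suggested, a minimal sketch assuming the `aws_s3_bucket_object` name wins out (bucket and key values here are hypothetical):

```hcl
# The resource and the data source share a type name and attribute names,
# so a config can switch from managing an object to merely reading it.
resource "aws_s3_bucket_object" "greeting" {
  bucket  = "my-example-bucket" # hypothetical
  key     = "greeting.txt"
  content = "Hello from Terraform"
}

data "aws_s3_bucket_object" "greeting" {
  bucket = "my-example-bucket"
  key    = "greeting.txt"
}
```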
Sure, that's not a problem at all - I can rename it.

Generally speaking though, I'd rather see the resource called `aws_s3_object`, as it just makes more sense when you pronounce it (IMO). 😃

I assume that consistency with past choices is probably easier & safer than deprecating them, though.
Aside from some nits, this LGTM. Thanks... I was just the other day wishing for this data source!
@radeksimko I too was thinking about this data source! 👍 Question: would it be too tough to get in KMS support, or possibly customer SSE support as well? It would make in-band use of encrypted S3 data so much easier (and open up the playing field for using S3 as an easy secret store for Terraform, even).
@vancluever KMS is supported - one of the test cases even covers that specific use case. It was fairly easy to implement KMS support as it works out of the box - the IAM user/role refreshing the data source just needs permission to decrypt via the object's KMS key.

Re SSE support - I decided not to support this in the initial implementation because we don't support that in the `aws_s3_bucket_object` resource either.
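To make the KMS point concrete, a hedged sketch of reading an SSE-KMS encrypted object - the bucket and key names are hypothetical, and the reading identity is assumed to have `kms:Decrypt` on the key used for the object:

```hcl
# Decryption of an SSE-KMS object happens server-side, so the data source
# needs no extra arguments - only IAM/KMS permissions on the caller's side.
data "aws_s3_bucket_object" "db_password" {
  bucket = "my-secrets-bucket" # hypothetical
  key    = "prod/db_password"  # hypothetical
}

output "db_password" {
  value = "${data.aws_s3_bucket_object.db_password.body}"
}
```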
@radeksimko ahh yeah, you are right. Very nice - might have to rebuild our fork in the next couple of days with this! Thanks a ton!
Force-pushed from 685565a to b0ca952:

- This is to allow easier testing of data sources which read data from resources created in the same scope

Force-pushed from b0ca952 to d4fe1b9.
@apparentlymart I addressed 2 of your valid comments + provided some explanation on the third. I also reran the acceptance tests, which all passed. Will you give it a final 👍 / 👎 ? 😉
I didn't have time to test again, but to my eyes it looks great! 👍
@radeksimko great, thanks!
Will there be support for interpolating the values of arguments?

There are 2 main use-cases I can think of, one being to use S3 objects as a generic key/value store (similar to `consul_keys`). Plaintext values per object should be easy to read and use out of the box; JSON/YAML and other formats might need some extra care.
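As a rough sketch of that key/value use-case, assuming a plaintext object (all names here are illustrative):

```hcl
# Treat an S3 object as a simple key/value lookup, similar to consul_keys.
data "aws_s3_bucket_object" "motd" {
  bucket = "my-config-bucket" # hypothetical
  key    = "motd.txt"         # hypothetical
}

# The plaintext body can be interpolated directly into other resources.
resource "aws_instance" "web" {
  ami           = "ami-12345678" # hypothetical
  instance_type = "t2.micro"
  user_data     = "${data.aws_s3_bucket_object.motd.body}"
}
```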
helper/resource - `PreventPostDestroyRefresh`

TL;DR: This is to reduce the number of TestCases from 3 to 2. It might be possible to reduce it down to 1, but I wanted to test the behaviour of the data source at the beginning of the lifecycle, not in the middle.

The S3 object + S3 bucket we're testing there are set up as part of the test case, and by default we run `refresh` after destroying all resources - probably to verify that all resources are gone. This is OK for data sources which read data outside of Terraform's control. The destroy operation however doesn't destroy data sources, which means that a `refresh` on `aws_s3_object` reading an object that was destroyed would fail.
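For illustration, a hedged sketch of how such a flag might be used in an acceptance test - this assumes the flag lands as a `PreventPostDestroyRefresh` field on `resource.TestStep`, uses the final `aws_s3_bucket_object` name, and invents the config and check values; `testAccPreCheck`/`testAccProviders` are the provider package's usual test helpers:

```go
package aws

import (
	"testing"

	"github.com/hashicorp/terraform/helper/resource"
)

// Hypothetical config: the bucket and object are created in the same scope
// that the data source reads from.
const testAccDataSourceS3ObjectConfig = `
resource "aws_s3_bucket" "b" {
  bucket = "tf-object-test-bucket"
}

resource "aws_s3_bucket_object" "o" {
  bucket  = "${aws_s3_bucket.b.bucket}"
  key     = "test-key"
  content = "Hello World"
}

data "aws_s3_bucket_object" "obj" {
  bucket = "${aws_s3_bucket.b.bucket}"
  key    = "${aws_s3_bucket_object.o.key}"
}
`

func TestAccDataSourceAwsS3BucketObject_basic(t *testing.T) {
	resource.Test(t, resource.TestCase{
		PreCheck:  func() { testAccPreCheck(t) },
		Providers: testAccProviders,
		Steps: []resource.TestStep{
			{
				Config: testAccDataSourceS3ObjectConfig,
				// Skip the implicit refresh normally run after destroy; the
				// data source would otherwise try to re-read an object that
				// no longer exists and fail the test.
				PreventPostDestroyRefresh: true,
				Check: resource.ComposeTestCheckFunc(
					// "Hello World" is 11 bytes long.
					resource.TestCheckResourceAttr(
						"data.aws_s3_bucket_object.obj", "content_length", "11"),
				),
			},
		},
	})
}
```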