Add ListBucketVersions and improve S3.deleteBucketContents #2747
Conversation
```diff
@@ -155,7 +157,7 @@ trait S3IntegrationSpec
       (put, delete, metaBefore, metaAfter)
     }

-    val (putResult, deleteResult, metaBefore, metaAfter) = result.futureValue
+    val (putResult, _, metaBefore, metaAfter) = result.futureValue
```
Minor thing, I noticed that this variable wasn't being used.
So the tests are currently failing due to the command for enabling versioning: it fails locally when running against Minio. I don't know why this is happening; if you run the test against a real S3 then it works. I am going to disable the test in Minio for now until I figure out how to fix it. EDIT: May have been due to a key collision, trying again.
Force-pushed from 699ba77 to 76bece7.
Okay, so it seems the Minio that is used in Travis doesn't really support versioning, so I had to disable the test.
Regarding this issue of not being able to create a versioned bucket in Minio to test properly, should I open an issue for it and annotate it as a comment in the code to fix later?
A note and an issue sounds good to me.
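For reference, the versioning toggle the integration test needs is S3's PutBucketVersioning call. Below is a minimal sketch using the AWS SDK v2 client; this is an assumption for illustration only (the connector itself talks to S3 over akka-http, and the bucket name is made up):

```scala
import software.amazon.awssdk.services.s3.S3Client
import software.amazon.awssdk.services.s3.model.{
  BucketVersioningStatus,
  PutBucketVersioningRequest,
  VersioningConfiguration
}

// Enable versioning on a bucket so subsequent puts create new versions
// instead of overwriting objects in place.
val s3Client: S3Client = S3Client.create()

s3Client.putBucketVersioning(
  PutBucketVersioningRequest
    .builder()
    .bucket("my-test-bucket") // assumed bucket name
    .versioningConfiguration(
      VersioningConfiguration
        .builder()
        .status(BucketVersioningStatus.ENABLED)
        .build()
    )
    .build()
)
```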
Force-pushed from 76bece7 to d81b2a4.
```diff
@@ -180,7 +213,7 @@ import scala.util.{Failure, Success, Try}
       .mapAsync(parallelism = 1)(entityForSuccess)
       .map {
         case (entity, headers) =>
-          Option((entity.dataBytes.mapMaterializedValue(_ => NotUsed), computeMetaData(headers, entity)))
+          Some((entity.dataBytes.mapMaterializedValue(_ => NotUsed), computeMetaData(headers, entity)))
```
Another minor thing I noticed: this can never be null since we are using the idiomatic Scala akka-http API, so it's safe to use Some here.
Also it is always handed a tuple regardless of the nullness of other things :)
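To illustrate the distinction being made here (a standalone sketch, not code from the PR): `Option.apply` null-checks its argument, `Some` does not, so `Some` is the precise choice when the value is known to be non-null.

```scala
// Option.apply guards against null; Some wraps whatever it is given.
val maybeNull: String = null

Option(maybeNull) // None: Option.apply maps null to None
Some(maybeNull)   // Some(null): no null check at all

// In the PR the wrapped value is a tuple built right at the call site,
// so it can never be null and Some expresses that guarantee directly.
val bytes = "payload"                   // stands in for entity.dataBytes
val meta = Map("Content-Length" -> "7") // stands in for computeMetaData(...)
Some((bytes, meta))
```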
@johanandren Done
LGTM, but I think it could be slightly simplified without the `fromMaterializer` operator.
```scala
)

Source
  .fromMaterializer { (mat, attr) =>
```
Why is the fromMaterializer/deferred materialization needed? We already have an implicit Materializer in scope, and the attributes don't look like they are used anywhere?
Personally I was just copying the style that is in this file, i.e. if you look at:
- https://github.com/akka/alpakka/blob/master/s3/src/main/scala/akka/stream/alpakka/s3/impl/S3Stream.scala#L239-L251 (this was the original before I started working on S3 alpakka)
- https://github.com/akka/alpakka/blob/master/s3/src/main/scala/akka/stream/alpakka/s3/impl/S3Stream.scala#L269-L282
- https://github.com/akka/alpakka/blob/master/s3/src/main/scala/akka/stream/alpakka/s3/impl/S3Stream.scala#L327-L340
- https://github.com/akka/alpakka/blob/master/s3/src/main/scala/akka/stream/alpakka/s3/impl/S3Stream.scala#L352-L364
- https://github.com/akka/alpakka/blob/master/s3/src/main/scala/akka/stream/alpakka/s3/impl/S3Stream.scala#L402-L414
You can see that this pattern is used throughout. I personally don't have a problem with simplifying this, but maybe it makes sense to do it in a future PR so it's consistent?
Ah, didn't notice. Sure, perhaps there is a non-obvious reason for wanting to defer that materialization that I don't know about, so let's go with what's already there.
@ennru Maybe you can comment on this; according to git blame, `Source.setup` was used before it got changed to `Source.fromMaterializer`?
`setup` was mainly a predecessor to `fromMaterializer`. In many places we do this to access stream attributes; not sure what the original reason was here.
This PR adds the listBucketVersions API and also improves S3.deleteBucketContents so that it also deletes all versioned objects (previously, if you had versioned objects, then S3.deleteBucketContents wouldn't properly clear the bucket, which meant that a subsequent S3.deleteBucket would fail). This also mirrors S3's official documentation on how to empty a bucket, see https://docs.aws.amazon.com/AmazonS3/latest/userguide/delete-bucket.html. There were also some minor documentation errors with listMultipartUploadAndCommonPrefixes which I fixed.
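A rough usage sketch of the APIs this PR touches (method names are taken from the PR title; element types and exact signatures are assumptions not shown in this excerpt, and the bucket name is made up):

```scala
import akka.actor.ActorSystem
import akka.stream.alpakka.s3.scaladsl.S3
import akka.stream.scaladsl.Sink

implicit val system: ActorSystem = ActorSystem()

val bucket = "my-bucket" // assumed example bucket

// New in this PR: list every version of every object in the bucket.
val allVersions = S3.listBucketVersions(bucket).runWith(Sink.seq)

// Improved in this PR: empty the bucket including versioned objects and
// delete markers, so that the deleteBucket call below no longer fails.
val emptied = S3.deleteBucketContents(bucket).runWith(Sink.ignore)

// Mirrors the AWS docs linked above: delete contents first, then the bucket.
val deleted = S3.deleteBucket(bucket)
```

The futures are shown flat for brevity; real code would sequence them (empty the bucket, then delete it), which is exactly the ordering the AWS documentation prescribes.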