
Commit

document copy operation with respect to directories. (#272)
markjschreiber authored Nov 6, 2023
1 parent 8436989 commit f9bee94
Showing 2 changed files with 13 additions and 5 deletions.
8 changes: 8 additions & 0 deletions README.md
@@ -329,6 +329,14 @@
we could test for file existence before deletion, but this would require an additional
operation. Because S3 only guarantees read-after-write consistency, it would be possible for a file to be created or
deleted between these two operations. Therefore, we currently always return `true`.
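The check-then-delete race described above can be sketched with hypothetical client code (not part of this library), run here against the local default file system for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DeleteRaceSketch {
    public static void main(String[] args) throws IOException {
        Path path = Files.createTempFile("demo", ".txt");

        // Racy pattern: another client could create or delete the object
        // between these two separate requests (a HEAD, then a DELETE).
        if (Files.exists(path)) {
            Files.delete(path);
        }

        // Single-request alternative: issue the delete unconditionally and
        // skip the extra existence check, which is the approach this
        // provider takes when it always reports true.
        boolean deleted = Files.deleteIfExists(path);
        System.out.println(deleted); // prints "false": already deleted above
    }
}
```

Note that the local file system reports `false` for the second delete, whereas this provider always reports `true` for the reasons given above.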

### Copies of a directory will also copy contents

Our implementation of `FileSystemProvider.copy` also copies the contents of a directory via batched copy operations. This differs
from some other implementations, such as `UnixFileSystemProvider`, where directory contents are not copied and
use of `walkFileTree` is suggested to perform deep copies. In S3 that approach could result in an explosion
of API calls, which would be expensive in time and possibly money. By performing batch copies we can greatly reduce
the number of calls.
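For contrast, the deep-copy approach suggested for providers like `UnixFileSystemProvider` can be sketched with `Files.walkFileTree`. This is a hypothetical example run on the local file system; the class and method names are illustrative, not part of this library:

```java
import java.io.IOException;
import java.nio.file.FileVisitResult;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.SimpleFileVisitor;
import java.nio.file.StandardCopyOption;
import java.nio.file.attribute.BasicFileAttributes;

public class DeepCopyExample {

    // Deep-copies src into dst by walking the tree, one copy call per
    // entry. On S3 each call would be a separate API request, which is
    // the cost the batched approach described above avoids.
    static void deepCopy(Path src, Path dst) throws IOException {
        Files.walkFileTree(src, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult preVisitDirectory(Path dir, BasicFileAttributes attrs) throws IOException {
                Files.createDirectories(dst.resolve(src.relativize(dir).toString()));
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
                Files.copy(file, dst.resolve(src.relativize(file).toString()),
                        StandardCopyOption.REPLACE_EXISTING);
                return FileVisitResult.CONTINUE;
            }
        });
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempDirectory("src");
        Path dst = Files.createTempDirectory("dst").resolve("copy");
        Files.createDirectories(src.resolve("sub"));
        Files.writeString(src.resolve("sub").resolve("a.txt"), "hello");

        deepCopy(src, dst);
        System.out.println(Files.readString(dst.resolve("sub").resolve("a.txt"))); // prints "hello"
    }
}
```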

## Building this library

The library uses the Gradle build system and targets Java 11 so that it can be used in many contexts. To build, you can simply run:
@@ -345,18 +345,18 @@ public void delete(Path path) throws IOException {
* specified by the {@link Files#copy(Path, Path, CopyOption[])} method
* except that both the source and target paths must be associated with
* this provider.
* <br>
 * Our implementation also copies the contents of a directory via batched copy operations. This differs
 * from some other implementations, such as {@code UnixFileSystemProvider}, where directory contents are not copied
 * and use of {@code walkFileTree} is suggested to perform deep copies. In S3 this could result in an explosion
 * of API calls, which would be expensive in time and possibly money.
*
* @param source the path to the file to copy
* @param target the path to the target file
* @param options options specifying how the copy should be done
*/
@Override
public void copy(Path source, Path target, CopyOption... options) throws IOException {
        //
        // TODO: source and target can belong to any file system (confirmed, see
        // https://github.com/awslabs/aws-java-nio-spi-for-s3/issues/135);
        // we cannot assume they point to S3 objects.
        //
try {
// If both paths point to the same object, this is a no-op
if (source.equals(target)) {
