
HADOOP-16759. Filesystem openFile() builder to take a FileStatus param #1761

Merged

Conversation

steveloughran
Contributor

  • Enhanced builder + FS spec
  • s3a FS to use this to skip HEAD on open
  • and to use version/etag when opening the file

works with S3AFileStatus FS and S3ALocatedFileStatus

Change-Id: If80f73137643fd50a969a92ad5794d0d09e3aee6

@steveloughran
Contributor Author

Tested: s3a Ireland. Not tested against the other stores to make sure they don't break (as they don't read the status, it's hard to see how they would). Could also add more failure tests (path mismatch, ...) and use s3a metrics to verify that the HEAD request doesn't happen.

Member

@liuml07 liuml07 left a comment


Looks good to me overall. I can give another round of review later. Thanks!

*
* If/when new attributes added to the builder, this class will be extended.
*/
public class OpenFileParameters {
Member


I'm wondering: does it need a builder itself, for easier construction and immutability?

OpenFileParameters parameters = new OpenFileParameters();
parameters.setMandatoryKeys(getMandatoryKeys());
parameters.setOptions(getOptions());
parameters.setBufferSize(getBufferSize());
parameters.setStatus(getStatus());

to

OpenFileParameters parameters = OpenFileParameters.builder()
      .mandatoryKeys(getMandatoryKeys())
      .options(getOptions())
      .bufferSize(getBufferSize())
      .status(getStatus())
      .build();

Contributor Author


That's just overkill IMO... it's just a struct we can pass around and expand in an implementation-side API.

Member


Yes, makes sense. This is totally fine.

I was thinking that if there were more parameters to support, and some of them were optional, a builder could be better in some cases.

Contributor Author


Noted. No real reason not to, I guess.

Contributor Author


Gone halfway with this: renamed set* to with* and returned the same object for ease of chaining.
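The halfway approach described here — a plain parameter object whose with* setters return `this` — can be sketched as follows. This is a stdlib-only model, not the actual Hadoop class: the field types are simplified stand-ins for Hadoop's `Configuration` and `FileStatus`.

```java
import java.util.Set;

// Simplified model of the pattern adopted in the patch: a mutable
// parameter "struct" whose with* setters return `this`, giving callers
// builder-style chaining without a separate Builder class.
class OpenFileParameters {
  private Set<String> mandatoryKeys;
  private String options;      // stand-in for Configuration
  private int bufferSize;
  private String status;       // stand-in for FileStatus

  OpenFileParameters withMandatoryKeys(Set<String> keys) {
    this.mandatoryKeys = keys;
    return this;
  }

  OpenFileParameters withOptions(String options) {
    this.options = options;
    return this;
  }

  OpenFileParameters withBufferSize(int bufferSize) {
    this.bufferSize = bufferSize;
    return this;
  }

  OpenFileParameters withStatus(String status) {
    this.status = status;
    return this;
  }

  int getBufferSize() { return bufferSize; }
  String getStatus() { return status; }
}
```

Callers then chain the setters in one expression, e.g. `new OpenFileParameters().withBufferSize(4096).withStatus(st)`, while the object itself remains a simple struct to pass around.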

getMandatoryKeys(),
getOptions(),
getBufferSize());
parameters);
Member


nit: merge with the previous line; it seems it would be no longer than 80 characters.

@@ -4366,15 +4378,38 @@ private void requireSelectSupport(final Path source) throws
InternalConstants.STANDARD_OPENFILE_KEYS,
"for " + path + " in non-select file I/O");
}
FileStatus status = parameters.getStatus();
Member


nit: this status could be named providedStatus or similar; that is clearer, as it's referred to multiple times below.

Contributor Author


will do

.build().get()) {
instream.read();
}
}
Member

@liuml07 liuml07 Dec 13, 2019


What about fs.openFile(testPath) without a file status? Will it simply fail because S3Guard has the wrong status? Do we need a test for that?

Contributor Author


You mean with the file version changed? It shouldn't notice the version issues without S3Guard; it will fail on read() after a number of retries when the S3Guard version doesn't match that in the store.

Member

@liuml07 liuml07 Dec 18, 2019


The two try-with tests are testing that with a good FileStatus passed, it's able to skip the S3Guard and hence no errors are reported regardless of change detection policy.

I was thinking it might be worth making sure that if we don't pass a file status, the forged file status is indeed used, so it will error out because of the bad etag. For example:

    try (FSDataInputStream instream = fs.openFile(testpath)
        .build().get()) {
      try {
        instream.read();
        // No exception only if we don't enforce change detection
        assertTrue(changeDetectionMode.equals(CHANGE_DETECT_MODE_NONE) ||
            changeDetectionMode.equals(CHANGE_DETECT_MODE_WARN));
      } catch (Exception ignored) {
        // Ignored.
      }
    }

Thinking about it again, I now guess this extra test is perhaps not required to demonstrate that withFileStatus is being honored.

Contributor Author


Done. Also added a contract test to verify that the status must be non-null.
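The non-null check being discussed can be sketched with a reduced stand-in for the stream builder. This is an illustration only, not the Hadoop source: the class name `InputStreamBuilderSketch` is hypothetical, and `Object` stands in for `FileStatus`; the `Objects.requireNonNull`-with-message style matches what the later commit message describes for easier NPE debugging.

```java
import java.util.Objects;

// Reduced stand-in for the input stream builder: withFileStatus rejects
// null eagerly, with a message, so the failure points at the caller
// rather than surfacing later as a bare NullPointerException.
class InputStreamBuilderSketch {
  private Object status;   // stand-in for FileStatus

  InputStreamBuilderSketch withFileStatus(Object status) {
    this.status = Objects.requireNonNull(status, "status parameter is null");
    return this;
  }

  Object getStatus() { return status; }
}
```

A contract test for this behaviour just asserts that `withFileStatus(null)` throws while a real status is stored and returned for chaining.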

final Optional<Configuration> options)
final Path file,
final Optional<Configuration> options,
final S3AFileStatus status)
Member


I'm not sure, but could status be Optional?

Contributor Author


sure

@steveloughran
Contributor Author

Thanks for the comments. I'm on vacation until January, so I will address them then.

@apache apache deleted a comment from hadoop-yetus Jan 8, 2020
@steveloughran steveloughran force-pushed the s3/HADOOP-16759-openfile-with-status branch from 2f7e666 to 217a0b1 on January 9, 2020 17:56
@steveloughran
Contributor Author

Updated with the recommended changes, plus some extra tests I could think of. Sorry, but I rebased the branch while I wasn't paying full attention, so things aren't going to match up properly. I'm going to do that again once I've done the auth-mode merge, as I want as much QE on that patch as I can get.

* Enhanced builder + FS spec
* s3a FS to use this to skip HEAD on open
* and to use version/etag when opening the file

works with S3AFileStatus FS and S3ALocatedFileStatus

Change-Id: If80f73137643fd50a969a92ad5794d0d09e3aee6
* ~builder API for openFileParameters
* S3AFS moves to Optional<S3AFileStatus> where appropriate
* Which includes the select() operation too
* extra test to verify pickup of invalid status when a valid one is not
  passed in
* also contract test to verify that null value is rejected

Change-Id: I391abb6030503fc6288c6494156b85391cb7c196
* approximate builder API for openFileParameters

* S3AFS moves to Optional<S3AFileStatus> where appropriate,
which includes the select() operation too. There's a common method
to do the extraction.

* Switch to objects.requireNonNull in stream builder / openFileParameters
  plus error text (NPE debugging...)

Extra tests
 * verify pickup of invalid status when none is supplied
 * rejection of a status declaring the source is a directory
 * after an object is deleted, the 404 isn't picked up until the read() call
   initiates the GET.
 * And there, if you use server-side versionID, *you get the file still*
 * +contract test to verify that withFileStatus(null) status value is rejected

Change-Id: I326f68538940f245e1af56d3b6055015fd3e1bfe
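The "common method to do the extraction" mentioned in the commit message (named `extractOrFetchSimpleFileStatus` in the checkstyle output below) can be modelled with a stdlib-only sketch: if a status was provided, use it and skip the HEAD; otherwise fall back to the lookup. `String` stands in for `S3AFileStatus`, and the `Supplier` stands in for the S3 HEAD request; neither is the actual Hadoop signature.

```java
import java.util.Optional;
import java.util.function.Supplier;

// Models the extract-or-fetch step: a provided FileStatus short-circuits
// the HEAD request; an empty Optional falls back to the (expensive) lookup.
final class StatusExtraction {
  private StatusExtraction() {}

  static String extractOrFetch(Optional<String> provided,
      Supplier<String> headRequest) {
    // orElseGet only invokes the supplier when no status was provided,
    // which is exactly the HEAD-skipping behaviour the patch is after.
    return provided.orElseGet(headRequest);
  }
}
```

The key design point is laziness: `orElseGet` takes a supplier, so the network call is never issued when the caller already has a status in hand.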
@steveloughran steveloughran force-pushed the s3/HADOOP-16759-openfile-with-status branch from 217a0b1 to 6a1a951 on January 10, 2020 11:29
@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Comment
+0 🆗 reexec 0m 35s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 3 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 1m 5s Maven dependency ordering for branch
+1 💚 mvninstall 18m 4s trunk passed
+1 💚 compile 16m 50s trunk passed
+1 💚 checkstyle 2m 41s trunk passed
+1 💚 mvnsite 2m 19s trunk passed
+1 💚 shadedclient 19m 9s branch has no errors when building and testing our client artifacts.
+1 💚 javadoc 2m 19s trunk passed
+0 🆗 spotbugs 1m 12s Used deprecated FindBugs config; considering switching to SpotBugs.
+1 💚 findbugs 3m 15s trunk passed
_ Patch Compile Tests _
+0 🆗 mvndep 0m 25s Maven dependency ordering for patch
+1 💚 mvninstall 1m 20s the patch passed
+1 💚 compile 15m 57s the patch passed
+1 💚 javac 15m 57s the patch passed
-0 ⚠️ checkstyle 2m 42s root: The patch generated 17 new + 290 unchanged - 2 fixed = 307 total (was 292)
+1 💚 mvnsite 2m 19s the patch passed
-1 ❌ whitespace 0m 0s The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
+1 💚 shadedclient 13m 3s patch has no errors when building and testing our client artifacts.
+1 💚 javadoc 2m 13s the patch passed
+1 💚 findbugs 3m 27s the patch passed
_ Other Tests _
-1 ❌ unit 8m 48s hadoop-common in the patch failed.
+1 💚 unit 1m 39s hadoop-aws in the patch passed.
+1 💚 asflicense 0m 52s The patch does not generate ASF License warnings.
119m 15s
Reason Tests
Failed junit tests hadoop.fs.TestHarFileSystem
Subsystem Report/Notes
Docker Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/3/artifact/out/Dockerfile
GITHUB PR #1761
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint
uname Linux 96402919e997 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 49df838
Default Java 1.8.0_232
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/3/artifact/out/diff-checkstyle-root.txt
whitespace https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/3/artifact/out/whitespace-eol.txt
unit https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/3/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/3/testReport/
Max. process+thread count 1665 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/3/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.11.1 https://yetus.apache.org

This message was automatically generated.

@apache apache deleted a comment from hadoop-yetus Jan 10, 2020
@steveloughran
Contributor Author

style

./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/AbstractFileSystem.java:34:import java.util.Set;:8: Unused import - java.util.Set. [UnusedImports]
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java:853:        new CompletableFuture<>(), () -> open(path, parameters.getBufferSize()));: Line is longer than 80 characters (found 81). [LineLength]
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/DelegateToFileSystem.java:269:   * {@link FileSystem#openFileWithOptions(Path, org.apache.hadoop.fs.impl.OpenFileParameters)}.: Line is longer than 80 characters (found 96). [LineLength]
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java:4615:          .withStatus(super.getStatus());  // explicit so as to avoid IDE warnings: Line is longer than 80 characters (found 82). [LineLength]
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/FutureDataInputStreamBuilderImpl.java:150:  public FutureDataInputStreamBuilder withFileStatus(FileStatus status) {:65: 'status' hides a field. [HiddenField]
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/FutureDataInputStreamBuilderImpl.java:155:  /**: First sentence should end with a period. [JavadocStyle]
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/OpenFileParameters.java:59:  public OpenFileParameters withMandatoryKeys(final Set<String> mandatoryKeys) {:65: 'mandatoryKeys' hides a field. [HiddenField]
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/OpenFileParameters.java:64:  public OpenFileParameters withOptions(final Configuration options) {:61: 'options' hides a field. [HiddenField]
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/OpenFileParameters.java:69:  public OpenFileParameters withBufferSize(final int bufferSize) {:54: 'bufferSize' hides a field. [HiddenField]
./hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/impl/OpenFileParameters.java:74:  public OpenFileParameters withStatus(final FileStatus status) {:57: 'status' hides a field. [HiddenField]
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:1004:    S3AFileStatus fileStatus = extractOrFetchSimpleFileStatus(path, providedStatus);: Line is longer than 80 characters (found 84). [LineLength]
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java:4322:    final S3AFileStatus fileStatus = extractOrFetchSimpleFileStatus(path, providedStatus);: Line is longer than 80 characters (found 90). [LineLength]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ARemoteFileChanged.java:472:        || changeDetectionSource.equals(CHANGE_DETECT_SOURCE_VERSION_ID));: 'method call' child has incorrect indentation level 8, expected level should be 10. [Indentation]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ARemoteFileChanged.java:485:       instream.read();: 'try' child has incorrect indentation level 7, expected level should be 6. [Indentation]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ARemoteFileChanged.java:520:            instream.read();: 'if' child has incorrect indentation level 12, expected level should be 8. [Indentation]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ARemoteFileChanged.java:521:          } else {: 'if rcurly' has incorrect indentation level 10, expected level should be 6. [Indentation]
./hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ARemoteFileChanged.java:1012:    S3AFileStatus newStatus = writeFile(testpath, dataset, dataset.length / 2, true);: Line is longer than 80 characters (found 85). [LineLength]

Address the checkstyle issues.

Add some tests of the API in ITestS3GuardOutOfBandOperations,
for auth and non-auth.

This includes the discovery (and fix!) of the fact that with the
specific s3guard retry logic of HADOOP-16490, we needed to set a
different retry option to get the tests to fail fast on deleted
file in auth mode. It has been like that for a few months, but
we've never noticed...though even in parallel runs it would have
reduced performance by using up a process for 2+ minutes.
Running in the IDE I initially thought my changes had broken something.

Also: ITestS3Select fails as trying to select a dir raises an FNFE,
just as open() always has. Because we skip looking for dir markers in
select or open, attempts to read a nonexistent file will fail faster
(though still add a 404 to the S3 cache)

Change-Id: I33ffc90ce470c590143cebacc6efd2d2849d2106
@steveloughran
Contributor Author

Latest iteration; tested against S3 Ireland.


@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Comment
+0 🆗 reexec 25m 17s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 5 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 1m 10s Maven dependency ordering for branch
+1 💚 mvninstall 18m 6s trunk passed
+1 💚 compile 16m 43s trunk passed
+1 💚 checkstyle 2m 44s trunk passed
+1 💚 mvnsite 2m 16s trunk passed
+1 💚 shadedclient 18m 52s branch has no errors when building and testing our client artifacts.
+1 💚 javadoc 2m 14s trunk passed
+0 🆗 spotbugs 1m 11s Used deprecated FindBugs config; considering switching to SpotBugs.
+1 💚 findbugs 3m 14s trunk passed
_ Patch Compile Tests _
+0 🆗 mvndep 0m 26s Maven dependency ordering for patch
+1 💚 mvninstall 1m 21s the patch passed
+1 💚 compile 15m 52s the patch passed
+1 💚 javac 15m 52s the patch passed
-0 ⚠️ checkstyle 2m 45s root: The patch generated 3 new + 302 unchanged - 2 fixed = 305 total (was 304)
+1 💚 mvnsite 2m 14s the patch passed
-1 ❌ whitespace 0m 0s The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
+1 💚 shadedclient 12m 39s patch has no errors when building and testing our client artifacts.
+1 💚 javadoc 2m 16s the patch passed
+1 💚 findbugs 3m 33s the patch passed
_ Other Tests _
-1 ❌ unit 8m 47s hadoop-common in the patch failed.
+1 💚 unit 1m 36s hadoop-aws in the patch passed.
+1 💚 asflicense 0m 54s The patch does not generate ASF License warnings.
143m 15s
Reason Tests
Failed junit tests hadoop.fs.TestHarFileSystem
Subsystem Report/Notes
Docker Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/4/artifact/out/Dockerfile
GITHUB PR #1761
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint
uname Linux f6a40df57974 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / 6a859d3
Default Java 1.8.0_232
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/4/artifact/out/diff-checkstyle-root.txt
whitespace https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/4/artifact/out/whitespace-eol.txt
unit https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/4/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/4/testReport/
Max. process+thread count 1476 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/4/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.11.1 https://yetus.apache.org

This message was automatically generated.

@steveloughran steveloughran force-pushed the s3/HADOOP-16759-openfile-with-status branch from b05fae7 to 76e3d46 on January 21, 2020 10:07
@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Comment
+0 🆗 reexec 1m 15s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 5 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 1m 12s Maven dependency ordering for branch
+1 💚 mvninstall 22m 3s trunk passed
+1 💚 compile 18m 27s trunk passed
+1 💚 checkstyle 2m 55s trunk passed
+1 💚 mvnsite 2m 5s trunk passed
+1 💚 shadedclient 20m 4s branch has no errors when building and testing our client artifacts.
+1 💚 javadoc 1m 57s trunk passed
+0 🆗 spotbugs 1m 7s Used deprecated FindBugs config; considering switching to SpotBugs.
+1 💚 findbugs 3m 9s trunk passed
_ Patch Compile Tests _
+0 🆗 mvndep 0m 21s Maven dependency ordering for patch
+1 💚 mvninstall 1m 19s the patch passed
+1 💚 compile 16m 58s the patch passed
+1 💚 javac 16m 58s the patch passed
-0 ⚠️ checkstyle 2m 48s root: The patch generated 3 new + 302 unchanged - 2 fixed = 305 total (was 304)
+1 💚 mvnsite 2m 7s the patch passed
-1 ❌ whitespace 0m 0s The patch has 3 line(s) that end in whitespace. Use git apply --whitespace=fix <<patch_file>>. Refer https://git-scm.com/docs/git-apply
+1 💚 shadedclient 14m 12s patch has no errors when building and testing our client artifacts.
+1 💚 javadoc 1m 59s the patch passed
+1 💚 findbugs 3m 35s the patch passed
_ Other Tests _
-1 ❌ unit 9m 22s hadoop-common in the patch failed.
+1 💚 unit 1m 25s hadoop-aws in the patch passed.
+1 💚 asflicense 0m 45s The patch does not generate ASF License warnings.
127m 51s
Reason Tests
Failed junit tests hadoop.fs.TestHarFileSystem
Subsystem Report/Notes
Docker Client=19.03.5 Server=19.03.5 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/5/artifact/out/Dockerfile
GITHUB PR #1761
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint
uname Linux efa71df44878 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality personality/hadoop.sh
git revision trunk / d887e49
Default Java 1.8.0_232
checkstyle https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/5/artifact/out/diff-checkstyle-root.txt
whitespace https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/5/artifact/out/whitespace-eol.txt
unit https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/5/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
Test Results https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/5/testReport/
Max. process+thread count 1720 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://builds.apache.org/job/hadoop-multibranch/job/PR-1761/5/console
versions git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1
Powered by Apache Yetus 0.11.1 https://yetus.apache.org

This message was automatically generated.

Member

@liuml07 liuml07 left a comment


+1

@liuml07 liuml07 merged commit 5e2ce37 into apache:trunk Jan 21, 2020
@liuml07 liuml07 added the fs/s3 changes related to hadoop-aws; submitter must declare test endpoint label Jan 21, 2020
RogPodge pushed a commit to RogPodge/hadoop that referenced this pull request Mar 25, 2020
HADOOP-16759. Filesystem openFile() builder to take a FileStatus param (apache#1761). Contributed by Steve Loughran

* Enhanced builder + FS spec
* s3a FS to use this to skip HEAD on open
* and to use version/etag when opening the file

works with S3AFileStatus FS and S3ALocatedFileStatus
deepakdamri pushed a commit to acceldata-io/hadoop that referenced this pull request Jan 21, 2025
HADOOP-16759. Filesystem openFile() builder to take a FileStatus param (apache#1761). Contributed by Steve Loughran

* Enhanced builder + FS spec
* s3a FS to use this to skip HEAD on open
* and to use version/etag when opening the file

works with S3AFileStatus FS and S3ALocatedFileStatus
Labels
fs/s3 changes related to hadoop-aws; submitter must declare test endpoint