
HADOOP-18184. S3A Prefetching unbuffer. #5832

Draft

wants to merge 2 commits into trunk from s3/pre/HADOOP-18184-unbuffer

Conversation

steveloughran
Contributor

@steveloughran steveloughran commented Jul 11, 2023

Making this a prerequisite for vectored IO as:

  • it helps me learn my way around the code
  • I propose that the caching block stream stops its prefetching once a vectored IO request has come in, maybe even frees up all existing blocks (see the sketch below).
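A minimal sketch of that proposal, assuming hypothetical block-manager hooks (stopPrefetching and releaseIdleBlocks are illustrative, not the real API):

    // On a vectored read, stop speculative prefetching and free idle blocks
    // before handing the ranges off to the underlying vectored-read path.
    @Override
    public void readVectored(List<? extends FileRange> ranges,
        IntFunction<ByteBuffer> allocate) throws IOException {
      blockManager.stopPrefetching();    // hypothetical: queue no new prefetches
      blockManager.releaseIdleBlocks();  // hypothetical: drop blocks not in use
      super.readVectored(ranges, allocate);
    }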

How was this patch tested?

S3 London. Slow; I want to get the latest changes to speed up the prefetch tests, then rebase onto them.

For code changes:

  • Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
  • Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
  • If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
  • If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?

@steveloughran steveloughran marked this pull request as draft July 11, 2023 18:27
@steveloughran
Contributor Author

based on #4298

@steveloughran steveloughran force-pushed the s3/pre/HADOOP-18184-unbuffer branch from 7d6ae13 to ffd3180 Compare July 12, 2023 19:56
@apache apache deleted a comment from hadoop-yetus Jul 12, 2023
@apache apache deleted a comment from hadoop-yetus Jul 12, 2023
@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 50s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 10 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 19m 48s Maven dependency ordering for branch
+1 💚 mvninstall 36m 23s trunk passed
+1 💚 compile 18m 42s trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 💚 compile 17m 10s trunk passed with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
+1 💚 checkstyle 4m 43s trunk passed
+1 💚 mvnsite 2m 29s trunk passed
+1 💚 javadoc 1m 46s trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 💚 javadoc 1m 30s trunk passed with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
+1 💚 spotbugs 3m 51s trunk passed
+1 💚 shadedclient 36m 44s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 29s Maven dependency ordering for patch
+1 💚 mvninstall 1m 25s the patch passed
+1 💚 compile 17m 50s the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 💚 javac 17m 50s the patch passed
+1 💚 compile 16m 56s the patch passed with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
+1 💚 javac 16m 56s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 4m 36s /results-checkstyle-root.txt root: The patch generated 10 new + 5 unchanged - 0 fixed = 15 total (was 5)
+1 💚 mvnsite 2m 26s the patch passed
+1 💚 javadoc 1m 42s the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
-1 ❌ javadoc 0m 42s /results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09.txt hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09 with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09 generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0)
+1 💚 spotbugs 4m 9s the patch passed
+1 💚 shadedclient 39m 7s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 18m 46s hadoop-common in the patch passed.
-1 ❌ unit 2m 55s /patch-unit-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch passed.
+1 💚 asflicense 0m 58s The patch does not generate ASF License warnings.
263m 2s
Reason Tests
Failed junit tests hadoop.fs.s3a.prefetch.TestS3ARemoteInputStream
hadoop.fs.s3a.prefetch.TestS3ACachingBlockManager
Subsystem Report/Notes
Docker ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/3/artifact/out/Dockerfile
GITHUB PR #5832
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux 0a48d23fb14f 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / ffd3180
Default Java Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/3/testReport/
Max. process+thread count 1250 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/3/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@steveloughran steveloughran force-pushed the s3/pre/HADOOP-18184-unbuffer branch from ffd3180 to f24f1fd Compare July 14, 2023 16:05
@steveloughran
Contributor Author

HADOOP-18184. S3A prefetch unbuffer

  • Lots of statistics collection, used in tests.
  • S3A prefetch tests all moved to the prefetch package
  • and split into caching-stream and large-file tests
  • the large-file and LRU tests are scale tests
  • and testRandomReadLargeFile uses a small block size to reduce read overhead
  • new hadoop-common org.apache.hadoop.test.Sizes class with predefined
    sizes (from azure; existing code not yet moved to it)

Overall, the prefetch reads of the large files are slow; while it's critical
to test multi-block files, we don't need to work on the landsat csv file.

Better: have one of the huge-file tests do this, with a small block size of 1 MB
to force lots of work.

@steveloughran
Contributor Author

Yes, this is a lot more than just unbuffer, but it's the first time I've really had the code in the IDE with me writing tests to use IOStats, context IOStats, waiting for tests to finish, etc.

I have more to do, which I will follow up in different JIRAs. Key: actually support small-block memory caching so you can use the stream without any disk use; needed before switching to this everywhere.

@steveloughran
Contributor Author

Timeout in the LRU tests:

[ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 843.577 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3APrefetchingLruEviction
[ERROR] testSeeksWithLruEviction[max-blocks-1](org.apache.hadoop.fs.s3a.ITestS3APrefetchingLruEviction)  Time elapsed: 600.017 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 600000 milliseconds
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:837)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:999)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1308)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
        at org.apache.hadoop.fs.s3a.ITestS3APrefetchingLruEviction.testSeeksWithLruEviction(ITestS3APrefetchingLruEviction.java:176)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.lang.Thread.run(Thread.java:750)

The issue here is having all the different bulk reads in the same test case; if it takes too long (> 10 minutes!) then it fails. The solution here shouldn't be "add a bigger timeout", it should be "make these tests faster by working with smaller files and smaller blocks"; see the sketch below.
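A sketch of that direction using the existing S3A prefetch options (values illustrative only):

    // Small blocks mean a modest file still spans many blocks, so seeks and
    // LRU eviction get exercised without a 10-minute budget.
    Configuration conf = new Configuration();
    conf.setInt(PREFETCH_BLOCK_COUNT_KEY, 1);            // the max-blocks-1 case
    conf.setLong(PREFETCH_BLOCK_SIZE_KEY, 1024 * 1024);  // 1 MB blocks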

@steveloughran
Contributor Author

Tested: S3 London, with -Dparallel-tests -DtestsThreadCount=8 -Dprefetch -Dscale and no VPN in the way. This is getting back to being as slow as it used to be, and so needs work.

All the landsat tests are going to be long-haul for most people; the existing hugefile tests should be extended to do the reading on their own files, which are (a) in the chosen AWS region and (b) let you control the file size.

[INFO] 
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  29:27 min (Wall Clock)
[INFO] Finished at: 2023-07-14T17:34:02+01:00
[INFO] ------------------------------------------------------------------------
[WARNING] 


@steveloughran steveloughran marked this pull request as ready for review July 14, 2023 17:50
@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 57s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 19 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 16m 2s Maven dependency ordering for branch
+1 💚 mvninstall 36m 19s trunk passed
+1 💚 compile 18m 30s trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 💚 compile 16m 59s trunk passed with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
+1 💚 checkstyle 4m 41s trunk passed
+1 💚 mvnsite 2m 28s trunk passed
+1 💚 javadoc 1m 47s trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 💚 javadoc 1m 32s trunk passed with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
+1 💚 spotbugs 3m 51s trunk passed
+1 💚 shadedclient 38m 35s branch has no errors when building and testing our client artifacts.
-0 ⚠️ patch 39m 1s Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 30s Maven dependency ordering for patch
+1 💚 mvninstall 1m 24s the patch passed
+1 💚 compile 17m 54s the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 💚 javac 17m 54s the patch passed
+1 💚 compile 16m 52s the patch passed with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
+1 💚 javac 16m 52s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 4m 35s /results-checkstyle-root.txt root: The patch generated 24 new + 5 unchanged - 0 fixed = 29 total (was 5)
+1 💚 mvnsite 2m 28s the patch passed
+1 💚 javadoc 1m 41s the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
-1 ❌ javadoc 0m 42s /results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09.txt hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09 with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09 generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0)
+1 💚 spotbugs 4m 16s the patch passed
+1 💚 shadedclient 38m 46s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 18m 50s hadoop-common in the patch passed.
-1 ❌ unit 2m 44s /patch-unit-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch passed.
+1 💚 asflicense 1m 0s The patch does not generate ASF License warnings.
260m 39s
Reason Tests
Failed junit tests hadoop.fs.s3a.prefetch.TestS3ARemoteInputStream
hadoop.fs.s3a.prefetch.TestS3ACachingBlockManager
Subsystem Report/Notes
Docker ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/4/artifact/out/Dockerfile
GITHUB PR #5832
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux 3fb7c87200ac 4.15.0-212-generic #223-Ubuntu SMP Tue May 23 13:09:22 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / f24f1fd
Default Java Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/4/testReport/
Max. process+thread count 2418 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/4/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@virajjasani
Contributor

virajjasani commented Jul 15, 2023

the solution here shouldn't be "add a bigger timeout" it should be "make these tests faster by working with smaller files and smaller blocks"

PR #5851

@steveloughran steveloughran force-pushed the s3/pre/HADOOP-18184-unbuffer branch from f24f1fd to 2e6ab81 Compare July 20, 2023 17:54
@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 9m 4s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 19 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 16m 18s Maven dependency ordering for branch
+1 💚 mvninstall 20m 28s trunk passed
+1 💚 compile 10m 34s trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 💚 compile 9m 51s trunk passed with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
+1 💚 checkstyle 2m 25s trunk passed
+1 💚 mvnsite 1m 53s trunk passed
+1 💚 javadoc 1m 30s trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 💚 javadoc 1m 19s trunk passed with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
+1 💚 spotbugs 2m 39s trunk passed
+1 💚 shadedclient 22m 37s branch has no errors when building and testing our client artifacts.
-0 ⚠️ patch 22m 58s Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 23s Maven dependency ordering for patch
+1 💚 mvninstall 0m 59s the patch passed
+1 💚 compile 11m 0s the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 💚 javac 11m 0s the patch passed
+1 💚 compile 10m 29s the patch passed with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
+1 💚 javac 10m 29s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 3m 1s /results-checkstyle-root.txt root: The patch generated 24 new + 5 unchanged - 0 fixed = 29 total (was 5)
+1 💚 mvnsite 1m 53s the patch passed
+1 💚 javadoc 1m 27s the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
-1 ❌ javadoc 0m 38s /results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09.txt hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09 with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09 generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0)
+1 💚 spotbugs 2m 41s the patch passed
+1 💚 shadedclient 23m 41s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 17m 15s hadoop-common in the patch passed.
-1 ❌ unit 2m 11s /patch-unit-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch passed.
+1 💚 asflicense 0m 51s The patch does not generate ASF License warnings.
181m 9s
Reason Tests
Failed junit tests hadoop.fs.s3a.prefetch.TestS3ACachingBlockManager
hadoop.fs.s3a.prefetch.TestS3ARemoteInputStream
Subsystem Report/Notes
Docker ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/5/artifact/out/Dockerfile
GITHUB PR #5832
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux 024946e44df7 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 2e6ab81
Default Java Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/5/testReport/
Max. process+thread count 1309 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/5/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 30s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 19 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 15m 39s Maven dependency ordering for branch
+1 💚 mvninstall 21m 13s trunk passed
+1 💚 compile 11m 45s trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 💚 compile 10m 27s trunk passed with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
+1 💚 checkstyle 2m 45s trunk passed
+1 💚 mvnsite 1m 56s trunk passed
+1 💚 javadoc 1m 32s trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 💚 javadoc 1m 19s trunk passed with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
+1 💚 spotbugs 2m 48s trunk passed
+1 💚 shadedclient 24m 6s branch has no errors when building and testing our client artifacts.
-0 ⚠️ patch 24m 24s Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 25s Maven dependency ordering for patch
+1 💚 mvninstall 0m 54s the patch passed
+1 💚 compile 10m 59s the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 💚 javac 10m 59s the patch passed
+1 💚 compile 10m 31s the patch passed with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
+1 💚 javac 10m 31s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 2m 50s /results-checkstyle-root.txt root: The patch generated 25 new + 5 unchanged - 0 fixed = 30 total (was 5)
+1 💚 mvnsite 1m 56s the patch passed
+1 💚 javadoc 1m 26s the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
-1 ❌ javadoc 0m 37s /results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09.txt hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09 with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09 generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0)
+1 💚 spotbugs 2m 52s the patch passed
+1 💚 shadedclient 21m 43s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 16m 35s hadoop-common in the patch passed.
-1 ❌ unit 2m 19s /patch-unit-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch passed.
+1 💚 asflicense 0m 50s The patch does not generate ASF License warnings.
174m 9s
Reason Tests
Failed junit tests hadoop.fs.s3a.prefetch.TestS3ACachingBlockManager
hadoop.fs.s3a.prefetch.TestS3ARemoteInputStream
Subsystem Report/Notes
Docker ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/6/artifact/out/Dockerfile
GITHUB PR #5832
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux e4237ac9ea6c 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / d8694da
Default Java Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/6/testReport/
Max. process+thread count 1252 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/6/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

Contributor

@ahmarsuhail ahmarsuhail left a comment

Have done an initial review of the prod code (not yet looked at the tests). Looks good; I have some questions and minor suggestions.

}
} else {
// free the buffers
bufferPool.getAll().forEach(BufferData::setDone);
Contributor

Done buffers will get released on the next prefetch, but I'm wondering if we can just release them here instead.
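A rough sketch of that alternative (assuming BufferPool.release may be called at this point; see the premature-release concern below):

    // Mark each buffer done and return it to the pool immediately,
    // instead of waiting for the next prefetch cycle to reclaim it.
    bufferPool.getAll().forEach(data -> {
      data.setDone();
      bufferPool.release(data);  // assumption: release() accepts a done buffer
    });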

Contributor Author

we had so much grief with the abfs prefetch premature release bug that I am scared now

return false;
}

if (unbuffer) {
Contributor

why don't we just do blockData = null? Since on initializeUnderlyingResources we create a new BlockData obj

Contributor Author

done

Contributor

also, why just on unbuffer? shouldn't this be cleaned up on close() too?

String message = String.format(
"Caching disabled because of slow operation (%.1f sec)", endOp.duration());
LOG_CACHING_DISABLED.info(message);
prefetchingStatistics.setPrefetchCachingState(false);
Contributor

nit: naming here can get confusing; it's not immediately clear whether prefetchCachingState refers to just caching blocks in memory via prefetching, or caching them to disk. Something like setPrefetchDiskCachingState would be clearer; if we do rename, also renaming the caching methods would help.

Contributor Author

Done in the method, but not in the gauge name. What if we ever allow a cache-to-memory option here?

LOG.debug("Block {}: Preparing to cache block", blockNumber);

if (isCachingDisabled()) {
LOG.debug("Block {}: Preparing caching disabled, not prefetching", blockNumber);
Contributor

I think this should be "caching disabled, not caching" or something. Prefetching may or may not be happening here (it could already be done prefetching by the time it gets here).

final int blockNumber = data.getBlockNumber();
LOG.debug("Block {}: Preparing to cache block", blockNumber);

if (isCachingDisabled()) {
Contributor

I'm confused about why we're doing this twice, here and on line 577; as far as I can tell, in between these two, nothing is changing the isCachingDisabled state.

Contributor Author

There's a .get() on the future, which blocks until the data is received; the checks on L577 are for whether caching changed during that time.

Added some more comments and reviewed/tuned the log messages.
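Roughly the pattern under discussion, much simplified (addToCache is an illustrative stand-in for the real caching step):

    if (isCachingDisabled()) {  // first check: skip the caching work entirely
      return;
    }
    blockFuture.get();          // blocks until the block's data has been fetched
    if (isCachingDisabled()) {  // second check: a slow fetch may have disabled
      return;                   // caching while we were waiting
    }
    addToCache(data);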

@@ -526,18 +585,21 @@ private void addToCacheAndRelease(BufferData data, Future<Void> blockFuture,
synchronized (data) {
try {
if (data.stateEqualsOneOf(BufferData.State.DONE)) {
LOG.debug("Block {}: Block already in cache; not adding", blockNumber);
Contributor

this should be something like "block no longer in use, not adding"

int fileSize = (int) s3Attributes.getLen();
LOG.debug("Created caching input stream for {} (size = {})", this.getName(),
fileSize);
streamStatistics.setPrefetchState(numBlocksToPrefetch > 0,
Contributor

we could also update this statistic in S3AInMemoryInputStream?

Contributor Author

what do you think we should publish? that we are prefetching the entire file?

Contributor

nevermind...since S3AInMemoryInputStream doesn't actually do any prefetching

Contributor

why the numBlocksToPrefetch > 0 though? numBlocksToPrefetch will always be > 0 because in S3AFS we do prefetchBlockCount = intOption(conf, PREFETCH_BLOCK_COUNT_KEY, PREFETCH_BLOCK_DEFAULT_COUNT, 1);

Contributor Author

Set to true. I am thinking about a "no prefetch" option where it's only on-demand/vectored IO, but we can tune that if/when implemented.

return context;
}

private void incrementBytesRead(int bytesRead) {
if (bytesRead > 0) {
streamStatistics.bytesRead(bytesRead);
streamStatistics.bytesReadFromBuffer(bytesRead);
Contributor

what's the difference between this and the one on line 432?

Contributor Author

trying to differentiate bytes we read remotely vs bytes which were returned from cached data.

@steveloughran
Contributor Author

Updated patch. The test shows that caching is failing because files aren't being added to the buffer dir; theory: they are going somewhere else.

Trivia: the rebased patch wouldn't push to the repo until I updated my GitHub OAuth tokens; see https://docs.github.com/en/get-started/getting-started-with-git/caching-your-github-credentials-in-git

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 29s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 22 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 5s Maven dependency ordering for branch
+1 💚 mvninstall 22m 18s trunk passed
+1 💚 compile 11m 7s trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 💚 compile 10m 13s trunk passed with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
+1 💚 checkstyle 2m 23s trunk passed
+1 💚 mvnsite 1m 54s trunk passed
+1 💚 javadoc 1m 32s trunk passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 💚 javadoc 1m 21s trunk passed with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
+1 💚 spotbugs 2m 38s trunk passed
+1 💚 shadedclient 21m 51s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 25s Maven dependency ordering for patch
+1 💚 mvninstall 0m 59s the patch passed
+1 💚 compile 9m 51s the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 💚 javac 9m 51s the patch passed
+1 💚 compile 9m 37s the patch passed with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
+1 💚 javac 9m 37s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 2m 17s /results-checkstyle-root.txt root: The patch generated 24 new + 8 unchanged - 0 fixed = 32 total (was 8)
+1 💚 mvnsite 1m 52s the patch passed
+1 💚 javadoc 1m 27s the patch passed with JDK Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1
+1 💚 javadoc 1m 22s the patch passed with JDK Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
+1 💚 spotbugs 2m 47s the patch passed
+1 💚 shadedclient 21m 48s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 16m 43s hadoop-common in the patch passed.
-1 ❌ unit 2m 19s /patch-unit-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch passed.
+1 💚 asflicense 0m 50s The patch does not generate ASF License warnings.
167m 8s
Reason Tests
Failed junit tests hadoop.fs.s3a.prefetch.TestS3ACachingBlockManager
hadoop.fs.s3a.prefetch.TestS3ARemoteInputStream
Subsystem Report/Notes
Docker ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/7/artifact/out/Dockerfile
GITHUB PR #5832
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets
uname Linux 91de6937f614 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / c0e4f1c
Default Java Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.19+7-post-Ubuntu-0ubuntu120.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_362-8u372-gaus1-0ubuntu120.04-b09
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/7/testReport/
Max. process+thread count 1475 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/7/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

Contributor

@ahmarsuhail ahmarsuhail left a comment

reviewed test code as well

return false;
}

if (unbuffer) {
Contributor

also, why just on unbuffer? shouldn't this be cleaned up on close() too?

}
// update the statistics
prefetchingStatistics.fetchOperationCompleted(isPrefetch, bytesFetched);
if (tracker != null) {
Contributor

Potentially we could remove this null check and the one on 404 for the tracker; it used to be null for non-prefetching ops before, but won't be null anymore.

int fileSize = (int) s3Attributes.getLen();
LOG.debug("Created caching input stream for {} (size = {})", this.getName(),
fileSize);
streamStatistics.setPrefetchState(numBlocksToPrefetch > 0,
Contributor

nevermind...since S3AInMemoryInputStream doesn't actually do any prefetching

int fileSize = (int) s3Attributes.getLen();
LOG.debug("Created caching input stream for {} (size = {})", this.getName(),
fileSize);
streamStatistics.setPrefetchState(numBlocksToPrefetch > 0,
Contributor

why the numBlocksToPrefetch > 0 though? numBlocksToPrefetch will always be > 0 because in S3AFS we do prefetchBlockCount = intOption(conf, PREFETCH_BLOCK_COUNT_KEY, PREFETCH_BLOCK_DEFAULT_COUNT, 1);

* @param prefetch prefetch option
* @return the modified configuration.
*/
public static Configuration enablePrefetch(final Configuration conf, boolean prefetch) {
Contributor

nit: renaming to setPrefetchState or something would improve readability; enablePrefetch, at a glance, makes it seem like we're always enabling it.
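A sketch of the suggested shape, assuming the renamed helper keeps the existing body (removeBaseAndBucketOverrides and PREFETCH_ENABLED_KEY as in S3ATestUtils/Constants):

    /**
     * Set the prefetch option on a configuration, clearing any per-bucket override.
     * @param conf configuration to patch
     * @param prefetch prefetch option
     * @return the modified configuration.
     */
    public static Configuration setPrefetchState(final Configuration conf, boolean prefetch) {
      removeBaseAndBucketOverrides(conf, PREFETCH_ENABLED_KEY);
      conf.setBoolean(PREFETCH_ENABLED_KEY, prefetch);
      return conf;
    }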

verifyStatisticCounterValue(ioStats, STREAM_READ_BYTES,
expectedReadBytes);
// unbuffer
in.unbuffer();
Contributor

since this test is getting quite big, it might be better to have a separate test for unbuffer
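A hypothetical shape for such a test (names and assertions illustrative):

    @Test
    public void testUnbufferReleasesResources() throws Throwable {
      byte[] buffer = new byte[prefetchBlockSize];
      try (FSDataInputStream in = getFileSystem().open(testFile)) {
        in.read(buffer, 0, prefetchBlockSize - 10240);
        in.unbuffer();  // should release buffers and any cache files
        // the stream must still be usable after unbuffer
        in.seek(0);
        Assertions.assertThat(in.read()).isNotEqualTo(-1);
      }
    }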

byte[] buffer = new byte[prefetchBlockSize];

in.read(buffer, 0, prefetchBlockSize - 10240);
assertCacheFileExists();
Contributor

shouldn't we add a seek and then a read here? Though I tried that locally and the test still fails

Contributor Author

This turns out to be more complex than I'd thought, as there are some long-standing behaviours in ensureCurrentBuffer which I suspect are broken.

* asserts whether file with .bin suffix is present. It also verifies certain file stats.
*/
@Test
public void testCacheFileExistence() throws Throwable {
Contributor

Thinking about whether we can also add a test to check that caching gets disabled if it takes too long, but not sure how to do it (or if it's possible).

Also a test that after unbuffer, data doesn't get cached.

ioStatisticsContext = getCurrentIOStatisticsContext();
ioStatisticsContext.reset();
}

private void createLargeFile() throws Exception {
Contributor

Do you think it's worth following the same pattern as AbstractS3ACostTest, which creates a huge file in a test, and then other tests assert that the file exists? ITestInMemoryInputStream could extend it as well, and avoid creating and tearing down the small file multiple times.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 36s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+0 🆗 markdownlint 0m 1s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 26 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 13s Maven dependency ordering for branch
+1 💚 mvninstall 21m 51s trunk passed
+1 💚 compile 11m 37s trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
+1 💚 compile 10m 32s trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 💚 checkstyle 2m 35s trunk passed
+1 💚 mvnsite 1m 50s trunk passed
+1 💚 javadoc 1m 32s trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
+1 💚 javadoc 1m 21s trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 💚 spotbugs 3m 1s trunk passed
+1 💚 shadedclient 22m 53s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 24s Maven dependency ordering for patch
+1 💚 mvninstall 1m 1s the patch passed
+1 💚 compile 11m 3s the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
+1 💚 javac 11m 3s the patch passed
+1 💚 compile 10m 31s the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 💚 javac 10m 31s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 2m 44s /results-checkstyle-root.txt root: The patch generated 23 new + 8 unchanged - 0 fixed = 31 total (was 8)
+1 💚 mvnsite 1m 56s the patch passed
+1 💚 javadoc 1m 31s the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
-1 ❌ javadoc 0m 39s /patch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_382-8u382-ga-1~20.04.1-b05.txt hadoop-aws in the patch failed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05.
+1 💚 spotbugs 3m 14s the patch passed
+1 💚 shadedclient 22m 53s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 17m 3s hadoop-common in the patch passed.
-1 ❌ unit 2m 21s /patch-unit-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch passed.
+1 💚 asflicense 0m 50s The patch does not generate ASF License warnings.
173m 57s
Reason Tests
Failed junit tests hadoop.fs.s3a.prefetch.TestS3ACachingBlockManager
Subsystem Report/Notes
Docker ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/8/artifact/out/Dockerfile
GITHUB PR #5832
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint
uname Linux 1ce35e95d6a3 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 7e40bd9
Default Java Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/8/testReport/
Max. process+thread count 2159 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/8/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 30s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+0 🆗 markdownlint 0m 1s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 26 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 11s Maven dependency ordering for branch
+1 💚 mvninstall 21m 54s trunk passed
+1 💚 compile 11m 43s trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
+1 💚 compile 10m 23s trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 💚 checkstyle 2m 56s trunk passed
+1 💚 mvnsite 2m 1s trunk passed
+1 💚 javadoc 1m 34s trunk passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
+1 💚 javadoc 1m 21s trunk passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 💚 spotbugs 2m 49s trunk passed
+1 💚 shadedclient 23m 5s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 21s Maven dependency ordering for patch
+1 💚 mvninstall 1m 2s the patch passed
+1 💚 compile 10m 52s the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
+1 💚 javac 10m 52s the patch passed
+1 💚 compile 10m 36s the patch passed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
+1 💚 javac 10m 36s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 2m 58s /results-checkstyle-root.txt root: The patch generated 23 new + 8 unchanged - 0 fixed = 31 total (was 8)
+1 💚 mvnsite 2m 0s the patch passed
+1 💚 javadoc 1m 28s the patch passed with JDK Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04
-1 ❌ javadoc 0m 38s /patch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_382-8u382-ga-1~20.04.1-b05.txt hadoop-aws in the patch failed with JDK Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05.
+1 💚 spotbugs 3m 5s the patch passed
+1 💚 shadedclient 23m 3s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 16m 55s hadoop-common in the patch passed.
-1 ❌ unit 2m 21s /patch-unit-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch passed.
+1 💚 asflicense 0m 50s The patch does not generate ASF License warnings.
174m 29s
Reason Tests
Failed junit tests hadoop.fs.s3a.prefetch.TestS3ACachingBlockManager
Subsystem Report/Notes
Docker ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/9/artifact/out/Dockerfile
GITHUB PR #5832
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint
uname Linux 8353235891c8 4.15.0-213-generic #224-Ubuntu SMP Mon Jun 19 13:30:12 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 7e40bd9
Default Java Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.20+8-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_382-8u382-ga-1~20.04.1-b05
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/9/testReport/
Max. process+thread count 2160 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/9/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@steveloughran
Contributor Author

Testing: some failures; also a timeout in ITestS3APrefetchingLruEviction, which I think shows a test in need of some tuning.

steveloughran added a commit to steveloughran/hadoop that referenced this pull request Jan 17, 2024
…t runs

This is actually trickier than it seems as we will need to go deep into the
implementation of caching.

Specifically: the prefetcher knows the file length and if you open a file
shorter than that, but less than one block, the read is considered a failure
and the whole block is skipped, so read() of the nominally in-range data
returns -1.

This fix has to be considered a PoC and should be combined with the other
big PR for prefetching, apache#5832 as that is where changes should go.

Here is just test tuning and some differentiation of channel problems from
other EOFs.

Change-Id: Icdf7e2fb10ca77b6ca427eb207472fad277130d7
@steveloughran steveloughran force-pushed the s3/pre/HADOOP-18184-unbuffer branch from 1381966 to 2b613ff Compare January 17, 2024 18:38
steveloughran added a commit to steveloughran/hadoop that referenced this pull request Jan 17, 2024
…t runs

This is actually trickier than it seems as we will need to go deep into the
implementation of caching.

Specifically: the prefetcher knows the file length and if you open a file
shorter than that, but less than one block, the read is considered a failure
and the whole block is skipped, so read() of the nominally in-range data
returns -1.

This fix has to be considered a PoC and should be combined with the other
big PR for prefetching, apache#5832 as that is where changes should go.

Here is just test tuning and some differentiation of channel problems from
other EOFs.

Change-Id: Icdf7e2fb10ca77b6ca427eb207472fad277130d7
@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 22s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 26 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 3s Maven dependency ordering for branch
+1 💚 mvninstall 19m 17s trunk passed
+1 💚 compile 8m 16s trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 compile 7m 29s trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
+1 💚 checkstyle 2m 2s trunk passed
+1 💚 mvnsite 1m 27s trunk passed
+1 💚 javadoc 1m 9s trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 javadoc 1m 0s trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
+1 💚 spotbugs 2m 8s trunk passed
+1 💚 shadedclient 20m 20s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 20s Maven dependency ordering for patch
+1 💚 mvninstall 0m 51s the patch passed
+1 💚 compile 8m 33s the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 javac 8m 33s the patch passed
+1 💚 compile 7m 39s the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
+1 💚 javac 7m 39s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 1m 58s /results-checkstyle-root.txt root: The patch generated 37 new + 9 unchanged - 0 fixed = 46 total (was 9)
+1 💚 mvnsite 1m 17s the patch passed
+1 💚 javadoc 0m 52s the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
-1 ❌ javadoc 0m 24s /patch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt hadoop-aws in the patch failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08.
-1 ❌ spotbugs 0m 55s /new-spotbugs-hadoop-tools_hadoop-aws.html hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
+1 💚 shadedclient 22m 38s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 16m 9s hadoop-common in the patch passed.
-1 ❌ unit 2m 21s /patch-unit-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch passed.
+1 💚 asflicense 0m 31s The patch does not generate ASF License warnings.
147m 43s
Reason Tests
SpotBugs module:hadoop-tools/hadoop-aws
Dead store to tracker in org.apache.hadoop.fs.s3a.prefetch.S3ARemoteObject.openForRead(long, int) At S3ARemoteObject.java:org.apache.hadoop.fs.s3a.prefetch.S3ARemoteObject.openForRead(long, int) At S3ARemoteObject.java:[line 191]
Failed junit tests hadoop.fs.s3a.prefetch.TestS3ACachingBlockManager
Subsystem Report/Notes
Docker ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/17/artifact/out/Dockerfile
GITHUB PR #5832
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint
uname Linux 285d8df21024 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 2b613ff
Default Java Private Build-1.8.0_392-8u392-ga-1~20.04-b08
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/17/testReport/
Max. process+thread count 2153 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/17/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 22s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 28 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 13m 57s Maven dependency ordering for branch
+1 💚 mvninstall 22m 50s trunk passed
+1 💚 compile 8m 34s trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 compile 7m 35s trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
+1 💚 checkstyle 2m 4s trunk passed
+1 💚 mvnsite 1m 22s trunk passed
+1 💚 javadoc 0m 59s trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 javadoc 0m 52s trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
+1 💚 spotbugs 2m 22s trunk passed
+1 💚 shadedclient 23m 5s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 20s Maven dependency ordering for patch
-1 ❌ mvninstall 0m 16s /patch-mvninstall-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch failed.
-1 ❌ compile 7m 37s /patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt root in the patch failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.
-1 ❌ javac 7m 37s /patch-compile-root-jdkUbuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.txt root in the patch failed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04.
-1 ❌ compile 7m 19s /patch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt root in the patch failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08.
-1 ❌ javac 7m 19s /patch-compile-root-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt root in the patch failed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08.
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 1m 59s /results-checkstyle-root.txt root: The patch generated 38 new + 9 unchanged - 0 fixed = 47 total (was 9)
-1 ❌ mvnsite 0m 26s /patch-mvnsite-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch failed.
+1 💚 javadoc 0m 56s the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
-1 ❌ javadoc 0m 31s /results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_392-8u392-ga-1~20.04-b08.txt hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_392-8u392-ga-120.04-b08 with JDK Private Build-1.8.0_392-8u392-ga-120.04-b08 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
-1 ❌ spotbugs 0m 27s /patch-spotbugs-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch failed.
+1 💚 shadedclient 20m 10s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 16m 26s hadoop-common in the patch passed.
-1 ❌ unit 0m 29s /patch-unit-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch failed.
+1 💚 asflicense 0m 34s The patch does not generate ASF License warnings.
148m 19s
Subsystem Report/Notes
Docker ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/18/artifact/out/Dockerfile
GITHUB PR #5832
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint
uname Linux 7fc48a7e3752 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 774bb73
Default Java Private Build-1.8.0_392-8u392-ga-1~20.04-b08
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/18/testReport/
Max. process+thread count 2683 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/18/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

steveloughran added a commit to steveloughran/hadoop that referenced this pull request Mar 28, 2024
…t runs

This is actually trickier than it seems as we will need to go deep into the
implementation of caching.

Specifically: the prefetcher knows the file length and if you open a file
shorter than that, but less than one block, the read is considered a failure
and the whole block is skipped, so read() of the nominally in-range data
returns -1.

This fix has to be considered a PoC and should be combined with the other
big PR for prefetching, apache#5832 as that is where changes should go.

Here is just test tuning and some differentiation of channel problems from
other EOFs.

Change-Id: Icdf7e2fb10ca77b6ca427eb207472fad277130d7
@steveloughran steveloughran force-pushed the s3/pre/HADOOP-18184-unbuffer branch from 774bb73 to 8a55155 Compare March 28, 2024 11:29
@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 20s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 27 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 13m 58s Maven dependency ordering for branch
+1 💚 mvninstall 19m 46s trunk passed
+1 💚 compile 8m 55s trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 compile 8m 10s trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 checkstyle 2m 8s trunk passed
+1 💚 mvnsite 1m 37s trunk passed
+1 💚 javadoc 1m 12s trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 javadoc 1m 7s trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 spotbugs 2m 18s trunk passed
+1 💚 shadedclient 20m 20s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 23s Maven dependency ordering for patch
-1 ❌ mvninstall 0m 19s /patch-mvninstall-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch failed.
-1 ❌ compile 8m 14s /patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt root in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.
-1 ❌ javac 8m 14s /patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt root in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.
-1 ❌ compile 7m 52s /patch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt root in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.
-1 ❌ javac 7m 52s /patch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt root in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 2m 4s /results-checkstyle-root.txt root: The patch generated 38 new + 9 unchanged - 0 fixed = 47 total (was 9)
-1 ❌ mvnsite 0m 32s /patch-mvnsite-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch failed.
+1 💚 javadoc 1m 8s the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
-1 ❌ javadoc 0m 33s /results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
-1 ❌ spotbugs 0m 30s /patch-spotbugs-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch failed.
+1 💚 shadedclient 21m 8s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 16m 25s hadoop-common in the patch passed.
-1 ❌ unit 0m 33s /patch-unit-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch failed.
+1 💚 asflicense 0m 41s The patch does not generate ASF License warnings.
148m 4s
Subsystem Report/Notes
Docker ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/19/artifact/out/Dockerfile
GITHUB PR #5832
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint
uname Linux 05b96d044d9a 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 8a55155
Default Java Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/19/testReport/
Max. process+thread count 1280 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/19/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@steveloughran steveloughran force-pushed the s3/pre/HADOOP-18184-unbuffer branch from 8a55155 to fb83df2 Compare March 28, 2024 14:28
@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 21s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+0 🆗 markdownlint 0m 1s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 27 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 16m 2s Maven dependency ordering for branch
+1 💚 mvninstall 21m 14s trunk passed
+1 💚 compile 8m 52s trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 compile 8m 9s trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 checkstyle 2m 8s trunk passed
+1 💚 mvnsite 1m 33s trunk passed
+1 💚 javadoc 1m 14s trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 javadoc 1m 6s trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 spotbugs 2m 20s trunk passed
+1 💚 shadedclient 20m 36s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 22s Maven dependency ordering for patch
+1 💚 mvninstall 0m 48s the patch passed
+1 💚 compile 8m 33s the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 javac 8m 33s the patch passed
+1 💚 compile 8m 9s the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 javac 8m 9s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 2m 3s /results-checkstyle-root.txt root: The patch generated 38 new + 9 unchanged - 0 fixed = 47 total (was 9)
+1 💚 mvnsite 1m 35s the patch passed
+1 💚 javadoc 1m 4s the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
-1 ❌ javadoc 0m 34s /results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
+1 💚 spotbugs 2m 34s the patch passed
+1 💚 shadedclient 21m 1s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 16m 18s hadoop-common in the patch passed.
-1 ❌ unit 2m 34s /patch-unit-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch failed.
+1 💚 asflicense 0m 43s The patch does not generate ASF License warnings.
155m 28s
Reason Tests
Failed junit tests hadoop.fs.s3a.prefetch.TestS3ACachingBlockManager
Subsystem Report/Notes
Docker ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/20/artifact/out/Dockerfile
GITHUB PR #5832
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint
uname Linux c2614e30dbe8 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / fb83df2
Default Java Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/20/testReport/
Max. process+thread count 2435 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/20/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@steveloughran
Contributor Author

There's a race condition in the list add/evict which causes intermittent failures of one of the tests. The failure sequence looks like:

  • the block has just been evicted
  • read() says the block is in the cache
  • the read is attempted
  • the read fails with FNFE.

Suspect there's some kind of list update issue; a sketch of the suspected pattern follows.
Improved the logging, but not yet fixed this.
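
As a hedged illustration only, the suspected check-then-read race and one defensive shape for it. The names here (`containsBlock`, `readFromCacheFile`, `fetchBlockFromStore`) are hypothetical stand-ins for the prefetch block cache internals, not the actual hadoop-aws classes or method names.

```java
import java.io.FileNotFoundException;
import java.nio.ByteBuffer;

// Hypothetical stand-ins for the prefetch block cache internals.
abstract class CachingBlockReaderSketch {
  abstract boolean containsBlock(int blockNumber);
  abstract void readFromCacheFile(int blockNumber, ByteBuffer buffer)
      throws FileNotFoundException;
  abstract void fetchBlockFromStore(int blockNumber, ByteBuffer buffer);

  // The suspected race: the membership check and the file read are not
  // atomic, so the eviction thread can delete the block's cache file
  // between steps 1 and 3.
  void readBlockRacy(int blockNumber, ByteBuffer buffer)
      throws FileNotFoundException {
    if (containsBlock(blockNumber)) {          // 1. block reported as cached
      // 2. eviction removes the block and deletes its cache file here
      readFromCacheFile(blockNumber, buffer);  // 3. read fails with FNFE
    }
  }

  // One defensive shape: treat the FNFE as a cache miss and fall back to
  // a remote fetch instead of failing the read.
  void readBlockDefensively(int blockNumber, ByteBuffer buffer) {
    try {
      if (containsBlock(blockNumber)) {
        readFromCacheFile(blockNumber, buffer);
        return;
      }
    } catch (FileNotFoundException e) {
      // cache file evicted between the check and the read; fall through
    }
    fetchBlockFromStore(blockNumber, buffer);  // miss path
  }
}
```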

2024-04-24 15:57:22,711 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initializeClass(5724)) - Initialize S3A class
2024-04-24 15:57:22,734 [setup] DEBUG s3a.S3ATestUtils (S3ATestUtils.java:removeBucketOverrides(914)) - Removing option fs.s3a.bucket.stevel-london.directory.marker.retention; was keep
2024-04-24 15:57:22,757 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initializeClass(5724)) - Initialize S3A class
2024-04-24 15:57:22,771 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(549)) - Initializing S3AFileSystem for stevel-london
2024-04-24 15:57:22,774 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:propagateBucketOptions(1103)) - Propagating entries under fs.s3a.bucket.stevel-london.
2024-04-24 15:57:22,777 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:propagateBucketOptions(1124)) - Updating fs.s3a.versioned.store from [core-site.xml]
2024-04-24 15:57:22,777 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:propagateBucketOptions(1124)) - Updating fs.s3a.encryption.algorithm from [core-site.xml]
2024-04-24 15:57:22,777 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:propagateBucketOptions(1124)) - Updating fs.s3a.endpoint from [core-site.xml]
2024-04-24 15:57:22,777 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:propagateBucketOptions(1124)) - Updating fs.s3a.encryption.key from [core-site.xml]
2024-04-24 15:57:22,777 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:propagateBucketOptions(1124)) - Updating fs.s3a.change.detection.source from [core-site.xml]
2024-04-24 15:57:22,778 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:maybeIsolateClassloader(1708)) - Configuration classloader set to S3AFileSystem classloader: sun.misc.Launcher$AppClassLoader@18b4aac2
2024-04-24 15:57:22,783 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:buildEncryptionSecrets(1477)) - Using SSE-KMS with key of length 75 ending with 3
2024-04-24 15:57:22,784 [setup] DEBUG s3a.S3ARetryPolicy (S3ARetryPolicy.java:<init>(145)) - Retrying on recoverable AWS failures 3 times with an initial interval of 500ms
2024-04-24 15:57:22,911 [setup] DEBUG s3a.S3AInstrumentation (S3AInstrumentation.java:getMetricsSystem(254)) - Metrics system inited org.apache.hadoop.metrics2.impl.MetricsSystemImpl@6425ab69
2024-04-24 15:57:22,917 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(605)) - Client Side Encryption enabled: false
2024-04-24 15:57:22,917 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.paging.maximum is 5000
2024-04-24 15:57:22,917 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longBytesOption(952)) - Value of fs.s3a.block.size is 33554432
2024-04-24 15:57:22,918 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longBytesOption(952)) - Value of fs.s3a.prefetch.block.size is 131072
2024-04-24 15:57:22,918 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.prefetch.block.count is 8
2024-04-24 15:57:22,918 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.max.total.tasks is 32
2024-04-24 15:57:22,920 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.threads.keepalivetime = PT1M
2024-04-24 15:57:22,920 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.executor.capacity is 16
2024-04-24 15:57:22,937 [setup] DEBUG auth.SignerManager (SignerManager.java:initCustomSigners(68)) - No custom signers specified
2024-04-24 15:57:22,940 [setup] DEBUG audit.AuditIntegration (AuditIntegration.java:createAndInitAuditor(109)) - Auditor class is class org.apache.hadoop.fs.s3a.audit.impl.LoggingAuditor
2024-04-24 15:57:22,943 [setup] DEBUG impl.ActiveAuditManagerS3A (ActiveAuditManagerS3A.java:serviceInit(199)) - Audit manager initialized with audit service LoggingAuditor{ID='0d643328-91f6-4da7-acae-86fd72161299', headerEnabled=true, rejectOutOfSpan=true, isMultipartUploadEnabled=true}
2024-04-24 15:57:22,943 [setup] DEBUG impl.ActiveAuditManagerS3A (ActiveAuditManagerS3A.java:serviceStart(212)) - Started audit service LoggingAuditor{ID='0d643328-91f6-4da7-acae-86fd72161299', headerEnabled=true, rejectOutOfSpan=true, isMultipartUploadEnabled=true}
2024-04-24 15:57:22,943 [setup] DEBUG audit.AuditIntegration (AuditIntegration.java:createAndStartAuditManager(76)) - Started Audit Manager Service ActiveAuditManagerS3A in state ActiveAuditManagerS3A: STARTED, auditor=LoggingAuditor{ID='0d643328-91f6-4da7-acae-86fd72161299', headerEnabled=true, rejectOutOfSpan=true, isMultipartUploadEnabled=true}}
2024-04-24 15:57:22,944 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longOption(930)) - Value of fs.s3a.internal.upload.part.count.limit is 10000
2024-04-24 15:57:22,944 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:createRequestFactory(1202)) - Unset storage class property fs.s3a.create.storage.class; falling back to default storage class
2024-04-24 15:57:22,949 [setup] DEBUG auth.CredentialProviderListFactory (CredentialProviderListFactory.java:createAWSV2CredentialProvider(306)) - Credential provider class is org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider
2024-04-24 15:57:22,949 [setup] DEBUG auth.CredentialProviderListFactory (CredentialProviderListFactory.java:createAWSV2CredentialProvider(306)) - Credential provider class is org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
2024-04-24 15:57:22,950 [setup] DEBUG auth.CredentialProviderListFactory (CredentialProviderListFactory.java:createAWSCredentialProviderList(151)) - For URI s3a://stevel-london/, using credentials AWSCredentialProviderList name=; refcount= 1; size=2: [TemporaryAWSCredentialsProvider, SimpleAWSCredentialsProvider{accessKey.empty=false, secretKey.empty=false}]
2024-04-24 15:57:22,950 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:bindAWSClient(1047)) - Using credential provider AWSCredentialProviderList name=; refcount= 1; size=2: [TemporaryAWSCredentialsProvider, SimpleAWSCredentialsProvider{accessKey.empty=false, secretKey.empty=false}]
2024-04-24 15:57:22,952 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.connection.maximum is 512
2024-04-24 15:57:22,952 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.acquisition.timeout = PT1M
2024-04-24 15:57:22,952 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.ttl = PT5M
2024-04-24 15:57:22,952 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.establish.timeout = PT30S
2024-04-24 15:57:22,952 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.idle.time = PT1M
2024-04-24 15:57:22,952 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.timeout = PT25S
2024-04-24 15:57:23,085 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:getS3RegionFromEndpoint(397)) - Endpoint s3.eu-west-2.amazonaws.com is not the default; parsing
2024-04-24 15:57:23,090 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:configureEndpointAndRegion(326)) - Setting endpoint to https://s3.eu-west-2.amazonaws.com
2024-04-24 15:57:23,091 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:configureEndpointAndRegion(353)) - Setting region to eu-west-2 from endpoint
2024-04-24 15:57:23,091 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:maybeApplyS3AccessGrantsConfigurations(419)) - S3 Access Grants plugin is not enabled.
2024-04-24 15:57:23,095 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.request.timeout = PT1M
2024-04-24 15:57:23,097 [setup] DEBUG impl.AWSClientConfig (AWSClientConfig.java:initUserAgent(375)) - Using User-Agent: Hadoop 3.5.0-SNAPSHOT
2024-04-24 15:57:23,105 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.attempts.maximum is 2
2024-04-24 15:57:23,407 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.connection.maximum is 512
2024-04-24 15:57:23,407 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.acquisition.timeout = PT1M
2024-04-24 15:57:23,408 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.ttl = PT5M
2024-04-24 15:57:23,408 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.establish.timeout = PT30S
2024-04-24 15:57:23,408 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.idle.time = PT1M
2024-04-24 15:57:23,408 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.timeout = PT25S
2024-04-24 15:57:23,430 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:getS3RegionFromEndpoint(397)) - Endpoint s3.eu-west-2.amazonaws.com is not the default; parsing
2024-04-24 15:57:23,430 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:configureEndpointAndRegion(326)) - Setting endpoint to https://s3.eu-west-2.amazonaws.com
2024-04-24 15:57:23,430 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:configureEndpointAndRegion(353)) - Setting region to eu-west-2 from endpoint
2024-04-24 15:57:23,430 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:maybeApplyS3AccessGrantsConfigurations(419)) - S3 Access Grants plugin is not enabled.
2024-04-24 15:57:23,430 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.request.timeout = PT1M
2024-04-24 15:57:23,430 [setup] DEBUG impl.AWSClientConfig (AWSClientConfig.java:initUserAgent(375)) - Using User-Agent: Hadoop 3.5.0-SNAPSHOT
2024-04-24 15:57:23,431 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.attempts.maximum is 2
2024-04-24 15:57:23,512 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:doBucketProbing(831)) - skipping check for bucket existence
2024-04-24 15:57:23,513 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(696)) - Input fadvise policy = default
2024-04-24 15:57:23,514 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(698)) - Change detection policy = VersionIdChangeDetectionPolicy mode=Server
2024-04-24 15:57:23,514 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(702)) - Filesystem support for magic committers is enabled
2024-04-24 15:57:23,516 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.fast.upload.active.blocks is 4
2024-04-24 15:57:23,517 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(721)) - Using S3ABlockOutputStream with buffer = disk; block=67108864; queue limit=4; multipart=true
2024-04-24 15:57:23,517 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(731)) - fs.s3a.create.performance = false
2024-04-24 15:57:23,518 [setup] DEBUG impl.DirectoryPolicyImpl (DirectoryPolicyImpl.java:getDirectoryPolicy(189)) - Directory markers will be kept
2024-04-24 15:57:23,518 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(737)) - Directory marker retention policy is DirectoryMarkerRetention{policy='keep'}
2024-04-24 15:57:23,518 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.bulk.delete.page.size is 250
2024-04-24 15:57:23,519 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longBytesOption(952)) - Value of fs.s3a.readahead.range is 32768
2024-04-24 15:57:23,519 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of io.file.buffer.size is 4194304
2024-04-24 15:57:23,519 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longBytesOption(952)) - Value of fs.s3a.input.async.drain.threshold is 1024
2024-04-24 15:57:23,519 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.vectored.active.ranged.reads is 4
2024-04-24 15:57:23,519 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longBytesOption(952)) - Value of fs.s3a.vectored.read.min.seek.size is 4096
2024-04-24 15:57:23,519 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longBytesOption(952)) - Value of fs.s3a.vectored.read.max.merged.size is 1048576
2024-04-24 15:57:23,520 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(763)) - Using optimized copyFromLocal implementation: true
2024-04-24 15:57:23,529 [setup] INFO  contract.AbstractFSContractTestBase (AbstractFSContractTestBase.java:setup(196)) - Test filesystem = s3a://stevel-london implemented by S3AFileSystem{uri=s3a://stevel-london, workingDir=s3a://stevel-london/user/stevel, partSize=67108864, enableMultiObjectsDelete=true, maxKeys=5000, OpenFileSupport{changePolicy=VersionIdChangeDetectionPolicy mode=Server, defaultReadAhead=32768, defaultBufferSize=4194304, defaultAsyncDrainThreshold=1024, defaultInputPolicy=default}, blockSize=33554432, multiPartThreshold=134217728, s3EncryptionAlgorithm='SSE_KMS', blockFactory=org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory@1fea9f7c, auditManager=Service ActiveAuditManagerS3A in state ActiveAuditManagerS3A: STARTED, auditor=LoggingAuditor{ID='0d643328-91f6-4da7-acae-86fd72161299', headerEnabled=true, rejectOutOfSpan=true, isMultipartUploadEnabled=true}}, authoritativePath=[], useListV1=false, magicCommitter=true, boundedExecutor=BlockingThreadPoolExecutorService{SemaphoredDelegatingExecutor{permitCount=200, available=200, waiting=0}, activeCount=0}, unboundedExecutor=java.util.concurrent.ThreadPoolExecutor@78d0d3c0[Running, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], credentials=AWSCredentialProviderList name=; refcount= 1; size=2: [TemporaryAWSCredentialsProvider, SimpleAWSCredentialsProvider{accessKey.empty=false, secretKey.empty=false}], delegation tokens=disabled, DirectoryMarkerRetention{policy='keep'}, instrumentation {S3AInstrumentation{instanceIOStatistics=counters=((op_get_delegation_token.failures=0) (committer_commits_completed=0) (stream_read_bytes_discarded_in_close=0) (stream_write_queue_duration.failures=0) (stream_read_vectored_operations=0) (delegation_tokens_issued=0) (stream_read_total_bytes=0) (store_exists_probe.failures=0) (stream_write_exceptions_completing_upload=0) (stream_read_seek_operations=0) (op_mkdirs=0) (committer_commits_created=0) (object_put_request_completed=0) (directories_created=0) (op_hflush=0) (stream_write_block_uploads_aborted=0) (op_get_file_status.failures=0) (object_multipart_initiated=0) (committer_magic_marker_put.failures=0) (op_exists=0) (stream_read_seek_forward_operations=0) (committer_commits_aborted=0) (object_continue_list_request=0) (op_is_directory=0) (audit_access_check_failure=0) (committer_commits.failures=0) (op_xattr_list.failures=0) (multipart_upload_completed=0) (object_put_bytes=0) (object_multipart_aborted.failures=0) (action_executor_acquired=0) (stream_write_total_data=0) (action_http_get_request.failures=0) (op_xattr_get_named=0) (stream_aborted=0) (files_copied_bytes=0) (multipart_upload_list=0) (stream_read_closed=0) (multipart_upload_list.failures=0) (object_copy_requests=0) (committer_materialize_file.failures=0) (op_is_file=0) (files_copied=0) (op_open=0) (op_access.failures=0) (directories_deleted=0) (stream_read_operations=0) (action_executor_acquired.failures=0) (store_io_throttled=0) (op_rename.failures=0) (committer_jobs_failed=0) (committer_tasks_completed=0) (op_is_directory.failures=0) (op_createfile=0) (stream_read_bytes_discarded_in_abort=0) (files_deleted=0) (stream_read_block_acquire_read=0) (object_delete_request.failures=0) (object_multipart_initiated.failures=0) (op_list_files.failures=0) (fake_directories_deleted=0) (stream_read_seek_bytes_skipped=0) (op_delete=0) (object_delete_objects=0) (op_create.failures=0) (object_metadata_request=0) (op_xattr_get_named_map.failures=0) (stream_read_close_operations=0) (op_access=0) 
(op_copy_from_local_file=0) (op_rename=0) (committer_stage_file_upload.failures=0) (multipart_upload_part_put_bytes=0) (multipart_upload_part_put=0) (stream_write_exceptions=0) (stream_read_unbuffered=0) (stream_evict_blocks_from_cache=0) (committer_tasks_failed=0) (op_get_content_summary.failures=0) (object_put_request.failures=0) (stream_read_operations_incomplete=0) (fake_directories_created=0) (stream_write_total_time=0) (op_get_file_checksum=0) (committer_commits_reverted=0) (op_xattr_get_map.failures=0) (action_http_head_request.failures=0) (op_abort=0) (audit_span_creation=1) (op_delete.failures=0) (action_file_opened.failures=0) (committer_load_single_pending_file=0) (store_io_request=0) (stream_read_version_mismatches=0) (op_get_content_summary=0) (stream_file_cache_eviction=0) (object_multipart_list.failures=0) (committer_bytes_uploaded=0) (committer_load_single_pending_file.failures=0) (stream_read_remote_stream_drain.failures=0) (op_xattr_list=0) (committer_materialize_file=0) (stream_write_bytes=0) (stream_read_block_fetch_operations.failures=0) (stream_read_opened=0) (object_put_request=0) (store_io_retry=0) (stream_read_bytes_backwards_on_seek=0) (object_multipart_list=0) (op_xattr_get_named.failures=0) (op_get_delegation_token=0) (stream_read_seek_backward_operations=0) (op_get_file_status=0) (stream_read_remote_stream_aborted.failures=0) (op_openfile=0) (op_create=0) (stream_file_cache_eviction.failures=0) (op_xattr_get_named_map=0) (op_mkdirs.failures=0) (op_create_non_recursive=0) (files_delete_rejected=0) (committer_commit_job=0) (ignored_errors=0) (audit_request_execution=0) (stream_read_seek_policy_changed=0) (stream_read_seek_bytes_discarded=0) (op_is_file.failures=0) (multipart_upload_started=0) (store_exists_probe=0) (op_exists.failures=0) (op_get_file_checksum.failures=0) (op_list_files=0) (op_abort.failures=0) (committer_stage_file_upload=0) (stream_read_vectored_combined_ranges=0) (object_bulk_delete_request.failures=0) (stream_write_block_uploads=0) (audit_failure=0) (committer_magic_files_created=0) (op_hsync=0) (action_http_get_request=0) (stream_read_bytes=0) (op_copy_from_local_file.failures=0) (multipart_upload_abort_under_path_invoked=0) (op_list_located_status=0) (op_glob_status.failures=0) (object_continue_list_request.failures=0) (object_multipart_aborted=0) (op_list_status=0) (stream_write_block_uploads_committed=0) (object_bulk_delete_request=0) (stream_write_queue_duration=0) (stream_read_fully_operations=0) (stream_read_block_fetch_operations=0) (stream_read_block_acquire_read.failures=0) (action_file_opened=0) (op_xattr_get_map=0) (committer_magic_marker_put=0) (stream_read_remote_stream_drain=0) (multipart_instantiated=0) (stream_read_remote_stream_aborted=0) (object_delete_request=0) (delegation_tokens_issued.failures=0) (op_list_status.failures=0) (multipart_upload_aborted=0) (files_created=0) (object_list_request.failures=0) (committer_jobs_completed=0) (committer_bytes_committed=0) (committer_commit_job.failures=0) (op_glob_status=0) (stream_read_vectored_read_bytes_discarded=0) (action_http_head_request=0) (stream_read_exceptions=0) (op_createfile.failures=0) (stream_read_vectored_incoming_ranges=0) (object_list_request=0));
gauges=((stream_read_block_prefetch_enabled=0) (object_put_request_active=0) (stream_read_block_fetch_operations=0) (stream_read_block_cache_enabled=0) (stream_read_blocks_in_cache=0) (stream_write_block_uploads_active=0) (client_side_encryption_enabled=0) (stream_read_block_prefetch_limit=0) (stream_read_active_memory_in_use=0) (stream_write_block_uploads_pending=0) (stream_write_block_uploads_data_pending=0) (object_put_bytes_pending=0) (stream_read_block_size=0) (stream_read_active_prefetch_operations=0));
minimums=((op_create.failures.min=-1) (object_continue_list_request.failures.min=-1) (object_continue_list_request.min=-1) (object_delete_request.min=-1) (op_is_directory.min=-1) (stream_read_remote_stream_drain.failures.min=-1) (op_xattr_get_named.failures.min=-1) (store_exists_probe.failures.min=-1) (stream_file_cache_eviction.min=-1) (committer_materialize_file.failures.min=-1) (committer_load_single_pending_file.failures.min=-1) (op_xattr_get_named.min=-1) (object_multipart_aborted.failures.min=-1) (op_delete.min=-1) (op_get_delegation_token.min=-1) (op_get_file_checksum.min=-1) (object_list_request.failures.min=-1) (object_bulk_delete_request.failures.min=-1) (op_list_status.failures.min=-1) (op_access.failures.min=-1) (action_http_head_request.min=-1) (object_multipart_list.min=-1) (op_get_file_status.min=-1) (action_http_get_request.failures.min=-1) (stream_write_queue_duration.failures.min=-1) (committer_magic_marker_put.min=-1) (store_exists_probe.min=-1) (stream_read_remote_stream_aborted.failures.min=-1) (op_glob_status.failures.min=-1) (op_list_status.min=-1) (object_multipart_list.failures.min=-1) (op_delete.failures.min=-1) (committer_commit_job.failures.min=-1) (delegation_tokens_issued.failures.min=-1) (op_rename.min=-1) (object_delete_request.failures.min=-1) (op_glob_status.min=-1) (op_abort.failures.min=-1) (object_multipart_initiated.min=-1) (committer_magic_marker_put.failures.min=-1) (op_createfile.min=-1) (op_exists.min=-1) (op_copy_from_local_file.min=-1) (op_exists.failures.min=-1) (op_mkdirs.min=-1) (action_http_get_request.min=-1) (multipart_upload_list.failures.min=-1) (committer_commit_job.min=-1) (op_get_file_checksum.failures.min=-1) (op_list_files.failures.min=-1) (object_list_request.min=-1) (stream_read_block_fetch_operations.failures.min=-1) (op_xattr_get_named_map.min=-1) (op_get_content_summary.failures.min=-1) (stream_read_remote_stream_aborted.min=-1) (committer_stage_file_upload.min=-1) (object_bulk_delete_request.min=-1) (op_get_content_summary.min=-1) (op_get_file_status.failures.min=-1) (multipart_upload_list.min=-1) (op_xattr_get_map.min=-1) (op_copy_from_local_file.failures.min=-1) (action_executor_acquired.min=-1) (stream_read_block_fetch_operations.min=-1) (op_xattr_list.min=-1) (stream_file_cache_eviction.failures.min=-1) (object_put_request.failures.min=-1) (op_abort.min=-1) (op_create.min=-1) (delegation_tokens_issued.min=-1) (action_http_head_request.failures.min=-1) (committer_load_single_pending_file.min=-1) (op_is_file.failures.min=-1) (op_xattr_get_map.failures.min=-1) (object_multipart_aborted.min=-1) (op_createfile.failures.min=-1) (stream_read_remote_stream_drain.min=-1) (op_xattr_list.failures.min=-1) (stream_read_block_acquire_read.failures.min=-1) (committer_materialize_file.min=-1) (op_access.min=-1) (action_file_opened.failures.min=-1) (op_is_file.min=-1) (object_put_request.min=-1) (op_list_files.min=-1) (op_rename.failures.min=-1) (op_xattr_get_named_map.failures.min=-1) (stream_write_queue_duration.min=-1) (stream_read_block_acquire_read.min=-1) (object_multipart_initiated.failures.min=-1) (committer_stage_file_upload.failures.min=-1) (op_is_directory.failures.min=-1) (op_mkdirs.failures.min=-1) (op_get_delegation_token.failures.min=-1) (action_executor_acquired.failures.min=-1) (action_file_opened.min=-1));
maximums=((action_http_get_request.max=-1) (object_delete_request.failures.max=-1) (action_executor_acquired.failures.max=-1) (op_xattr_get_named.max=-1) (op_xattr_get_named_map.max=-1) (object_continue_list_request.failures.max=-1) (committer_magic_marker_put.failures.max=-1) (object_multipart_initiated.max=-1) (committer_commit_job.failures.max=-1) (stream_file_cache_eviction.failures.max=-1) (op_glob_status.failures.max=-1) (op_delete.failures.max=-1) (stream_read_block_fetch_operations.failures.max=-1) (object_bulk_delete_request.failures.max=-1) (store_exists_probe.failures.max=-1) (op_delete.max=-1) (object_multipart_list.failures.max=-1) (stream_write_queue_duration.failures.max=-1) (object_multipart_aborted.failures.max=-1) (op_xattr_get_named.failures.max=-1) (op_mkdirs.failures.max=-1) (op_xattr_get_map.max=-1) (object_continue_list_request.max=-1) (op_abort.max=-1) (op_get_file_checksum.max=-1) (stream_read_remote_stream_aborted.failures.max=-1) (object_multipart_list.max=-1) (op_create.max=-1) (object_list_request.failures.max=-1) (action_http_get_request.failures.max=-1) (op_get_content_summary.max=-1) (op_get_file_status.failures.max=-1) (stream_write_queue_duration.max=-1) (action_http_head_request.failures.max=-1) (op_get_delegation_token.max=-1) (op_copy_from_local_file.max=-1) (op_exists.failures.max=-1) (action_http_head_request.max=-1) (op_is_directory.failures.max=-1) (committer_materialize_file.failures.max=-1) (op_access.failures.max=-1) (op_createfile.max=-1) (object_bulk_delete_request.max=-1) (object_multipart_aborted.max=-1) (op_copy_from_local_file.failures.max=-1) (object_list_request.max=-1) (op_xattr_list.max=-1) (stream_read_remote_stream_aborted.max=-1) (op_list_files.max=-1) (multipart_upload_list.max=-1) (op_get_file_status.max=-1) (op_access.max=-1) (op_get_content_summary.failures.max=-1) (committer_commit_job.max=-1) (op_get_delegation_token.failures.max=-1) (committer_materialize_file.max=-1) (multipart_upload_list.failures.max=-1) (op_xattr_get_map.failures.max=-1) (action_file_opened.max=-1) (stream_read_remote_stream_drain.max=-1) (object_multipart_initiated.failures.max=-1) (op_createfile.failures.max=-1) (op_xattr_list.failures.max=-1) (delegation_tokens_issued.failures.max=-1) (object_put_request.failures.max=-1) (op_create.failures.max=-1) (delegation_tokens_issued.max=-1) (op_glob_status.max=-1) (action_executor_acquired.max=-1) (stream_read_block_acquire_read.failures.max=-1) (action_file_opened.failures.max=-1) (op_xattr_get_named_map.failures.max=-1) (stream_read_remote_stream_drain.failures.max=-1) (store_exists_probe.max=-1) (object_delete_request.max=-1) (committer_load_single_pending_file.max=-1) (op_list_status.max=-1) (stream_file_cache_eviction.max=-1) (committer_stage_file_upload.failures.max=-1) (op_is_directory.max=-1) (op_rename.failures.max=-1) (committer_magic_marker_put.max=-1) (op_mkdirs.max=-1) (op_rename.max=-1) (op_exists.max=-1) (committer_load_single_pending_file.failures.max=-1) (stream_read_block_fetch_operations.max=-1) (op_is_file.max=-1) (op_list_files.failures.max=-1) (object_put_request.max=-1) (op_abort.failures.max=-1) (committer_stage_file_upload.max=-1) (stream_read_block_acquire_read.max=-1) (op_list_status.failures.max=-1) (op_get_file_checksum.failures.max=-1) (op_is_file.failures.max=-1));
means=((delegation_tokens_issued.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_access.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_get_content_summary.failures.mean=(samples=0, sum=0, mean=0.0000)) (committer_stage_file_upload.mean=(samples=0, sum=0, mean=0.0000)) (op_list_status.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_get_named_map.mean=(samples=0, sum=0, mean=0.0000)) (committer_magic_marker_put.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_is_file.mean=(samples=0, sum=0, mean=0.0000)) (op_glob_status.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_aborted.mean=(samples=0, sum=0, mean=0.0000)) (delegation_tokens_issued.mean=(samples=0, sum=0, mean=0.0000)) (object_delete_request.mean=(samples=0, sum=0, mean=0.0000)) (op_delete.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_abort.mean=(samples=0, sum=0, mean=0.0000)) (committer_magic_marker_put.mean=(samples=0, sum=0, mean=0.0000)) (action_executor_acquired.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_file_opened.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_copy_from_local_file.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_put_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_continue_list_request.mean=(samples=0, sum=0, mean=0.0000)) (op_is_directory.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_copy_from_local_file.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_block_acquire_read.mean=(samples=0, sum=0, mean=0.0000)) (op_glob_status.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_executor_acquired.mean=(samples=0, sum=0, mean=0.0000)) (op_create.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_list.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_aborted.failures.mean=(samples=0, sum=0, mean=0.0000)) (committer_materialize_file.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)) (object_delete_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_get_file_checksum.mean=(samples=0, sum=0, mean=0.0000)) (op_delete.mean=(samples=0, sum=0, mean=0.0000)) (action_file_opened.mean=(samples=0, sum=0, mean=0.0000)) (op_exists.mean=(samples=0, sum=0, mean=0.0000)) (op_abort.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_access.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_get_named_map.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_list_status.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_http_head_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (committer_load_single_pending_file.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_mkdirs.mean=(samples=0, sum=0, mean=0.0000)) (op_get_delegation_token.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_write_queue_duration.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_createfile.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_get_named.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_get_map.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_file_cache_eviction.failures.mean=(samples=0, sum=0, mean=0.0000)) (committer_commit_job.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_initiated.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_list_files.mean=(samples=0, sum=0, mean=0.0000)) (op_createfile.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_block_fetch_operations.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_list.mean=(samples=0, sum=0, mean=0.0000)) 
(store_exists_probe.failures.mean=(samples=0, sum=0, mean=0.0000)) (committer_stage_file_upload.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_is_file.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_http_head_request.mean=(samples=0, sum=0, mean=0.0000)) (object_continue_list_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_get_file_status.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_get_file_checksum.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_is_directory.mean=(samples=0, sum=0, mean=0.0000)) (op_list_files.failures.mean=(samples=0, sum=0, mean=0.0000)) (committer_commit_job.mean=(samples=0, sum=0, mean=0.0000)) (op_rename.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_remote_stream_aborted.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_initiated.mean=(samples=0, sum=0, mean=0.0000)) (object_bulk_delete_request.mean=(samples=0, sum=0, mean=0.0000)) (op_rename.mean=(samples=0, sum=0, mean=0.0000)) (store_exists_probe.mean=(samples=0, sum=0, mean=0.0000)) (stream_write_queue_duration.mean=(samples=0, sum=0, mean=0.0000)) (op_exists.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_block_fetch_operations.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_remote_stream_aborted.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_block_acquire_read.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_list.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_remote_stream_drain.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_create.mean=(samples=0, sum=0, mean=0.0000)) (object_list_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_bulk_delete_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (committer_materialize_file.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_remote_stream_drain.mean=(samples=0, sum=0, mean=0.0000)) (stream_file_cache_eviction.mean=(samples=0, sum=0, mean=0.0000)) (object_list_request.mean=(samples=0, sum=0, mean=0.0000)) (op_get_content_summary.mean=(samples=0, sum=0, mean=0.0000)) (multipart_upload_list.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_get_map.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_get_named.mean=(samples=0, sum=0, mean=0.0000)) (op_mkdirs.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_get_file_status.mean=(samples=0, sum=0, mean=0.0000)) (op_get_delegation_token.mean=(samples=0, sum=0, mean=0.0000)) (object_put_request.mean=(samples=0, sum=0, mean=0.0000)) (committer_load_single_pending_file.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_list.failures.mean=(samples=0, sum=0, mean=0.0000)) (multipart_upload_list.mean=(samples=0, sum=0, mean=0.0000)));
}}, ClientSideEncryption=false}
2024-04-24 15:57:23,534 [setup] DEBUG impl.MkdirOperation (MkdirOperation.java:execute(94)) - Making directory: s3a://stevel-london/test
2024-04-24 15:57:23,535 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:innerGetFileStatus(3950)) - Getting path status for s3a://stevel-london/test  (test); needEmptyDirectory=false
2024-04-24 15:57:23,535 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:s3GetFileStatus(4009)) - S3GetFileStatus s3a://stevel-london/test
2024-04-24 15:57:23,541 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:listObjects(2965)) - LIST List stevel-london:/test/ delimiter=/ keys=2 requester pays=null
2024-04-24 15:57:23,541 [setup] DEBUG s3a.S3AFileSystem (DurationInfo.java:<init>(80)) - Starting: LIST
2024-04-24 15:57:23,574 [setup] DEBUG s3a.Invoker (DurationInfo.java:<init>(80)) - Starting: create credentials
2024-04-24 15:57:23,577 [setup] DEBUG s3a.Invoker (DurationInfo.java:close(101)) - create credentials: duration 0:00.003s
2024-04-24 15:57:23,577 [setup] DEBUG s3a.AWSCredentialProviderList (AWSCredentialProviderList.java:resolveCredentials(195)) - No credentials from TemporaryAWSCredentialsProvider: org.apache.hadoop.fs.s3a.auth.NoAwsCredentialsException: Session credentials in Hadoop configuration: No AWS Credentials
2024-04-24 15:57:23,577 [setup] DEBUG s3a.AWSCredentialProviderList (AWSCredentialProviderList.java:resolveCredentials(182)) - Using credentials from SimpleAWSCredentialsProvider{accessKey.empty=false, secretKey.empty=false}
2024-04-24 15:57:23,599 [setup] DEBUG impl.LoggingAuditor (LoggingAuditor.java:modifyHttpRequest(400)) - [11] 0d643328-91f6-4da7-acae-86fd72161299-00000005 Executing op_mkdirs with {object_list_request 'test/' size=2, mutating=false}; https://audit.example.org/hadoop/1/op_mkdirs/0d643328-91f6-4da7-acae-86fd72161299-00000005/?op=op_mkdirs&p1=test&pr=stevel&ps=d8c4e4fc-c0aa-48d0-9605-2bf9ed7b08b2&id=0d643328-91f6-4da7-acae-86fd72161299-00000005&t0=11&fs=0d643328-91f6-4da7-acae-86fd72161299&t1=11&ts=1713970643534
2024-04-24 15:57:24,224 [setup] DEBUG s3a.S3AFileSystem (DurationInfo.java:close(101)) - LIST: duration 0:00.682s
2024-04-24 15:57:24,224 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:s3GetFileStatus(4100)) - Not Found: s3a://stevel-london/test
2024-04-24 15:57:24,224 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:innerGetFileStatus(3950)) - Getting path status for s3a://stevel-london/test  (test); needEmptyDirectory=false
2024-04-24 15:57:24,224 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:s3GetFileStatus(4009)) - S3GetFileStatus s3a://stevel-london/test
2024-04-24 15:57:24,230 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:lambda$getObjectMetadata$10(2903)) - HEAD test with change tracker null
2024-04-24 15:57:24,233 [setup] DEBUG impl.LoggingAuditor (LoggingAuditor.java:modifyHttpRequest(400)) - [11] 0d643328-91f6-4da7-acae-86fd72161299-00000005 Executing op_mkdirs with {action_http_head_request 'test' size=0, mutating=false}; https://audit.example.org/hadoop/1/op_mkdirs/0d643328-91f6-4da7-acae-86fd72161299-00000005/?op=op_mkdirs&p1=test&pr=stevel&ps=d8c4e4fc-c0aa-48d0-9605-2bf9ed7b08b2&id=0d643328-91f6-4da7-acae-86fd72161299-00000005&t0=11&fs=0d643328-91f6-4da7-acae-86fd72161299&t1=11&ts=1713970643534
2024-04-24 15:57:24,276 [setup] DEBUG s3a.Invoker (Invoker.java:retryUntranslated(474)) - GET test ; software.amazon.awssdk.services.s3.model.NoSuchKeyException: null (Service: S3, Status Code: 404, Request ID: YM4CDP856RPCKP8R, Extended Request ID: JbbjhNCFZf+5Hrj/WlnzH54455UAuBPn08CVH1vUqgpVm9J9Z7r6zpwxTeoaudWLrPYAzNKN0N/VmQL1yyv2LQ==) (Service: S3, Status Code: 404, Request ID: YM4CDP856RPCKP8R), 
2024-04-24 15:57:24,277 [setup] DEBUG s3a.S3ARetryPolicy (S3ARetryPolicy.java:shouldRetry(308)) - Retry probe for FileNotFoundException with 0 retries and 0 failovers, idempotent=true, due to java.io.FileNotFoundException: GET test on /: software.amazon.awssdk.services.s3.model.NoSuchKeyException: null (Service: S3, Status Code: 404, Request ID: YM4CDP856RPCKP8R, Extended Request ID: JbbjhNCFZf+5Hrj/WlnzH54455UAuBPn08CVH1vUqgpVm9J9Z7r6zpwxTeoaudWLrPYAzNKN0N/VmQL1yyv2LQ==) (Service: S3, Status Code: 404, Request ID: YM4CDP856RPCKP8R):NoSuchKey
java.io.FileNotFoundException: GET test on /: software.amazon.awssdk.services.s3.model.NoSuchKeyException: null (Service: S3, Status Code: 404, Request ID: YM4CDP856RPCKP8R, Extended Request ID: JbbjhNCFZf+5Hrj/WlnzH54455UAuBPn08CVH1vUqgpVm9J9Z7r6zpwxTeoaudWLrPYAzNKN0N/VmQL1yyv2LQ==) (Service: S3, Status Code: 404, Request ID: YM4CDP856RPCKP8R):NoSuchKey
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:278)
	at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:481)
	at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:431)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:2895)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:2875)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:4024)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3952)
	at org.apache.hadoop.fs.s3a.S3AFileSystem$MkdirOperationCallbacksImpl.probePathStatus(S3AFileSystem.java:3809)
	at org.apache.hadoop.fs.s3a.impl.MkdirOperation.probePathStatusOrNull(MkdirOperation.java:173)
	at org.apache.hadoop.fs.s3a.impl.MkdirOperation.getPathStatusExpectingDir(MkdirOperation.java:197)
	at org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:108)
	at org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:57)
	at org.apache.hadoop.fs.s3a.impl.ExecutingStoreOperation.apply(ExecutingStoreOperation.java:76)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:556)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:537)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:458)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2722)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2741)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:3781)
	at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2494)
	at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:363)
	at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:205)
	at org.apache.hadoop.fs.s3a.AbstractS3ATestBase.setup(AbstractS3ATestBase.java:111)
	at org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest.setup(AbstractS3ACostTest.java:129)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.RunBefores.invokeMethod(RunBefores.java:33)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:750)
Caused by: software.amazon.awssdk.services.s3.model.NoSuchKeyException: null (Service: S3, Status Code: 404, Request ID: YM4CDP856RPCKP8R, Extended Request ID: JbbjhNCFZf+5Hrj/WlnzH54455UAuBPn08CVH1vUqgpVm9J9Z7r6zpwxTeoaudWLrPYAzNKN0N/VmQL1yyv2LQ==) (Service: S3, Status Code: 404, Request ID: YM4CDP856RPCKP8R)
	at software.amazon.awssdk.services.s3.model.NoSuchKeyException$BuilderImpl.build(NoSuchKeyException.java:126)
	at software.amazon.awssdk.services.s3.model.NoSuchKeyException$BuilderImpl.build(NoSuchKeyException.java:80)
	at software.amazon.awssdk.services.s3.internal.handlers.ExceptionTranslationInterceptor.modifyException(ExceptionTranslationInterceptor.java:63)
	at software.amazon.awssdk.core.interceptor.ExecutionInterceptorChain.modifyException(ExecutionInterceptorChain.java:181)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.utils.ExceptionReportingUtils.runModifyException(ExceptionReportingUtils.java:54)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.utils.ExceptionReportingUtils.reportFailureToInterceptors(ExceptionReportingUtils.java:38)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:39)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
	at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:224)
	at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:103)
	at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:173)
	at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:80)
	at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:182)
	at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:74)
	at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45)
	at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:53)
	at software.amazon.awssdk.services.s3.DefaultS3Client.headObject(DefaultS3Client.java:6319)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getObjectMetadata$10(S3AFileSystem.java:2907)
	at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468)
	... 37 more
2024-04-24 15:57:24,278 [setup] DEBUG s3a.S3ARetryPolicy (S3ARetryPolicy.java:shouldRetry(313)) - Retry action is RetryAction(action=FAIL, delayMillis=0, reason=try once and fail.)
2024-04-24 15:57:24,279 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:s3GetFileStatus(4100)) - Not Found: s3a://stevel-london/test
2024-04-24 15:57:24,280 [setup] DEBUG s3a.Invoker (DurationInfo.java:<init>(80)) - Starting: PUT 0-byte object 
2024-04-24 15:57:24,286 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:putObjectDirect(3267)) - PUT 0 bytes to test/
2024-04-24 15:57:24,286 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:incrementPutStartStatistics(3341)) - PUT start 0 bytes
2024-04-24 15:57:24,292 [setup] DEBUG impl.LoggingAuditor (LoggingAuditor.java:modifyHttpRequest(400)) - [11] 0d643328-91f6-4da7-acae-86fd72161299-00000005 Executing op_mkdirs with {object_put_request 'test/' size=0, mutating=true}; https://audit.example.org/hadoop/1/op_mkdirs/0d643328-91f6-4da7-acae-86fd72161299-00000005/?op=op_mkdirs&p1=test&pr=stevel&ps=d8c4e4fc-c0aa-48d0-9605-2bf9ed7b08b2&id=0d643328-91f6-4da7-acae-86fd72161299-00000005&t0=11&fs=0d643328-91f6-4da7-acae-86fd72161299&t1=11&ts=1713970643534
2024-04-24 15:57:24,376 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:incrementPutCompletedStatistics(3357)) - PUT completed success=true; 0 bytes
2024-04-24 15:57:24,376 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:finishedWrite(4690)) - Finished write to test/, len 0. etag "879c68e18e6aa75a0edcab0e0a361786", version JOVjFhK1ZAGKfaJV39PLzFeh0gEF9JCJ
2024-04-24 15:57:24,376 [setup] DEBUG s3a.Invoker (DurationInfo.java:close(101)) - PUT 0-byte object : duration 0:00.096s
2024-04-24 15:57:24,376 [setup] DEBUG S3AFileSystem.Progress (S3AFileSystem.java:incrementPutProgressStatistics(3374)) - PUT test: 0 bytes
2024-04-24 15:57:24,428 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initializeClass(5724)) - Initialize S3A class
2024-04-24 15:57:24,435 [setup] DEBUG s3a.S3ATestUtils (S3ATestUtils.java:removeBucketOverrides(914)) - Removing option fs.s3a.bucket.stevel-london.directory.marker.retention; was keep
2024-04-24 15:57:24,436 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initializeClass(5724)) - Initialize S3A class
2024-04-24 15:57:24,436 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(549)) - Initializing S3AFileSystem for stevel-london
2024-04-24 15:57:24,436 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:propagateBucketOptions(1103)) - Propagating entries under fs.s3a.bucket.stevel-london.
2024-04-24 15:57:24,437 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:propagateBucketOptions(1124)) - Updating fs.s3a.versioned.store from [core-site.xml]
2024-04-24 15:57:24,437 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:propagateBucketOptions(1124)) - Updating fs.s3a.encryption.algorithm from [core-site.xml]
2024-04-24 15:57:24,437 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:propagateBucketOptions(1124)) - Updating fs.s3a.endpoint from [core-site.xml]
2024-04-24 15:57:24,437 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:propagateBucketOptions(1124)) - Updating fs.s3a.encryption.key from [core-site.xml]
2024-04-24 15:57:24,437 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:propagateBucketOptions(1124)) - Updating fs.s3a.change.detection.source from [core-site.xml]
2024-04-24 15:57:24,437 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:maybeIsolateClassloader(1708)) - Configuration classloader set to S3AFileSystem classloader: sun.misc.Launcher$AppClassLoader@18b4aac2
2024-04-24 15:57:24,437 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:buildEncryptionSecrets(1477)) - Using SSE-KMS with key of length 75 ending with 3
2024-04-24 15:57:24,437 [setup] DEBUG s3a.S3ARetryPolicy (S3ARetryPolicy.java:<init>(145)) - Retrying on recoverable AWS failures 3 times with an initial interval of 500ms
2024-04-24 15:57:24,440 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(605)) - Client Side Encryption enabled: false
2024-04-24 15:57:24,440 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.paging.maximum is 5000
2024-04-24 15:57:24,440 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longBytesOption(952)) - Value of fs.s3a.block.size is 33554432
2024-04-24 15:57:24,440 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longBytesOption(952)) - Value of fs.s3a.prefetch.block.size is 131072
2024-04-24 15:57:24,440 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.prefetch.block.count is 8
2024-04-24 15:57:24,441 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.max.total.tasks is 32
2024-04-24 15:57:24,441 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.threads.keepalivetime = PT1M
2024-04-24 15:57:24,441 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.executor.capacity is 16
2024-04-24 15:57:24,442 [setup] DEBUG auth.SignerManager (SignerManager.java:initCustomSigners(68)) - No custom signers specified
2024-04-24 15:57:24,442 [setup] DEBUG audit.AuditIntegration (AuditIntegration.java:createAndInitAuditor(109)) - Auditor class is class org.apache.hadoop.fs.s3a.audit.impl.LoggingAuditor
2024-04-24 15:57:24,442 [setup] DEBUG impl.ActiveAuditManagerS3A (ActiveAuditManagerS3A.java:serviceInit(199)) - Audit manager initialized with audit service LoggingAuditor{ID='e40ad9fb-842f-43ee-8784-912d440e2355', headerEnabled=true, rejectOutOfSpan=true, isMultipartUploadEnabled=true}
2024-04-24 15:57:24,442 [setup] DEBUG impl.ActiveAuditManagerS3A (ActiveAuditManagerS3A.java:serviceStart(212)) - Started audit service LoggingAuditor{ID='e40ad9fb-842f-43ee-8784-912d440e2355', headerEnabled=true, rejectOutOfSpan=true, isMultipartUploadEnabled=true}
2024-04-24 15:57:24,442 [setup] DEBUG audit.AuditIntegration (AuditIntegration.java:createAndStartAuditManager(76)) - Started Audit Manager Service ActiveAuditManagerS3A in state ActiveAuditManagerS3A: STARTED, auditor=LoggingAuditor{ID='e40ad9fb-842f-43ee-8784-912d440e2355', headerEnabled=true, rejectOutOfSpan=true, isMultipartUploadEnabled=true}}
2024-04-24 15:57:24,442 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longOption(930)) - Value of fs.s3a.internal.upload.part.count.limit is 10000
2024-04-24 15:57:24,442 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:createRequestFactory(1202)) - Unset storage class property fs.s3a.create.storage.class; falling back to default storage class
2024-04-24 15:57:24,443 [setup] DEBUG auth.CredentialProviderListFactory (CredentialProviderListFactory.java:createAWSV2CredentialProvider(306)) - Credential provider class is org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider
2024-04-24 15:57:24,443 [setup] DEBUG auth.CredentialProviderListFactory (CredentialProviderListFactory.java:createAWSV2CredentialProvider(306)) - Credential provider class is org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
2024-04-24 15:57:24,443 [setup] DEBUG auth.CredentialProviderListFactory (CredentialProviderListFactory.java:createAWSCredentialProviderList(151)) - For URI s3a://stevel-london/, using credentials AWSCredentialProviderList name=; refcount= 1; size=2: [TemporaryAWSCredentialsProvider, SimpleAWSCredentialsProvider{accessKey.empty=false, secretKey.empty=false}]
2024-04-24 15:57:24,443 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:bindAWSClient(1047)) - Using credential provider AWSCredentialProviderList name=; refcount= 1; size=2: [TemporaryAWSCredentialsProvider, SimpleAWSCredentialsProvider{accessKey.empty=false, secretKey.empty=false}]
2024-04-24 15:57:24,443 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.connection.maximum is 512
2024-04-24 15:57:24,443 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.acquisition.timeout = PT1M
2024-04-24 15:57:24,443 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.ttl = PT5M
2024-04-24 15:57:24,443 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.establish.timeout = PT30S
2024-04-24 15:57:24,443 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.idle.time = PT1M
2024-04-24 15:57:24,443 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.timeout = PT25S
2024-04-24 15:57:24,443 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:getS3RegionFromEndpoint(397)) - Endpoint s3.eu-west-2.amazonaws.com is not the default; parsing
2024-04-24 15:57:24,443 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:configureEndpointAndRegion(326)) - Setting endpoint to https://s3.eu-west-2.amazonaws.com
2024-04-24 15:57:24,443 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:configureEndpointAndRegion(353)) - Setting region to eu-west-2 from endpoint
2024-04-24 15:57:24,443 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:maybeApplyS3AccessGrantsConfigurations(419)) - S3 Access Grants plugin is not enabled.
2024-04-24 15:57:24,443 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.request.timeout = PT1M
2024-04-24 15:57:24,444 [setup] DEBUG impl.AWSClientConfig (AWSClientConfig.java:initUserAgent(375)) - Using User-Agent: Hadoop 3.5.0-SNAPSHOT
2024-04-24 15:57:24,444 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.attempts.maximum is 2
2024-04-24 15:57:24,447 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.connection.maximum is 512
2024-04-24 15:57:24,447 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.acquisition.timeout = PT1M
2024-04-24 15:57:24,447 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.ttl = PT5M
2024-04-24 15:57:24,447 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.establish.timeout = PT30S
2024-04-24 15:57:24,447 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.idle.time = PT1M
2024-04-24 15:57:24,447 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.timeout = PT25S
2024-04-24 15:57:24,447 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:getS3RegionFromEndpoint(397)) - Endpoint s3.eu-west-2.amazonaws.com is not the default; parsing
2024-04-24 15:57:24,447 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:configureEndpointAndRegion(326)) - Setting endpoint to https://s3.eu-west-2.amazonaws.com
2024-04-24 15:57:24,447 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:configureEndpointAndRegion(353)) - Setting region to eu-west-2 from endpoint
2024-04-24 15:57:24,447 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:maybeApplyS3AccessGrantsConfigurations(419)) - S3 Access Grants plugin is not enabled.
2024-04-24 15:57:24,448 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.request.timeout = PT1M
2024-04-24 15:57:24,448 [setup] DEBUG impl.AWSClientConfig (AWSClientConfig.java:initUserAgent(375)) - Using User-Agent: Hadoop 3.5.0-SNAPSHOT
2024-04-24 15:57:24,448 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.attempts.maximum is 2
2024-04-24 15:57:24,450 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:doBucketProbing(831)) - skipping check for bucket existence
2024-04-24 15:57:24,450 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(696)) - Input fadvise policy = default
2024-04-24 15:57:24,450 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(698)) - Change detection policy = VersionIdChangeDetectionPolicy mode=Server
2024-04-24 15:57:24,451 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(702)) - Filesystem support for magic committers is enabled
2024-04-24 15:57:24,451 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.fast.upload.active.blocks is 4
2024-04-24 15:57:24,451 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(721)) - Using S3ABlockOutputStream with buffer = disk; block=67108864; queue limit=4; multipart=true
2024-04-24 15:57:24,451 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(731)) - fs.s3a.create.performance = false
2024-04-24 15:57:24,451 [setup] DEBUG impl.DirectoryPolicyImpl (DirectoryPolicyImpl.java:getDirectoryPolicy(189)) - Directory markers will be kept
2024-04-24 15:57:24,451 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(737)) - Directory marker retention policy is DirectoryMarkerRetention{policy='keep'}
2024-04-24 15:57:24,451 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.bulk.delete.page.size is 250
2024-04-24 15:57:24,451 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longBytesOption(952)) - Value of fs.s3a.readahead.range is 32768
2024-04-24 15:57:24,451 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of io.file.buffer.size is 4194304
2024-04-24 15:57:24,451 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longBytesOption(952)) - Value of fs.s3a.input.async.drain.threshold is 1024
2024-04-24 15:57:24,451 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.vectored.active.ranged.reads is 4
2024-04-24 15:57:24,451 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longBytesOption(952)) - Value of fs.s3a.vectored.read.min.seek.size is 4096
2024-04-24 15:57:24,451 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longBytesOption(952)) - Value of fs.s3a.vectored.read.max.merged.size is 1048576
2024-04-24 15:57:24,451 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(763)) - Using optimized copyFromLocal implementation: true
2024-04-24 15:57:24,454 [setup] INFO  contract.AbstractFSContractTestBase (AbstractFSContractTestBase.java:setup(196)) - Test filesystem = s3a://stevel-london implemented by S3AFileSystem{uri=s3a://stevel-london, workingDir=s3a://stevel-london/user/stevel, partSize=67108864, enableMultiObjectsDelete=true, maxKeys=5000, OpenFileSupport{changePolicy=VersionIdChangeDetectionPolicy mode=Server, defaultReadAhead=32768, defaultBufferSize=4194304, defaultAsyncDrainThreshold=1024, defaultInputPolicy=default}, blockSize=33554432, multiPartThreshold=134217728, s3EncryptionAlgorithm='SSE_KMS', blockFactory=org.apache.hadoop.fs.s3a.S3ADataBlocks$DiskBlockFactory@fbacef6, auditManager=Service ActiveAuditManagerS3A in state ActiveAuditManagerS3A: STARTED, auditor=LoggingAuditor{ID='e40ad9fb-842f-43ee-8784-912d440e2355', headerEnabled=true, rejectOutOfSpan=true, isMultipartUploadEnabled=true}}, authoritativePath=[], useListV1=false, magicCommitter=true, boundedExecutor=BlockingThreadPoolExecutorService{SemaphoredDelegatingExecutor{permitCount=200, available=200, waiting=0}, activeCount=0}, unboundedExecutor=java.util.concurrent.ThreadPoolExecutor@6e3a3097[Running, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], credentials=AWSCredentialProviderList name=; refcount= 1; size=2: [TemporaryAWSCredentialsProvider, SimpleAWSCredentialsProvider{accessKey.empty=false, secretKey.empty=false}], delegation tokens=disabled, DirectoryMarkerRetention{policy='keep'}, instrumentation {S3AInstrumentation{instanceIOStatistics=counters=((stream_file_cache_eviction=0) (audit_failure=0) (committer_load_single_pending_file=0) (fake_directories_deleted=0) (store_io_request=0) (stream_read_block_acquire_read.failures=0) (multipart_upload_part_put=0) (stream_read_seek_backward_operations=0) (committer_commits_created=0) (stream_write_queue_duration.failures=0) (files_copied=0) (op_is_file=0) (op_createfile=0) (action_http_get_request.failures=0) (op_get_file_status=0) (object_continue_list_request.failures=0) (stream_read_vectored_incoming_ranges=0) (op_xattr_get_map=0) (committer_bytes_uploaded=0) (multipart_upload_abort_under_path_invoked=0) (stream_read_vectored_read_bytes_discarded=0) (op_rename=0) (multipart_upload_list.failures=0) (committer_magic_files_created=0) (store_exists_probe=0) (stream_read_block_fetch_operations.failures=0) (action_executor_acquired=0) (action_file_opened=0) (committer_commits_completed=0) (object_multipart_initiated=0) (op_copy_from_local_file.failures=0) (object_put_bytes=0) (op_mkdirs.failures=0) (committer_jobs_completed=0) (object_multipart_initiated.failures=0) (stream_read_operations=0) (object_delete_objects=0) (op_get_content_summary.failures=0) (stream_read_close_operations=0) (action_http_head_request.failures=0) (stream_read_closed=0) (audit_request_execution=0) (op_get_file_status.failures=0) (op_get_file_checksum=0) (stream_write_total_data=0) (stream_read_seek_bytes_skipped=0) (committer_materialize_file.failures=0) (op_list_status.failures=0) (stream_read_seek_forward_operations=0) (multipart_upload_completed=0) (store_io_throttled=0) (files_created=0) (stream_read_bytes=0) (directories_deleted=0) (committer_jobs_failed=0) (stream_read_operations_incomplete=0) (committer_materialize_file=0) (op_list_status=0) (op_xattr_get_named_map=0) (object_copy_requests=0) (committer_bytes_committed=0) (stream_read_vectored_combined_ranges=0) (stream_read_remote_stream_drain=0) (op_is_directory.failures=0) (op_get_content_summary=0) (object_multipart_aborted=0) 
(op_delete.failures=0) (op_list_files.failures=0) (op_access=0) (store_io_retry=0) (op_exists=0) (stream_read_total_bytes=0) (op_createfile.failures=0) (op_exists.failures=0) (stream_read_opened=0) (object_metadata_request=0) (object_put_request_completed=0) (op_create.failures=0) (object_put_request=0) (op_open=0) (op_glob_status.failures=0) (multipart_upload_started=0) (committer_tasks_failed=0) (stream_read_version_mismatches=0) (object_multipart_list=0) (fake_directories_created=0) (op_abort=0) (object_delete_request.failures=0) (committer_commits.failures=0) (stream_read_exceptions=0) (stream_read_seek_bytes_discarded=0) (committer_stage_file_upload=0) (object_bulk_delete_request=0) (object_continue_list_request=0) (object_multipart_aborted.failures=0) (op_hsync=0) (stream_write_exceptions=0) (op_rename.failures=0) (directories_created=0) (object_delete_request=0) (stream_write_block_uploads=0) (stream_read_fully_operations=0) (stream_write_queue_duration=0) (op_copy_from_local_file=0) (object_bulk_delete_request.failures=0) (stream_read_vectored_operations=0) (op_xattr_get_map.failures=0) (stream_write_block_uploads_aborted=0) (delegation_tokens_issued.failures=0) (committer_commits_aborted=0) (committer_load_single_pending_file.failures=0) (stream_write_total_time=0) (stream_read_seek_operations=0) (stream_read_remote_stream_aborted.failures=0) (object_put_request.failures=0) (action_executor_acquired.failures=0) (op_create=0) (op_get_delegation_token=0) (op_xattr_get_named=0) (committer_stage_file_upload.failures=0) (op_get_delegation_token.failures=0) (stream_read_bytes_discarded_in_abort=0) (committer_magic_marker_put.failures=0) (op_get_file_checksum.failures=0) (object_list_request.failures=0) (op_xattr_list.failures=0) (op_access.failures=0) (files_copied_bytes=0) (committer_magic_marker_put=0) (multipart_upload_aborted=0) (op_xattr_get_named_map.failures=0) (multipart_instantiated=0) (op_delete=0) (stream_read_unbuffered=0) (op_is_file.failures=0) (op_xattr_list=0) (delegation_tokens_issued=0) (op_list_located_status=0) (action_http_get_request=0) (stream_file_cache_eviction.failures=0) (multipart_upload_part_put_bytes=0) (object_list_request=0) (op_abort.failures=0) (op_list_files=0) (committer_tasks_completed=0) (stream_read_seek_policy_changed=0) (files_deleted=0) (op_xattr_get_named.failures=0) (stream_read_bytes_backwards_on_seek=0) (object_multipart_list.failures=0) (committer_commit_job.failures=0) (stream_read_remote_stream_drain.failures=0) (audit_access_check_failure=0) (op_mkdirs=0) (op_hflush=0) (multipart_upload_list=0) (stream_read_block_fetch_operations=0) (stream_write_block_uploads_committed=0) (stream_read_bytes_discarded_in_close=0) (op_openfile=0) (action_http_head_request=0) (stream_read_remote_stream_aborted=0) (files_delete_rejected=0) (audit_span_creation=1) (stream_read_block_acquire_read=0) (op_is_directory=0) (ignored_errors=0) (stream_evict_blocks_from_cache=0) (op_create_non_recursive=0) (stream_write_exceptions_completing_upload=0) (committer_commit_job=0) (committer_commits_reverted=0) (store_exists_probe.failures=0) (stream_aborted=0) (op_glob_status=0) (action_file_opened.failures=0) (stream_write_bytes=0));
gauges=((stream_write_block_uploads_active=0) (stream_read_blocks_in_cache=0) (stream_read_block_cache_enabled=0) (object_put_request_active=0) (stream_read_active_memory_in_use=0) (stream_read_block_fetch_operations=0) (stream_read_block_prefetch_enabled=0) (stream_read_active_prefetch_operations=0) (stream_write_block_uploads_data_pending=0) (client_side_encryption_enabled=0) (object_put_bytes_pending=0) (stream_read_block_size=0) (stream_write_block_uploads_pending=0) (stream_read_block_prefetch_limit=0));
minimums=((op_exists.failures.min=-1) (op_create.failures.min=-1) (op_is_directory.min=-1) (multipart_upload_list.failures.min=-1) (object_multipart_aborted.failures.min=-1) (stream_write_queue_duration.failures.min=-1) (stream_file_cache_eviction.min=-1) (object_multipart_list.failures.min=-1) (op_glob_status.failures.min=-1) (stream_read_remote_stream_aborted.min=-1) (action_http_get_request.failures.min=-1) (stream_read_block_acquire_read.failures.min=-1) (object_delete_request.failures.min=-1) (op_xattr_list.failures.min=-1) (action_executor_acquired.min=-1) (op_abort.failures.min=-1) (delegation_tokens_issued.min=-1) (stream_file_cache_eviction.failures.min=-1) (op_list_status.failures.min=-1) (op_xattr_list.min=-1) (op_get_content_summary.failures.min=-1) (object_multipart_initiated.min=-1) (op_abort.min=-1) (op_get_file_status.failures.min=-1) (op_create.min=-1) (op_createfile.min=-1) (committer_materialize_file.failures.min=-1) (op_createfile.failures.min=-1) (op_xattr_get_named_map.failures.min=-1) (action_http_head_request.min=-1) (action_file_opened.failures.min=-1) (object_list_request.min=-1) (op_get_file_checksum.failures.min=-1) (op_mkdirs.failures.min=-1) (op_list_files.failures.min=-1) (committer_stage_file_upload.failures.min=-1) (stream_read_block_acquire_read.min=-1) (committer_magic_marker_put.min=-1) (object_put_request.failures.min=-1) (stream_read_remote_stream_aborted.failures.min=-1) (action_http_get_request.min=-1) (object_continue_list_request.failures.min=-1) (object_multipart_aborted.min=-1) (op_get_file_status.min=-1) (store_exists_probe.failures.min=-1) (action_http_head_request.failures.min=-1) (committer_stage_file_upload.min=-1) (op_mkdirs.min=-1) (op_rename.min=-1) (delegation_tokens_issued.failures.min=-1) (op_is_directory.failures.min=-1) (object_put_request.min=-1) (op_access.min=-1) (object_delete_request.min=-1) (op_xattr_get_named.min=-1) (committer_commit_job.min=-1) (object_bulk_delete_request.failures.min=-1) (multipart_upload_list.min=-1) (op_list_status.min=-1) (stream_read_block_fetch_operations.failures.min=-1) (stream_read_remote_stream_drain.min=-1) (stream_read_block_fetch_operations.min=-1) (committer_load_single_pending_file.failures.min=-1) (stream_read_remote_stream_drain.failures.min=-1) (object_continue_list_request.min=-1) (op_get_content_summary.min=-1) (op_get_delegation_token.min=-1) (op_is_file.min=-1) (op_xattr_get_map.min=-1) (op_is_file.failures.min=-1) (stream_write_queue_duration.min=-1) (committer_materialize_file.min=-1) (op_rename.failures.min=-1) (op_list_files.min=-1) (op_xattr_get_named_map.min=-1) (op_glob_status.min=-1) (op_get_delegation_token.failures.min=-1) (op_copy_from_local_file.failures.min=-1) (op_xattr_get_map.failures.min=-1) (object_multipart_list.min=-1) (op_access.failures.min=-1) (op_exists.min=-1) (object_multipart_initiated.failures.min=-1) (op_delete.failures.min=-1) (committer_magic_marker_put.failures.min=-1) (op_get_file_checksum.min=-1) (op_delete.min=-1) (object_bulk_delete_request.min=-1) (object_list_request.failures.min=-1) (committer_commit_job.failures.min=-1) (op_xattr_get_named.failures.min=-1) (action_file_opened.min=-1) (committer_load_single_pending_file.min=-1) (op_copy_from_local_file.min=-1) (action_executor_acquired.failures.min=-1) (store_exists_probe.min=-1));
maximums=((object_delete_request.max=-1) (op_is_file.max=-1) (object_multipart_initiated.failures.max=-1) (action_http_get_request.failures.max=-1) (op_xattr_get_named_map.max=-1) (op_mkdirs.failures.max=-1) (action_http_head_request.max=-1) (op_createfile.max=-1) (action_executor_acquired.max=-1) (op_list_files.failures.max=-1) (stream_read_remote_stream_aborted.max=-1) (stream_read_remote_stream_drain.max=-1) (op_abort.failures.max=-1) (stream_write_queue_duration.failures.max=-1) (op_get_content_summary.max=-1) (object_continue_list_request.failures.max=-1) (op_get_delegation_token.max=-1) (op_xattr_get_map.max=-1) (object_continue_list_request.max=-1) (committer_materialize_file.max=-1) (action_http_head_request.failures.max=-1) (delegation_tokens_issued.max=-1) (op_rename.max=-1) (object_bulk_delete_request.failures.max=-1) (store_exists_probe.failures.max=-1) (op_create.max=-1) (stream_read_block_fetch_operations.max=-1) (op_get_file_status.failures.max=-1) (op_is_directory.failures.max=-1) (object_multipart_aborted.failures.max=-1) (op_create.failures.max=-1) (op_copy_from_local_file.max=-1) (stream_read_block_acquire_read.max=-1) (op_xattr_list.failures.max=-1) (op_xattr_get_named_map.failures.max=-1) (op_get_content_summary.failures.max=-1) (object_multipart_list.max=-1) (op_glob_status.failures.max=-1) (op_get_file_status.max=-1) (op_exists.failures.max=-1) (op_access.max=-1) (object_list_request.max=-1) (committer_commit_job.failures.max=-1) (stream_read_block_fetch_operations.failures.max=-1) (op_delete.failures.max=-1) (op_mkdirs.max=-1) (op_get_delegation_token.failures.max=-1) (op_rename.failures.max=-1) (committer_load_single_pending_file.max=-1) (op_list_status.failures.max=-1) (stream_file_cache_eviction.max=-1) (op_is_file.failures.max=-1) (committer_magic_marker_put.max=-1) (op_exists.max=-1) (multipart_upload_list.max=-1) (op_xattr_get_named.max=-1) (store_exists_probe.max=-1) (op_list_status.max=-1) (action_http_get_request.max=-1) (action_file_opened.max=-1) (committer_magic_marker_put.failures.max=-1) (action_executor_acquired.failures.max=-1) (op_createfile.failures.max=-1) (object_put_request.failures.max=-1) (committer_stage_file_upload.max=-1) (action_file_opened.failures.max=-1) (stream_read_remote_stream_drain.failures.max=-1) (op_copy_from_local_file.failures.max=-1) (op_access.failures.max=-1) (op_delete.max=-1) (object_put_request.max=-1) (stream_read_remote_stream_aborted.failures.max=-1) (op_is_directory.max=-1) (committer_load_single_pending_file.failures.max=-1) (object_multipart_list.failures.max=-1) (object_bulk_delete_request.max=-1) (stream_read_block_acquire_read.failures.max=-1) (op_abort.max=-1) (object_list_request.failures.max=-1) (op_get_file_checksum.failures.max=-1) (object_multipart_initiated.max=-1) (committer_stage_file_upload.failures.max=-1) (op_list_files.max=-1) (op_xattr_get_named.failures.max=-1) (multipart_upload_list.failures.max=-1) (object_multipart_aborted.max=-1) (op_get_file_checksum.max=-1) (delegation_tokens_issued.failures.max=-1) (stream_write_queue_duration.max=-1) (op_xattr_list.max=-1) (committer_commit_job.max=-1) (stream_file_cache_eviction.failures.max=-1) (op_xattr_get_map.failures.max=-1) (object_delete_request.failures.max=-1) (committer_materialize_file.failures.max=-1) (op_glob_status.max=-1));
means=((op_access.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_remote_stream_aborted.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_write_queue_duration.mean=(samples=0, sum=0, mean=0.0000)) (op_get_delegation_token.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_file_cache_eviction.mean=(samples=0, sum=0, mean=0.0000)) (op_is_directory.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_get_named_map.mean=(samples=0, sum=0, mean=0.0000)) (committer_magic_marker_put.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_get_delegation_token.mean=(samples=0, sum=0, mean=0.0000)) (action_executor_acquired.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_http_head_request.mean=(samples=0, sum=0, mean=0.0000)) (delegation_tokens_issued.mean=(samples=0, sum=0, mean=0.0000)) (op_glob_status.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_is_file.mean=(samples=0, sum=0, mean=0.0000)) (committer_load_single_pending_file.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_put_request.mean=(samples=0, sum=0, mean=0.0000)) (object_delete_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_is_file.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_file_opened.mean=(samples=0, sum=0, mean=0.0000)) (committer_load_single_pending_file.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_initiated.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_block_fetch_operations.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_get_file_checksum.failures.mean=(samples=0, sum=0, mean=0.0000)) (committer_stage_file_upload.mean=(samples=0, sum=0, mean=0.0000)) (op_get_file_status.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_remote_stream_drain.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_get_map.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_list_request.mean=(samples=0, sum=0, mean=0.0000)) (op_glob_status.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_initiated.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_create.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_continue_list_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_bulk_delete_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_copy_from_local_file.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_executor_acquired.mean=(samples=0, sum=0, mean=0.0000)) (committer_stage_file_upload.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_copy_from_local_file.mean=(samples=0, sum=0, mean=0.0000)) (action_file_opened.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_list_files.mean=(samples=0, sum=0, mean=0.0000)) (op_abort.mean=(samples=0, sum=0, mean=0.0000)) (op_createfile.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_file_cache_eviction.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_get_file_status.mean=(samples=0, sum=0, mean=0.0000)) (multipart_upload_list.mean=(samples=0, sum=0, mean=0.0000)) (object_list_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_list.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_list_status.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_get_map.mean=(samples=0, sum=0, mean=0.0000)) (store_exists_probe.failures.mean=(samples=0, sum=0, mean=0.0000)) (committer_commit_job.mean=(samples=0, sum=0, mean=0.0000)) (op_delete.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_get_named.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_remote_stream_drain.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_rename.mean=(samples=0, sum=0, mean=0.0000)) 
(op_get_file_checksum.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_remote_stream_aborted.mean=(samples=0, sum=0, mean=0.0000)) (store_exists_probe.mean=(samples=0, sum=0, mean=0.0000)) (op_mkdirs.mean=(samples=0, sum=0, mean=0.0000)) (op_create.mean=(samples=0, sum=0, mean=0.0000)) (committer_materialize_file.mean=(samples=0, sum=0, mean=0.0000)) (op_get_content_summary.mean=(samples=0, sum=0, mean=0.0000)) (op_list_files.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_block_acquire_read.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_abort.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_delete.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_list.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_is_directory.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_get_named_map.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_write_queue_duration.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_delete_request.mean=(samples=0, sum=0, mean=0.0000)) (object_continue_list_request.mean=(samples=0, sum=0, mean=0.0000)) (op_get_content_summary.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_block_fetch_operations.mean=(samples=0, sum=0, mean=0.0000)) (object_put_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_bulk_delete_request.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_block_acquire_read.mean=(samples=0, sum=0, mean=0.0000)) (delegation_tokens_issued.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_rename.failures.mean=(samples=0, sum=0, mean=0.0000)) (committer_commit_job.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_http_head_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_list.mean=(samples=0, sum=0, mean=0.0000)) (committer_magic_marker_put.mean=(samples=0, sum=0, mean=0.0000)) (op_exists.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_list_status.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_access.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_aborted.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.mean=(samples=0, sum=0, mean=0.0000)) (op_exists.mean=(samples=0, sum=0, mean=0.0000)) (op_xattr_get_named.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (multipart_upload_list.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_aborted.failures.mean=(samples=0, sum=0, mean=0.0000)) (op_createfile.mean=(samples=0, sum=0, mean=0.0000)) (op_mkdirs.failures.mean=(samples=0, sum=0, mean=0.0000)) (object_multipart_list.mean=(samples=0, sum=0, mean=0.0000)) (committer_materialize_file.failures.mean=(samples=0, sum=0, mean=0.0000)));
}}, ClientSideEncryption=false}
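
Annotation: the setup lines above show the prefetching stream wired up with `fs.s3a.prefetch.block.size = 131072` and `fs.s3a.prefetch.block.count = 8`. A minimal sketch of building that configuration programmatically — the two block keys and their values are copied straight from the log; `fs.s3a.prefetch.enabled` is the documented switch for selecting the prefetching input stream and is an assumption here, since the log never prints it:

```java
// Sketch only: rebuild the prefetch settings visible in the setup log.
// The block size/count keys and values are taken from the log output;
// fs.s3a.prefetch.enabled (assumed) is the switch that selects the
// prefetching stream in the first place.
import org.apache.hadoop.conf.Configuration;

public class PrefetchConfig {
  public static Configuration prefetchingConf() {
    Configuration conf = new Configuration();
    conf.setBoolean("fs.s3a.prefetch.enabled", true);        // assumed switch
    conf.setLong("fs.s3a.prefetch.block.size", 128 * 1024);  // 131072, as logged
    conf.setInt("fs.s3a.prefetch.block.count", 8);           // as logged
    return conf;
  }
}
```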
2024-04-24 15:57:24,455 [setup] DEBUG impl.MkdirOperation (MkdirOperation.java:execute(94)) - Making directory: s3a://stevel-london/test
2024-04-24 15:57:24,455 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:innerGetFileStatus(3950)) - Getting path status for s3a://stevel-london/test  (test); needEmptyDirectory=false
2024-04-24 15:57:24,455 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:s3GetFileStatus(4009)) - S3GetFileStatus s3a://stevel-london/test
2024-04-24 15:57:24,455 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:listObjects(2965)) - LIST List stevel-london:/test/ delimiter=/ keys=2 requester pays=null
2024-04-24 15:57:24,455 [setup] DEBUG s3a.S3AFileSystem (DurationInfo.java:<init>(80)) - Starting: LIST
2024-04-24 15:57:24,456 [setup] DEBUG s3a.Invoker (DurationInfo.java:<init>(80)) - Starting: create credentials
2024-04-24 15:57:24,456 [setup] DEBUG s3a.Invoker (DurationInfo.java:close(101)) - create credentials: duration 0:00.000s
2024-04-24 15:57:24,456 [setup] DEBUG s3a.AWSCredentialProviderList (AWSCredentialProviderList.java:resolveCredentials(195)) - No credentials from TemporaryAWSCredentialsProvider: org.apache.hadoop.fs.s3a.auth.NoAwsCredentialsException: Session credentials in Hadoop configuration: No AWS Credentials
2024-04-24 15:57:24,456 [setup] DEBUG s3a.AWSCredentialProviderList (AWSCredentialProviderList.java:resolveCredentials(182)) - Using credentials from SimpleAWSCredentialsProvider{accessKey.empty=false, secretKey.empty=false}
2024-04-24 15:57:24,457 [setup] DEBUG impl.LoggingAuditor (LoggingAuditor.java:modifyHttpRequest(400)) - [11] e40ad9fb-842f-43ee-8784-912d440e2355-00000008 Executing op_mkdirs with {object_list_request 'test/' size=2, mutating=false}; https://audit.example.org/hadoop/1/op_mkdirs/e40ad9fb-842f-43ee-8784-912d440e2355-00000008/?op=op_mkdirs&p1=test&pr=stevel&ps=d8c4e4fc-c0aa-48d0-9605-2bf9ed7b08b2&id=e40ad9fb-842f-43ee-8784-912d440e2355-00000008&t0=11&fs=e40ad9fb-842f-43ee-8784-912d440e2355&t1=11&ts=1713970644455
2024-04-24 15:57:24,848 [setup] DEBUG s3a.S3AFileSystem (DurationInfo.java:close(101)) - LIST: duration 0:00.393s
2024-04-24 15:57:24,849 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:s3GetFileStatus(4073)) - Found path as directory (with /)
2024-04-24 15:57:24,849 [setup] DEBUG s3a.S3AFileSystem (S3ListResult.java:logAtDebug(146)) - Prefix count = 0; object count=1
2024-04-24 15:57:24,849 [setup] DEBUG s3a.S3AFileSystem (S3ListResult.java:logAtDebug(149)) - Summary: test/ 0
2024-04-24 15:57:24,862 [setup] DEBUG s3a.S3ATestUtils (S3ATestUtils.java:removeBucketOverrides(914)) - Removing option fs.s3a.bucket.stevel-london.directory.marker.retention; was keep
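
Annotation: the `propagateBucketOptions`/`removeBucketOverrides` lines are the per-bucket configuration mechanism at work — entries under `fs.s3a.bucket.<bucket>.` are copied onto the base `fs.s3a.` keys when a filesystem for that bucket is initialized. A hedged sketch using the two overrides the log actually reports (endpoint for stevel-london, endpoint region for noaa-cors-pds); the values shown are illustrative, not taken from the test's core-site.xml:

```java
// Sketch of per-bucket overrides: each fs.s3a.bucket.<bucket>.<option>
// entry is propagated onto fs.s3a.<option> for that bucket only.
import org.apache.hadoop.conf.Configuration;

public class PerBucketOptions {
  public static Configuration withOverrides() {
    Configuration conf = new Configuration();
    // becomes fs.s3a.endpoint when s3a://stevel-london is initialized
    conf.set("fs.s3a.bucket.stevel-london.endpoint", "s3.eu-west-2.amazonaws.com");
    // becomes fs.s3a.endpoint.region when s3a://noaa-cors-pds is initialized
    conf.set("fs.s3a.bucket.noaa-cors-pds.endpoint.region", "us-east-1");
    return conf;
  }
}
```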
2024-04-24 15:57:24,863 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(549)) - Initializing S3AFileSystem for noaa-cors-pds
2024-04-24 15:57:24,863 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:propagateBucketOptions(1103)) - Propagating entries under fs.s3a.bucket.noaa-cors-pds.
2024-04-24 15:57:24,865 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:propagateBucketOptions(1124)) - Updating fs.s3a.endpoint.region from [core-site.xml]
2024-04-24 15:57:24,865 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:maybeIsolateClassloader(1708)) - Configuration classloader set to S3AFileSystem classloader: sun.misc.Launcher$AppClassLoader@18b4aac2
2024-04-24 15:57:24,865 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:buildEncryptionSecrets(1493)) - Data is unencrypted
2024-04-24 15:57:24,865 [setup] DEBUG s3a.S3ARetryPolicy (S3ARetryPolicy.java:<init>(145)) - Retrying on recoverable AWS failures 3 times with an initial interval of 500ms
2024-04-24 15:57:24,868 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(605)) - Client Side Encryption enabled: false
2024-04-24 15:57:24,868 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.paging.maximum is 5000
2024-04-24 15:57:24,868 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longBytesOption(952)) - Value of fs.s3a.block.size is 33554432
2024-04-24 15:57:24,868 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longBytesOption(952)) - Value of fs.s3a.prefetch.block.size is 131072
2024-04-24 15:57:24,868 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.prefetch.block.count is 8
2024-04-24 15:57:24,868 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.max.total.tasks is 32
2024-04-24 15:57:24,868 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.threads.keepalivetime = PT1M
2024-04-24 15:57:24,868 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.executor.capacity is 16
2024-04-24 15:57:24,869 [setup] DEBUG auth.SignerManager (SignerManager.java:initCustomSigners(68)) - No custom signers specified
2024-04-24 15:57:24,869 [setup] DEBUG audit.AuditIntegration (AuditIntegration.java:createAndInitAuditor(109)) - Auditor class is class org.apache.hadoop.fs.s3a.audit.impl.LoggingAuditor
2024-04-24 15:57:24,869 [setup] DEBUG impl.ActiveAuditManagerS3A (ActiveAuditManagerS3A.java:serviceInit(199)) - Audit manager initialized with audit service LoggingAuditor{ID='8ccb4ca6-435a-4ba8-b138-ac7140a2c673', headerEnabled=true, rejectOutOfSpan=true, isMultipartUploadEnabled=true}
2024-04-24 15:57:24,869 [setup] DEBUG impl.ActiveAuditManagerS3A (ActiveAuditManagerS3A.java:serviceStart(212)) - Started audit service LoggingAuditor{ID='8ccb4ca6-435a-4ba8-b138-ac7140a2c673', headerEnabled=true, rejectOutOfSpan=true, isMultipartUploadEnabled=true}
2024-04-24 15:57:24,869 [setup] DEBUG audit.AuditIntegration (AuditIntegration.java:createAndStartAuditManager(76)) - Started Audit Manager Service ActiveAuditManagerS3A in state ActiveAuditManagerS3A: STARTED, auditor=LoggingAuditor{ID='8ccb4ca6-435a-4ba8-b138-ac7140a2c673', headerEnabled=true, rejectOutOfSpan=true, isMultipartUploadEnabled=true}}
2024-04-24 15:57:24,869 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longOption(930)) - Value of fs.s3a.internal.upload.part.count.limit is 10000
2024-04-24 15:57:24,870 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:createRequestFactory(1202)) - Unset storage class property fs.s3a.create.storage.class; falling back to default storage class
2024-04-24 15:57:24,870 [setup] DEBUG auth.CredentialProviderListFactory (CredentialProviderListFactory.java:createAWSV2CredentialProvider(306)) - Credential provider class is org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider
2024-04-24 15:57:24,870 [setup] DEBUG auth.CredentialProviderListFactory (CredentialProviderListFactory.java:createAWSV2CredentialProvider(306)) - Credential provider class is org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
2024-04-24 15:57:24,870 [setup] DEBUG auth.CredentialProviderListFactory (CredentialProviderListFactory.java:createAWSCredentialProviderList(151)) - For URI s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz, using credentials AWSCredentialProviderList name=; refcount= 1; size=2: [TemporaryAWSCredentialsProvider, SimpleAWSCredentialsProvider{accessKey.empty=false, secretKey.empty=false}]
2024-04-24 15:57:24,870 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:bindAWSClient(1047)) - Using credential provider AWSCredentialProviderList name=; refcount= 1; size=2: [TemporaryAWSCredentialsProvider, SimpleAWSCredentialsProvider{accessKey.empty=false, secretKey.empty=false}]
2024-04-24 15:57:24,870 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.connection.maximum is 512
2024-04-24 15:57:24,870 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.acquisition.timeout = PT1M
2024-04-24 15:57:24,870 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.ttl = PT5M
2024-04-24 15:57:24,870 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.establish.timeout = PT30S
2024-04-24 15:57:24,870 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.idle.time = PT1M
2024-04-24 15:57:24,870 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.timeout = PT25S
2024-04-24 15:57:24,871 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:configureEndpointAndRegion(353)) - Setting region to us-east-1 from fs.s3a.endpoint.region
2024-04-24 15:57:24,871 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:maybeApplyS3AccessGrantsConfigurations(419)) - S3 Access Grants plugin is not enabled.
2024-04-24 15:57:24,871 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.request.timeout = PT1M
2024-04-24 15:57:24,871 [setup] DEBUG impl.AWSClientConfig (AWSClientConfig.java:initUserAgent(375)) - Using User-Agent: Hadoop 3.5.0-SNAPSHOT
2024-04-24 15:57:24,871 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.attempts.maximum is 2
2024-04-24 15:57:24,876 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.connection.maximum is 512
2024-04-24 15:57:24,877 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.acquisition.timeout = PT1M
2024-04-24 15:57:24,877 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.ttl = PT5M
2024-04-24 15:57:24,877 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.establish.timeout = PT30S
2024-04-24 15:57:24,877 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.idle.time = PT1M
2024-04-24 15:57:24,877 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.timeout = PT25S
2024-04-24 15:57:24,877 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:configureEndpointAndRegion(353)) - Setting region to us-east-1 from fs.s3a.endpoint.region
2024-04-24 15:57:24,877 [setup] DEBUG s3a.DefaultS3ClientFactory (DefaultS3ClientFactory.java:maybeApplyS3AccessGrantsConfigurations(419)) - S3 Access Grants plugin is not enabled.
2024-04-24 15:57:24,877 [setup] DEBUG impl.ConfigurationHelper (ConfigurationHelper.java:getDuration(80)) - Duration of fs.s3a.connection.request.timeout = PT1M
2024-04-24 15:57:24,877 [setup] DEBUG impl.AWSClientConfig (AWSClientConfig.java:initUserAgent(375)) - Using User-Agent: Hadoop 3.5.0-SNAPSHOT
2024-04-24 15:57:24,877 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.attempts.maximum is 2
2024-04-24 15:57:24,880 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:doBucketProbing(831)) - skipping check for bucket existence
2024-04-24 15:57:24,880 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(696)) - Input fadvise policy = default
2024-04-24 15:57:24,880 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(698)) - Change detection policy = ETagChangeDetectionPolicy mode=Server
2024-04-24 15:57:24,880 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(702)) - Filesystem support for magic committers is enabled
2024-04-24 15:57:24,880 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.fast.upload.active.blocks is 4
2024-04-24 15:57:24,880 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(721)) - Using S3ABlockOutputStream with buffer = disk; block=67108864; queue limit=4; multipart=true
2024-04-24 15:57:24,880 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(731)) - fs.s3a.create.performance = false
2024-04-24 15:57:24,881 [setup] DEBUG impl.DirectoryPolicyImpl (DirectoryPolicyImpl.java:getDirectoryPolicy(189)) - Directory markers will be kept
2024-04-24 15:57:24,881 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(737)) - Directory marker retention policy is DirectoryMarkerRetention{policy='keep'}
2024-04-24 15:57:24,881 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.bulk.delete.page.size is 250
2024-04-24 15:57:24,881 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longBytesOption(952)) - Value of fs.s3a.readahead.range is 32768
2024-04-24 15:57:24,881 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of io.file.buffer.size is 4194304
2024-04-24 15:57:24,881 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longBytesOption(952)) - Value of fs.s3a.input.async.drain.threshold is 1024
2024-04-24 15:57:24,881 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:intOption(909)) - Value of fs.s3a.vectored.active.ranged.reads is 4
2024-04-24 15:57:24,881 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longBytesOption(952)) - Value of fs.s3a.vectored.read.min.seek.size is 4096
2024-04-24 15:57:24,881 [setup] DEBUG s3a.S3AUtils (S3AUtils.java:longBytesOption(952)) - Value of fs.s3a.vectored.read.max.merged.size is 1048576
2024-04-24 15:57:24,881 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:initialize(763)) - Using optimized copyFromLocal implementation: true
2024-04-24 15:57:24,881 [setup] INFO  contract.AbstractFSContractTestBase (AbstractFSContractTestBase.java:describe(280)) - Verify that FS cache files exist on local FS
2024-04-24 15:57:24,885 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:innerGetFileStatus(3950)) - Getting path status for s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz  (raw/2023/017/ohfh/OHFH017d.23_.gz); needEmptyDirectory=false
2024-04-24 15:57:24,885 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:s3GetFileStatus(4009)) - S3GetFileStatus s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz
2024-04-24 15:57:24,885 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:lambda$getObjectMetadata$10(2903)) - HEAD raw/2023/017/ohfh/OHFH017d.23_.gz with change tracker null
2024-04-24 15:57:24,886 [setup] DEBUG s3a.Invoker (DurationInfo.java:<init>(80)) - Starting: create credentials
2024-04-24 15:57:24,886 [setup] DEBUG s3a.Invoker (DurationInfo.java:close(101)) - create credentials: duration 0:00.000s
2024-04-24 15:57:24,886 [setup] DEBUG s3a.AWSCredentialProviderList (AWSCredentialProviderList.java:resolveCredentials(195)) - No credentials from TemporaryAWSCredentialsProvider: org.apache.hadoop.fs.s3a.auth.NoAwsCredentialsException: Session credentials in Hadoop configuration: No AWS Credentials
2024-04-24 15:57:24,886 [setup] DEBUG s3a.AWSCredentialProviderList (AWSCredentialProviderList.java:resolveCredentials(182)) - Using credentials from SimpleAWSCredentialsProvider{accessKey.empty=false, secretKey.empty=false}
2024-04-24 15:57:24,887 [setup] DEBUG impl.LoggingAuditor (LoggingAuditor.java:modifyHttpRequest(400)) - [11] 8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011 Executing op_open with {action_http_head_request 'raw/2023/017/ohfh/OHFH017d.23_.gz' size=0, mutating=false}; https://audit.example.org/hadoop/1/op_open/8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011/?op=op_open&p1=raw/2023/017/ohfh/OHFH017d.23_.gz&pr=stevel&ps=d8c4e4fc-c0aa-48d0-9605-2bf9ed7b08b2&id=8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011&t0=11&fs=8ccb4ca6-435a-4ba8-b138-ac7140a2c673&t1=11&ts=1713970644885
2024-04-24 15:57:25,279 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:s3GetFileStatus(4025)) - Found exact file: normal file raw/2023/017/ohfh/OHFH017d.23_.gz
2024-04-24 15:57:25,283 [setup] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:executeOpen(1764)) - Opening 'S3AReadOpContext{path=s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz, inputPolicy=default, readahead=32768, changeDetectionPolicy=ETagChangeDetectionPolicy mode=Server}'
2024-04-24 15:57:25,291 [setup] DEBUG prefetch.S3APrefetchingInputStream (S3APrefetchingInputStream.java:<init>(118)) - Creating caching input stream for s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz
2024-04-24 15:57:25,293 [setup] DEBUG impl.ChangeTracker (ChangeTracker.java:<init>(98)) - Tracker ETagChangeDetectionPolicy mode=Server has revision ID for object at s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: "3825f3178ed5c22fd7bbadbdddb65509-3"
2024-04-24 15:57:25,296 [setup] DEBUG prefetch.S3ACachingInputStream (S3ACachingInputStream.java:demandCreateBlockManager(109)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: creating block manager
2024-04-24 15:57:25,306 [setup] DEBUG prefetch.S3ACachingInputStream (S3ACachingInputStream.java:<init>(96)) - Created caching input stream for s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz (size = 21511174)
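
Annotation: the caching stream now exists, and the lines that follow show it eagerly prefetching blocks 1–8 while block 0 is read on demand. For context, a minimal client-side sketch of the calls that drive this sequence, ending with the `unbuffer()` call this PR is about; the path is the test file from the log, the read size is arbitrary:

```java
// Sketch only: the public-API calls behind the open/prefetch/unbuffer
// sequence in this log. unbuffer() is expected to release buffers and
// cached blocks (the subject of HADOOP-18184).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UnbufferDemo {
  public static void main(String[] args) throws Exception {
    Path path = new Path("s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz");
    try (FileSystem fs = FileSystem.get(path.toUri(), new Configuration());
         FSDataInputStream in = fs.open(path)) {
      byte[] buf = new byte[4096];
      in.readFully(0, buf);  // reads block 0; background prefetch fills the rest
      in.unbuffer();         // release buffers/cache until the next read
    }
  }
}
```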
2024-04-24 15:57:25,307 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestPrefetch(278)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Requesting prefetch for block 1
2024-04-24 15:57:25,307 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestPrefetch(284)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: acquired [001] id: 994329416, State: EMPTY: buffer: (id = 304436027, pos = 0, lim = 131072), checksum: 0, future: (none)
2024-04-24 15:57:25,308 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- requestPrefetch(1)
2024-04-24 15:57:25,309 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** requestPrefetch(1)
2024-04-24 15:57:25,309 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestPrefetch(278)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Requesting prefetch for block 2
2024-04-24 15:57:25,309 [s3a-transfer-noaa-cors-pds-bounded-pool6-t1] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- prefetch(1)
2024-04-24 15:57:25,309 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestPrefetch(284)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: acquired [002] id: 1529096865, State: EMPTY: buffer: (id = 1510469437, pos = 0, lim = 131072), checksum: 0, future: (none)
2024-04-24 15:57:25,309 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- requestPrefetch(2)
2024-04-24 15:57:25,309 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** requestPrefetch(2)
2024-04-24 15:57:25,310 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestPrefetch(278)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Requesting prefetch for block 3
2024-04-24 15:57:25,310 [s3a-transfer-noaa-cors-pds-bounded-pool6-t2] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- prefetch(2)
2024-04-24 15:57:25,310 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestPrefetch(284)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: acquired [003] id: 58234618, State: EMPTY: buffer: (id = 1717825455, pos = 0, lim = 131072), checksum: 0, future: (none)
2024-04-24 15:57:25,310 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- requestPrefetch(3)
2024-04-24 15:57:25,310 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** requestPrefetch(3)
2024-04-24 15:57:25,310 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestPrefetch(278)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Requesting prefetch for block 4
2024-04-24 15:57:25,310 [s3a-transfer-noaa-cors-pds-bounded-pool6-t2] DEBUG s3a.Invoker (DurationInfo.java:<init>(80)) - Starting: read s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz [262144-393216]
2024-04-24 15:57:25,310 [s3a-transfer-noaa-cors-pds-bounded-pool6-t3] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- prefetch(3)
2024-04-24 15:57:25,310 [s3a-transfer-noaa-cors-pds-bounded-pool6-t1] DEBUG s3a.Invoker (DurationInfo.java:<init>(80)) - Starting: read s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz [131072-262144]
2024-04-24 15:57:25,310 [s3a-transfer-noaa-cors-pds-bounded-pool6-t3] DEBUG s3a.Invoker (DurationInfo.java:<init>(80)) - Starting: read s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz [393216-524288]
2024-04-24 15:57:25,310 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestPrefetch(284)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: acquired [004] id: 109870965, State: EMPTY: buffer: (id = 2146170524, pos = 0, lim = 131072), checksum: 0, future: (none)
2024-04-24 15:57:25,310 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- requestPrefetch(4)
2024-04-24 15:57:25,310 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** requestPrefetch(4)
2024-04-24 15:57:25,310 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestPrefetch(278)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Requesting prefetch for block 5
2024-04-24 15:57:25,310 [s3a-transfer-noaa-cors-pds-bounded-pool6-t4] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- prefetch(4)
2024-04-24 15:57:25,310 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestPrefetch(284)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: acquired [005] id: 1263896387, State: EMPTY: buffer: (id = 1797965731, pos = 0, lim = 131072), checksum: 0, future: (none)
2024-04-24 15:57:25,310 [s3a-transfer-noaa-cors-pds-bounded-pool6-t4] DEBUG s3a.Invoker (DurationInfo.java:<init>(80)) - Starting: read s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz [524288-655360]
2024-04-24 15:57:25,310 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- requestPrefetch(5)
2024-04-24 15:57:25,311 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** requestPrefetch(5)
2024-04-24 15:57:25,311 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestPrefetch(278)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Requesting prefetch for block 6
2024-04-24 15:57:25,311 [s3a-transfer-noaa-cors-pds-bounded-pool6-t5] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- prefetch(5)
2024-04-24 15:57:25,311 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestPrefetch(284)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: acquired [006] id: 1150889795, State: EMPTY: buffer: (id = 2041839532, pos = 0, lim = 131072), checksum: 0, future: (none)
2024-04-24 15:57:25,311 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- requestPrefetch(6)
2024-04-24 15:57:25,311 [s3a-transfer-noaa-cors-pds-bounded-pool6-t5] DEBUG s3a.Invoker (DurationInfo.java:<init>(80)) - Starting: read s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz [655360-786432]
2024-04-24 15:57:25,311 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** requestPrefetch(6)
2024-04-24 15:57:25,311 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestPrefetch(278)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Requesting prefetch for block 7
2024-04-24 15:57:25,311 [s3a-transfer-noaa-cors-pds-bounded-pool6-t6] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- prefetch(6)
2024-04-24 15:57:25,311 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestPrefetch(284)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: acquired [007] id: 1792328953, State: EMPTY: buffer: (id = 217843856, pos = 0, lim = 131072), checksum: 0, future: (none)
2024-04-24 15:57:25,311 [s3a-transfer-noaa-cors-pds-bounded-pool6-t6] DEBUG s3a.Invoker (DurationInfo.java:<init>(80)) - Starting: read s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz [786432-917504]
2024-04-24 15:57:25,311 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- requestPrefetch(7)
2024-04-24 15:57:25,311 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** requestPrefetch(7)
2024-04-24 15:57:25,312 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestPrefetch(278)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Requesting prefetch for block 8
2024-04-24 15:57:25,312 [s3a-transfer-noaa-cors-pds-bounded-pool6-t7] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- prefetch(7)
2024-04-24 15:57:25,312 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestPrefetch(284)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: acquired [008] id: 1916205902, State: EMPTY: buffer: (id = 518178064, pos = 0, lim = 131072), checksum: 0, future: (none)
2024-04-24 15:57:25,312 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- requestPrefetch(8)
2024-04-24 15:57:25,312 [s3a-transfer-noaa-cors-pds-bounded-pool6-t7] DEBUG s3a.Invoker (DurationInfo.java:<init>(80)) - Starting: read s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz [917504-1048576]
2024-04-24 15:57:25,312 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** requestPrefetch(8)
2024-04-24 15:57:25,312 [s3a-transfer-noaa-cors-pds-bounded-pool6-t8] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- prefetch(8)
2024-04-24 15:57:25,312 [s3a-transfer-noaa-cors-pds-bounded-pool6-t8] DEBUG s3a.Invoker (DurationInfo.java:<init>(80)) - Starting: read s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz [1048576-1179648]
2024-04-24 15:57:25,313 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- getRead(0)
2024-04-24 15:57:25,313 [setup] DEBUG s3a.Invoker (DurationInfo.java:<init>(80)) - Starting: read s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz [0-131072]
2024-04-24 15:57:25,315 [s3a-transfer-noaa-cors-pds-bounded-pool6-t1] DEBUG impl.ChangeDetectionPolicy (ChangeDetectionPolicy.java:applyRevisionConstraint(372)) - Restricting get request to etag "3825f3178ed5c22fd7bbadbdddb65509-3"
2024-04-24 15:57:25,315 [s3a-transfer-noaa-cors-pds-bounded-pool6-t6] DEBUG impl.ChangeDetectionPolicy (ChangeDetectionPolicy.java:applyRevisionConstraint(372)) - Restricting get request to etag "3825f3178ed5c22fd7bbadbdddb65509-3"
2024-04-24 15:57:25,315 [setup] DEBUG impl.ChangeDetectionPolicy (ChangeDetectionPolicy.java:applyRevisionConstraint(372)) - Restricting get request to etag "3825f3178ed5c22fd7bbadbdddb65509-3"
2024-04-24 15:57:25,315 [s3a-transfer-noaa-cors-pds-bounded-pool6-t7] DEBUG impl.ChangeDetectionPolicy (ChangeDetectionPolicy.java:applyRevisionConstraint(372)) - Restricting get request to etag "3825f3178ed5c22fd7bbadbdddb65509-3"
2024-04-24 15:57:25,315 [s3a-transfer-noaa-cors-pds-bounded-pool6-t4] DEBUG impl.ChangeDetectionPolicy (ChangeDetectionPolicy.java:applyRevisionConstraint(372)) - Restricting get request to etag "3825f3178ed5c22fd7bbadbdddb65509-3"
2024-04-24 15:57:25,315 [s3a-transfer-noaa-cors-pds-bounded-pool6-t2] DEBUG impl.ChangeDetectionPolicy (ChangeDetectionPolicy.java:applyRevisionConstraint(372)) - Restricting get request to etag "3825f3178ed5c22fd7bbadbdddb65509-3"
2024-04-24 15:57:25,315 [s3a-transfer-noaa-cors-pds-bounded-pool6-t3] DEBUG impl.ChangeDetectionPolicy (ChangeDetectionPolicy.java:applyRevisionConstraint(372)) - Restricting get request to etag "3825f3178ed5c22fd7bbadbdddb65509-3"
2024-04-24 15:57:25,315 [s3a-transfer-noaa-cors-pds-bounded-pool6-t5] DEBUG impl.ChangeDetectionPolicy (ChangeDetectionPolicy.java:applyRevisionConstraint(372)) - Restricting get request to etag "3825f3178ed5c22fd7bbadbdddb65509-3"
2024-04-24 15:57:25,315 [s3a-transfer-noaa-cors-pds-bounded-pool6-t8] DEBUG impl.ChangeDetectionPolicy (ChangeDetectionPolicy.java:applyRevisionConstraint(372)) - Restricting get request to etag "3825f3178ed5c22fd7bbadbdddb65509-3"
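
Annotation: every GET above is restricted to the ETag recorded when the file was opened; with the server-side change detection policy this becomes an `If-Match` precondition, so an overwritten object fails the read (412) instead of returning mixed data from two versions. A sketch of the equivalent AWS SDK v2 request, assuming the builder-style API; bucket, key, ETag and range are taken from the log:

```java
// Sketch of what "Restricting get request to etag" corresponds to:
// the ETag is sent as an If-Match precondition on the ranged GET.
import software.amazon.awssdk.services.s3.model.GetObjectRequest;

public class EtagConstraint {
  static GetObjectRequest constrained(String bucket, String key, String etag) {
    return GetObjectRequest.builder()
        .bucket(bucket)                 // "noaa-cors-pds"
        .key(key)                       // "raw/2023/017/ohfh/OHFH017d.23_.gz"
        .ifMatch(etag)                  // "3825f3178ed5c22fd7bbadbdddb65509-3"
        .range("bytes=0-131071")        // one prefetch block
        .build();
  }
}
```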
2024-04-24 15:57:25,322 [s3a-transfer-noaa-cors-pds-bounded-pool6-t6] DEBUG impl.LoggingAuditor (LoggingAuditor.java:modifyHttpRequest(400)) - [41] 8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011 Executing op_open with {action_http_get_request 'raw/2023/017/ohfh/OHFH017d.23_.gz' size=131071, mutating=false}; range=786432-917503; https://audit.example.org/hadoop/1/op_open/8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011/?op=op_open&p1=raw/2023/017/ohfh/OHFH017d.23_.gz&pr=stevel&ps=d8c4e4fc-c0aa-48d0-9605-2bf9ed7b08b2&rg=131072-262143&id=8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011&t0=11&fs=8ccb4ca6-435a-4ba8-b138-ac7140a2c673&t1=41&ts=1713970644885
2024-04-24 15:57:25,322 [s3a-transfer-noaa-cors-pds-bounded-pool6-t8] DEBUG impl.LoggingAuditor (LoggingAuditor.java:modifyHttpRequest(400)) - [43] 8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011 Executing op_open with {action_http_get_request 'raw/2023/017/ohfh/OHFH017d.23_.gz' size=131071, mutating=false}; range=1048576-1179647; https://audit.example.org/hadoop/1/op_open/8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011/?op=op_open&p1=raw/2023/017/ohfh/OHFH017d.23_.gz&pr=stevel&ps=d8c4e4fc-c0aa-48d0-9605-2bf9ed7b08b2&rg=131072-262143&id=8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011&t0=11&fs=8ccb4ca6-435a-4ba8-b138-ac7140a2c673&t1=41&ts=1713970644885
2024-04-24 15:57:25,322 [s3a-transfer-noaa-cors-pds-bounded-pool6-t4] DEBUG impl.LoggingAuditor (LoggingAuditor.java:modifyHttpRequest(400)) - [39] 8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011 Executing op_open with {action_http_get_request 'raw/2023/017/ohfh/OHFH017d.23_.gz' size=131071, mutating=false}; range=524288-655359; https://audit.example.org/hadoop/1/op_open/8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011/?op=op_open&p1=raw/2023/017/ohfh/OHFH017d.23_.gz&pr=stevel&ps=d8c4e4fc-c0aa-48d0-9605-2bf9ed7b08b2&rg=131072-262143&id=8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011&t0=11&fs=8ccb4ca6-435a-4ba8-b138-ac7140a2c673&t1=41&ts=1713970644885
2024-04-24 15:57:25,322 [setup] DEBUG impl.LoggingAuditor (LoggingAuditor.java:modifyHttpRequest(400)) - [11] 8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011 Executing op_open with {action_http_get_request 'raw/2023/017/ohfh/OHFH017d.23_.gz' size=131071, mutating=false}; range=0-131071; https://audit.example.org/hadoop/1/op_open/8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011/?op=op_open&p1=raw/2023/017/ohfh/OHFH017d.23_.gz&pr=stevel&ps=d8c4e4fc-c0aa-48d0-9605-2bf9ed7b08b2&rg=131072-262143&id=8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011&t0=11&fs=8ccb4ca6-435a-4ba8-b138-ac7140a2c673&t1=41&ts=1713970644885
2024-04-24 15:57:25,322 [s3a-transfer-noaa-cors-pds-bounded-pool6-t5] DEBUG impl.LoggingAuditor (LoggingAuditor.java:modifyHttpRequest(400)) - [40] 8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011 Executing op_open with {action_http_get_request 'raw/2023/017/ohfh/OHFH017d.23_.gz' size=131071, mutating=false}; range=655360-786431; https://audit.example.org/hadoop/1/op_open/8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011/?op=op_open&p1=raw/2023/017/ohfh/OHFH017d.23_.gz&pr=stevel&ps=d8c4e4fc-c0aa-48d0-9605-2bf9ed7b08b2&rg=655360-786431&id=8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011&t0=11&fs=8ccb4ca6-435a-4ba8-b138-ac7140a2c673&t1=40&ts=1713970644885
2024-04-24 15:57:25,322 [s3a-transfer-noaa-cors-pds-bounded-pool6-t2] DEBUG impl.LoggingAuditor (LoggingAuditor.java:modifyHttpRequest(400)) - [37] 8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011 Executing op_open with {action_http_get_request 'raw/2023/017/ohfh/OHFH017d.23_.gz' size=131071, mutating=false}; range=262144-393215; https://audit.example.org/hadoop/1/op_open/8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011/?op=op_open&p1=raw/2023/017/ohfh/OHFH017d.23_.gz&pr=stevel&ps=d8c4e4fc-c0aa-48d0-9605-2bf9ed7b08b2&rg=131072-262143&id=8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011&t0=11&fs=8ccb4ca6-435a-4ba8-b138-ac7140a2c673&t1=41&ts=1713970644885
2024-04-24 15:57:25,322 [s3a-transfer-noaa-cors-pds-bounded-pool6-t3] DEBUG impl.LoggingAuditor (LoggingAuditor.java:modifyHttpRequest(400)) - [38] 8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011 Executing op_open with {action_http_get_request 'raw/2023/017/ohfh/OHFH017d.23_.gz' size=131071, mutating=false}; range=393216-524287; https://audit.example.org/hadoop/1/op_open/8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011/?op=op_open&p1=raw/2023/017/ohfh/OHFH017d.23_.gz&pr=stevel&ps=d8c4e4fc-c0aa-48d0-9605-2bf9ed7b08b2&rg=131072-262143&id=8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011&t0=11&fs=8ccb4ca6-435a-4ba8-b138-ac7140a2c673&t1=41&ts=1713970644885
2024-04-24 15:57:25,322 [s3a-transfer-noaa-cors-pds-bounded-pool6-t1] DEBUG impl.LoggingAuditor (LoggingAuditor.java:modifyHttpRequest(400)) - [36] 8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011 Executing op_open with {action_http_get_request 'raw/2023/017/ohfh/OHFH017d.23_.gz' size=131071, mutating=false}; range=131072-262143; https://audit.example.org/hadoop/1/op_open/8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011/?op=op_open&p1=raw/2023/017/ohfh/OHFH017d.23_.gz&pr=stevel&ps=d8c4e4fc-c0aa-48d0-9605-2bf9ed7b08b2&rg=131072-262143&id=8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011&t0=11&fs=8ccb4ca6-435a-4ba8-b138-ac7140a2c673&t1=41&ts=1713970644885
2024-04-24 15:57:25,322 [s3a-transfer-noaa-cors-pds-bounded-pool6-t7] DEBUG impl.LoggingAuditor (LoggingAuditor.java:modifyHttpRequest(400)) - [42] 8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011 Executing op_open with {action_http_get_request 'raw/2023/017/ohfh/OHFH017d.23_.gz' size=131071, mutating=false}; range=917504-1048575; https://audit.example.org/hadoop/1/op_open/8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011/?op=op_open&p1=raw/2023/017/ohfh/OHFH017d.23_.gz&pr=stevel&ps=d8c4e4fc-c0aa-48d0-9605-2bf9ed7b08b2&rg=131072-262143&id=8ccb4ca6-435a-4ba8-b138-ac7140a2c673-00000011&t0=11&fs=8ccb4ca6-435a-4ba8-b138-ac7140a2c673&t1=41&ts=1713970644885
2024-04-24 15:57:25,623 [setup] DEBUG impl.SDKStreamDrainer (SDKStreamDrainer.java:drainOrAbortHttpStream(218)) - Closing stream
2024-04-24 15:57:25,623 [setup] DEBUG s3a.Invoker (DurationInfo.java:close(101)) - read s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz [0-131072]: duration 0:00.310s
2024-04-24 15:57:25,624 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:readBlock(409)) - Completed read of block 0 [0-131072]
2024-04-24 15:57:25,624 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** getRead(0)
2024-04-24 15:57:25,624 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:readBlock(435)) - fetch completed:  Duration of stream_read_block_fetch_operations: 0:00.311s
2024-04-24 15:57:25,624 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestCaching(517)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 0: request caching of [000] id: 1299545752, State: READY: buffer: (id = 1384379967, pos = 0, lim = 131072), checksum: 3320762083, future: (none)
2024-04-24 15:57:25,624 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- requestCaching(0)
2024-04-24 15:57:25,625 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** requestCaching(0)
2024-04-24 15:57:25,625 [s3a-transfer-noaa-cors-pds-bounded-pool6-t9] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:addToCacheAndRelease(575)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 0: Preparing to cache block
2024-04-24 15:57:25,625 [s3a-transfer-noaa-cors-pds-bounded-pool6-t9] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:addToCacheAndRelease(582)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 0: awaiting any read to complete
2024-04-24 15:57:25,625 [setup] DEBUG prefetch.S3ACachingInputStream (S3ACachingInputStream.java:ensureCurrentBuffer(215)) - lazy-seek(2:262144)
2024-04-24 15:57:25,625 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:cancelPrefetches(315)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Cancelling prefetches: RandomIO
2024-04-24 15:57:25,626 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - ... cancelPrefetches()
2024-04-24 15:57:25,626 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestCaching(517)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 2: request caching of [002] id: 1529096865, State: PREFETCHING: buffer: (id = 1510469437, pos = 0, lim = 131072), checksum: 0, future: not done
2024-04-24 15:57:25,626 [s3a-transfer-noaa-cors-pds-bounded-pool6-t9] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- putC(0)
2024-04-24 15:57:25,626 [s3a-transfer-noaa-cors-pds-bounded-pool6-t9] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:cachePut(650)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block java.nio.HeapByteBufferR[pos=0 lim=131072 cap=131072]: Caching
2024-04-24 15:57:25,667 [s3a-transfer-noaa-cors-pds-bounded-pool6-t9] INFO  prefetch.SingleFilePerBlockCache (DurationInfo.java:<init>(77)) - Starting: save 131072 bytes to /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-2283440820786611743-block-0000.bin
2024-04-24 15:57:25,668 [s3a-transfer-noaa-cors-pds-bounded-pool6-t9] INFO  prefetch.SingleFilePerBlockCache (DurationInfo.java:close(98)) - save 131072 bytes to /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-2283440820786611743-block-0000.bin: duration 0:00.001s
2024-04-24 15:57:25,668 [s3a-transfer-noaa-cors-pds-bounded-pool6-t9] DEBUG prefetch.SingleFilePerBlockCache (SingleFilePerBlockCache.java:maybePushToHeadOfBlockList(336)) - Block 0 to be added to the head. Current head block 0 and tail block 0; ([000] /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-2283440820786611743-block-0000.bin: size = 131,072, checksum = 3320762083)
2024-04-24 15:57:25,668 [s3a-transfer-noaa-cors-pds-bounded-pool6-t9] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** putC(0)
2024-04-24 15:57:25,998 [s3a-transfer-noaa-cors-pds-bounded-pool6-t2] DEBUG impl.SDKStreamDrainer (SDKStreamDrainer.java:drainOrAbortHttpStream(218)) - Closing stream
2024-04-24 15:57:25,999 [s3a-transfer-noaa-cors-pds-bounded-pool6-t2] DEBUG s3a.Invoker (DurationInfo.java:close(101)) - read s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz [262144-393216]: duration 0:00.689s
2024-04-24 15:57:25,999 [s3a-transfer-noaa-cors-pds-bounded-pool6-t2] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:readBlock(409)) - Completed prefetch of block 2 [262144-393216]
2024-04-24 15:57:25,999 [s3a-transfer-noaa-cors-pds-bounded-pool6-t2] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** prefetch(2)
2024-04-24 15:57:25,999 [s3a-transfer-noaa-cors-pds-bounded-pool6-t2] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:readBlock(435)) - fetch completed:  Duration of stream_read_prefetch_operations: 0:00.689s
2024-04-24 15:57:25,999 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- requestCaching(2)
2024-04-24 15:57:26,000 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** requestCaching(2)
2024-04-24 15:57:26,000 [s3a-transfer-noaa-cors-pds-bounded-pool6-t10] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:addToCacheAndRelease(575)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 2: Preparing to cache block
2024-04-24 15:57:26,000 [s3a-transfer-noaa-cors-pds-bounded-pool6-t10] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:addToCacheAndRelease(582)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 2: awaiting any read to complete
2024-04-24 15:57:26,000 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestCaching(517)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 5: request caching of [005] id: 1263896387, State: PREFETCHING: buffer: (id = 1797965731, pos = 86573, lim = 131072), checksum: 0, future: not done
2024-04-24 15:57:26,000 [s3a-transfer-noaa-cors-pds-bounded-pool6-t10] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- putC(2)
2024-04-24 15:57:26,000 [s3a-transfer-noaa-cors-pds-bounded-pool6-t10] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:cachePut(650)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block java.nio.HeapByteBufferR[pos=0 lim=131072 cap=131072]: Caching
2024-04-24 15:57:26,002 [s3a-transfer-noaa-cors-pds-bounded-pool6-t10] INFO  prefetch.SingleFilePerBlockCache (DurationInfo.java:<init>(77)) - Starting: save 131072 bytes to /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-1079555837997227351-block-0002.bin
2024-04-24 15:57:26,002 [s3a-transfer-noaa-cors-pds-bounded-pool6-t10] INFO  prefetch.SingleFilePerBlockCache (DurationInfo.java:close(98)) - save 131072 bytes to /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-1079555837997227351-block-0002.bin: duration 0:00.000s
2024-04-24 15:57:26,004 [s3a-transfer-noaa-cors-pds-bounded-pool6-t10] DEBUG prefetch.SingleFilePerBlockCache (SingleFilePerBlockCache.java:maybePushToHeadOfBlockList(336)) - Block 2 to be added to the head. Current head block 0 and tail block 0; ([002] /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-1079555837997227351-block-0002.bin: size = 131,072, checksum = 2067391204)
2024-04-24 15:57:26,004 [s3a-transfer-noaa-cors-pds-bounded-pool6-t10] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** putC(2)
2024-04-24 15:57:26,120 [s3a-transfer-noaa-cors-pds-bounded-pool6-t5] DEBUG impl.SDKStreamDrainer (SDKStreamDrainer.java:drainOrAbortHttpStream(218)) - Closing stream
2024-04-24 15:57:26,121 [s3a-transfer-noaa-cors-pds-bounded-pool6-t5] DEBUG s3a.Invoker (DurationInfo.java:close(101)) - read s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz [655360-786432]: duration 0:00.810s
2024-04-24 15:57:26,121 [s3a-transfer-noaa-cors-pds-bounded-pool6-t5] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:readBlock(409)) - Completed prefetch of block 5 [655360-786432]
2024-04-24 15:57:26,121 [s3a-transfer-noaa-cors-pds-bounded-pool6-t5] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** prefetch(5)
2024-04-24 15:57:26,121 [s3a-transfer-noaa-cors-pds-bounded-pool6-t5] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:readBlock(435)) - fetch completed:  Duration of stream_read_prefetch_operations: 0:00.810s
2024-04-24 15:57:26,120 [s3a-transfer-noaa-cors-pds-bounded-pool6-t1] DEBUG impl.SDKStreamDrainer (SDKStreamDrainer.java:drainOrAbortHttpStream(218)) - Closing stream
2024-04-24 15:57:26,122 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- requestCaching(5)
2024-04-24 15:57:26,122 [s3a-transfer-noaa-cors-pds-bounded-pool6-t1] DEBUG s3a.Invoker (DurationInfo.java:close(101)) - read s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz [131072-262144]: duration 0:00.812s
2024-04-24 15:57:26,122 [s3a-transfer-noaa-cors-pds-bounded-pool6-t1] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:readBlock(409)) - Completed prefetch of block 1 [131072-262144]
2024-04-24 15:57:26,123 [s3a-transfer-noaa-cors-pds-bounded-pool6-t1] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** prefetch(1)
2024-04-24 15:57:26,123 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** requestCaching(5)
2024-04-24 15:57:26,123 [s3a-transfer-noaa-cors-pds-bounded-pool6-t11] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:addToCacheAndRelease(575)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 5: Preparing to cache block
2024-04-24 15:57:26,123 [s3a-transfer-noaa-cors-pds-bounded-pool6-t11] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:addToCacheAndRelease(582)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 5: awaiting any read to complete
2024-04-24 15:57:26,123 [s3a-transfer-noaa-cors-pds-bounded-pool6-t1] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:readBlock(435)) - fetch completed:  Duration of stream_read_prefetch_operations: 0:00.814s
2024-04-24 15:57:26,123 [s3a-transfer-noaa-cors-pds-bounded-pool6-t11] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- putC(5)
2024-04-24 15:57:26,123 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestCaching(517)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 6: request caching of [006] id: 1150889795, State: PREFETCHING: buffer: (id = 2041839532, pos = 88239, lim = 131072), checksum: 0, future: not done
2024-04-24 15:57:26,123 [s3a-transfer-noaa-cors-pds-bounded-pool6-t11] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:cachePut(650)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block java.nio.HeapByteBufferR[pos=0 lim=131072 cap=131072]: Caching
2024-04-24 15:57:26,125 [s3a-transfer-noaa-cors-pds-bounded-pool6-t11] INFO  prefetch.SingleFilePerBlockCache (DurationInfo.java:<init>(77)) - Starting: save 131072 bytes to /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-8757359741004979112-block-0005.bin
2024-04-24 15:57:26,125 [s3a-transfer-noaa-cors-pds-bounded-pool6-t11] INFO  prefetch.SingleFilePerBlockCache (DurationInfo.java:close(98)) - save 131072 bytes to /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-8757359741004979112-block-0005.bin: duration 0:00.000s
2024-04-24 15:57:26,125 [s3a-transfer-noaa-cors-pds-bounded-pool6-t11] DEBUG prefetch.SingleFilePerBlockCache (SingleFilePerBlockCache.java:maybePushToHeadOfBlockList(336)) - Block 5 to be added to the head. Current head block 2 and tail block 0; ([005] /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-8757359741004979112-block-0005.bin: size = 131,072, checksum = 2937666868)
2024-04-24 15:57:26,126 [s3a-transfer-noaa-cors-pds-bounded-pool6-t11] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** putC(5)
2024-04-24 15:57:26,129 [s3a-transfer-noaa-cors-pds-bounded-pool6-t3] DEBUG impl.SDKStreamDrainer (SDKStreamDrainer.java:drainOrAbortHttpStream(218)) - Closing stream
2024-04-24 15:57:26,129 [s3a-transfer-noaa-cors-pds-bounded-pool6-t4] DEBUG impl.SDKStreamDrainer (SDKStreamDrainer.java:drainOrAbortHttpStream(218)) - Closing stream
2024-04-24 15:57:26,129 [s3a-transfer-noaa-cors-pds-bounded-pool6-t3] DEBUG s3a.Invoker (DurationInfo.java:close(101)) - read s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz [393216-524288]: duration 0:00.819s
2024-04-24 15:57:26,129 [s3a-transfer-noaa-cors-pds-bounded-pool6-t4] DEBUG s3a.Invoker (DurationInfo.java:close(101)) - read s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz [524288-655360]: duration 0:00.819s
2024-04-24 15:57:26,129 [s3a-transfer-noaa-cors-pds-bounded-pool6-t3] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:readBlock(409)) - Completed prefetch of block 3 [393216-524288]
2024-04-24 15:57:26,129 [s3a-transfer-noaa-cors-pds-bounded-pool6-t3] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** prefetch(3)
2024-04-24 15:57:26,129 [s3a-transfer-noaa-cors-pds-bounded-pool6-t4] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:readBlock(409)) - Completed prefetch of block 4 [524288-655360]
2024-04-24 15:57:26,129 [s3a-transfer-noaa-cors-pds-bounded-pool6-t3] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:readBlock(435)) - fetch completed:  Duration of stream_read_prefetch_operations: 0:00.819s
2024-04-24 15:57:26,130 [s3a-transfer-noaa-cors-pds-bounded-pool6-t8] DEBUG impl.SDKStreamDrainer (SDKStreamDrainer.java:drainOrAbortHttpStream(218)) - Closing stream
2024-04-24 15:57:26,130 [s3a-transfer-noaa-cors-pds-bounded-pool6-t4] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** prefetch(4)
2024-04-24 15:57:26,130 [s3a-transfer-noaa-cors-pds-bounded-pool6-t8] DEBUG s3a.Invoker (DurationInfo.java:close(101)) - read s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz [1048576-1179648]: duration 0:00.818s
2024-04-24 15:57:26,130 [s3a-transfer-noaa-cors-pds-bounded-pool6-t7] DEBUG impl.SDKStreamDrainer (SDKStreamDrainer.java:drainOrAbortHttpStream(218)) - Closing stream
2024-04-24 15:57:26,130 [s3a-transfer-noaa-cors-pds-bounded-pool6-t8] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:readBlock(409)) - Completed prefetch of block 8 [1048576-1179648]
2024-04-24 15:57:26,130 [s3a-transfer-noaa-cors-pds-bounded-pool6-t4] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:readBlock(435)) - fetch completed:  Duration of stream_read_prefetch_operations: 0:00.820s
2024-04-24 15:57:26,130 [s3a-transfer-noaa-cors-pds-bounded-pool6-t8] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** prefetch(8)
2024-04-24 15:57:26,130 [s3a-transfer-noaa-cors-pds-bounded-pool6-t7] DEBUG s3a.Invoker (DurationInfo.java:close(101)) - read s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz [917504-1048576]: duration 0:00.818s
2024-04-24 15:57:26,130 [s3a-transfer-noaa-cors-pds-bounded-pool6-t8] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:readBlock(435)) - fetch completed:  Duration of stream_read_prefetch_operations: 0:00.818s
2024-04-24 15:57:26,130 [s3a-transfer-noaa-cors-pds-bounded-pool6-t7] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:readBlock(409)) - Completed prefetch of block 7 [917504-1048576]
2024-04-24 15:57:26,131 [s3a-transfer-noaa-cors-pds-bounded-pool6-t7] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** prefetch(7)
2024-04-24 15:57:26,131 [s3a-transfer-noaa-cors-pds-bounded-pool6-t7] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:readBlock(435)) - fetch completed:  Duration of stream_read_prefetch_operations: 0:00.819s
2024-04-24 15:57:26,152 [s3a-transfer-noaa-cors-pds-bounded-pool6-t6] DEBUG impl.SDKStreamDrainer (SDKStreamDrainer.java:drainOrAbortHttpStream(218)) - Closing stream
2024-04-24 15:57:26,152 [s3a-transfer-noaa-cors-pds-bounded-pool6-t6] DEBUG s3a.Invoker (DurationInfo.java:close(101)) - read s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz [786432-917504]: duration 0:00.841s
2024-04-24 15:57:26,152 [s3a-transfer-noaa-cors-pds-bounded-pool6-t6] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:readBlock(409)) - Completed prefetch of block 6 [786432-917504]
2024-04-24 15:57:26,152 [s3a-transfer-noaa-cors-pds-bounded-pool6-t6] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** prefetch(6)
2024-04-24 15:57:26,152 [s3a-transfer-noaa-cors-pds-bounded-pool6-t6] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:readBlock(435)) - fetch completed:  Duration of stream_read_prefetch_operations: 0:00.841s
2024-04-24 15:57:26,152 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- requestCaching(6)
2024-04-24 15:57:26,153 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** requestCaching(6)
2024-04-24 15:57:26,153 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestCaching(517)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 1: request caching of [001] id: 994329416, State: READY: buffer: (id = 1603736230, pos = 0, lim = 131072), checksum: 2642073199, future: done
2024-04-24 15:57:26,153 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- requestCaching(1)
2024-04-24 15:57:26,153 [s3a-transfer-noaa-cors-pds-bounded-pool6-t12] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:addToCacheAndRelease(575)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 6: Preparing to cache block
2024-04-24 15:57:26,153 [s3a-transfer-noaa-cors-pds-bounded-pool6-t12] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:addToCacheAndRelease(582)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 6: awaiting any read to complete
2024-04-24 15:57:26,153 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** requestCaching(1)
2024-04-24 15:57:26,153 [s3a-transfer-noaa-cors-pds-bounded-pool6-t13] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:addToCacheAndRelease(575)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 1: Preparing to cache block
2024-04-24 15:57:26,154 [s3a-transfer-noaa-cors-pds-bounded-pool6-t13] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:addToCacheAndRelease(582)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 1: awaiting any read to complete
2024-04-24 15:57:26,153 [s3a-transfer-noaa-cors-pds-bounded-pool6-t12] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- putC(6)
2024-04-24 15:57:26,153 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestCaching(517)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 8: request caching of [008] id: 1916205902, State: READY: buffer: (id = 1134026783, pos = 0, lim = 131072), checksum: 1926174145, future: done
2024-04-24 15:57:26,154 [s3a-transfer-noaa-cors-pds-bounded-pool6-t13] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- putC(1)
2024-04-24 15:57:26,154 [s3a-transfer-noaa-cors-pds-bounded-pool6-t13] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:cachePut(650)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block java.nio.HeapByteBufferR[pos=0 lim=131072 cap=131072]: Caching
2024-04-24 15:57:26,154 [s3a-transfer-noaa-cors-pds-bounded-pool6-t12] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:cachePut(650)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block java.nio.HeapByteBufferR[pos=0 lim=131072 cap=131072]: Caching
2024-04-24 15:57:26,154 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- requestCaching(8)
2024-04-24 15:57:26,154 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** requestCaching(8)
2024-04-24 15:57:26,154 [s3a-transfer-noaa-cors-pds-bounded-pool6-t14] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:addToCacheAndRelease(575)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 8: Preparing to cache block
2024-04-24 15:57:26,155 [s3a-transfer-noaa-cors-pds-bounded-pool6-t14] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:addToCacheAndRelease(582)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 8: awaiting any read to complete
2024-04-24 15:57:26,155 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestCaching(517)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 4: request caching of [004] id: 109870965, State: READY: buffer: (id = 920412102, pos = 0, lim = 131072), checksum: 926889257, future: done
2024-04-24 15:57:26,155 [s3a-transfer-noaa-cors-pds-bounded-pool6-t14] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- putC(8)
2024-04-24 15:57:26,155 [s3a-transfer-noaa-cors-pds-bounded-pool6-t14] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:cachePut(650)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block java.nio.HeapByteBufferR[pos=0 lim=131072 cap=131072]: Caching
2024-04-24 15:57:26,155 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- requestCaching(4)
2024-04-24 15:57:26,155 [s3a-transfer-noaa-cors-pds-bounded-pool6-t13] INFO  prefetch.SingleFilePerBlockCache (DurationInfo.java:<init>(77)) - Starting: save 131072 bytes to /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-3891072200315223991-block-0001.bin
2024-04-24 15:57:26,155 [s3a-transfer-noaa-cors-pds-bounded-pool6-t12] INFO  prefetch.SingleFilePerBlockCache (DurationInfo.java:<init>(77)) - Starting: save 131072 bytes to /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-831876345754085681-block-0006.bin
2024-04-24 15:57:26,155 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** requestCaching(4)
2024-04-24 15:57:26,155 [s3a-transfer-noaa-cors-pds-bounded-pool6-t15] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:addToCacheAndRelease(575)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 4: Preparing to cache block
2024-04-24 15:57:26,156 [s3a-transfer-noaa-cors-pds-bounded-pool6-t13] INFO  prefetch.SingleFilePerBlockCache (DurationInfo.java:close(98)) - save 131072 bytes to /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-3891072200315223991-block-0001.bin: duration 0:00.001s
2024-04-24 15:57:26,156 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestCaching(517)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 7: request caching of [007] id: 1792328953, State: READY: buffer: (id = 323265539, pos = 0, lim = 131072), checksum: 2744416371, future: done
2024-04-24 15:57:26,156 [s3a-transfer-noaa-cors-pds-bounded-pool6-t12] INFO  prefetch.SingleFilePerBlockCache (DurationInfo.java:close(98)) - save 131072 bytes to /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-831876345754085681-block-0006.bin: duration 0:00.001s
2024-04-24 15:57:26,156 [s3a-transfer-noaa-cors-pds-bounded-pool6-t15] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:addToCacheAndRelease(582)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 4: awaiting any read to complete
2024-04-24 15:57:26,156 [s3a-transfer-noaa-cors-pds-bounded-pool6-t14] INFO  prefetch.SingleFilePerBlockCache (DurationInfo.java:<init>(77)) - Starting: save 131072 bytes to /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-5000586859770259516-block-0008.bin
2024-04-24 15:57:26,156 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- requestCaching(7)
2024-04-24 15:57:26,156 [s3a-transfer-noaa-cors-pds-bounded-pool6-t13] DEBUG prefetch.SingleFilePerBlockCache (SingleFilePerBlockCache.java:maybePushToHeadOfBlockList(336)) - Block 1 to be added to the head. Current head block 5 and tail block 0; ([001] /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-3891072200315223991-block-0001.bin: size = 131,072, checksum = 2642073199)
2024-04-24 15:57:26,156 [s3a-transfer-noaa-cors-pds-bounded-pool6-t14] INFO  prefetch.SingleFilePerBlockCache (DurationInfo.java:close(98)) - save 131072 bytes to /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-5000586859770259516-block-0008.bin: duration 0:00.000s
2024-04-24 15:57:26,156 [s3a-transfer-noaa-cors-pds-bounded-pool6-t15] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- putC(4)
2024-04-24 15:57:26,156 [s3a-transfer-noaa-cors-pds-bounded-pool6-t16] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:addToCacheAndRelease(575)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 7: Preparing to cache block
2024-04-24 15:57:26,156 [s3a-transfer-noaa-cors-pds-bounded-pool6-t16] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:addToCacheAndRelease(582)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 7: awaiting any read to complete
2024-04-24 15:57:26,156 [s3a-transfer-noaa-cors-pds-bounded-pool6-t13] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** putC(1)
2024-04-24 15:57:26,156 [s3a-transfer-noaa-cors-pds-bounded-pool6-t15] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:cachePut(650)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block java.nio.HeapByteBufferR[pos=0 lim=131072 cap=131072]: Caching
2024-04-24 15:57:26,156 [s3a-transfer-noaa-cors-pds-bounded-pool6-t12] DEBUG prefetch.SingleFilePerBlockCache (SingleFilePerBlockCache.java:maybePushToHeadOfBlockList(336)) - Block 6 to be added to the head. Current head block 1 and tail block 0; ([006] /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-831876345754085681-block-0006.bin: size = 131,072, checksum = 3326737228)
2024-04-24 15:57:26,157 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** requestCaching(7)
2024-04-24 15:57:26,157 [s3a-transfer-noaa-cors-pds-bounded-pool6-t12] DEBUG prefetch.SingleFilePerBlockCache (SingleFilePerBlockCache.java:deleteBlockFileAndEvictCache(454)) - Evicting block 0 from cache: ([000] /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-2283440820786611743-block-0000.bin: size = 131,072, checksum = 3320762083)
2024-04-24 15:57:26,157 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestCaching(517)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 3: request caching of [003] id: 58234618, State: READY: buffer: (id = 1564095827, pos = 0, lim = 131072), checksum: 2582014511, future: done
2024-04-24 15:57:26,157 [s3a-transfer-noaa-cors-pds-bounded-pool6-t16] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- putC(7)
2024-04-24 15:57:26,157 [s3a-transfer-noaa-cors-pds-bounded-pool6-t16] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:cachePut(650)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block java.nio.HeapByteBufferR[pos=0 lim=131072 cap=131072]: Caching
2024-04-24 15:57:26,157 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- requestCaching(3)
2024-04-24 15:57:26,157 [s3a-transfer-noaa-cors-pds-bounded-pool6-t15] INFO  prefetch.SingleFilePerBlockCache (DurationInfo.java:<init>(77)) - Starting: save 131072 bytes to /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-1766922656794707645-block-0004.bin
2024-04-24 15:57:26,158 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** requestCaching(3)
2024-04-24 15:57:26,158 [s3a-transfer-noaa-cors-pds-bounded-pool6-t17] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:addToCacheAndRelease(575)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 3: Preparing to cache block
2024-04-24 15:57:26,158 [s3a-transfer-noaa-cors-pds-bounded-pool6-t15] INFO  prefetch.SingleFilePerBlockCache (DurationInfo.java:close(98)) - save 131072 bytes to /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-1766922656794707645-block-0004.bin: duration 0:00.001s
2024-04-24 15:57:26,158 [s3a-transfer-noaa-cors-pds-bounded-pool6-t17] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:addToCacheAndRelease(582)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block 3: awaiting any read to complete
2024-04-24 15:57:26,158 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** cancelPrefetches()
2024-04-24 15:57:26,158 [s3a-transfer-noaa-cors-pds-bounded-pool6-t17] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- putC(3)
2024-04-24 15:57:26,158 [s3a-transfer-noaa-cors-pds-bounded-pool6-t16] INFO  prefetch.SingleFilePerBlockCache (DurationInfo.java:<init>(77)) - Starting: save 131072 bytes to /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-3861022982718863148-block-0007.bin
2024-04-24 15:57:26,158 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestPrefetch(278)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Requesting prefetch for block 3
2024-04-24 15:57:26,158 [s3a-transfer-noaa-cors-pds-bounded-pool6-t17] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:cachePut(650)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Block java.nio.HeapByteBufferR[pos=0 lim=131072 cap=131072]: Caching
2024-04-24 15:57:26,158 [s3a-transfer-noaa-cors-pds-bounded-pool6-t16] INFO  prefetch.SingleFilePerBlockCache (DurationInfo.java:close(98)) - save 131072 bytes to /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-3861022982718863148-block-0007.bin: duration 0:00.000s
2024-04-24 15:57:26,159 [s3a-transfer-noaa-cors-pds-bounded-pool6-t17] INFO  prefetch.SingleFilePerBlockCache (DurationInfo.java:<init>(77)) - Starting: save 131072 bytes to /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-6536604957243440720-block-0003.bin
2024-04-24 15:57:26,159 [s3a-transfer-noaa-cors-pds-bounded-pool6-t17] INFO  prefetch.SingleFilePerBlockCache (DurationInfo.java:close(98)) - save 131072 bytes to /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-6536604957243440720-block-0003.bin: duration 0:00.000s
2024-04-24 15:57:26,159 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:requestPrefetch(284)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: acquired [003] id: 58234618, State: CACHING: buffer: (id = 1564095827, pos = 0, lim = 131072), checksum: 2582014511, future: not done
2024-04-24 15:57:26,159 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - --- getCached(2)
2024-04-24 15:57:26,160 [s3a-transfer-noaa-cors-pds-bounded-pool6-t12] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** putC(6)
2024-04-24 15:57:26,160 [s3a-transfer-noaa-cors-pds-bounded-pool6-t14] DEBUG prefetch.SingleFilePerBlockCache (SingleFilePerBlockCache.java:maybePushToHeadOfBlockList(336)) - Block 8 to be added to the head. Current head block 6 and tail block 2; ([008] /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-5000586859770259516-block-0008.bin: size = 131,072, checksum = 1926174145)
2024-04-24 15:57:26,160 [s3a-transfer-noaa-cors-pds-bounded-pool6-t14] DEBUG prefetch.SingleFilePerBlockCache (SingleFilePerBlockCache.java:deleteBlockFileAndEvictCache(454)) - Evicting block 2 from cache: ([002] /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-1079555837997227351-block-0002.bin: size = 131,072, checksum = 2067391204)
2024-04-24 15:57:26,161 [s3a-transfer-noaa-cors-pds-bounded-pool6-t14] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** putC(8)
2024-04-24 15:57:26,161 [s3a-transfer-noaa-cors-pds-bounded-pool6-t15] DEBUG prefetch.SingleFilePerBlockCache (SingleFilePerBlockCache.java:maybePushToHeadOfBlockList(336)) - Block 4 to be added to the head. Current head block 8 and tail block 5; ([004] /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-1766922656794707645-block-0004.bin: size = 131,072, checksum = 926889257)
2024-04-24 15:57:26,161 [s3a-transfer-noaa-cors-pds-bounded-pool6-t15] DEBUG prefetch.SingleFilePerBlockCache (SingleFilePerBlockCache.java:deleteBlockFileAndEvictCache(454)) - Evicting block 5 from cache: ([005] /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-8757359741004979112-block-0005.bin: size = 131,072, checksum = 2937666868)
2024-04-24 15:57:26,161 [s3a-transfer-noaa-cors-pds-bounded-pool6-t16] DEBUG prefetch.SingleFilePerBlockCache (SingleFilePerBlockCache.java:maybePushToHeadOfBlockList(336)) - Block 7 to be added to the head. Current head block 4 and tail block 1; ([007] /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-3861022982718863148-block-0007.bin: size = 131,072, checksum = 2744416371)
2024-04-24 15:57:26,161 [s3a-transfer-noaa-cors-pds-bounded-pool6-t15] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** putC(4)
2024-04-24 15:57:26,161 [s3a-transfer-noaa-cors-pds-bounded-pool6-t16] DEBUG prefetch.SingleFilePerBlockCache (SingleFilePerBlockCache.java:deleteBlockFileAndEvictCache(454)) - Evicting block 1 from cache: ([001] /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-3891072200315223991-block-0001.bin: size = 131,072, checksum = 2642073199)
2024-04-24 15:57:26,162 [s3a-transfer-noaa-cors-pds-bounded-pool6-t17] DEBUG prefetch.SingleFilePerBlockCache (SingleFilePerBlockCache.java:maybePushToHeadOfBlockList(336)) - Block 3 to be added to the head. Current head block 7 and tail block 6; ([003] /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-6536604957243440720-block-0003.bin: size = 131,072, checksum = 2582014511)
2024-04-24 15:57:26,162 [s3a-transfer-noaa-cors-pds-bounded-pool6-t16] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** putC(7)
2024-04-24 15:57:26,162 [s3a-transfer-noaa-cors-pds-bounded-pool6-t17] DEBUG prefetch.SingleFilePerBlockCache (SingleFilePerBlockCache.java:deleteBlockFileAndEvictCache(454)) - Evicting block 6 from cache: ([006] /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-831876345754085681-block-0006.bin: size = 131,072, checksum = 3326737228)
2024-04-24 15:57:26,162 [s3a-transfer-noaa-cors-pds-bounded-pool6-t17] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** putC(3)
2024-04-24 15:57:26,162 [setup] DEBUG prefetch.SingleFilePerBlockCache (SingleFilePerBlockCache.java:maybePushToHeadOfBlockList(336)) - Block 2 to be added to the head. Current head block 3 and tail block 8; ([002] /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-1079555837997227351-block-0002.bin: size = 131,072, checksum = 2067391204)
2024-04-24 15:57:26,163 [setup] DEBUG prefetch.SingleFilePerBlockCache (SingleFilePerBlockCache.java:maybePushToHeadOfBlockList(344)) - Block 2 is already in block list
2024-04-24 15:57:26,163 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:readBlock(414)) - Failure in read of block 2 [262144-393216]
java.nio.file.NoSuchFileException: /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-1079555837997227351-block-0002.bin
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
	at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
	at java.nio.channels.FileChannel.open(FileChannel.java:287)
	at java.nio.channels.FileChannel.open(FileChannel.java:335)
	at org.apache.hadoop.fs.impl.prefetch.SingleFilePerBlockCache.readFile(SingleFilePerBlockCache.java:290)
	at org.apache.hadoop.fs.impl.prefetch.SingleFilePerBlockCache.get(SingleFilePerBlockCache.java:279)
	at org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.readBlock(CachingBlockManager.java:389)
	at org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.read(CachingBlockManager.java:337)
	at org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.getInternal(CachingBlockManager.java:220)
	at org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.get(CachingBlockManager.java:175)
	at org.apache.hadoop.fs.s3a.prefetch.S3ACachingInputStream.lambda$ensureCurrentBuffer$0(S3ACachingInputStream.java:238)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:556)
	at org.apache.hadoop.fs.s3a.prefetch.S3ACachingInputStream.ensureCurrentBuffer(S3ACachingInputStream.java:236)
	at org.apache.hadoop.fs.s3a.prefetch.S3ARemoteInputStream.read(S3ARemoteInputStream.java:357)
	at org.apache.hadoop.fs.s3a.prefetch.S3APrefetchingInputStream.read(S3APrefetchingInputStream.java:198)
	at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:78)
	at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:100)
	at org.apache.hadoop.fs.s3a.ITestS3APrefetchingCacheFiles.testCacheFileExistence(ITestS3APrefetchingCacheFiles.java:137)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:750)
2024-04-24 15:57:26,163 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** getCached(2)
2024-04-24 15:57:26,163 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:read(339)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: error reading block 2
java.nio.file.NoSuchFileException: /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-1079555837997227351-block-0002.bin
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
	at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
	at java.nio.channels.FileChannel.open(FileChannel.java:287)
	at java.nio.channels.FileChannel.open(FileChannel.java:335)
	at org.apache.hadoop.fs.impl.prefetch.SingleFilePerBlockCache.readFile(SingleFilePerBlockCache.java:290)
	at org.apache.hadoop.fs.impl.prefetch.SingleFilePerBlockCache.get(SingleFilePerBlockCache.java:279)
	at org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.readBlock(CachingBlockManager.java:389)
	at org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.read(CachingBlockManager.java:337)
	at org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.getInternal(CachingBlockManager.java:220)
	at org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.get(CachingBlockManager.java:175)
	at org.apache.hadoop.fs.s3a.prefetch.S3ACachingInputStream.lambda$ensureCurrentBuffer$0(S3ACachingInputStream.java:238)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:556)
	at org.apache.hadoop.fs.s3a.prefetch.S3ACachingInputStream.ensureCurrentBuffer(S3ACachingInputStream.java:236)
	at org.apache.hadoop.fs.s3a.prefetch.S3ARemoteInputStream.read(S3ARemoteInputStream.java:357)
	at org.apache.hadoop.fs.s3a.prefetch.S3APrefetchingInputStream.read(S3APrefetchingInputStream.java:198)
	at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:78)
	at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:100)
	at org.apache.hadoop.fs.s3a.ITestS3APrefetchingCacheFiles.testCacheFileExistence(ITestS3APrefetchingCacheFiles.java:137)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:750)
2024-04-24 15:57:26,164 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - ... close()
2024-04-24 15:57:26,164 [setup] DEBUG prefetch.CachingBlockManager (CachingBlockManager.java:cancelPrefetches(315)) - s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz: Cancelling prefetches: Close
2024-04-24 15:57:26,164 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - ... cancelPrefetches()
2024-04-24 15:57:26,164 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** cancelPrefetches()
2024-04-24 15:57:26,164 [setup] DEBUG prefetch.SingleFilePerBlockCache (SingleFilePerBlockCache.java:close(520)) - #entries = 4, #gets = 1
2024-04-24 15:57:26,164 [setup] DEBUG prefetch.SingleFilePerBlockCache (SingleFilePerBlockCache.java:deleteCacheFiles(530)) - Prefetch cache close: Deleting 4 cache files
2024-04-24 15:57:26,167 [setup] DEBUG prefetch.SingleFilePerBlockCache (SingleFilePerBlockCache.java:deleteCacheFiles(553)) - Prefetch cache close: Deleted 4 cache files
2024-04-24 15:57:26,167 [setup] INFO  prefetch.BlockOperations (BlockOperations.java:add(162)) - *** close()
2024-04-24 15:57:26,169 [setup] INFO  prefetch.CachingBlockManager (CachingBlockManager.java:close(257)) - RP(1);ERP(1);PF(1);RP(2);ERP(2);PF(2);RP(3);ERP(3);PF(3);RP(4);ERP(4);PF(4);RP(5);ERP(5);PF(5);RP(6);ERP(6);PF(6);RP(7);ERP(7);PF(7);RP(8);ERP(8);PF(8);GR(0);EGR(0);RC(0);ERC(0);CP;C+(0);EC+(0);EPF(2);RC(2);ERC(2);C+(2);EC+(2);EPF(5);RC(5);EPF(1);ERC(5);C+(5);EC+(5);EPF(3);EPF(4);EPF(8);EPF(7);EPF(6);RC(6);ERC(6);RC(1);ERC(1);C+(6);C+(1);RC(8);ERC(8);C+(8);RC(4);ERC(4);RC(7);C+(4);EC+(1);ERC(7);C+(7);RC(3);ERC(3);ECP;C+(3);GC(2);EC+(6);EC+(8);EC+(4);EC+(7);EC+(3);EGC(2);CX;CP;ECP;ECX;
GET_CACHED         : #ops =   1, total =   0.0, min: 0.0, avg: 0.0, max: 0.0
GET_PREFETCHED     : --
GET_READ           : #ops =   1, total =   0.3, min: 0.3, avg: 0.3, max: 0.3
CACHE_PUT          : #ops =   9, total =   0.1, min: 0.0, avg: 0.0, max: 0.0
PREFETCH           : #ops =   8, total =   6.4, min: 0.7, avg: 0.8, max: 0.8
REQUEST_CACHING    : #ops =   9, total =   0.0, min: 0.0, avg: 0.0, max: 0.0
REQUEST_PREFETCH   : #ops =   8, total =   0.0, min: 0.0, avg: 0.0, max: 0.0
CANCEL_PREFETCHES  : #ops =   2, total =   0.5, min: 0.0, avg: 0.3, max: 0.5
RELEASE            : --
CLOSE              : #ops =   1, total =   0.0, min: 0.0, avg: 0.0, max: 0.0

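What the log above shows is the eviction race: block 2 is prefetched, saved to the local cache (`putC(2)`), then evicted by the LRU block list (`Evicting block 2 from cache`) while the stream still holds the stale cache entry, so the subsequent `getCached(2)` reopens the already-deleted file and `FileChannel.open()` surfaces `java.nio.file.NoSuchFileException`, which is then rethrown as the block 2 read failure. Below is a minimal, JDK-only sketch of that race and of one defensive option, treating the vanished file as a cache miss and re-fetching the block rather than failing the read; the names (`BlockFileCache`, `refetch`) are illustrative, not the actual `SingleFilePerBlockCache` code.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

/**
 * Sketch of the race seen in the log: an LRU evictor deletes a block file
 * while a reader still holds the cache entry, so the reader's
 * FileChannel.open() throws NoSuchFileException. The guarded get() below
 * downgrades that to a cache miss and re-fetches instead of failing.
 */
public class BlockFileCache {
  private final Map<Integer, Path> blocks = new ConcurrentHashMap<>();

  void put(int blockNumber, Path file) {
    blocks.put(blockNumber, file);
  }

  /** Evictor side: drop the entry, then delete the backing file. */
  void evict(int blockNumber) throws IOException {
    Path file = blocks.remove(blockNumber);
    if (file != null) {
      Files.deleteIfExists(file);   // a reader may be about to open this file
    }
  }

  /** Reader side: a vanished block file is a cache miss, not a failure. */
  ByteBuffer get(int blockNumber, Supplier<ByteBuffer> refetch) {
    Path file = blocks.get(blockNumber);
    if (file != null) {
      try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
        ByteBuffer buffer = ByteBuffer.allocate((int) channel.size());
        while (buffer.hasRemaining() && channel.read(buffer) >= 0) {
          // read until the buffer is full or EOF
        }
        buffer.flip();
        return buffer;
      } catch (NoSuchFileException e) {
        blocks.remove(blockNumber);  // lost the race with the evictor
      } catch (IOException e) {
        throw new UncheckedIOException(e);
      }
    }
    return refetch.get();            // miss or lost race: go back to the source
  }
}
```

An alternative fix would be to pin (reference-count) an entry while a read is in flight so the evictor skips it; the sketch only shows the fall-back-to-source path.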
2024-04-24 15:57:26,170 [setup] DEBUG s3a.S3AInstrumentation (S3AInstrumentation.java:merge(1227)) - Merging statistics into FS statistics in close(): counters=((stream_read_vectored_incoming_ranges=0) (action_http_get_request=9) (stream_read_prefetch_operations=8) (stream_read_seek_backward_operations=0) (stream_read_operations=9) (stream_read_block_read.failures=0) (stream_read_fully_operations=1) (action_http_get_request.failures=0) (stream_read_close_operations=1) (action_file_opened.failures=0) (stream_read_vectored_combined_ranges=0) (stream_read_block_read=9) (stream_read_prefetch_operations.failures=0) (stream_read_remote_stream_drain.failures=0) (stream_read_block_acquire_read=2) (stream_read_vectored_operations=0) (stream_read_seek_bytes_discarded=0) (stream_read_seek_bytes_skipped=0) (stream_read_closed=9) (stream_read_block_acquire_read.failures=1) (stream_read_exceptions=0) (stream_read_unbuffered=0) (stream_read_seek_operations=0) (stream_evict_blocks_from_cache=5) (stream_read_vectored_read_bytes_discarded=0) (stream_read_block_fetch_operations.failures=0) (action_executor_acquired.failures=0) (stream_read_bytes=120832) (stream_file_cache_eviction=5) (stream_read_remote_stream_aborted=0) (stream_read_remote_stream_aborted.failures=0) (stream_read_opened=9) (stream_read_bytes_backwards_on_seek=0) (action_executor_acquired=0) (stream_aborted=0) (stream_file_cache_eviction.failures=0) (action_file_opened=1) (stream_read_remote_stream_drain=9) (stream_read_seek_forward_operations=0) (stream_read_operations_incomplete=0) (stream_read_bytes_discarded_in_close=0) (stream_read_version_mismatches=0) (stream_read_total_bytes=1179648) (stream_read_block_fetch_operations=1) (stream_read_bytes_discarded_in_abort=0) (stream_read_seek_policy_changed=1));
gauges=((stream_read_block_prefetch_limit=8) (stream_read_block_cache_enabled=1) (stream_read_block_fetch_operations=0) (stream_read_block_prefetch_enabled=1) (stream_read_blocks_in_cache=0) (stream_read_active_prefetch_operations=0) (stream_read_active_memory_in_use=0) (stream_read_gauge_input_policy=0) (stream_read_block_size=131072));
minimums=((stream_read_block_read.failures.min=-1) (stream_read_prefetch_operations.min=689) (action_http_get_request.min=123) (stream_read_block_acquire_read.failures.min=5) (stream_read_remote_stream_drain.min=0) (stream_read_block_acquire_read.min=312) (stream_read_block_read.min=310) (stream_read_remote_stream_aborted.failures.min=-1) (action_executor_acquired.min=0) (stream_read_prefetch_operations.failures.min=-1) (stream_file_cache_eviction.failures.min=-1) (stream_read_remote_stream_drain.failures.min=-1) (action_file_opened.failures.min=-1) (action_executor_acquired.failures.min=-1) (stream_file_cache_eviction.min=0) (action_http_get_request.failures.min=-1) (stream_read_block_fetch_operations.min=311) (action_file_opened.min=395) (stream_read_block_fetch_operations.failures.min=-1) (stream_read_remote_stream_aborted.min=-1));
maximums=((stream_read_remote_stream_aborted.max=-1) (stream_read_prefetch_operations.max=841) (action_file_opened.failures.max=-1) (action_executor_acquired.max=1) (stream_read_remote_stream_drain.failures.max=-1) (stream_read_block_fetch_operations.max=311) (stream_read_prefetch_operations.failures.max=-1) (stream_read_remote_stream_aborted.failures.max=-1) (stream_read_block_read.failures.max=-1) (action_executor_acquired.failures.max=-1) (stream_read_block_acquire_read.max=312) (stream_file_cache_eviction.max=3) (stream_read_block_read.max=841) (action_http_get_request.failures.max=-1) (action_http_get_request.max=506) (stream_read_remote_stream_drain.max=2) (stream_file_cache_eviction.failures.max=-1) (stream_read_block_fetch_operations.failures.max=-1) (action_file_opened.max=395) (stream_read_block_acquire_read.failures.max=5));
means=((stream_read_block_fetch_operations.mean=(samples=1, sum=311, mean=311.0000)) (action_file_opened.mean=(samples=1, sum=395, mean=395.0000)) (stream_read_block_read.mean=(samples=9, sum=6736, mean=748.4444)) (action_http_get_request.mean=(samples=9, sum=3772, mean=419.1111)) (stream_read_block_acquire_read.mean=(samples=1, sum=312, mean=312.0000)) (action_executor_acquired.mean=(samples=17, sum=7, mean=0.4118)) (stream_read_block_fetch_operations.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_block_acquire_read.failures.mean=(samples=1, sum=5, mean=5.0000)) (stream_read_prefetch_operations.mean=(samples=8, sum=6430, mean=803.7500)) (stream_read_remote_stream_aborted.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_block_read.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_http_get_request.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_remote_stream_drain.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_executor_acquired.failures.mean=(samples=0, sum=0, mean=0.0000)) (action_file_opened.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_file_cache_eviction.mean=(samples=5, sum=4, mean=0.8000)) (stream_file_cache_eviction.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_prefetch_operations.failures.mean=(samples=0, sum=0, mean=0.0000)) (stream_read_remote_stream_drain.mean=(samples=9, sum=5, mean=0.5556)) (stream_read_remote_stream_aborted.mean=(samples=0, sum=0, mean=0.0000)));

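The merged IOStatistics are consistent with that analysis: `stream_evict_blocks_from_cache=5` and `stream_file_cache_eviction=5` against `stream_read_block_acquire_read.failures=1`, i.e. five evictions and exactly one failed block acquisition, the `getCached(2)` above. Here is a sketch of how a regression test could pin those counters down, assuming the `IOStatisticAssertions` helper from the hadoop-common test artifact is on the classpath; the expected values are taken from this particular run, so treat them as illustrative.

```java
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.statistics.IOStatistics;
import org.apache.hadoop.fs.statistics.IOStatisticsSupport;

import static org.apache.hadoop.fs.statistics.IOStatisticAssertions.verifyStatisticCounterValue;

final class EvictionRaceAssertions {

  private EvictionRaceAssertions() {
  }

  /** Assert the counter values observed in the run above. */
  static void assertEvictionRaceCounters(FSDataInputStream in) {
    IOStatistics stats = IOStatisticsSupport.retrieveIOStatistics(in);
    // five blocks were evicted from the single-file-per-block cache
    verifyStatisticCounterValue(stats, "stream_evict_blocks_from_cache", 5);
    verifyStatisticCounterValue(stats, "stream_file_cache_eviction", 5);
    // and exactly one block acquisition failed: the getCached(2) call
    verifyStatisticCounterValue(stats, "stream_read_block_acquire_read.failures", 1);
  }
}
```

Counter-based assertions like these stay stable across log-format changes, unlike scraping the debug log for the stack trace.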
2024-04-24 15:57:26,176 [setup] INFO  prefetch.S3ACachingInputStream (S3ACachingInputStream.java:close(135)) - closed: s3a://noaa-cors-pds/raw/2023/017/ohfh/OHFH017d.23_.gz
2024-04-24 15:57:26,180 [teardown] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:innerGetFileStatus(3950)) - Getting path status for s3a://stevel-london/test  (test); needEmptyDirectory=true
2024-04-24 15:57:26,180 [teardown] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:s3GetFileStatus(4009)) - S3GetFileStatus s3a://stevel-london/test
2024-04-24 15:57:26,180 [teardown] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:lambda$getObjectMetadata$10(2903)) - HEAD test with change tracker null
2024-04-24 15:57:26,181 [teardown] DEBUG impl.LoggingAuditor (LoggingAuditor.java:modifyHttpRequest(400)) - [11] e40ad9fb-842f-43ee-8784-912d440e2355-00000012 Executing op_delete with {action_http_head_request 'test' size=0, mutating=false}; https://audit.example.org/hadoop/1/op_delete/e40ad9fb-842f-43ee-8784-912d440e2355-00000012/?op=op_delete&p1=s3a://stevel-london/test&pr=stevel&ps=d8c4e4fc-c0aa-48d0-9605-2bf9ed7b08b2&id=e40ad9fb-842f-43ee-8784-912d440e2355-00000012&t0=11&fs=e40ad9fb-842f-43ee-8784-912d440e2355&t1=11&ts=1713970646179
2024-04-24 15:57:26,208 [teardown] DEBUG s3a.Invoker (Invoker.java:retryUntranslated(474)) - GET test ; software.amazon.awssdk.services.s3.model.NoSuchKeyException: null (Service: S3, Status Code: 404, Request ID: 0WF8KXPK632CJ3VQ, Extended Request ID: hN2AQhOIUdGPGeN12s8dV0y0kITYsrazXciPu68N5G6oOrXRuJTOTtrpvXGA0liUIP/D4c+HjswEtJI+hY4uBQ==) (Service: S3, Status Code: 404, Request ID: 0WF8KXPK632CJ3VQ), 
2024-04-24 15:57:26,208 [teardown] DEBUG s3a.S3ARetryPolicy (S3ARetryPolicy.java:shouldRetry(308)) - Retry probe for FileNotFoundException with 0 retries and 0 failovers, idempotent=true, due to java.io.FileNotFoundException: GET test on /: software.amazon.awssdk.services.s3.model.NoSuchKeyException: null (Service: S3, Status Code: 404, Request ID: 0WF8KXPK632CJ3VQ, Extended Request ID: hN2AQhOIUdGPGeN12s8dV0y0kITYsrazXciPu68N5G6oOrXRuJTOTtrpvXGA0liUIP/D4c+HjswEtJI+hY4uBQ==) (Service: S3, Status Code: 404, Request ID: 0WF8KXPK632CJ3VQ):NoSuchKey
java.io.FileNotFoundException: GET test on /: software.amazon.awssdk.services.s3.model.NoSuchKeyException: null (Service: S3, Status Code: 404, Request ID: 0WF8KXPK632CJ3VQ, Extended Request ID: hN2AQhOIUdGPGeN12s8dV0y0kITYsrazXciPu68N5G6oOrXRuJTOTtrpvXGA0liUIP/D4c+HjswEtJI+hY4uBQ==) (Service: S3, Status Code: 404, Request ID: 0WF8KXPK632CJ3VQ):NoSuchKey
	at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:278)
	at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:481)
	at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:431)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:2895)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:2875)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:4024)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3952)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.deleteWithoutCloseCheck(S3AFileSystem.java:3538)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.delete(S3AFileSystem.java:3510)
	at org.apache.hadoop.fs.contract.ContractTestUtils.rm(ContractTestUtils.java:425)
	at org.apache.hadoop.fs.contract.ContractTestUtils.cleanup(ContractTestUtils.java:402)
	at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.deleteTestDirInTeardown(AbstractFSContractTestBase.java:229)
	at org.apache.hadoop.fs.contract.AbstractFSContractTestBase.teardown(AbstractFSContractTestBase.java:217)
	at org.apache.hadoop.fs.s3a.AbstractS3ATestBase.teardown(AbstractS3ATestBase.java:124)
	at org.apache.hadoop.fs.s3a.ITestS3APrefetchingCacheFiles.teardown(ITestS3APrefetchingCacheFiles.java:112)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.RunAfters.invokeMethod(RunAfters.java:46)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:750)
Caused by: software.amazon.awssdk.services.s3.model.NoSuchKeyException: null (Service: S3, Status Code: 404, Request ID: 0WF8KXPK632CJ3VQ, Extended Request ID: hN2AQhOIUdGPGeN12s8dV0y0kITYsrazXciPu68N5G6oOrXRuJTOTtrpvXGA0liUIP/D4c+HjswEtJI+hY4uBQ==) (Service: S3, Status Code: 404, Request ID: 0WF8KXPK632CJ3VQ)
	at software.amazon.awssdk.services.s3.model.NoSuchKeyException$BuilderImpl.build(NoSuchKeyException.java:126)
	at software.amazon.awssdk.services.s3.model.NoSuchKeyException$BuilderImpl.build(NoSuchKeyException.java:80)
	at software.amazon.awssdk.services.s3.internal.handlers.ExceptionTranslationInterceptor.modifyException(ExceptionTranslationInterceptor.java:63)
	at software.amazon.awssdk.core.interceptor.ExecutionInterceptorChain.modifyException(ExecutionInterceptorChain.java:181)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.utils.ExceptionReportingUtils.runModifyException(ExceptionReportingUtils.java:54)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.utils.ExceptionReportingUtils.reportFailureToInterceptors(ExceptionReportingUtils.java:38)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:39)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
	at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:224)
	at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:103)
	at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:173)
	at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:80)
	at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:182)
	at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:74)
	at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45)
	at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:53)
	at software.amazon.awssdk.services.s3.DefaultS3Client.headObject(DefaultS3Client.java:6319)
	at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$getObjectMetadata$10(S3AFileSystem.java:2907)
	at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468)
	... 27 more
2024-04-24 15:57:26,209 [teardown] DEBUG s3a.S3ARetryPolicy (S3ARetryPolicy.java:shouldRetry(313)) - Retry action is RetryAction(action=FAIL, delayMillis=0, reason=try once and fail.)
2024-04-24 15:57:26,209 [teardown] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:listObjects(2965)) - LIST List stevel-london:/test/ delimiter=/ keys=2 requester pays=null
2024-04-24 15:57:26,209 [teardown] DEBUG s3a.S3AFileSystem (DurationInfo.java:<init>(80)) - Starting: LIST
2024-04-24 15:57:26,210 [teardown] DEBUG impl.LoggingAuditor (LoggingAuditor.java:modifyHttpRequest(400)) - [11] e40ad9fb-842f-43ee-8784-912d440e2355-00000012 Executing op_delete with {object_list_request 'test/' size=2, mutating=false}; https://audit.example.org/hadoop/1/op_delete/e40ad9fb-842f-43ee-8784-912d440e2355-00000012/?op=op_delete&p1=s3a://stevel-london/test&pr=stevel&ps=d8c4e4fc-c0aa-48d0-9605-2bf9ed7b08b2&id=e40ad9fb-842f-43ee-8784-912d440e2355-00000012&t0=11&fs=e40ad9fb-842f-43ee-8784-912d440e2355&t1=11&ts=1713970646179
2024-04-24 15:57:26,243 [teardown] DEBUG s3a.S3AFileSystem (DurationInfo.java:close(101)) - LIST: duration 0:00.034s
2024-04-24 15:57:26,243 [teardown] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:s3GetFileStatus(4073)) - Found path as directory (with /)
2024-04-24 15:57:26,243 [teardown] DEBUG s3a.S3AFileSystem (S3ListResult.java:logAtDebug(146)) - Prefix count = 0; object count=1
2024-04-24 15:57:26,243 [teardown] DEBUG s3a.S3AFileSystem (S3ListResult.java:logAtDebug(149)) - Summary: test/ 0
2024-04-24 15:57:26,244 [teardown] DEBUG impl.DeleteOperation (DeleteOperation.java:execute(196)) - Delete path s3a://stevel-london/test - recursive true
2024-04-24 15:57:26,244 [teardown] DEBUG impl.DeleteOperation (DeleteOperation.java:execute(197)) - Type = Empty Directory
2024-04-24 15:57:26,244 [teardown] DEBUG impl.DeleteOperation (DeleteOperation.java:execute(205)) - delete: Path is a directory: s3a://stevel-london/test
2024-04-24 15:57:26,244 [teardown] DEBUG impl.DeleteOperation (DeleteOperation.java:execute(225)) - deleting empty directory s3a://stevel-london/test
2024-04-24 15:57:26,244 [teardown] DEBUG impl.DeleteOperation (DeleteOperation.java:deleteObjectAtPath(381)) - delete: dir marker test/
2024-04-24 15:57:26,245 [teardown] DEBUG s3a.Invoker (DurationInfo.java:<init>(80)) - Starting: delete
2024-04-24 15:57:26,245 [teardown] DEBUG s3a.S3AFileSystem (DurationInfo.java:<init>(80)) - Starting: deleting test/
2024-04-24 15:57:26,249 [teardown] DEBUG impl.LoggingAuditor (LoggingAuditor.java:modifyHttpRequest(400)) - [11] e40ad9fb-842f-43ee-8784-912d440e2355-00000012 Executing op_delete with {object_delete_request 'test/' size=1, mutating=true}; https://audit.example.org/hadoop/1/op_delete/e40ad9fb-842f-43ee-8784-912d440e2355-00000012/?op=op_delete&p1=s3a://stevel-london/test&pr=stevel&ps=d8c4e4fc-c0aa-48d0-9605-2bf9ed7b08b2&ks=1&id=e40ad9fb-842f-43ee-8784-912d440e2355-00000012&t0=11&fs=e40ad9fb-842f-43ee-8784-912d440e2355&t1=11&ts=1713970646179
2024-04-24 15:57:26,285 [teardown] DEBUG s3a.S3AFileSystem (DurationInfo.java:close(101)) - deleting test/: duration 0:00.040s
2024-04-24 15:57:26,285 [teardown] DEBUG s3a.Invoker (DurationInfo.java:close(101)) - delete: duration 0:00.040s
2024-04-24 15:57:26,285 [teardown] DEBUG impl.DeleteOperation (DeleteOperation.java:execute(236)) - Deleted 1 objects
2024-04-24 15:57:26,286 [teardown] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:close(4365)) - Filesystem s3a://stevel-london is closed
2024-04-24 15:57:26,286 [teardown] DEBUG s3a.S3AFileSystem (S3AUtils.java:closeAutocloseables(1552)) - Closing software.amazon.awssdk.transfer.s3.internal.GenericS3TransferManager@699de05
2024-04-24 15:57:26,286 [teardown] DEBUG s3a.S3AFileSystem (S3AUtils.java:closeAutocloseables(1552)) - Closing software.amazon.awssdk.services.s3.DefaultS3Client@41c14b57
2024-04-24 15:57:26,286 [teardown] DEBUG s3a.AWSCredentialProviderList (AWSCredentialProviderList.java:close(314)) - Closing AWSCredentialProviderList name=; refcount= 0; size=2: [TemporaryAWSCredentialsProvider, SimpleAWSCredentialsProvider{accessKey.empty=false, secretKey.empty=false}] last provider: SimpleAWSCredentialsProvider{accessKey.empty=false, secretKey.empty=false}
2024-04-24 15:57:26,288 [teardown] DEBUG s3a.S3AFileSystem (S3AUtils.java:closeAutocloseables(1552)) - Closing software.amazon.awssdk.services.s3.internal.multipart.MultipartS3AsyncClient@29dc39ba
2024-04-24 15:57:26,292 [teardown] DEBUG s3a.S3AFileSystem (HadoopExecutors.java:shutdown(118)) - Gracefully shutting down executor service BlockingThreadPoolExecutorService{SemaphoredDelegatingExecutor{permitCount=200, available=200, waiting=0}, activeCount=0}. Waiting max 30 SECONDS
2024-04-24 15:57:26,292 [teardown] DEBUG s3a.S3AFileSystem (HadoopExecutors.java:shutdown(128)) - Succesfully shutdown executor service
2024-04-24 15:57:26,292 [teardown] DEBUG s3a.S3AFileSystem (HadoopExecutors.java:shutdown(118)) - Gracefully shutting down executor service java.util.concurrent.ThreadPoolExecutor@6e3a3097[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]. Waiting max 30 SECONDS
2024-04-24 15:57:26,292 [teardown] DEBUG s3a.S3AFileSystem (HadoopExecutors.java:shutdown(128)) - Succesfully shutdown executor service
2024-04-24 15:57:26,292 [teardown] DEBUG s3a.S3AFileSystem (HadoopExecutors.java:shutdown(118)) - Gracefully shutting down executor service SemaphoredDelegatingExecutor{permitCount=200, available=200, waiting=0}. Waiting max 30 SECONDS
2024-04-24 15:57:26,292 [teardown] DEBUG s3a.S3AFileSystem (HadoopExecutors.java:shutdown(128)) - Succesfully shutdown executor service
2024-04-24 15:57:26,292 [teardown] DEBUG s3a.S3AInstrumentation (S3AInstrumentation.java:close(721)) - Unregistering metrics for S3AMetrics2-stevel-london
2024-04-24 15:57:26,292 [teardown] DEBUG auth.SignerManager (SignerManager.java:close(142)) - Unregistering fs from 0 initializers
2024-04-24 15:57:26,292 [teardown] DEBUG s3a.S3AFileSystem (S3AUtils.java:closeAutocloseables(1552)) - Closing AWSCredentialProviderList name=; refcount= 0; size=2: [TemporaryAWSCredentialsProvider, SimpleAWSCredentialsProvider{accessKey.empty=false, secretKey.empty=false}] last provider: SimpleAWSCredentialsProvider{accessKey.empty=false, secretKey.empty=false}
2024-04-24 15:57:26,294 [teardown] INFO  statistics.IOStatisticsLogging (IOStatisticsLogging.java:logIOStatisticsAtLevel(269)) - IOStatistics: counters=((action_http_head_request=1)
(audit_request_execution=4)
(audit_span_creation=3)
(directories_deleted=1)
(object_delete_objects=1)
(object_delete_request=1)
(object_list_request=2)
(object_metadata_request=1)
(op_delete=1)
(op_mkdirs=1)
(store_io_request=4));

gauges=();

minimums=((action_http_head_request.min=28)
(object_delete_request.min=40)
(object_list_request.min=34)
(op_delete.min=42)
(op_mkdirs.min=396));

maximums=((action_http_head_request.max=28)
(object_delete_request.max=40)
(object_list_request.max=393)
(op_delete.max=42)
(op_mkdirs.max=396));

means=((action_http_head_request.mean=(samples=1, sum=28, mean=28.0000))
(object_delete_request.mean=(samples=1, sum=40, mean=40.0000))
(object_list_request.mean=(samples=2, sum=427, mean=213.5000))
(op_delete.mean=(samples=1, sum=42, mean=42.0000))
(op_mkdirs.mean=(samples=1, sum=396, mean=396.0000)));

2024-04-24 15:57:26,295 [teardown] INFO  contract.AbstractFSContractTestBase (AbstractFSContractTestBase.java:describe(280)) - closing file system
2024-04-24 15:57:26,296 [teardown] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:close(4365)) - Filesystem s3a://noaa-cors-pds is closed
2024-04-24 15:57:26,296 [teardown] DEBUG s3a.S3AFileSystem (S3AUtils.java:closeAutocloseables(1552)) - Closing software.amazon.awssdk.transfer.s3.internal.GenericS3TransferManager@76875f30
2024-04-24 15:57:26,296 [teardown] DEBUG s3a.S3AFileSystem (S3AUtils.java:closeAutocloseables(1552)) - Closing software.amazon.awssdk.services.s3.DefaultS3Client@475b15d4
2024-04-24 15:57:26,296 [teardown] DEBUG s3a.AWSCredentialProviderList (AWSCredentialProviderList.java:close(314)) - Closing AWSCredentialProviderList name=; refcount= 0; size=2: [TemporaryAWSCredentialsProvider, SimpleAWSCredentialsProvider{accessKey.empty=false, secretKey.empty=false}] last provider: SimpleAWSCredentialsProvider{accessKey.empty=false, secretKey.empty=false}
2024-04-24 15:57:26,297 [teardown] DEBUG s3a.S3AFileSystem (S3AUtils.java:closeAutocloseables(1552)) - Closing software.amazon.awssdk.services.s3.internal.multipart.MultipartS3AsyncClient@3a9f0de7
2024-04-24 15:57:26,297 [teardown] DEBUG s3a.S3AFileSystem (HadoopExecutors.java:shutdown(118)) - Gracefully shutting down executor service BlockingThreadPoolExecutorService{SemaphoredDelegatingExecutor{permitCount=200, available=200, waiting=0}, activeCount=0}. Waiting max 30 SECONDS
2024-04-24 15:57:26,297 [teardown] DEBUG s3a.S3AFileSystem (HadoopExecutors.java:shutdown(128)) - Succesfully shutdown executor service
2024-04-24 15:57:26,297 [teardown] DEBUG s3a.S3AFileSystem (HadoopExecutors.java:shutdown(118)) - Gracefully shutting down executor service java.util.concurrent.ThreadPoolExecutor@11ce9005[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]. Waiting max 30 SECONDS
2024-04-24 15:57:26,297 [teardown] DEBUG s3a.S3AFileSystem (HadoopExecutors.java:shutdown(128)) - Succesfully shutdown executor service
2024-04-24 15:57:26,297 [teardown] DEBUG s3a.S3AFileSystem (HadoopExecutors.java:shutdown(118)) - Gracefully shutting down executor service SemaphoredDelegatingExecutor{permitCount=200, available=200, waiting=0}. Waiting max 30 SECONDS
2024-04-24 15:57:26,297 [teardown] DEBUG s3a.S3AFileSystem (HadoopExecutors.java:shutdown(128)) - Succesfully shutdown executor service
2024-04-24 15:57:26,297 [teardown] DEBUG s3a.S3AInstrumentation (S3AInstrumentation.java:close(721)) - Unregistering metrics for S3AMetrics3-noaa-cors-pds
2024-04-24 15:57:26,298 [teardown] DEBUG auth.SignerManager (SignerManager.java:close(142)) - Unregistering fs from 0 initializers
2024-04-24 15:57:26,298 [teardown] DEBUG s3a.S3AFileSystem (S3AUtils.java:closeAutocloseables(1552)) - Closing AWSCredentialProviderList name=; refcount= 0; size=2: [TemporaryAWSCredentialsProvider, SimpleAWSCredentialsProvider{accessKey.empty=false, secretKey.empty=false}] last provider: SimpleAWSCredentialsProvider{accessKey.empty=false, secretKey.empty=false}
2024-04-24 15:57:26,299 [teardown] INFO  statistics.IOStatisticsLogging (IOStatisticsLogging.java:logIOStatisticsAtLevel(269)) - IOStatistics: counters=((action_file_opened=1)
(action_http_get_request=9)
(action_http_head_request=1)
(audit_request_execution=10)
(audit_span_creation=2)
(object_metadata_request=1)
(op_open=1)
(store_io_request=10)
(stream_evict_blocks_from_cache=5)
(stream_file_cache_eviction=5)
(stream_read_block_acquire_read=2)
(stream_read_block_acquire_read.failures=1)
(stream_read_block_fetch_operations=1)
(stream_read_bytes=120832)
(stream_read_close_operations=1)
(stream_read_closed=9)
(stream_read_fully_operations=1)
(stream_read_opened=9)
(stream_read_operations=9)
(stream_read_remote_stream_drain=9)
(stream_read_seek_policy_changed=1)
(stream_read_total_bytes=1179648));

gauges=((stream_read_block_cache_enabled=1)
(stream_read_block_prefetch_enabled=1)
(stream_read_block_prefetch_limit=8)
(stream_read_block_size=131072));

minimums=((action_executor_acquired.min=0)
(action_file_opened.min=395)
(action_http_get_request.min=123)
(action_http_head_request.min=394)
(stream_file_cache_eviction.min=0)
(stream_read_block_acquire_read.failures.min=5)
(stream_read_block_acquire_read.min=312)
(stream_read_block_fetch_operations.min=311)
(stream_read_remote_stream_drain.min=0));

maximums=((action_executor_acquired.max=1)
(action_file_opened.max=395)
(action_http_get_request.max=506)
(action_http_head_request.max=394)
(stream_file_cache_eviction.max=3)
(stream_read_block_acquire_read.failures.max=5)
(stream_read_block_acquire_read.max=312)
(stream_read_block_fetch_operations.max=311)
(stream_read_remote_stream_drain.max=2));

means=((action_executor_acquired.mean=(samples=17, sum=7, mean=0.4118))
(action_file_opened.mean=(samples=1, sum=395, mean=395.0000))
(action_http_get_request.mean=(samples=9, sum=3772, mean=419.1111))
(action_http_head_request.mean=(samples=1, sum=394, mean=394.0000))
(stream_file_cache_eviction.mean=(samples=5, sum=4, mean=0.8000))
(stream_read_block_acquire_read.failures.mean=(samples=1, sum=5, mean=5.0000))
(stream_read_block_acquire_read.mean=(samples=1, sum=312, mean=312.0000))
(stream_read_block_fetch_operations.mean=(samples=1, sum=311, mean=311.0000))
(stream_read_remote_stream_drain.mean=(samples=9, sum=5, mean=0.5556)));


java.nio.file.NoSuchFileException: /Users/stevel/Projects/IDE-files/hadoop-trunk/target/prefetch/a1dd81ba-1499-440d-ba67-79705a17d5e7/fs-cache-1079555837997227351-block-0002.bin

	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
	at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
	at java.nio.channels.FileChannel.open(FileChannel.java:287)
	at java.nio.channels.FileChannel.open(FileChannel.java:335)
	at org.apache.hadoop.fs.impl.prefetch.SingleFilePerBlockCache.readFile(SingleFilePerBlockCache.java:290)
	at org.apache.hadoop.fs.impl.prefetch.SingleFilePerBlockCache.get(SingleFilePerBlockCache.java:279)
	at org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.readBlock(CachingBlockManager.java:389)
	at org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.read(CachingBlockManager.java:337)
	at org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.getInternal(CachingBlockManager.java:220)
	at org.apache.hadoop.fs.impl.prefetch.CachingBlockManager.get(CachingBlockManager.java:175)
	at org.apache.hadoop.fs.s3a.prefetch.S3ACachingInputStream.lambda$ensureCurrentBuffer$0(S3ACachingInputStream.java:238)
	at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:556)
	at org.apache.hadoop.fs.s3a.prefetch.S3ACachingInputStream.ensureCurrentBuffer(S3ACachingInputStream.java:236)
	at org.apache.hadoop.fs.s3a.prefetch.S3ARemoteInputStream.read(S3ARemoteInputStream.java:357)
	at org.apache.hadoop.fs.s3a.prefetch.S3APrefetchingInputStream.read(S3APrefetchingInputStream.java:198)
	at org.apache.hadoop.fs.FSInputStream.read(FSInputStream.java:78)
	at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:100)
	at org.apache.hadoop.fs.s3a.ITestS3APrefetchingCacheFiles.testCacheFileExistence(ITestS3APrefetchingCacheFiles.java:137)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.lang.Thread.run(Thread.java:750)

steveloughran added a commit to steveloughran/hadoop that referenced this pull request Apr 24, 2024
HADOOP-19043. S3A: Regression: ITestS3AOpenCost fails on prefetch test runs

This is actually trickier than it seems, as we will need to go deep into the
implementation of caching.

Specifically: the prefetcher knows the expected file length; if the file you
open is shorter than that and less than one block long, the read is considered
a failure and the whole block is skipped, so read() of the nominally in-range
data returns -1.

This fix has to be considered a PoC and should be combined with the other
big PR for prefetching, apache#5832, as that is where the changes should go.

Here is just test tuning and some differentiation of channel problems from
other EOFs.

Change-Id: Icdf7e2fb10ca77b6ca427eb207472fad277130d7
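
To make the failure mode described in that commit message concrete, here is a minimal, self-contained sketch. The class and method names are invented for illustration; this models the described behaviour and is not the actual prefetcher code.

```java
// Hypothetical model of the regression described above: block reads are sized
// from the file length captured at open() time, so when the object on the
// store is shorter than that, the whole block read is treated as a failure
// and read() reports EOF (-1) even for offsets that do hold data.
public class ShortObjectReadSketch {

  /** @return bytes notionally read, or -1 for the (mis)reported EOF. */
  static int readBlock(long expectedLen, long actualLen, int blockSize, long pos) {
    long wanted = Math.min(blockSize, expectedLen - pos);
    long available = Math.max(0, actualLen - pos);
    if (available < wanted) {
      // The block read comes up short, the whole block is skipped, and the
      // available in-range bytes are never surfaced to the caller.
      return -1;
    }
    return (int) wanted;
  }

  public static void main(String[] args) {
    // Stream opened believing the file is 1 MB, but only 100 KB exists and
    // the block size is 128 KB: the nominally in-range read returns -1.
    System.out.println(readBlock(1_000_000L, 100_000L, 128 * 1024, 0L));
  }
}
```

The actual fix adds EOF logic deep inside the prefetching code; this sketch only shows why the symptom appears.
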
@steveloughran steveloughran force-pushed the s3/pre/HADOOP-18184-unbuffer branch from fb83df2 to e7a73c0 Compare April 24, 2024 18:41
@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 7m 9s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 28 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 7s Maven dependency ordering for branch
+1 💚 mvninstall 19m 54s trunk passed
+1 💚 compile 9m 2s trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 compile 8m 9s trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 checkstyle 2m 3s trunk passed
+1 💚 mvnsite 1m 35s trunk passed
+1 💚 javadoc 1m 15s trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 javadoc 1m 10s trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 spotbugs 2m 20s trunk passed
+1 💚 shadedclient 20m 51s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 23s Maven dependency ordering for patch
+1 💚 mvninstall 0m 50s the patch passed
+1 💚 compile 8m 29s the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 javac 8m 29s the patch passed
+1 💚 compile 8m 21s the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 javac 8m 21s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 2m 3s /results-checkstyle-root.txt root: The patch generated 39 new + 9 unchanged - 0 fixed = 48 total (was 9)
+1 💚 mvnsite 1m 35s the patch passed
+1 💚 javadoc 1m 10s the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
-1 ❌ javadoc 0m 32s /results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu120.04-b06 with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu120.04-b06 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
+1 💚 spotbugs 2m 29s the patch passed
+1 💚 shadedclient 20m 52s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 16m 24s hadoop-common in the patch passed.
-1 ❌ unit 2m 33s /patch-unit-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch passed.
+1 💚 asflicense 0m 42s The patch does not generate ASF License warnings.
159m 14s
Reason Tests
Failed junit tests hadoop.fs.s3a.prefetch.TestS3ACachingBlockManager
Subsystem Report/Notes
Docker ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/21/artifact/out/Dockerfile
GITHUB PR #5832
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint
uname Linux 6b813d16319e 5.15.0-94-generic #104-Ubuntu SMP Tue Jan 9 15:25:40 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / e7a73c0
Default Java Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/21/testReport/
Max. process+thread count 2367 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/21/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

steveloughran added a commit to steveloughran/hadoop that referenced this pull request May 22, 2024
HADOOP-19043. S3A: Regression: ITestS3AOpenCost fails on prefetch test runs

This is actually trickier than it seems, as we will need to go deep into the
implementation of caching.

Specifically: the prefetcher knows the expected file length; if the file you
open is shorter than that and less than one block long, the read is considered
a failure and the whole block is skipped, so read() of the nominally in-range
data returns -1.

This fix has to be considered a PoC and should be combined with the other
big PR for prefetching, apache#5832, as that is where the changes should go.

Here is just test tuning and some differentiation of channel problems from
other EOFs.

Change-Id: Icdf7e2fb10ca77b6ca427eb207472fad277130d7
@steveloughran steveloughran force-pushed the s3/pre/HADOOP-18184-unbuffer branch from e7a73c0 to 2134dc1 Compare May 22, 2024 10:32
@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 22s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 28 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 0m 20s Maven dependency ordering for branch
-1 ❌ mvninstall 0m 22s /branch-mvninstall-root.txt root in trunk failed.
-1 ❌ compile 0m 22s /branch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt root in trunk failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.
-1 ❌ compile 0m 22s /branch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt root in trunk failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.
-0 ⚠️ checkstyle 0m 20s /buildtool-branch-checkstyle-root.txt The patch fails to run checkstyle in root
-1 ❌ mvnsite 0m 22s /branch-mvnsite-hadoop-common-project_hadoop-common.txt hadoop-common in trunk failed.
-1 ❌ mvnsite 0m 22s /branch-mvnsite-hadoop-tools_hadoop-aws.txt hadoop-aws in trunk failed.
-1 ❌ javadoc 0m 22s /branch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt hadoop-common in trunk failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.
-1 ❌ javadoc 0m 22s /branch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt hadoop-aws in trunk failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.
-1 ❌ javadoc 0m 23s /branch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt hadoop-common in trunk failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.
-1 ❌ javadoc 0m 22s /branch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt hadoop-aws in trunk failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.
-1 ❌ spotbugs 0m 22s /branch-spotbugs-hadoop-common-project_hadoop-common.txt hadoop-common in trunk failed.
-1 ❌ spotbugs 0m 23s /branch-spotbugs-hadoop-tools_hadoop-aws.txt hadoop-aws in trunk failed.
+1 💚 shadedclient 4m 2s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 19s Maven dependency ordering for patch
-1 ❌ mvninstall 0m 23s /patch-mvninstall-hadoop-common-project_hadoop-common.txt hadoop-common in the patch failed.
-1 ❌ mvninstall 0m 22s /patch-mvninstall-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch failed.
-1 ❌ compile 0m 22s /patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt root in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.
-1 ❌ javac 0m 22s /patch-compile-root-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt root in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.
-1 ❌ compile 0m 22s /patch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt root in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.
-1 ❌ javac 0m 22s /patch-compile-root-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt root in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 0m 21s /buildtool-patch-checkstyle-root.txt The patch fails to run checkstyle in root
-1 ❌ mvnsite 0m 22s /patch-mvnsite-hadoop-common-project_hadoop-common.txt hadoop-common in the patch failed.
-1 ❌ mvnsite 0m 22s /patch-mvnsite-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch failed.
-1 ❌ javadoc 0m 22s /patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt hadoop-common in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.
-1 ❌ javadoc 0m 22s /patch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt hadoop-aws in the patch failed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.
-1 ❌ javadoc 0m 23s /patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt hadoop-common in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.
-1 ❌ javadoc 0m 22s /patch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt hadoop-aws in the patch failed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.
-1 ❌ spotbugs 0m 23s /patch-spotbugs-hadoop-common-project_hadoop-common.txt hadoop-common in the patch failed.
-1 ❌ spotbugs 0m 22s /patch-spotbugs-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch failed.
+1 💚 shadedclient 5m 43s patch has no errors when building and testing our client artifacts.
_ Other Tests _
-1 ❌ unit 0m 23s /patch-unit-hadoop-common-project_hadoop-common.txt hadoop-common in the patch failed.
-1 ❌ unit 0m 22s /patch-unit-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch failed.
+0 🆗 asflicense 0m 22s ASF License check generated no output?
18m 6s
Subsystem Report/Notes
Docker ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/22/artifact/out/Dockerfile
GITHUB PR #5832
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint
uname Linux 6ac6aaace083 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 2134dc1
Default Java Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/22/testReport/
Max. process+thread count 32 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/22/console
versions git=2.25.1 maven=3.6.3
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

steveloughran added a commit to steveloughran/hadoop that referenced this pull request May 31, 2024
HADOOP-19043. S3A: Regression: ITestS3AOpenCost fails on prefetch test runs

This is actually trickier than it seems, as we will need to go deep into the
implementation of caching.

Specifically: the prefetcher knows the expected file length; if the file you
open is shorter than that and less than one block long, the read is considered
a failure and the whole block is skipped, so read() of the nominally in-range
data returns -1.

This fix has to be considered a PoC and should be combined with the other
big PR for prefetching, apache#5832, as that is where the changes should go.

Here is just test tuning and some differentiation of channel problems from
other EOFs.

Change-Id: Icdf7e2fb10ca77b6ca427eb207472fad277130d7
@steveloughran steveloughran force-pushed the s3/pre/HADOOP-18184-unbuffer branch from 2134dc1 to bfd3716 Compare May 31, 2024 17:13
@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 22s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 28 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 36s Maven dependency ordering for branch
+1 💚 mvninstall 19m 59s trunk passed
+1 💚 compile 8m 45s trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 compile 8m 7s trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 checkstyle 2m 7s trunk passed
+1 💚 mvnsite 1m 35s trunk passed
+1 💚 javadoc 1m 15s trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 javadoc 1m 7s trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 spotbugs 2m 17s trunk passed
+1 💚 shadedclient 20m 42s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 23s Maven dependency ordering for patch
+1 💚 mvninstall 0m 50s the patch passed
+1 💚 compile 8m 30s the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 javac 8m 30s the patch passed
+1 💚 compile 8m 6s the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 javac 8m 6s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 2m 5s /results-checkstyle-root.txt root: The patch generated 38 new + 10 unchanged - 0 fixed = 48 total (was 10)
+1 💚 mvnsite 1m 32s the patch passed
-1 ❌ javadoc 0m 37s /results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt hadoop-common-project_hadoop-common-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
-1 ❌ javadoc 0m 32s /results-javadoc-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06.txt hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_402-8u402-ga-2ubuntu120.04-b06 with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu120.04-b06 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
+1 💚 spotbugs 2m 28s the patch passed
+1 💚 shadedclient 20m 34s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 17m 16s hadoop-common in the patch passed.
+1 💚 unit 2m 31s hadoop-aws in the patch passed.
+1 💚 asflicense 0m 41s The patch does not generate ASF License warnings.
153m 26s
Subsystem Report/Notes
Docker ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/23/artifact/out/Dockerfile
GITHUB PR #5832
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint
uname Linux f423ca40c74c 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / bfd3716
Default Java Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/23/testReport/
Max. process+thread count 1282 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/23/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 34s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 28 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 7s Maven dependency ordering for branch
-1 ❌ mvninstall 20m 55s /branch-mvninstall-root.txt root in trunk failed.
+1 💚 compile 9m 49s trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 compile 9m 2s trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 checkstyle 2m 20s trunk passed
+1 💚 mvnsite 1m 26s trunk passed
+1 💚 javadoc 1m 4s trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 javadoc 0m 52s trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 spotbugs 2m 10s trunk passed
+1 💚 shadedclient 21m 57s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 23s Maven dependency ordering for patch
+1 💚 mvninstall 0m 50s the patch passed
+1 💚 compile 9m 39s the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 javac 9m 39s the patch passed
+1 💚 compile 9m 1s the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 javac 9m 1s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 2m 14s /results-checkstyle-root.txt root: The patch generated 28 new + 10 unchanged - 0 fixed = 38 total (was 10)
+1 💚 mvnsite 1m 23s the patch passed
-1 ❌ javadoc 0m 36s /results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt hadoop-common-project_hadoop-common-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
+1 💚 javadoc 0m 50s the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 spotbugs 2m 20s the patch passed
+1 💚 shadedclient 21m 58s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 17m 17s hadoop-common in the patch passed.
+1 💚 unit 2m 35s hadoop-aws in the patch passed.
+1 💚 asflicense 0m 37s The patch does not generate ASF License warnings.
158m 58s
Subsystem Report/Notes
Docker ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/24/artifact/out/Dockerfile
GITHUB PR #5832
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint
uname Linux 97a566ec1209 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / fa4b3f4
Default Java Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/24/testReport/
Max. process+thread count 3108 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/24/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 31s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 28 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 0s Maven dependency ordering for branch
+1 💚 mvninstall 20m 55s trunk passed
+1 💚 compile 9m 53s trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 compile 8m 55s trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 checkstyle 2m 21s trunk passed
+1 💚 mvnsite 1m 25s trunk passed
+1 💚 javadoc 1m 1s trunk passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 javadoc 0m 55s trunk passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 spotbugs 2m 11s trunk passed
+1 💚 shadedclient 22m 11s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 22s Maven dependency ordering for patch
+1 💚 mvninstall 0m 58s the patch passed
+1 💚 compile 9m 29s the patch passed with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1
+1 💚 javac 9m 29s the patch passed
+1 💚 compile 8m 58s the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 javac 8m 58s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 2m 20s /results-checkstyle-root.txt root: The patch generated 28 new + 10 unchanged - 0 fixed = 38 total (was 10)
+1 💚 mvnsite 1m 16s the patch passed
-1 ❌ javadoc 0m 35s /results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1.txt hadoop-common-project_hadoop-common-jdkUbuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 with JDK Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
+1 💚 javadoc 0m 54s the patch passed with JDK Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
+1 💚 spotbugs 2m 28s the patch passed
+1 💚 shadedclient 21m 49s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 17m 26s hadoop-common in the patch passed.
+1 💚 unit 2m 28s hadoop-aws in the patch passed.
+1 💚 asflicense 0m 41s The patch does not generate ASF License warnings.
159m 10s
Subsystem Report/Notes
Docker ClientAPI=1.45 ServerAPI=1.45 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/25/artifact/out/Dockerfile
GITHUB PR #5832
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint
uname Linux 07909cc17124 5.15.0-106-generic #116-Ubuntu SMP Wed Apr 17 09:17:56 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / fa4b3f4
Default Java Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.22+7-post-Ubuntu-0ubuntu220.04.1 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_402-8u402-ga-2ubuntu1~20.04-b06
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/25/testReport/
Max. process+thread count 1287 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/25/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

… prefetch range

This passes the values down but doesn't interpret them; future work

Change-Id: I523b26e5a5a43fbf6ba5d2b6e44614c7e4fc70b7

HADOOP-18184. S3A Prefetching unbuffer.

compiles against v2 sdk now

Change-Id: Ic96af7f76931c6dcc453368ad02ae87d07fa4484

HADOOP-18184. temp file creation/test validation

* use block id in filename
* log statements include fs path
* tests more resilient
* logging auditor prints GET range and length

Tests are failing with signs of
* too many GETs
* incomplete buffers; race conditions?

Change-Id: Ibdca6292df8cf0149697cecfec24035e2be473d8
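
As an aside, the per-block cache filename visible in the stack trace above (fs-cache-<id>-block-0002.bin) follows a simple pattern. A hypothetical helper producing names of that shape might look like this; the real naming code lives in the cache implementation, and this is only an illustration of the observed format.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Hypothetical sketch of per-block cache file naming, mirroring the observed
// pattern fs-cache-<unique id>-block-<block number>.bin from the test log.
public class CacheFileNameSketch {

  static Path cacheFile(Path cacheDir, long streamId, int blockNumber) {
    return cacheDir.resolve(
        String.format("fs-cache-%d-block-%04d.bin", streamId, blockNumber));
  }

  public static void main(String[] args) {
    // -> /tmp/prefetch/fs-cache-1079555837997227351-block-0002.bin
    System.out.println(cacheFile(Paths.get("/tmp/prefetch"),
        1079555837997227351L, 2));
  }
}
```
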

HADOOP-19043. S3A: Regression: ITestS3AOpenCost fails on prefetch test runs

This is actually trickier than it seems, as we will need to go deep into the
implementation of caching.

Specifically: the prefetcher knows the expected file length; if the file you
open is shorter than that and less than one block long, the read is considered
a failure and the whole block is skipped, so read() of the nominally in-range
data returns -1.

This fix has to be considered a PoC and should be combined with the other
big PR for prefetching, apache#5832, as that is where the changes should go.

Here is just test tuning and some differentiation of channel problems from
other EOFs.

Change-Id: Icdf7e2fb10ca77b6ca427eb207472fad277130d7

HADOOP-19043. S3A: Regression: ITestS3AOpenCost fails on prefetch test runs

* Adds EOF logic deep into the prefetching code
* Tests still failing.
* this has all conflicts with hadoop trunk resolved

Change-Id: I9b23b01d010d8a1a680ce849d26a0aebab2389e2

HADOOP-18184. fix NPEs in BlockManager unit tests by adding withPath()

Change-Id: Ie3d1c266b1231fa85c01092dd79f2dcf961fe498

HADOOP-18184. prefetching

- Cache was not thread safe: it was possible for cleanup to happen while the
  caller had just verified an entry was there, but before a read lock was
  acquired.
  Fix: combine the check and the get into one synchronized block (sketched
  below); use synchronized elsewhere.
- Try to cut back on assertions in ITestS3APrefetchingLargeFiles, which seem
  too brittle against prefetch behaviour/race conditions.
- Minor doc, log, and assertion changes.

More work on that test failure needed.

Change-Id: I288540ec1fb08e1a5684cde8e94e1c7933d1e41d
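
The check-then-get race described in that message is a classic pattern. Below is a minimal generic sketch of both the hazard and the fix; the type and method names are hypothetical, not the actual SingleFilePerBlockCache code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the race fixed above: a containment check and
// the subsequent get are each atomic, but eviction can run between them.
public class CheckThenGetSketch<K, V> {
  private final Map<K, V> entries = new HashMap<>();

  /** Broken pattern: the entry can be evicted between the two locked regions. */
  public V racyGet(K key) {
    synchronized (entries) {
      if (!entries.containsKey(key)) {
        return null;
      }
    }
    // Cleanup may remove the entry right here, after the check succeeded.
    synchronized (entries) {
      return entries.get(key);   // may now be null despite the check
    }
  }

  /** Fix: check and get under one lock, shared with eviction. */
  public V safeGet(K key) {
    synchronized (entries) {
      return entries.get(key);
    }
  }

  public void evict(K key) {
    synchronized (entries) {
      entries.remove(key);
    }
  }
}
```
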

HADOOP-18184. prefetching: style

Change-Id: Ifdde5ab33f24515c306a8ccc27ec784c3b6c0a76

HADOOP-18184. unbuffer: reinstate commented out asserts

these asserts fail as I don't understand the prefetch logic
well enough to make valid assertions

Change-Id: I198d10ccead99754afd17040dc4f4c9ebc919906

HADOOP-18184. javadocs

Change-Id: I61d013ba439c8f4093ad0634a67c6a20e82062ad
@steveloughran steveloughran force-pushed the s3/pre/HADOOP-18184-unbuffer branch from fa4b3f4 to 5b85014 Compare November 18, 2024 14:06
@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 19s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 1s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 27 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 37s Maven dependency ordering for branch
+1 💚 mvninstall 19m 5s trunk passed
+1 💚 compile 8m 48s trunk passed with JDK Ubuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04
+1 💚 compile 8m 9s trunk passed with JDK Private Build-1.8.0_432-8u432-gaus1-0ubuntu220.04-ga
+1 💚 checkstyle 2m 9s trunk passed
+1 💚 mvnsite 1m 40s trunk passed
+1 💚 javadoc 1m 22s trunk passed with JDK Ubuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04
+1 💚 javadoc 1m 7s trunk passed with JDK Private Build-1.8.0_432-8u432-gaus1-0ubuntu220.04-ga
+1 💚 spotbugs 2m 24s trunk passed
+1 💚 shadedclient 21m 15s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 23s Maven dependency ordering for patch
+1 💚 mvninstall 0m 50s the patch passed
+1 💚 compile 8m 31s the patch passed with JDK Ubuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04
+1 💚 javac 8m 31s the patch passed
+1 💚 compile 8m 5s the patch passed with JDK Private Build-1.8.0_432-8u432-gaus1-0ubuntu220.04-ga
+1 💚 javac 8m 5s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 2m 8s /results-checkstyle-root.txt root: The patch generated 26 new + 10 unchanged - 0 fixed = 36 total (was 10)
+1 💚 mvnsite 1m 31s the patch passed
-1 ❌ javadoc 0m 45s /results-javadoc-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04.txt hadoop-common-project_hadoop-common-jdkUbuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04 with JDK Ubuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04 generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0)
+1 💚 javadoc 1m 5s the patch passed with JDK Private Build-1.8.0_432-8u432-gaus1-0ubuntu220.04-ga
+1 💚 spotbugs 2m 39s the patch passed
+1 💚 shadedclient 21m 22s patch has no errors when building and testing our client artifacts.
_ Other Tests _
+1 💚 unit 17m 16s hadoop-common in the patch passed.
+1 💚 unit 2m 21s hadoop-aws in the patch passed.
+1 💚 asflicense 0m 42s The patch does not generate ASF License warnings.
152m 1s
Subsystem Report/Notes
Docker ClientAPI=1.47 ServerAPI=1.47 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/26/artifact/out/Dockerfile
GITHUB PR #5832
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint
uname Linux 2c3f68b5ec5a 5.15.0-124-generic #134-Ubuntu SMP Fri Sep 27 20:20:17 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 5b85014
Default Java Private Build-1.8.0_432-8u432-gaus1-0ubuntu220.04-ga
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_432-8u432-gaus1-0ubuntu220.04-ga
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/26/testReport/
Max. process+thread count 3151 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/26/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

* Lots of logging
* The prefetch operation appears to block for much longer than the read
  will take, and the read doesn't take place then anyway.
  Sync problem?
* The test didn't expect any prefetching

Change-Id: Ie9da9a464ddccd481865eec2bc97fce5c50ed306
Change-Id: Ie9da9a464ddccd481865eec2bc97fce5c50ed306
@hadoop-yetus

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 21s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 1s codespell was not available.
+0 🆗 detsecrets 0m 1s detect-secrets was not available.
+0 🆗 markdownlint 0m 1s markdownlint was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
+1 💚 test4tests 0m 0s The patch appears to include 27 new or modified test files.
_ trunk Compile Tests _
+0 🆗 mvndep 16m 13s Maven dependency ordering for branch
+1 💚 mvninstall 23m 6s trunk passed
+1 💚 compile 10m 45s trunk passed with JDK Ubuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04
+1 💚 compile 9m 17s trunk passed with JDK Private Build-1.8.0_432-8u432-gaus1-0ubuntu220.04-ga
+1 💚 checkstyle 2m 24s trunk passed
+1 💚 mvnsite 1m 38s trunk passed
+1 💚 javadoc 1m 20s trunk passed with JDK Ubuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04
+1 💚 javadoc 0m 56s trunk passed with JDK Private Build-1.8.0_432-8u432-gaus1-0ubuntu220.04-ga
+1 💚 spotbugs 2m 26s trunk passed
+1 💚 shadedclient 26m 40s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 7s Maven dependency ordering for patch
+1 💚 mvninstall 0m 55s the patch passed
-1 ❌ compile 7m 11s /patch-compile-root-jdkUbuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04.txt root in the patch failed with JDK Ubuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04.
-1 ❌ javac 7m 11s /patch-compile-root-jdkUbuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04.txt root in the patch failed with JDK Ubuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04.
-1 ❌ compile 0m 9s /patch-compile-root-jdkPrivateBuild-1.8.0_432-8u432-gaus1-0ubuntu220.04-ga.txt root in the patch failed with JDK Private Build-1.8.0_432-8u432-gaus1-0ubuntu220.04-ga.
-1 ❌ javac 0m 9s /patch-compile-root-jdkPrivateBuild-1.8.0_432-8u432-gaus1-0ubuntu220.04-ga.txt root in the patch failed with JDK Private Build-1.8.0_432-8u432-gaus1-0ubuntu220.04-ga.
+1 💚 blanks 0m 0s The patch has no blanks issues.
-0 ⚠️ checkstyle 0m 22s /buildtool-patch-checkstyle-root.txt The patch fails to run checkstyle in root
-1 ❌ mvnsite 0m 26s /patch-mvnsite-hadoop-common-project_hadoop-common.txt hadoop-common in the patch failed.
-1 ❌ mvnsite 0m 25s /patch-mvnsite-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch failed.
-1 ❌ javadoc 0m 25s /patch-javadoc-hadoop-common-project_hadoop-common-jdkUbuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04.txt hadoop-common in the patch failed with JDK Ubuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04.
-1 ❌ javadoc 0m 25s /patch-javadoc-hadoop-tools_hadoop-aws-jdkUbuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04.txt hadoop-aws in the patch failed with JDK Ubuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04.
-1 ❌ javadoc 0m 24s /patch-javadoc-hadoop-common-project_hadoop-common-jdkPrivateBuild-1.8.0_432-8u432-gaus1-0ubuntu220.04-ga.txt hadoop-common in the patch failed with JDK Private Build-1.8.0_432-8u432-gaus1-0ubuntu220.04-ga.
-1 ❌ javadoc 0m 24s /patch-javadoc-hadoop-tools_hadoop-aws-jdkPrivateBuild-1.8.0_432-8u432-gaus1-0ubuntu220.04-ga.txt hadoop-aws in the patch failed with JDK Private Build-1.8.0_432-8u432-gaus1-0ubuntu220.04-ga.
-1 ❌ spotbugs 0m 25s /patch-spotbugs-hadoop-common-project_hadoop-common.txt hadoop-common in the patch failed.
-1 ❌ shadedclient 1m 40s patch has errors when building and testing our client artifacts.
_ Other Tests _
-1 ❌ unit 6m 16s /patch-unit-hadoop-common-project_hadoop-common.txt hadoop-common in the patch failed.
-1 ❌ unit 0m 24s /patch-unit-hadoop-tools_hadoop-aws.txt hadoop-aws in the patch failed.
+0 🆗 asflicense 0m 24s ASF License check generated no output?
118m 42s
Reason Tests
Failed junit tests hadoop.ha.TestZKFailoverControllerStress
hadoop.ha.TestZKFailoverController
hadoop.security.TestRaceWhenRelogin
hadoop.security.TestUGILoginFromKeytab
Subsystem Report/Notes
Docker ClientAPI=1.47 ServerAPI=1.47 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/27/artifact/out/Dockerfile
GITHUB PR #5832
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets markdownlint
uname Linux d80600f2570b 5.15.0-124-generic #134-Ubuntu SMP Fri Sep 27 20:20:17 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / dfc9a6c
Default Java Private Build-1.8.0_432-8u432-gaus1-0ubuntu220.04-ga
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.25+9-post-Ubuntu-1ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_432-8u432-gaus1-0ubuntu220.04-ga
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/27/testReport/
Max. process+thread count 552 (vs. ulimit of 5500)
modules C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5832/27/console
versions git=2.25.1 maven=3.6.3 spotbugs=4.2.2
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.
