
HADOOP-19046. S3A: update AWS V2 SDK to 2.23.5; v1 to 1.12.599 #6467

Merged

Conversation

@steveloughran (Contributor) commented Jan 18, 2024

Updates the AWS v2 SDK to 2.23.5 (released 2024-01-18) and the v1 SDK to 1.12.599.

This improves timeout handling for S3 Express CreateSession calls, though HADOOP-19045 (S3A: pass request timeouts down to SDK) is needed to wire this up.

This PR is the pom update only, isolated from the rest of the code changes.

Testing in progress.
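
For reference, the dependency bump in hadoop-project/pom.xml amounts to two version properties. The sketch below is illustrative; the property names are assumptions following that pom's conventions, not a copy of the actual diff.

```xml
<!-- hadoop-project/pom.xml sketch: property names are assumptions
     based on that pom's conventions, not copied from the diff -->
<properties>
  <!-- AWS SDK for Java v1, still used by a few remaining code paths -->
  <aws-java-sdk.version>1.12.599</aws-java-sdk.version>
  <!-- AWS SDK for Java v2 bundle, used by the S3A connector -->
  <aws-java-sdk-v2.version>2.23.5</aws-java-sdk-v2.version>
</properties>
```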

For code changes:

  • Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
  • Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
  • If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under ASF 2.0?
  • If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?

Change-Id: Ie56f5fe793701b2abe9dac9fe76d3c2be97a65ef
@hadoop-yetus commented:

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 31s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 xmllint 0m 0s xmllint was not available.
+0 🆗 shelldocs 0m 0s Shelldocs was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
-1 ❌ test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 50s Maven dependency ordering for branch
+1 💚 mvninstall 30m 49s trunk passed
+1 💚 compile 16m 18s trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 compile 14m 50s trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
+1 💚 mvnsite 18m 43s trunk passed
+1 💚 javadoc 8m 21s trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 javadoc 7m 34s trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
+1 💚 shadedclient 47m 41s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 1m 1s Maven dependency ordering for patch
+1 💚 mvninstall 28m 10s the patch passed
+1 💚 compile 15m 46s the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 javac 15m 46s the patch passed
+1 💚 compile 14m 51s the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
+1 💚 javac 14m 51s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 mvnsite 13m 45s the patch passed
+1 💚 shellcheck 0m 0s No new issues.
+1 💚 javadoc 8m 17s the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 javadoc 7m 35s the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
+1 💚 shadedclient 48m 39s patch has no errors when building and testing our client artifacts.
_ Other Tests _
-1 ❌ unit 735m 50s /patch-unit-root.txt root in the patch passed.
+1 💚 asflicense 1m 22s The patch does not generate ASF License warnings.
1008m 19s
Reason Tests
Failed junit tests hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier
hadoop.hdfs.server.datanode.TestDirectoryScanner
Subsystem Report/Notes
Docker ClientAPI=1.43 ServerAPI=1.43 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6467/1/artifact/out/Dockerfile
GITHUB PR #6467
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint shellcheck shelldocs
uname Linux 478c674e4c15 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / 1be94e5
Default Java Private Build-1.8.0_392-8u392-ga-1~20.04-b08
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6467/1/testReport/
Max. process+thread count 3560 (vs. ulimit of 5500)
modules C: hadoop-project . U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6467/1/console
versions git=2.25.1 maven=3.6.3 shellcheck=0.7.0
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@steveloughran (Contributor, Author) commented:

  • Yetus test failures are 100% unrelated.
  • Will have a new PR soon which updates the docs slightly.
  • There are some test failures from a run against an S3 Express bucket; will rerun them on their own to isolate, as some seem timeout/retry related.
  • Signing looks to be troublesome again, maybe S3 Express only; see the configuration sketch below.

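For context, ITestCustomSigner exercises the S3A custom signer wiring, which is configured roughly as in the following sketch. The org.example class names are hypothetical placeholders for illustration, not the classes the test actually registers.

```xml
<!-- core-site.xml sketch of the S3A custom signer wiring.
     The org.example class names are hypothetical placeholders. -->
<configuration>
  <property>
    <!-- register signers as Name:SignerClass[:SignerInitializerClass] -->
    <name>fs.s3a.custom.signers</name>
    <value>CustomSigner:org.example.CustomSdkSigner:org.example.CustomSignerInitializer</value>
  </property>
  <property>
    <!-- sign S3 requests with the registered signer -->
    <name>fs.s3a.s3.signing-algorithm</name>
    <value>CustomSigner</value>
  </property>
</configuration>
```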

[ERROR]   ITestAuditAccessChecks.testMissingPathAccessFNFE:180->AbstractS3ACostTest.verifyMetricsIntercepting:341->Assert.assertEquals:647->Assert.failNotEquals:835->Assert.fail:89 operation returning java.io.FileNotFoundException: No such file or directory: s3a://stevel--usw2-az1--x-s3/fork-0003/test/testMissingPathAccessFNFE: audit_request_execution expected:<2> but was:<3>
[ERROR]   ITestS3ACommitterFactory.testEverything:115->testInvalidFileBinding:165 Expected a org.apache.hadoop.fs.s3a.commit.PathCommitException to be thrown, but got the result: : FileOutputCommitter{PathOutputCommitter{context=TaskAttemptContextImpl{JobContextImpl{jobId=job_202401190405_0003}; taskId=attempt_202401190405_0003_m_000000_0, status=''}; org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter@4df1a4df}; outputPath=s3a://stevel--usw2-az1--x-s3/fork-0003/test/testEverything, workPath=s3a://stevel--usw2-az1--x-s3/fork-0003/test/testEverything/_temporary/1/_temporary/attempt_202401190405_0003_m_000000_0, algorithmVersion=1, skipCleanup=false, ignoreCleanupFailures=false}
[ERROR]   ITestS3ADirectoryPerformance.testMultiPagesListingPerformanceAndCorrectness:337 [object_continue_list_request in counters=((object_continue_list_request=100)
(object_continue_list_request.failures=1)
(object_list_request=1));


[ERROR] Tests run: 2, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 610.737 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3APrefetchingLruEviction
[ERROR] testSeeksWithLruEviction[max-blocks-1](org.apache.hadoop.fs.s3a.ITestS3APrefetchingLruEviction)  Time elapsed: 600.008 s  <<< ERROR!
org.junit.runners.model.TestTimedOutException: test timed out after 600000 milliseconds
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:837)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:999)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1308)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
        at org.apache.hadoop.fs.s3a.ITestS3APrefetchingLruEviction.testSeeksWithLruEviction(ITestS3APrefetchingLruEviction.java:162)
 


[ERROR] Tests run: 6, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 62.391 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.audit.ITestAuditAccessChecks
[ERROR] testMissingPathAccessFNFE(org.apache.hadoop.fs.s3a.audit.ITestAuditAccessChecks)  Time elapsed: 20.786 s  <<< FAILURE!
java.lang.AssertionError: operation returning java.io.FileNotFoundException: No such file or directory: s3a://stevel--usw2-az1--x-s3/fork-0003/test/testMissingPathAccessFNFE: audit_request_execution expected:<2> but was:<3>
        at org.junit.Assert.fail(Assert.java:89)
        at org.junit.Assert.failNotEquals(Assert.java:835)
        at org.junit.Assert.assertEquals(Assert.java:647)
        at org.apache.hadoop.fs.s3a.S3ATestUtils$MetricDiff.assertDiffEquals(S3ATestUtils.java:1180)
        at org.apache.hadoop.fs.s3a.performance.OperationCostValidator$ExpectSingleStatistic.verify(OperationCostValidator.java:418)
        at org.apache.hadoop.fs.s3a.performance.OperationCostValidator.exec(OperationCostValidator.java:177)
        at org.apache.hadoop.fs.s3a.performance.OperationCostValidator.intercepting(OperationCostValidator.java:220)
        at org.apache.hadoop.fs.s3a.performance.AbstractS3ACostTest.verifyMetricsIntercepting(AbstractS3ACostTest.java:341)
        at org.apache.hadoop.fs.s3a.audit.ITestAuditAccessChecks.testMissingPathAccessFNFE(ITestAuditAccessChecks.java:180)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.lang.Thread.run(Thread.java:750)

[ERROR] testMultiPagesListingPerformanceAndCorrectness(org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance)  Time elapsed: 189.792 s  <<< FAILURE!
org.junit.ComparisonFailure: 
[object_continue_list_request in counters=((object_continue_list_request=100)
(object_continue_list_request.failures=1)
(object_list_request=1));

gauges=();

minimums=((object_continue_list_request.failures.min=4901)
(object_continue_list_request.min=157)
(object_list_request.min=1327));

maximums=((object_continue_list_request.failures.max=4901)
(object_continue_list_request.max=5748)
(object_list_request.max=1327));

means=((object_continue_list_request.failures.mean=(samples=1, sum=4901, mean=4901.0000))
(object_continue_list_request.mean=(samples=99, sum=47407, mean=478.8586))
(object_list_request.mean=(samples=1, sum=1327, mean=1327.0000)));
] expected:<[99]L> but was:<[100]L>
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at org.apache.hadoop.fs.s3a.scale.ITestS3ADirectoryPerformance.testMultiPagesListingPerformanceAndCorrectness(ITestS3ADirectoryPerformance.java:337)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

[ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 16.504 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.auth.ITestCustomSigner
[ERROR] testCustomSignerAndInitializer[bulk delete](org.apache.hadoop.fs.s3a.auth.ITestCustomSigner)  Time elapsed: 10.298 s  <<< ERROR!
org.apache.hadoop.fs.s3a.AWSBadRequestException: PUT 0-byte object  on fork-0006/test/testCustomSignerAndInitializer[bulk delete]/customsignerpath1: software.amazon.awssdk.services.s3.model.S3Exception: x-amz-sdk-checksum-algorithm specified, but no corresponding x-amz-checksum-* or x-amz-trailer headers were found. (Service: S3, Status Code: 400, Request ID: 0033eada6b00018d219615ea0509dc88f208b53e, Extended Request ID: ALDHV):InvalidRequest: x-amz-sdk-checksum-algorithm specified, but no corresponding x-amz-checksum-* or x-amz-trailer headers were found. (Service: S3, Status Code: 400, Request ID: 0033eada6b00018d219615ea0509dc88f208b53e, Extended Request ID: ALDHV)
        at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:259)
        at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124)
        at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:376)
        at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468)
        at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:372)
        at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:347)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:4776)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:4752)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.access$3000(S3AFileSystem.java:285)
        at org.apache.hadoop.fs.s3a.S3AFileSystem$MkdirOperationCallbacksImpl.createFakeDirectory(S3AFileSystem.java:3812)
        at org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:159)
        at org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:57)
        at org.apache.hadoop.fs.s3a.impl.ExecutingStoreOperation.apply(ExecutingStoreOperation.java:76)
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2719)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2738)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:3778)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2494)
        at org.apache.hadoop.fs.s3a.auth.ITestCustomSigner.lambda$runStoreOperationsAndVerify$0(ITestCustomSigner.java:160)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1953)
        at org.apache.hadoop.fs.s3a.auth.ITestCustomSigner.runStoreOperationsAndVerify(ITestCustomSigner.java:155)
        at org.apache.hadoop.fs.s3a.auth.ITestCustomSigner.testCustomSignerAndInitializer(ITestCustomSigner.java:135)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.lang.Thread.run(Thread.java:750)
Caused by: software.amazon.awssdk.services.s3.model.S3Exception: x-amz-sdk-checksum-algorithm specified, but no corresponding x-amz-checksum-* or x-amz-trailer headers were found. (Service: S3, Status Code: 400, Request ID: 0033eada6b00018d219615ea0509dc88f208b53e, Extended Request ID: ALDHV)
        at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleErrorResponse(AwsXmlPredicatedResponseHandler.java:156)
        at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleResponse(AwsXmlPredicatedResponseHandler.java:108)
        at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:85)
        at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:43)
        at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler$Crc32ValidationResponseHandler.handle(AwsSyncClientHandler.java:93)
        at software.amazon.awssdk.core.internal.handler.BaseClientHandler.lambda$successTransformationResponseHandler$7(BaseClientHandler.java:279)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:50)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:38)
        at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:72)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:42)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:78)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:40)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:55)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:39)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:81)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:36)
        at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
        at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:56)
        at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:36)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer(ApiCallTimeoutTrackingStage.java:80)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:60)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:50)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:32)
        at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
        at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
        at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:224)
        at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:103)
        at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:173)
        at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:80)
        at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:182)
        at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:74)
        at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45)
        at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:53)
        at software.amazon.awssdk.services.s3.DefaultS3Client.putObject(DefaultS3Client.java:10187)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$putObjectDirect$19(S3AFileSystem.java:3272)
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfSupplier(IOStatisticsBinding.java:651)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:3268)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$34(S3AFileSystem.java:4777)
        at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:122)
        ... 39 more

[ERROR] testCustomSignerAndInitializer[simple-delete](org.apache.hadoop.fs.s3a.auth.ITestCustomSigner)  Time elapsed: 6.12 s  <<< ERROR!
org.apache.hadoop.fs.s3a.AWSBadRequestException: PUT 0-byte object  on fork-0006/test/testCustomSignerAndInitializer[simple-delete]/customsignerpath1: software.amazon.awssdk.services.s3.model.S3Exception: x-amz-sdk-checksum-algorithm specified, but no corresponding x-amz-checksum-* or x-amz-trailer headers were found. (Service: S3, Status Code: 400, Request ID: 0033eada6b00018d21962f1b05094a80435cca52, Extended Request ID: kZJZG05LGCBu7lsNKNf):InvalidRequest: x-amz-sdk-checksum-algorithm specified, but no corresponding x-amz-checksum-* or x-amz-trailer headers were found. (Service: S3, Status Code: 400, Request ID: 0033eada6b00018d21962f1b05094a80435cca52, Extended Request ID: kZJZG05LGCBu7lsNKNf)
        at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:259)
        at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:124)
        at org.apache.hadoop.fs.s3a.Invoker.lambda$retry$4(Invoker.java:376)
        at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:468)
        at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:372)
        at org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:347)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.createEmptyObject(S3AFileSystem.java:4776)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.createFakeDirectory(S3AFileSystem.java:4752)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.access$3000(S3AFileSystem.java:285)
        at org.apache.hadoop.fs.s3a.S3AFileSystem$MkdirOperationCallbacksImpl.createFakeDirectory(S3AFileSystem.java:3812)
        at org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:159)
        at org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:57)
        at org.apache.hadoop.fs.s3a.impl.ExecutingStoreOperation.apply(ExecutingStoreOperation.java:76)
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.invokeTrackingDuration(IOStatisticsBinding.java:547)
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:528)
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDuration(IOStatisticsBinding.java:449)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2719)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.trackDurationAndSpan(S3AFileSystem.java:2738)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.mkdirs(S3AFileSystem.java:3778)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:2494)
        at org.apache.hadoop.fs.s3a.auth.ITestCustomSigner.lambda$runStoreOperationsAndVerify$0(ITestCustomSigner.java:160)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1953)
        at org.apache.hadoop.fs.s3a.auth.ITestCustomSigner.runStoreOperationsAndVerify(ITestCustomSigner.java:155)
        at org.apache.hadoop.fs.s3a.auth.ITestCustomSigner.testCustomSignerAndInitializer(ITestCustomSigner.java:135)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
        at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
        at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
        at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
        at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
        at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
        at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:299)
        at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:293)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.lang.Thread.run(Thread.java:750)
Caused by: software.amazon.awssdk.services.s3.model.S3Exception: x-amz-sdk-checksum-algorithm specified, but no corresponding x-amz-checksum-* or x-amz-trailer headers were found. (Service: S3, Status Code: 400, Request ID: 0033eada6b00018d21962f1b05094a80435cca52, Extended Request ID: kZJZG05LGCBu7lsNKNf)
        at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleErrorResponse(AwsXmlPredicatedResponseHandler.java:156)
        at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handleResponse(AwsXmlPredicatedResponseHandler.java:108)
        at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:85)
        at software.amazon.awssdk.protocols.xml.internal.unmarshall.AwsXmlPredicatedResponseHandler.handle(AwsXmlPredicatedResponseHandler.java:43)
        at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler$Crc32ValidationResponseHandler.handle(AwsSyncClientHandler.java:93)
        at software.amazon.awssdk.core.internal.handler.BaseClientHandler.lambda$successTransformationResponseHandler$7(BaseClientHandler.java:279)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:50)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.HandleResponseStage.execute(HandleResponseStage.java:38)
        at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:72)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptTimeoutTrackingStage.execute(ApiCallAttemptTimeoutTrackingStage.java:42)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:78)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.TimeoutExceptionHandlingStage.execute(TimeoutExceptionHandlingStage.java:40)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:55)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallAttemptMetricCollectionStage.execute(ApiCallAttemptMetricCollectionStage.java:39)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:81)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.RetryableStage.execute(RetryableStage.java:36)
        at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
        at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:56)
        at software.amazon.awssdk.core.internal.http.StreamManagingStage.execute(StreamManagingStage.java:36)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.executeWithTimer(ApiCallTimeoutTrackingStage.java:80)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:60)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallTimeoutTrackingStage.execute(ApiCallTimeoutTrackingStage.java:42)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:50)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ApiCallMetricCollectionStage.execute(ApiCallMetricCollectionStage.java:32)
        at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
        at software.amazon.awssdk.core.internal.http.pipeline.RequestPipelineBuilder$ComposingRequestPipelineStage.execute(RequestPipelineBuilder.java:206)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:37)
        at software.amazon.awssdk.core.internal.http.pipeline.stages.ExecutionFailureExceptionReportingStage.execute(ExecutionFailureExceptionReportingStage.java:26)
        at software.amazon.awssdk.core.internal.http.AmazonSyncHttpClient$RequestExecutionBuilderImpl.execute(AmazonSyncHttpClient.java:224)
        at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.invoke(BaseSyncClientHandler.java:103)
        at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.doExecute(BaseSyncClientHandler.java:173)
        at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.lambda$execute$1(BaseSyncClientHandler.java:80)
        at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.measureApiCallSuccess(BaseSyncClientHandler.java:182)
        at software.amazon.awssdk.core.internal.handler.BaseSyncClientHandler.execute(BaseSyncClientHandler.java:74)
        at software.amazon.awssdk.core.client.handler.SdkSyncClientHandler.execute(SdkSyncClientHandler.java:45)
        at software.amazon.awssdk.awscore.client.handler.AwsSyncClientHandler.execute(AwsSyncClientHandler.java:53)
        at software.amazon.awssdk.services.s3.DefaultS3Client.putObject(DefaultS3Client.java:10187)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$putObjectDirect$19(S3AFileSystem.java:3272)
        at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.trackDurationOfSupplier(IOStatisticsBinding.java:651)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.putObjectDirect(S3AFileSystem.java:3268)
        at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$createEmptyObject$34(S3AFileSystem.java:4777)
        at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:122)
        ... 39 more

Updated docs:
- NOTICE-binary
- testing.md
  + mentions LICENSE-binary
  + v2 SDK references in dependency validation and issue reporting

Change-Id: I4956afebf324910a42c50d14c95e70f4603ffef9
@steveloughran (Contributor, Author) commented:

Rerun of individual tests:

  • standalone tests all pass
  • except for ITestCustomSigner

But that also fails when I test with the old SDK version, and works with a non-S3 Express bucket. Not a regression. Created HADOOP-19048.

[INFO] Results:
[INFO] 
[ERROR] Errors: 
[ERROR]   ITestCustomSigner.testCustomSignerAndInitializer:135->runStoreOperationsAndVerify:155->lambda$runStoreOperationsAndVerify$0:160 » AWSBadRequest
[ERROR]   ITestCustomSigner.testCustomSignerAndInitializer:135->runStoreOperationsAndVerify:155->lambda$runStoreOperationsAndVerify$0:160 » AWSBadRequest
[INFO] 

@mukund-thakur (Contributor) left a comment:

+1

@hadoop-yetus commented:

💔 -1 overall

Vote Subsystem Runtime Logfile Comment
+0 🆗 reexec 0m 32s Docker mode activated.
_ Prechecks _
+1 💚 dupname 0m 0s No case conflicting files found.
+0 🆗 codespell 0m 0s codespell was not available.
+0 🆗 detsecrets 0m 0s detect-secrets was not available.
+0 🆗 xmllint 0m 0s xmllint was not available.
+0 🆗 markdownlint 0m 0s markdownlint was not available.
+0 🆗 shelldocs 0m 0s Shelldocs was not available.
+1 💚 @author 0m 0s The patch does not contain any @author tags.
-1 ❌ test4tests 0m 0s The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.
_ trunk Compile Tests _
+0 🆗 mvndep 14m 26s Maven dependency ordering for branch
+1 💚 mvninstall 30m 47s trunk passed
+1 💚 compile 16m 0s trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 compile 14m 49s trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
+1 💚 mvnsite 18m 12s trunk passed
+1 💚 javadoc 8m 25s trunk passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 javadoc 7m 30s trunk passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
+1 💚 shadedclient 47m 28s branch has no errors when building and testing our client artifacts.
_ Patch Compile Tests _
+0 🆗 mvndep 0m 57s Maven dependency ordering for patch
+1 💚 mvninstall 28m 46s the patch passed
+1 💚 compile 15m 49s the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 javac 15m 49s the patch passed
+1 💚 compile 14m 44s the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
+1 💚 javac 14m 44s the patch passed
+1 💚 blanks 0m 0s The patch has no blanks issues.
+1 💚 mvnsite 13m 29s the patch passed
+1 💚 shellcheck 0m 0s No new issues.
+1 💚 javadoc 8m 18s the patch passed with JDK Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04
+1 💚 javadoc 7m 27s the patch passed with JDK Private Build-1.8.0_392-8u392-ga-1~20.04-b08
+1 💚 shadedclient 48m 53s patch has no errors when building and testing our client artifacts.
_ Other Tests _
-1 ❌ unit 733m 45s /patch-unit-root.txt root in the patch passed.
+1 💚 asflicense 1m 35s The patch does not generate ASF License warnings.
1005m 25s
Reason Tests
Failed junit tests hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy
hadoop.hdfs.server.datanode.TestDirectoryScanner
hadoop.hdfs.server.namenode.TestNameNodeMXBean
Subsystem Report/Notes
Docker ClientAPI=1.44 ServerAPI=1.44 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6467/2/artifact/out/Dockerfile
GITHUB PR #6467
Optional Tests dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient codespell detsecrets xmllint markdownlint shellcheck shelldocs
uname Linux c56c91a2e38e 5.15.0-88-generic #98-Ubuntu SMP Mon Oct 2 15:18:56 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Build tool maven
Personality dev-support/bin/hadoop.sh
git revision trunk / e628053
Default Java Private Build-1.8.0_392-8u392-ga-1~20.04-b08
Multi-JDK versions /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.21+9-post-Ubuntu-0ubuntu120.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_392-8u392-ga-1~20.04-b08
Test Results https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6467/2/testReport/
Max. process+thread count 4137 (vs. ulimit of 5500)
modules C: hadoop-project hadoop-tools/hadoop-aws . U: .
Console output https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-6467/2/console
versions git=2.25.1 maven=3.6.3 shellcheck=0.7.0
Powered by Apache Yetus 0.14.0 https://yetus.apache.org

This message was automatically generated.

@steveloughran steveloughran merged commit d274f77 into apache:trunk Jan 21, 2024
1 of 4 checks passed
asfgit pushed a commit that referenced this pull request Jan 29, 2024
This update ensures that the timeout set in fs.s3a.connection.request.timeout is passed down
to calls to CreateSession made in the AWS SDK to get S3 Express session tokens.

Contributed by Steve Loughran
jiajunmao pushed a commit to jiajunmao/hadoop-MLEC that referenced this pull request Feb 6, 2024
…e#6467)


This update ensures that the timeout set in fs.s3a.connection.request.timeout is passed down
to calls to CreateSession made in the AWS SDK to get S3 Express session tokens.

Contributed by Steve Loughran
slfan1989 pushed a commit that referenced this pull request Feb 10, 2024
This update ensures that the timeout set in fs.s3a.connection.request.timeout is passed down
to calls to CreateSession made in the AWS SDK to get S3 Express session tokens.

Contributed by Steve Loughran
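
The fs.s3a.connection.request.timeout option named in the follow-up commits is a standard Hadoop configuration property; a minimal sketch of setting it is below, with an illustrative value rather than the S3A default.

```xml
<!-- core-site.xml sketch: request timeout for S3 API calls, which
     HADOOP-19045 also applies to the CreateSession call used to obtain
     S3 Express session tokens. The 30s value is illustrative only. -->
<property>
  <name>fs.s3a.connection.request.timeout</name>
  <value>30s</value>
</property>
```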