HADOOP-18869: [ABFS] Fixing Behavior of a File System APIs on root path #6003
Conversation
💔 -1 overall
This message was automatically generated.
reviewed; all comments cover the tests. production code LGTM
final AzureBlobFileSystem fs = getFileSystem();
Path testFile = path(AbfsHttpConstants.ROOT_PATH);
try {
  fs.create(testFile, true);
Use LambdaTestUtils.intercept().
Taken
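A minimal sketch of how the suggested LambdaTestUtils.intercept() call could replace the try/catch around the root create; names are taken from the snippets in this thread, the static import of org.apache.hadoop.test.LambdaTestUtils.intercept is assumed, and this is not the committed code:

final AzureBlobFileSystem fs = getFileSystem();
Path testFile = path(AbfsHttpConstants.ROOT_PATH);
// intercept() fails the test if no exception is thrown, and returns the caught
// exception already typed as AbfsRestOperationException.
intercept(AbfsRestOperationException.class,
    () -> fs.create(testFile, true));

A fuller variant that also checks the HTTP status code appears later in this thread.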
@@ -146,6 +149,26 @@ public void testCreateNonRecursive2() throws Exception {
  assertIsFile(fs, testFile);
}

@Test
public void testCreateOnRoot() throws Exception {
AbstractContractRootDirectoryTest.testCreateFileOverRoot() should be doing some of this.
- are those tests running? that's ITestAbfsFileSystemContractRootDirectory
- can you add the createNonRecursive test there?
I checked and for some reason these tests are getting skipped even after adding the required configuration. Will debug more on this and get back.
So, I checked further and found the tests were blocked because of the setting here: https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/test/resources/abfs.xml#L20
We have to set the required config fs.contract.test.root-tests-enabled to true there for these tests to run.
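For illustration, a small sketch of the same switch set programmatically on a Configuration; this is an assumption about usage, while the actual change in this PR flips the flag in the abfs.xml contract resource linked above:

import org.apache.hadoop.conf.Configuration;

public class EnableRootContractTests {
  // Hypothetical helper: returns a Configuration with the root-directory
  // contract tests enabled, using the same key that abfs.xml sets.
  public static Configuration withRootTestsEnabled() {
    Configuration conf = new Configuration();
    conf.setBoolean("fs.contract.test.root-tests-enabled", true);
    return conf;
  }
}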
aah. well, do that then. it's off by default to stop people accidentally deleting their local disk and then complaining. (this has never happened, but...)
Makes sense..
Set the config to true.
Also, I was trying to add createNonRecursive in AbstractContractRootDirectoryTest.
Looks like ContractTestUtils does not have support for this HDFS API.
Directly getting a FileSystem instance and calling createNonRecursive could cause failures when this test runs with file systems other than ABFS.
good point. but at the same time, adding it will see what breaks...if you touch a file in the hdfs source tree (say its contract xml file...it can be left out of the final commit) then we see what it does, and for s3a we will find out later and fix.
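If the contract test does get added, a rough sketch of what it might look like in AbstractContractRootDirectoryTest, assuming plain FileSystem#createNonRecursive and LambdaTestUtils.intercept; the test name and expected exception type here are guesses, not what was committed:

// Hypothetical addition to AbstractContractRootDirectoryTest; stores that do
// not implement createNonRecursive would need a way to opt out.
@Test
public void testCreateNonRecursiveFileOverRoot() throws Throwable {
  Path root = new Path("/");
  // Expect some IOException subclass; the concrete type is store-specific.
  intercept(IOException.class, () ->
      getFileSystem().createNonRecursive(root, true, 4096, (short) 1,
          getFileSystem().getDefaultBlockSize(root), null));
}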
// Check if the attribute is retrievable
fs.setListenerOperation(FSOperationType.GET_ATTR);
byte[] rv = fs.getXAttr(testPath, attributeName1);
assertTrue(Arrays.equals(rv, attributeValue1));
AssertJ assertThat() supports array equality, so use it, and add a description.
Taken
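For reference, a sketch of the AssertJ form with a description, using the variable names from the snippet above; the exact assertion wording in the committed test may differ:

// Assumes a static import of org.assertj.core.api.Assertions.assertThat.
// containsExactly() compares the byte array element by element, replacing
// Arrays.equals(), and describedAs() supplies the requested description.
assertThat(fs.getXAttr(testPath, attributeName1))
    .describedAs("value of attribute %s on %s", attributeName1, testPath)
    .containsExactly(attributeValue1);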
fs.setListenerOperation(FSOperationType.GET_ATTR);
byte[] rv = fs.getXAttr(testPath, attributeName1);
assertTrue(Arrays.equals(rv, attributeValue1));
assertEquals(decodedAttributeValue1, fs.getAbfsStore().decodeAttribute(rv));
add description
Added
// Check all the attributes present and previous Attribute not overridden
rv = fs.getXAttr(testPath, attributeName1);
assertTrue(Arrays.equals(rv, attributeValue1));
same comments as above. remember I always insist on a description, and want AssertJ for anything where it makes sense. save time by doing this from the start
Sorry for missing this out...
Will remember this before creating a PR in future
if (f.isRoot()) {
  throw new AbfsRestOperationException(HTTP_CONFLICT,
      AzureServiceErrorCode.PATH_CONFLICT.getErrorCode(),
      "Cannot create file on the root path",
make a constant string used in both places
Taken
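A minimal sketch of the shared constant being asked for; the constant name and the class it would live in (e.g. AbfsErrors) are assumptions, not the actual change:

// Hypothetical shared constant:
public static final String ERR_CREATE_ON_ROOT = "Cannot create file on the root path";

// ...referenced where the exception is raised (the inner-exception argument
// shown here as null is illustrative; the diff above cuts off before it),
// and again in the test that asserts on the failure message.
throw new AbfsRestOperationException(HTTP_CONFLICT,
    AzureServiceErrorCode.PATH_CONFLICT.getErrorCode(),
    ERR_CREATE_ON_ROOT,
    null);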
💔 -1 overall
This message was automatically generated.
suggested test changes
public void testCreateOnRoot() throws Exception {
  final AzureBlobFileSystem fs = getFileSystem();
  Path testFile = path(AbfsHttpConstants.ROOT_PATH);
  try {
you shouldn't need the double catch. intercept() will return the caught exception, typecast to the class of arg 1, so it can be asserted on. and if an exception is not the one expected, the stack trace is too important to lose. so rethrow it or use it as the cause of an assertion error
AbfsRestOperationException e = intercept(...);
if (e.getStatusCode() != HTTP_CONFLICT) {
  // rethrow if it's not the expected one.
  throw e;
}
Taken
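Filling in the reviewer's sketch under assumptions about how it was taken up; the intercepted call and constants come from the snippets in this thread, and this is illustrative rather than the committed test:

// intercept() returns the caught exception already typed as
// AbfsRestOperationException, so it can be inspected afterwards.
AbfsRestOperationException e = intercept(AbfsRestOperationException.class,
    () -> fs.create(testFile, true));
if (e.getStatusCode() != HTTP_CONFLICT) {
  // Not the expected conflict: rethrow so the original stack trace is kept.
  throw e;
}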
🎊 +1 overall
This message was automatically generated.
LGTM
+1
Merge to trunk. As usual, do a PR and retest against branch-3.3 for a merge there.
OK, I just gave the wrong credit. Will revert and reapply with the correct author. Sorry, it's just I had Anmol on the clipboard.
…6003) Contributed by Anuj Modi
…pache#6003) Contributed by Anuj Modi
…pache#6003) Contributed by Anmol Asrani
…ot path (apache#6003)" This reverts commit 6c6df40. ...so as to give the correct credit
…pache#6003) Contributed by Anuj Modi
…r() on root path (#6592) This reverts most of HADOOP-18869: [ABFS] Fix behavior of a File System APIs on root path (#6003). Calling getXAttr("/") or setXAttr("/") on an abfs container will fail with `Operation failed: "The request URI is invalid.", HTTP 400 Bad Request` This change is to ensure: * Consistency across ADLS clients * Consistency across authentication mechanisms. Contributed by Anuj Modi
Description of PR
Jira Ticket: https://issues.apache.org/jira/browse/HADOOP-18869
The following changes are included in this PR:
Also, ABFS uses the DFS endpoint to interact with Azure services. As per the official documentation, these operations on the DFS endpoint support only ASCII characters. As a fix, tests were modified to work only with ASCII characters.
How was this patch tested?
Existing tests were modified and more tests were added.
The whole test suite was run to verify the changes. Test results are pasted below.
For code changes:
If applicable, have you updated the LICENSE, LICENSE-binary, NOTICE-binary files?