Which version of AzCopy was used?
Note: The version is visible when running AzCopy without any arguments.
10.22.2
Which platform are you using? (e.g. Windows, Mac, Linux)
any
What command did you run?
Note: Please remove the SAS to avoid exposing your credentials. If you cannot remember the exact command, please retrieve it from the beginning of the log file.
./azcopy.exe rm https://XXXX.dfs.core.windows.net/XXXX/%23%2C%25%2C%3F?SAS --from-to=BlobFSTrash --recursive --log-level=DEBUG
What problem was encountered?
Removal of the path fails with the error: invalid URL escape "%,?"
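For context, that message is Go's net/url escape-validation error: a "%" that is not followed by two hexadecimal digits produces an EscapeError. A minimal stdlib-only sketch (the account and filesystem names are placeholders) of how the raw, unescaped name #,%,? most likely triggers exactly this message once the request URL is parsed:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// If the raw directory name "#,%,?" is joined into the URL without
	// percent-encoding, the "%" is read as the start of an escape
	// sequence, and the ",?" that follows is not a pair of hex digits.
	_, err := url.Parse("https://account.dfs.core.windows.net/fs/#,%,?")
	fmt.Println(err)
	// parse "https://account.dfs.core.windows.net/fs/#,%,?": invalid URL escape "%,?"
}
```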
How can we reproduce the problem in the simplest way?
Create a directory named #,%,? in an HNS-enabled account and run the above command.
Have you found a mitigation/solution?
You could use the blob endpoint instead.
Related issue: microsoft/AzureStorageExplorer#7692
The root cause of the issue is in the datalake Go SDK: FileSystemClient.NewDirectoryClient and FileSystemClient.NewFileClient do not call url.PathEscape on the directoryName or fileName they are given. See ContainerClient.NewBlobClient for how this should be handled. Special care should be taken so that both the APIs that hit blob endpoints and those that hit dfs endpoints continue to work after the fix is made.
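A minimal sketch of the shape of such a fix, using only the standard library; escapePath and the URLs here are hypothetical, not the SDK's actual code. It applies the same url.PathEscape treatment that ContainerClient.NewBlobClient uses, but per path segment so that "/" separators in a nested directory name are preserved (whether to escape per segment or escape the whole name is exactly the blob-vs-dfs care mentioned above):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// escapePath percent-encodes each segment of a datalake path while keeping
// the "/" separators intact, so nested directory names survive the round trip.
// (Hypothetical helper; the real SDK fix may choose a different strategy.)
func escapePath(name string) string {
	segments := strings.Split(name, "/")
	for i, s := range segments {
		segments[i] = url.PathEscape(s)
	}
	return strings.Join(segments, "/")
}

func main() {
	fileSystemURL := "https://account.dfs.core.windows.net/filesystem" // placeholder
	directoryName := "#,%,?"

	// This is the escaping step that NewDirectoryClient/NewFileClient skip today.
	fmt.Println(fileSystemURL + "/" + escapePath(directoryName))
	// https://account.dfs.core.windows.net/filesystem/%23%2C%25%2C%3F
}
```

Note that the output matches the percent-encoded path in the azcopy command above.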