Elasticsearch version (bin/elasticsearch --version): 7.11.1
Plugins installed: [repository-hdfs]
JVM version (8 or 15):
Description of the problem including expected versus actual behavior:
Steps to reproduce:
If HDFS is configured to use over-the-wire encryption (i.e. dfs.encrypt.data.transfer is set to true), then the repository-hdfs plugin fails when creating a snapshot. The reason appears to be that this setting causes the HDFS client to use CryptoOutputStream, which requires additional runtime permissions that the plugin has not been granted.
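For reference, this is the cluster-side setting that triggers the behavior (a minimal hdfs-site.xml sketch; only the property name comes from this report, the surrounding layout is just standard Hadoop configuration):

<configuration>
  <!-- Enable over-the-wire encryption for HDFS data transfer.
       With this set, the HDFS client wraps its data streams in
       CryptoOutputStream/CryptoInputStream. -->
  <property>
    <name>dfs.encrypt.data.transfer</name>
    <value>true</value>
  </property>
</configuration>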
To make matters worse, the version of CryptoOutputStream in the Hadoop 2 client is not compatible with Java 9 and later; it looks like this has only been addressed in the Hadoop 3 client. Here is a stack trace on Java 8:
java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "accessClassInPackage.sun.nio.ch")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472) ~[?:1.8.0_275]
at java.security.AccessController.checkPermission(AccessController.java:886) ~[?:1.8.0_275]
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549) ~[?:1.8.0_275]
at java.lang.SecurityManager.checkPackageAccess(SecurityManager.java:1564) ~[?:1.8.0_275]
at java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:814) ~[?:1.8.0_275]
at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[?:1.8.0_275]
at org.apache.hadoop.crypto.CryptoStreamUtils.freeDB(CryptoStreamUtils.java:39) ~[?:?]
at org.apache.hadoop.crypto.CryptoInputStream.freeBuffers(CryptoInputStream.java:683) ~[?:?]
at org.apache.hadoop.crypto.CryptoInputStream.close(CryptoInputStream.java:317) ~[?:?]
at java.io.FilterInputStream.close(FilterInputStream.java:181) ~[?:1.8.0_275]
at org.apache.hadoop.hdfs.DataStreamer.closeStream(DataStreamer.java:996) ~[?:?]
at org.apache.hadoop.hdfs.DataStreamer.closeInternal(DataStreamer.java:839) ~[?:?]
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:834) ~[?:?]
…on fails (#78409)
Until recently, if a user configured over-the-wire encryption for repository-hdfs they would get an exception. That was fixed in a related ticket in two ways: (1) JVM permissions were opened up for Hadoop 2, and (2) support for the Hadoop 3 HDFS client was added. This commit adds configuration to a couple of integration tests so that they fail if over-the-wire encryption is not working.
Relates #76897, #76734
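For anyone hitting this on an affected version, the denied permission in the stack trace above suggests the shape of the grant involved. A hypothetical plugin-security.policy fragment (illustrative only; the exact grants shipped in the fix may differ):

grant {
  // Needed because CryptoStreamUtils frees direct buffers via
  // classes in the sun.nio.ch package, which the security manager
  // denies to the plugin's class loader (see stack trace above).
  permission java.lang.RuntimePermission "accessClassInPackage.sun.nio.ch";
};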