Re-enable integration tests for secure HDFS fixtures #76793
Conversation
Pinging @elastic/es-core-features (Team:Core/Features)
This does highlight that operating Elasticsearch on JDK 16 against secured HDFS installations requires the Elasticsearch JVM options to add exports for the internal java security packages in order for the HDFS client code in any plugins to function correctly. This should be highlighted in the documentation.
Indeed. Perhaps we should add something to the default jvm.options file? This --add-exports business is starting to get a bit messy. There's still no concrete timeline on when --illegal-access will be removed; it still exists in JDK 17, which is an LTS release, so I suspect it won't go away anytime soon. @rjernst perhaps you have some thoughts here.
Does the Hadoop 3 client still require this? If we upgraded to the Hadoop 3 client, could we avoid having to pass extra JVM args? Hadoop 3 supposedly has wire compatibility with Hadoop 2: https://blog.cloudera.com/upgrading-clusters-workloads-hadoop-2-hadoop-3/
Since we've been moving away from the singular jvm.options file, and would rather have additional options in …
Is there any way to provide a more seamless user experience, such as dynamically adding the required JVM arguments in the presence of the installed plugin? Can we wire this into the JVM arguments parser somehow?
Maybe we could add an extra hook in the plugin installation process so that plugins that need certain JVM options to function can provide them during installation? Then we could potentially do some kind of validation. Not sure how widespread a problem this is, though, and whether it warrants such a change yet. Perhaps we should start a new issue for the discussion?
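As a rough sketch of the hook idea above (all names here are hypothetical, not real Elasticsearch APIs): a plugin could declare the JVM options it requires, and the installer could collect and validate them at install time.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch only: PluginJvmOptions, HdfsPlugin, and PluginInstaller
// are illustrative names, not part of the actual Elasticsearch plugin API.
interface PluginJvmOptions {
    List<String> requiredJvmOptions();
}

class HdfsPlugin implements PluginJvmOptions {
    @Override
    public List<String> requiredJvmOptions() {
        // The specific export below is an assumption based on this conversation
        // (the Hadoop 2 client touching internal Kerberos classes on JDK 16).
        return List.of("--add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED");
    }
}

public class PluginInstaller {
    // Collect options from all installed plugins, rejecting anything that is
    // not an --add-exports/--add-opens style flag (a simple validation step).
    static List<String> collectJvmOptions(List<PluginJvmOptions> plugins) {
        List<String> options = new ArrayList<>();
        for (PluginJvmOptions plugin : plugins) {
            for (String option : plugin.requiredJvmOptions()) {
                if (!option.startsWith("--add-exports=") && !option.startsWith("--add-opens=")) {
                    throw new IllegalArgumentException("disallowed JVM option: " + option);
                }
                options.add(option);
            }
        }
        return options;
    }

    public static void main(String[] args) {
        // The collected options could then be written into a jvm.options.d file.
        for (String option : collectJvmOptions(List.of(new HdfsPlugin()))) {
            System.out.println(option);
        }
    }
}
```

The validation step is where the "some kind of validation" mentioned above would live; restricting plugins to module-access flags keeps them from injecting arbitrary JVM arguments.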
I suspect that we will still require this, but I'm doing some quick testing to find out.
For all practical purposes, we used to be running Elasticsearch with --illegal-access in its permissive default mode. Given that our default runtime configuration was to allow all unprotected access in ES code, it doesn't seem any worse to simply add this --add-exports option.
I just confirmed locally that with the Hadoop 3 client we do not need those --add-exports calls. And it seems to work fine against a Hadoop 2 HDFS. And it fixes #76734. That might be the way to go.
I put up #76897 as a possible alternative way of doing this by upgrading to Hadoop 3. Obviously a little riskier, so probably worth discussing.
@masseyke If I understand the change correctly, the way it avoids the illegal access is that Hadoop 3 has a proper client jar, which no longer needs to open up JDK Kerberos classes? The PR, though, still includes the exports CLI option for testclusters; is that still needed?
@rjernst Yes, the hadoop 3 client avoids the illegal access (and a NoClassDefFoundError that we still get with this PR) because they've rewritten the client classes to not depend on sun.* classes. |
Closing this in favor of #76897 |
When running Elasticsearch on JDK 16, we must add exports for the internal Java security packages in a few places in order to make use of the HDFS client in Kerberos-secured environments:
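As a sketch of what such an entry looks like (the exact package list is an assumption; `sun.security.krb5` is the internal Kerberos package the conversation suggests the Hadoop 2 client touches), a file under `config/jvm.options.d/` might contain:

```
# Sketch only: export the internal Kerberos package the Hadoop 2 HDFS client
# reaches into on JDK 16. Verify the exact packages against the plugin docs.
--add-exports=java.security.jgss/sun.security.krb5=ALL-UNNAMED
```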
Resolves #68075