Remove commons-collections 3.2.2 #2924
Conversation
Codecov Report
@@ Coverage Diff @@
## main #2924 +/- ##
============================================
- Coverage 62.29% 58.83% -3.46%
+ Complexity 3337 3088 -249
============================================
Files 266 266
Lines 19650 19668 +18
Branches 3329 3334 +5
============================================
- Hits 12240 11571 -669
- Misses 5783 6437 +654
- Partials 1627 1660 +33
Force-pushed from e3b5f98 to 4353b02
@@ -158,9 +158,49 @@ private class ResolvedIndicesProvider implements IndicesProvider {
 private final ImmutableSet.Builder<String> remoteIndices;
 // set of previously resolved index requests to avoid resolving
 // the same index more than once while processing bulk requests
- private final Set<MultiKey> alreadyResolved;
+ private final Set<AlreadyResolvedKey> alreadyResolved;
Not sure, but using an unbounded collection could lead to a memory leak. So a question to those who know this part better than me: is it possible?
If we wanted to prevent any chance of a memory leak, I believe we would need to cap the size of the set. The problem is determining an appropriate cap. I believe the set is cleared with each request, so it should not persist for long; the risk would be limited to a memory issue within a single bulk request. To choose a cap, we could look at the expected size of each AlreadyResolvedKey object and decide how much memory we want to allow the task to allocate. Obviously, we will still hit issues at certain extremes unless we can start to calculate the cap from node resources.
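A cap like the one discussed could be sketched as follows. This is a hypothetical `CappedSet`, not code from the PR; the overflow policy (refuse new entries once the cap is hit, so the caller simply falls back to re-resolving) is an assumption, and choosing the cap value is exactly the open question from the comment above.

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: a set that refuses to grow past a fixed cap, so a
// pathological bulk request cannot make the dedup cache grow without bound.
final class CappedSet<T> {
    private final Set<T> delegate = new HashSet<>();
    private final int maxSize;

    CappedSet(int maxSize) {
        this.maxSize = maxSize;
    }

    /**
     * Returns true only when the element was newly added. Once the cap is
     * reached, new elements are refused; the caller then behaves as if the
     * request had not been seen before (i.e. resolves it again).
     */
    boolean add(T element) {
        if (delegate.contains(element)) {
            return false; // already cached: duplicate request
        }
        if (delegate.size() >= maxSize) {
            return false; // cap reached: stop caching, caller re-resolves
        }
        return delegate.add(element);
    }

    int size() {
        return delegate.size();
    }
}
```

The trade-off of refusing (rather than evicting) is that a request past the cap is re-resolved redundantly, which costs CPU but bounds memory.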
Overall looks good to me. I just left one question about the use of the CCR parameter.
 private final IndicesOptions indicesOptions;
 private final boolean enableCrossClusterResolution;
Sorry if I am missing something, but how is this field being used? Is it part of the bulk request logic in some way? I am not particularly familiar with this; are we using this field to differentiate between two instances of the same index operation that have different CCR permissions?
enableCrossClusterResolution is determined here:

security/src/main/java/org/opensearch/security/resolver/IndexResolverReplacer.java, line 375 (commit 1089bd6):

final boolean enableCrossClusterResolution = localRequest instanceof FieldCapabilitiesRequest …
I played with it a bit. If I remove it, all tests pass except the PIT tests. TBH, I did not investigate why.
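Based on the (truncated) line referenced above and the request types visible in the second diff hunk of this PR, the flag computation amounts to an instanceof chain. The sketch below uses empty stand-in classes because the real request types (`FieldCapabilitiesRequest`, `SearchRequest`, `ResolveIndexAction.Request`, here shortened to `ResolveIndexRequest`) live in the OpenSearch codebase:

```java
// Hypothetical stand-ins for the OpenSearch request types named in the diff.
class FieldCapabilitiesRequest {}
class SearchRequest {}
class ResolveIndexRequest {} // stands in for ResolveIndexAction.Request

final class CrossClusterResolution {
    // Sketch of the flag computation: cross-cluster index resolution is
    // enabled only for the request types that support it.
    static boolean enableCrossClusterResolution(Object localRequest) {
        return localRequest instanceof FieldCapabilitiesRequest
            || localRequest instanceof SearchRequest
            || localRequest instanceof ResolveIndexRequest;
    }
}
```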
No worries.
This change looks good to me.
@@ -336,9 +376,13 @@ public String[] provide(String[] original, Object localRequest, boolean supports
 || localRequest instanceof SearchRequest
 || localRequest instanceof ResolveIndexAction.Request;
 // skip the whole thing if we have seen this exact resolveIndexPatterns request
 if (alreadyResolved.add(
This change looks fine to me. I was also looking at where alreadyResolved was used, and it looks like this line is primarily why a MultiKey was chosen. IMO the logic utilizing MultiKey was hard to follow, and this is clearer.
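The dedup check in the hunk above only needs value-based equality on the resolution inputs. A minimal sketch of such a purpose-built key, assuming hypothetical field names (the PR's actual `AlreadyResolvedKey` fields may differ), shows why a Java record is a natural replacement for commons-collections' `MultiKey`:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/**
 * Hypothetical sketch of a purpose-built key replacing MultiKey.
 * A record supplies value-based equals()/hashCode() for free, which is all
 * the "have we resolved this exact request before?" check requires.
 * Field names here are assumptions, not the PR's actual fields.
 */
record AlreadyResolvedKey(Class<?> requestClass, List<String> originalIndices) {

    /** Mirrors the `if (alreadyResolved.add(...))` guard: true on first sight. */
    static boolean firstTimeSeen(Set<AlreadyResolvedKey> alreadyResolved, AlreadyResolvedKey key) {
        // Set.add returns false when an equal key is already present, so a
        // duplicate item in a bulk request skips re-resolution.
        return alreadyResolved.add(key);
    }
}
```

Compared to `MultiKey`, the record names each component, so a reader can see what makes two resolution requests "the same" without decoding positional key slots.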
Force-pushed from 4353b02 to 1089bd6
@willyborankin Can you resolve the conflict in this PR?
sure
Signed-off-by: Andrey Pleskach <[email protected]>
Force-pushed from 1089bd6 to 59e721b
The backport to 2.x failed. To backport manually, run these commands in your terminal:

# Fetch latest updates from GitHub
git fetch
# Create a new working tree
git worktree add .worktrees/backport-2.x 2.x
# Navigate to the new working tree
cd .worktrees/backport-2.x
# Create a new branch
git switch --create backport/backport-2924-to-2.x
# Cherry-pick the merged commit of this pull request and resolve the conflicts
git cherry-pick -x --mainline 1 49cbf520a9f45ab5e975dfe0b7c9ff489a57c338
# Push it to GitHub
git push --set-upstream origin backport/backport-2924-to-2.x
# Go back to the original working tree
cd ../..
# Delete the working tree
git worktree remove .worktrees/backport-2.x

Then, create a pull request where the base branch is 2.x and the compare/head branch is backport/backport-2924-to-2.x.
Signed-off-by: Andrey Pleskach <[email protected]>
(cherry picked from commit 49cbf52)
Signed-off-by: Andrey Pleskach <[email protected]>
This change combines the many updates from the following commits:
* 5f62e8a dependabot: bump commons-io:commons-io from 2.11.0 to 2.13.0 (opensearch-project#3074)
* 2f69a10 bump com.github.wnameless.json:json-base from 2.4.0 to 2.4.1 (opensearch-project#3062)
* c0e50da dependabot: bump org.cryptacular:cryptacular from 1.2.4 to 1.2.5 (opensearch-project#3071)
* d3488e8 dependabot: bump kafka_version from 3.5.0 to 3.5.1 (opensearch-project#3041)
* ab6778d Update ospackage, checker-qual, zcxvbn and error_prone_annotations, camel-xmlsecurity (opensearch-project#3023)
* 0e6608d Bump JSON libs (opensearch-project#2926)
* df07bea SAML 4.3.0 addition persmission (opensearch-project#2987)
* e5348eb Change maven repo location for compatibility check (opensearch-project#2980)
* 4a1ec53 Bump jaxb to 2.3.8 (opensearch-project#2977)
* 9599155 Bump guava to 32.1.1-jre (opensearch-project#2976)
* 06eed60 dependabot: bump org.glassfish.jaxb:jaxb-runtime from 2.3.4 to 4.0.3 (opensearch-project#2970)
* 1113244 Bump eventbus to 3.3.1 (opensearch-project#2965)
* 99ff7b3 dependabot: bump org.apache.bcel:bcel from 6.6.0 to 6.7.0 (opensearch-project#2969)
* 0794c3f dependabot: bump jakarta.xml.bind:jakarta.xml.bind-api (opensearch-project#2968)
* 9e6aab3 dependabot: bump com.google.j2objc:j2objc-annotations from 1.3 to 2.8 (opensearch-project#2963)
* 8227f64 dependabot: bump com.sun.istack:istack-commons-runtime (opensearch-project#2960)
* 8e044a6 dependabot: bump org.apiguardian:apiguardian-api from 1.0.0 to 1.1.2 (opensearch-project#2964)
* 49cbf52 Remove commons-collections 3.2.2 (opensearch-project#2924)
* 092e8f5 Bump SAML libs (opensearch-project#2927)
* 8ab7cb4 Resolve CVE-2023-2976 by forcing use of Guava 32.0.1 (opensearch-project#2937)
* 4eef662 Clean up and bump Apache libs (opensearch-project#2925)
* 9a72355 Bump BouncyCastle from jdk15on to jdk15to18 (opensearch-project#2901)
* e4f4817 [Enhancement] Parallel test jobs for CI (opensearch-project#2861)
* d871af3 Update snappy to 1.1.10.1 and guava to 32.0.1-jre (opensearch-project#2886)
* c808692 Format everything (opensearch-project#2866)

Signed-off-by: Peter Nied <[email protected]>
Description
Removed commons-collections 3.2.2.

Reasons:
- LogCapturingAppender: replaced CircularFifoBuffer with Guava's EvictingQueue
- ResolvedIndicesProvider: replaced MultiKey with a custom key class (AlreadyResolvedKey)

Issues Resolved
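The CircularFifoBuffer-to-EvictingQueue swap above works because both are bounded FIFO buffers: once full, adding a new element silently evicts the oldest one, which is exactly what a "keep the last N log lines" appender needs. A minimal pure-JDK sketch of that behavior (the PR itself uses Guava's EvictingQueue; `EvictingBuffer` below is a hypothetical stand-in for illustration):

```java
import java.util.ArrayDeque;
import java.util.List;

// Hypothetical stand-in illustrating the bounded-FIFO contract shared by
// commons-collections' CircularFifoBuffer and Guava's EvictingQueue.
final class EvictingBuffer<T> {
    private final ArrayDeque<T> deque = new ArrayDeque<>();
    private final int capacity;

    EvictingBuffer(int capacity) {
        this.capacity = capacity;
    }

    void add(T element) {
        if (deque.size() == capacity) {
            deque.removeFirst(); // full: evict the oldest element
        }
        deque.addLast(element);
    }

    /** Oldest-to-newest view of the retained elements. */
    List<T> snapshot() {
        return List.copyOf(deque);
    }
}
```

Because eviction is automatic, the appender never grows past its capacity no matter how many log events arrive, which is why the replacement is behavior-preserving.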
Testing
[Please provide details of testing done: unit testing, integration testing and manual testing]
Check List
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.