Fix direct memory leak #421
Merged
Conversation
pv3ntur1 reviewed Apr 27, 2023
carapace-server/src/main/java/org/carapaceproxy/server/cache/ContentsCache.java (comment resolved)
pv3ntur1 reviewed Apr 27, 2023
Just a comment
hamadodene force-pushed the fix/direct-memory-leak branch from d0f699f to e0bb575 on April 27, 2023 13:20
hamadodene force-pushed the fix/direct-memory-leak branch from e0bb575 to 737f928 on April 27, 2023 14:13
hamadodene force-pushed the fix/direct-memory-leak branch 2 times, most recently from fca8d7b to 576acd8 on April 28, 2023 11:20
hamadodene force-pushed the fix/direct-memory-leak branch 3 times, most recently from 315bfcc to 4baae49 on April 28, 2023 15:15
hamadodene force-pushed the fix/direct-memory-leak branch from 4baae49 to b677aab on April 28, 2023 15:41
hamadodene force-pushed the fix/direct-memory-leak branch from 5d28108 to defa7ff on April 30, 2023 06:07
hamadodene force-pushed the fix/direct-memory-leak branch from defa7ff to 2b490f7 on April 30, 2023 06:07
@pv3nturi Upgrade reactor netty for potential memory leak in netty 4.1.81.Final pooled allocator #424
hamadodene force-pushed the fix/direct-memory-leak branch 7 times, most recently from d7d2bcb to ab0694c on May 2, 2023 12:18
pv3ntur1 approved these changes May 3, 2023
LGTM
…. Default unpooled
hamadodene force-pushed the fix/direct-memory-leak branch from ab0694c to 6499066 on May 3, 2023 13:28
dmercuriali approved these changes May 3, 2023
In the current implementation we use PooledByteBufAllocator. PooledByteBufAllocator keeps a ByteBuf cache for each thread: when release is called on a ByteBuf, the buffer is put back into that thread's cache so it can be reused without allocating a new one.
Because these caches are per thread, if threads stop being used and new ones are created, the idle threads keep holding their ByteBuf caches, and this inevitably leads to an OOM. We tried enabling the -Dio.netty.allocator.cacheTrimIntervalMillis option, but it did not seem to help.
Given how we use the Caffeine cache, it is convenient to switch to unpooled ByteBuf allocation for the Carapace cache only.
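For reference, a minimal sketch of the two allocation styles using the plain Netty API (not Carapace code, just an illustration of the behaviour described above):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.UnpooledByteBufAllocator;

public class AllocatorComparison {
    public static void main(String[] args) {
        // Pooled: released buffers go back to arenas / per-thread caches for reuse.
        ByteBuf pooled = PooledByteBufAllocator.DEFAULT.directBuffer(1024);
        pooled.release(); // memory returns to the pool, it is not freed

        // Unpooled: released buffers free their direct memory immediately.
        ByteBuf unpooled = UnpooledByteBufAllocator.DEFAULT.directBuffer(1024);
        unpooled.release(); // direct memory is deallocated right away
    }
}
```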
The fix adds a new configuration property:
cache.allocator.usepooledbytebufallocator=true/false
(default false, i.e. unpooled; set it to true to keep using the pooled allocator).
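A rough sketch of how this property could drive the allocator choice; the CacheAllocatorSelector class and the use of java.util.Properties are hypothetical, only the property name and the two Netty allocators come from this PR:

```java
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.PooledByteBufAllocator;
import io.netty.buffer.UnpooledByteBufAllocator;
import java.util.Properties;

// Hypothetical helper: picks the allocator used for cache entries based on the
// cache.allocator.usepooledbytebufallocator property (default false -> unpooled).
public final class CacheAllocatorSelector {
    public static ByteBufAllocator fromConfiguration(Properties config) {
        boolean usePooled = Boolean.parseBoolean(
                config.getProperty("cache.allocator.usepooledbytebufallocator", "false"));
        return usePooled
                ? PooledByteBufAllocator.DEFAULT
                : UnpooledByteBufAllocator.DEFAULT;
    }
}
```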
To fetch an object from the cache, the logic doesn't change: retainedDuplicate() is called on the chunk. There is no need to release the duplicate, as it will already be released by reactor netty.
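A minimal sketch, using the standard Netty API and simplified relative to the real ContentsCache/serving code, of why no explicit release of the duplicate is needed on our side:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class RetainedDuplicateExample {
    // Simplified serving path: the cached chunk stays owned by the cache, while
    // the retained duplicate handed to reactor netty is released by the framework
    // once it has been written to the client.
    static ByteBuf chunkForResponse(ByteBuf cachedChunk) {
        // retainedDuplicate() shares the content and bumps the reference count,
        // so releasing the duplicate does not free the cached copy.
        return cachedChunk.retainedDuplicate();
    }

    public static void main(String[] args) {
        ByteBuf cached = Unpooled.copiedBuffer(new byte[]{1, 2, 3});
        ByteBuf toSend = chunkForResponse(cached);
        toSend.release();                    // in Carapace this release is done by reactor netty
        System.out.println(cached.refCnt()); // the cached chunk is still live (refCnt == 1)
    }
}
```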
Note: the reactor netty HttpClient and HttpServer continue to use PooledByteBufAllocator. This fix only switches to UnpooledByteBufAllocator for cacheable data in the Caffeine cache.