jemalloc seems to be holding onto enormous quantities of memory, causing the appearance of a memory leak #28699
Note that we tried a few variations of the test case. Reassigning `test` as in …
jemalloc is tunable, so we could just see if there is something we can change. It seems a non-trivial application could find better settings for itself. This seems related to #18236 too.
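A minimal sketch of what trying a tuning knob could look like, assuming a jemalloc build that honours the `MALLOC_CONF` environment variable; the binary name and the value chosen here are illustrative, not from this thread:

```rust
use std::process::Command;

fn main() {
    // Launch the test binary with a jemalloc knob set via the MALLOC_CONF
    // environment variable, which jemalloc reads at startup. (A prefixed
    // jemalloc build may expect a prefixed name such as JE_MALLOC_CONF.)
    // `lg_dirty_mult` controls how aggressively unused dirty pages are
    // purged: higher values purge more aggressively, -1 disables purging.
    // The value 8 and the binary path are illustrative assumptions.
    let status = Command::new("./target/release/memtest")
        .env("MALLOC_CONF", "lg_dirty_mult:8")
        .status()
        .expect("failed to run test binary");
    println!("test binary exited with {}", status);
}
```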
What if you touch the heap inside the loop? The comment under … in the jemalloc man page (http://linux.die.net/man/3/jemalloc) seems relevant. However, I can't confirm this (x86_64, Linux 4.1.7, nightly): I can't run the test at full capacity (I don't have enough RAM), but at half capacity the memory usage spikes and then drops to 6 MB instantly.
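A minimal sketch of what "touching the heap inside the loop" could mean, with the shape, sizes, and types of the test case assumed here rather than taken from the thread: a small throwaway allocation per iteration keeps the allocator active, which may give it a chance to purge the dirty pages left behind by the big allocation.

```rust
use std::thread::sleep;
use std::time::Duration;

fn main() {
    {
        // Allocate and drop a large buffer; jemalloc may keep the freed
        // pages around as "dirty" pages instead of returning them to the OS.
        let big: Vec<u8> = vec![1u8; 1 << 30];
        drop(big);
    }

    loop {
        // "Touch the heap": a small allocation each iteration keeps the
        // allocator active, which may give it an opportunity to purge the
        // dirty pages left over from `big`.
        let _nudge: Vec<u8> = vec![0u8; 4096];
        sleep(Duration::from_millis(100));
    }
}
```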
Looks like that is the issue. Maybe a general … It should be pointed out that this is very unlikely to be a problem in most programs, given that allocation/deallocation has to basically stop completely for it to show up.
I wonder if this is related to my observation of Rust programs using more memory than the OCaml, Haskell, and C versions of an equivalent program. In my case, there is not much memory allocation. Is this the same as this issue? I've uploaded …
Triage: no comments in almost 18 months. While this does seem like a theoretical problem, it doesn't seem like many people are hitting it, and allocator APIs still aren't stable. Anyone interested in furthering this issue should probably chime in on the allocator discussion. Closing!
I was having this issue in practice here: #33082 (comment)
@frankmcsherry came in today asking for help diagnosing a memory leak in timely-dataflow. His program was sitting on 7 GB (as observed through the system monitor) for no apparent reason. Eventually we guessed that maybe the allocator was being greedy behind the scenes, and that does appear to be the case, as shown by the following program:
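The program itself didn't survive in this capture, so the following is only a rough sketch of the pattern described (a large amount of heap data built up and dropped inside an inner scope, followed by an idle loop); the roughly 800 MB figure and the data types are assumptions chosen to match the numbers reported below, not the original code.

```rust
use std::thread::sleep;
use std::time::Duration;

fn main() {
    {
        // Build up roughly 800 MB of heap data, all of which is dropped at
        // the end of this scope.
        let mut buffers: Vec<Vec<u8>> = Vec::new();
        for _ in 0..800 {
            buffers.push(vec![0u8; 1 << 20]); // 1 MB each
        }
        println!("allocated {} buffers", buffers.len());
    } // everything is freed here, as far as Rust is concerned

    // Idle with no further allocation; resident memory as reported by top
    // stays high because jemalloc retains the freed pages.
    loop {
        sleep(Duration::from_secs(1));
    }
}
```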
I compiled and ran this program on my Linux machine and observed it with top. After the inner scope was exited and the loop was reached, memory continued to hover around 800 MB. This behavior was independent of whether the program was compiled with -O0 or -O3 (though the latter did once fault with an illegal instruction, which may have just been OOM-killer nondeterminism).
Is there any doubt that this behavior is due to jemalloc? If not, then we should look into providing some way of requesting, on demand, that jemalloc release whatever memory it's gobbled up.
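For reference, jemalloc exposes a purge control through its mallctl interface, so an on-demand release could look roughly like the sketch below. The symbol name (plain `mallctl` vs a prefixed `je_mallctl`, depending on how Rust linked jemalloc) and the `"arena.<i>.purge"` / `"arenas.narenas"` control names for the jemalloc version in use are assumptions here, not something verified in this thread.

```rust
use std::ffi::CString;
use std::os::raw::{c_char, c_int, c_uint, c_void};
use std::ptr;

extern "C" {
    // jemalloc's control interface; the real symbol may carry a prefix
    // (e.g. je_mallctl) depending on how jemalloc was built and linked.
    fn mallctl(
        name: *const c_char,
        oldp: *mut c_void,
        oldlenp: *mut usize,
        newp: *mut c_void,
        newlen: usize,
    ) -> c_int;
}

/// Ask jemalloc to purge unused dirty pages in every arena, using the
/// "arena.<i>.purge" control with <i> set to the total number of arenas
/// (jemalloc's "all arenas" convention in the versions of that era).
unsafe fn purge_all_arenas() {
    // Read how many arenas exist.
    let mut narenas: c_uint = 0;
    let mut len = std::mem::size_of::<c_uint>();
    let narenas_key = CString::new("arenas.narenas").unwrap();
    mallctl(
        narenas_key.as_ptr(),
        &mut narenas as *mut c_uint as *mut c_void,
        &mut len,
        ptr::null_mut(),
        0,
    );

    // Purge all arenas at once.
    let purge_key = CString::new(format!("arena.{}.purge", narenas)).unwrap();
    mallctl(
        purge_key.as_ptr(),
        ptr::null_mut(),
        ptr::null_mut(),
        ptr::null_mut(),
        0,
    );
}
```

In the scenario above, one would call `purge_all_arenas()` after the large data structure has been dropped, and watch whether resident memory falls back toward the program's actual live size.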