Experiment with different mallocs #128
Comments
This doesn't seem necessary unless we have performance measurements that require the use of a faster malloc.
Just saw that Google published a revamped version of TCMalloc (https://github.com/google/tcmalloc). Given current core counts, would a multi-threaded malloc like that be beneficial for Julia?
It certainly won't benefit most uses, since they don't use malloc anyway.
Ah, thanks - was just curious.
A benchmark came up on Slack today where our allocation is really slow: if you repeatedly make a 100000000-element undef vector, Julia is about 10x slower than NumPy. We believe the problem is the lack of interaction between the allocator and the GC.
FYI, I "only" saw a 2x drop in performance.
The vast majority of the time is spent in GC, even for the minimum time, which is why we suspect that the deallocation/reuse of allocated blocks is somehow slow.
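The benchmark code itself was not preserved in this thread; the following is a minimal sketch of the kind of comparison being described, with the element count taken from the comments above and the timing harness assumed:

```julia
# Repeatedly allocate a 10^8-element uninitialized Vector{Float64} (~800 MB each).
# The NumPy side of the comparison would be `np.empty(10**8)` timed with %timeit.
function alloc_bench(n)
    for _ in 1:n
        Vector{Float64}(undef, 10^8)
    end
end

alloc_bench(1)          # warm-up / compilation
@time alloc_bench(10)   # @time also reports the fraction of time spent in GC
```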
I wonder if #42566 would be solved, or at least partially mitigated, by using jemalloc. The folks at LinkedIn were able to solve nasty memory leaks by replacing glibc's malloc with jemalloc: https://engineering.linkedin.com/blog/2021/taming-memory-fragmentation-in-venice-with-jemalloc
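For anyone who wants to experiment with this without rebuilding Julia, one common approach on Linux is to preload jemalloc into the process. This is only a sketch: the library path is system-dependent, and `alloc_bench.jl` is a hypothetical driver script.

```julia
# Run a Julia workload with glibc's malloc replaced by jemalloc via LD_PRELOAD.
jemalloc = "/usr/lib/x86_64-linux-gnu/libjemalloc.so.2"  # assumed install path
cmd = addenv(`julia alloc_bench.jl`, "LD_PRELOAD" => jemalloc)
run(cmd)
```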
I'm not an expert on allocators and just came across this, in case it's of interest: they seem to like jemalloc. :-) It's not a very recent comparison, though.
There are some very fast mallocs out there:
http://www.canonware.com/jemalloc/
http://goog-perftools.sourceforge.net/doc/tcmalloc.html
It would be useful to try these out instead of the system malloc, to see whether we get a real performance boost. I do feel that using jemalloc rather than the system allocator may also make performance and memory behaviour more uniform across different OSes.
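One rough way to compare allocators from the Julia side is to time raw malloc/free round trips through `Libc`, which exercises whichever allocator the process ends up using. A sketch, with an assumed block size of 10^8 Float64s (~800 MB):

```julia
using Printf

# Time `n` malloc/free round trips of `bytes` bytes through the process allocator
# (the system malloc by default, or whatever was preloaded/linked in instead).
function raw_alloc_bench(n::Int, bytes::Int)
    t = @elapsed for _ in 1:n
        p = Libc.malloc(bytes)
        p == C_NULL && error("malloc returned NULL")
        Libc.free(p)
    end
    @printf("%d allocations of %d bytes: %.3f s\n", n, bytes, t)
end

raw_alloc_bench(100, 10^8 * sizeof(Float64))  # assumed ~800 MB block size
```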
I see use of malloc in src/, src/flisp, and src/support. It would also be nice to refactor the code so that other malloc implementations can be experimented with easily.
I guess the libraries in external/ will continue using their own malloc.