buffer: implement pooling mechanism for Buffer.allocUnsafe #30683
Don’t you mean the other way around? The slices are what should be tracked, not the source buffers, right?
I don’t know what the V8 team’s plan on this is, but I guess whatever we implement could be behind that flag for now? WeakRefs are a stage 3 proposal, so I think we can be relatively certain that they’re going to be standardized and implemented without requiring flags in the future.
The WeakReference implementation in V8 seems ready; it's only blocked by Chromium integration: https://bugs.chromium.org/p/v8/issues/detail?id=8179
Sure. We need to track the slices, with the source buffer as their "holdings". Thanks for noticing this incorrectness. I'll update the description.
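For what it's worth, with the API as it later shipped (renamed from `FinalizationGroup` to `FinalizationRegistry`), that corrected direction looks roughly like the sketch below. All names are illustrative, not part of any proposed implementation:

```javascript
// Sketch: the slices are the tracked objects; the source buffer is the
// "holdings" value handed back to the finalizer once a slice is GCed.
const source = Buffer.allocUnsafeSlow(64 * 1024);

const registry = new FinalizationRegistry((holdings) => {
  // Called some time after a registered slice has been GCed;
  // `holdings` is the source buffer the slice pointed into.
  console.log('a slice of a', holdings.length, 'byte source buffer died');
});

const slice = source.subarray(0, 128); // shares memory with `source`
registry.register(slice, source);      // track the slice, hold the source
```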
Yeah, nothing prevents activating this mechanism under the flag.
As a follow-up, I have created a PoC repo: https://github.com/puzpuzpuz/nbufpool It's in a very early draft stage at the moment, but at least the pool seems to be working (not covered with tests yet). I have a couple of questions so far:
I see that all tests for
Yes, #30616 should address that :)
I think `benchmark/buffers/buffer-creation.js` would be the main one?
That's great. I'll be watching this PR.
Thanks. This one is even more primitive than mine, but it's still valuable, as I can look at buffer sizes and the number of ops.
@puzpuzpuz I’ve merged #30616, that crash should go away now :)
@addaleax I tested both v13 and v14 nightly builds. v14 works fine, but v13 still crashes. I guess the commit just wasn't included in the latest nightly and I should wait one more day, right?
@puzpuzpuz We don’t always update release branches regularly, usually only when preparing a release. I’ve pushed that commit to v13.x-staging now, though.
@addaleax Thanks for the clarification and for the push. I'm going to use v13 nightly for further experiments. |
I improved the benchmark and spent some time on experiments. Here are the results for the simplest pool implementation (no constraints on pool size and no throttling):
Considerations:
Further plans:
Any feedback is highly welcome.
A small update. I just realized that there is a way to significantly reduce the amount of FG bookkeeping by introducing another level of indirection. :) Going to implement it and measure the impact.
@addaleax However, there is one issue that blocks me. It looks like FG stores strong references to all "holdings" forever. Even after the finalizer callback has been triggered and the "holdings" of GCed objects have been iterated, references to them are kept internally in FG (or at least that's how it looks from the outside). I noticed it as I was trying to use source buffers as "holdings" in … You may see it if you try to run … I tried to introduce an intermediate map with …

Does this sound like expected behavior of the FG API or a bug to you? If it's expected, does it mean that the API can only accept on-heap objects and primitives?
@puzpuzpuz I’m not sure – it looks like a bug in V8 to me, yes. Which I guess is fair for a feature behind an experimental flag. I’m seeing the behaviour with the latest V8 master too – can you maybe try to break this down into a smaller reproduction?
@addaleax
Created #30853 to track the memory consumption issue. |
I find myself wanting this very feature, and am curious whether this idea ran into technical issues and, indeed, what folks are doing instead?
The problem
At the moment `Buffer.allocUnsafe` uses a pre-allocated internal buffer, which is then sliced for newly allocated buffers when possible. The size of the pool may be configured by setting the `Buffer.poolSize` property (8KB by default). Once the current internal buffer fills up, a new one is allocated by the pool.

Here is the related fragment of the official documentation:
While this mechanism certainly improves performance, it doesn't implement actual pooling, i.e. reclamation and reuse of buffers. Thus it doesn't decrease the allocation rate in scenarios where `Buffer.allocUnsafe` is on the hot path (a common situation for network client libraries, e.g. DB drivers).

This issue is aimed at discussing the value of such a pooling mechanism and providing a good starting point for initial design feedback.
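The current mechanism described above can be sketched roughly as follows. This is a simplified illustration, not the actual Node.js internals; the names and the cut-off rule are approximations:

```javascript
// Simplified model of how Buffer.allocUnsafe slices a shared pool.
const poolSize = 8 * 1024; // mirrors the default Buffer.poolSize

let pool = Buffer.allocUnsafeSlow(poolSize);
let poolOffset = 0;

function allocUnsafeLike(size) {
  if (size < poolSize >>> 1) {
    // Small request: slice it out of the shared pre-allocated pool.
    if (poolOffset + size > poolSize) {
      // Pool exhausted: allocate a fresh one. The old pool is never
      // reclaimed, even after every slice into it has been GCed;
      // that missing reclamation is exactly what this issue is about.
      pool = Buffer.allocUnsafeSlow(poolSize);
      poolOffset = 0;
    }
    const slice = pool.subarray(poolOffset, poolOffset + size);
    poolOffset += size;
    return slice;
  }
  // Large request: bypass the pool entirely.
  return Buffer.allocUnsafeSlow(size);
}
```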
There are two main options for pooling in Node.js:
It appears that option 2 is feasible with experimental APIs (`FinalizationGroup` and, maybe, `WeakRef`s).
The solution
The implementation idea was described by @addaleax (see #30611 (comment)). The `FinalizationGroup` API could be enough to implement a pool that would reuse internal buffers once all slices pointing at them were GCed. When the pool uses an active internal buffer (let's call it a source buffer) to create slices, it could register those slices (as "the object whose lifetime we're concerned with") with their source buffer (as "holdings") in a `FinalizationGroup`. Once a finalizer function is called and reports that all slices have been GCed (this could be based on a simple counter), the pool could reclaim the corresponding source buffer and reuse it.

There are some concerns with this approach.
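For illustration, here is a rough sketch of that counter-based design, written against the standardized `FinalizationRegistry` (the name `FinalizationGroup` eventually shipped under). Everything here is hypothetical, not a proposed implementation:

```javascript
const SOURCE_SIZE = 64 * 1024; // illustrative source buffer size

class SlicePool {
  constructor() {
    this.freeSources = []; // source buffers ready for reuse
    this.current = null;   // active source buffer
    this.state = null;     // per-source bookkeeping: { buf, liveSlices }
    this.offset = 0;
    this.registry = new FinalizationRegistry((state) => {
      // Called after a registered slice has been GCed. When the simple
      // counter drops to zero, every slice into that source buffer is
      // gone and the buffer can be reclaimed.
      if (--state.liveSlices === 0) this.freeSources.push(state.buf);
    });
  }

  alloc(size) {
    if (size > SOURCE_SIZE) {
      return Buffer.allocUnsafeSlow(size); // too big to pool
    }
    if (this.current === null || this.offset + size > SOURCE_SIZE) {
      // Reuse a reclaimed source buffer if one is available.
      this.current =
        this.freeSources.pop() ?? Buffer.allocUnsafeSlow(SOURCE_SIZE);
      this.state = { buf: this.current, liveSlices: 0 };
      this.offset = 0;
    }
    const slice = this.current.subarray(this.offset, this.offset + size);
    this.offset += size;
    this.state.liveSlices++;
    // The slice is "the object whose lifetime we're concerned with";
    // the state object stands in for the source-buffer "holdings".
    this.registry.register(slice, this.state);
    return slice;
  }
}
```

A production version would also need a cap on `freeSources` and some throttling, since an unbounded free list just trades allocation rate for retained memory.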
First, this pooling mechanism may potentially lead to performance degradation under certain conditions. For large source buffers the cost of bookkeeping all slices in the `FinalizationGroup` may be too high. For small source buffers (especially the 8KB source buffers that are the default in the standard API's global pool) it's probably not worth it at all. This could be mitigated by a hybrid approach: keep the current mechanism as the fallback for small buffer allocations. So, some PoC experiments and benchmarking have to be done.

Another concern is that it's an experimental API, so I'm not sure whether `FinalizationGroup`s are available internally without the `--harmony-weak-refs` flag (or are planned to become available), like it's done for the internal `WeakReference` API.

Proposed plan:
Alternatives
Such a pool could certainly be implemented as an npm module. However, support in Node core could bring the performance improvements to a much broader audience.