The fiber stack collector got disabled at some point (probably while dealing with dead fiber stacks), but we must re-enable it: otherwise, after a sudden spike in the number of fibers, we keep all the allocated stacks around when we could recover some of that memory.
Since stacks are kept per execution context, we might also keep stacks allocated in context A while context B is now the one spawning fibers and context A doesn't do much anymore (leaking some memory). An alternative for that specific issue could be a single Stack Pool for the whole process, but one pool per execution context sounds more optimal: fibers within a context usually do the same thing, and thus use roughly the same stack space, whereas different contexts may keep larger stacks allocated despite not needing that much.
I'd welcome nicer heuristics than the blunt "deallocate half the stacks every 5 seconds".
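
For context, here is a minimal sketch of what that blunt heuristic boils down to. This is not the actual `Fiber::StackPool` code: the class name, method names and the 8 MB stack size are assumptions for illustration.

```crystal
# Hypothetical sketch of the blunt heuristic: a deque of free stacks, and a
# collector that unmaps half of them every time it runs (every 5 seconds),
# regardless of how recently they were used.
class StackPool
  STACK_SIZE = 8 * 1024 * 1024 # assumed stack size (mostly virtual memory)

  def initialize
    @deque = Deque(Void*).new
  end

  # Reuses a pooled stack if any, otherwise maps a fresh one.
  def checkout : Void*
    @deque.pop? || LibC.mmap(nil, STACK_SIZE, LibC::PROT_READ | LibC::PROT_WRITE,
                             LibC::MAP_PRIVATE | LibC::MAP_ANON, -1, 0)
  end

  # Called when a fiber terminates and its stack becomes reusable.
  def release(stack : Void*) : Nil
    @deque.push(stack)
  end

  # The blunt part: free half of whatever is currently pooled.
  def collect : Nil
    (@deque.size // 2).times do
      if stack = @deque.shift?
        LibC.munmap(stack, STACK_SIZE)
      end
    end
  end
end
```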
Among the nicer heuristics: we could record the monotonic time when a stack is returned to the pool, and free the stacks that haven't been recycled during the last N seconds.
The balance is in how long it takes to map and unmap a stack vs how long it takes to get the monotonic time.
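
A rough sketch of that idea, reusing the hypothetical pool from above (again, assumed names; only the timestamp bookkeeping is the point):

```crystal
# Hypothetical sketch of the timestamp heuristic: remember when each stack is
# returned to the pool, and only free the ones idle for more than N seconds.
class TimestampedStackPool
  STACK_SIZE   = 8 * 1024 * 1024 # assumed stack size
  IDLE_TIMEOUT = 5.seconds       # the "N seconds" above

  def initialize
    # Each entry pairs a free stack with the monotonic time it was released.
    @deque = Deque({Void*, Time::Span}).new
  end

  def checkout : Void*
    if entry = @deque.pop?
      entry[0]
    else
      LibC.mmap(nil, STACK_SIZE, LibC::PROT_READ | LibC::PROT_WRITE,
                LibC::MAP_PRIVATE | LibC::MAP_ANON, -1, 0)
    end
  end

  def release(stack : Void*) : Nil
    # One Time.monotonic call per release: the cost to weigh against the
    # map/unmap syscalls it can save.
    @deque.push({stack, Time.monotonic})
  end

  # Frees only the stacks that haven't been recycled during IDLE_TIMEOUT.
  # Oldest entries sit at the front, so we can stop at the first recent one.
  def collect(now = Time.monotonic) : Nil
    while entry = @deque.first?
      break if now - entry[1] < IDLE_TIMEOUT
      @deque.shift
      LibC.munmap(entry[0], STACK_SIZE)
    end
  end
end
```

Since `release` pushes to the back and `checkout` pops from the back, the deque stays sorted by release time, so `collect` can stop scanning at the first entry that is still fresh.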