Constrain memory usage in the in-memory backend #209
Merged
Originally intended for simple development and testing, this backend currently consists solely of a map containing all events. Because it never removes old/completed invocations/workflows, the event store would consume more and more memory over time until an inevitable OOM occurred.
The goal of this PR is to make the in-memory event store feasible for longer-term or more intensive deployments, such as benchmarks and use cases that don't require persistence of events. The solution in this PR is an approach akin to TinyLFU for caches. The store is assigned a specific limit on the number of entities (`n`) it contains, which are divided into two segments: the main store and a buffer of completed entities (marked with a `completed` flag). The size of the buffer is dynamic; it fills all available space between the store and `n`, evicting entities if that space is exceeded.

Concretely, this PR...
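To illustrate the eviction scheme described above, here is a minimal sketch of a bounded in-memory store. This is not the code from this PR: the language (Go), the `Entity`/`Store` names, and the use of plain FIFO ordering among completed entities (rather than a full TinyLFU frequency sketch) are all assumptions made for illustration. It only shows the core idea that active entities are always kept, while completed entities occupy the remaining space up to `n` and are evicted first.

```go
package memstore

import "container/list"

// Entity is a hypothetical stand-in for an invocation/workflow record.
type Entity struct {
	ID        string
	Completed bool
}

// Store keeps at most maxEntities entries. Active (incomplete) entities are
// always retained; completed ones live in a buffer that fills whatever space
// remains below the limit and is evicted first when the limit is exceeded.
type Store struct {
	maxEntities int
	active      map[string]*Entity
	completed   map[string]*list.Element // element value is *Entity
	order       *list.List               // completion order, oldest at front
}

func New(maxEntities int) *Store {
	return &Store{
		maxEntities: maxEntities,
		active:      make(map[string]*Entity),
		completed:   make(map[string]*list.Element),
		order:       list.New(),
	}
}

// Put inserts or updates an entity, moving it between segments when its
// completed flag changes, then evicts if the limit is exceeded.
func (s *Store) Put(e *Entity) {
	if el, ok := s.completed[e.ID]; ok {
		s.order.Remove(el)
		delete(s.completed, e.ID)
	}
	delete(s.active, e.ID)

	if e.Completed {
		s.completed[e.ID] = s.order.PushBack(e)
	} else {
		s.active[e.ID] = e
	}
	s.evict()
}

// evict drops the oldest completed entities until the total entity count
// fits within maxEntities. Active entities are never evicted.
func (s *Store) evict() {
	for len(s.active)+len(s.completed) > s.maxEntities {
		front := s.order.Front()
		if front == nil {
			return // only active entities remain; nothing safe to evict
		}
		e := front.Value.(*Entity)
		s.order.Remove(front)
		delete(s.completed, e.ID)
	}
}
```

In this sketch, marking an entity as completed effectively moves it into the evictable buffer, so long-running deployments converge to at most `n` entities in memory regardless of how many invocations/workflows have finished.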