Excessive memory consumption? #112
Comments
After some investigation, it seems to be related to Badger. Running pprof on a local (workstation) sloop process gives:

Profiling a sloop process running inside a container on the k8s cluster gives:

This seems related to these Badger issues:
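As a side note for anyone reproducing the profiling step above: if the process does not already expose pprof (sloop may well do this already), wiring it up in a Go binary only takes a few lines. A minimal sketch, with an arbitrary port:

package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
)

func main() {
	// A heap profile can then be captured with:
	//   go tool pprof http://localhost:6060/debug/pprof/heap
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}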
Thanks @looztra for raising the issue. We know of this issue and it's related to garbage collection. We are currently working on the fix, which is almost ready. A PR with the fix will be coming next week.
Hey @sana-jawad and @looztra, I work on badger and I'm trying to reduce the memory consumption. I have a PR dgraph-io/badger#1308 which I expect to reduce the memory used by decompression, but I haven't been able to reproduce the high memory usage issue. It would be very helpful if you could test my PR in sloop and confirm whether the memory usage is reduced. Or, if you have some steps I can follow to reproduce the high memory usage, I'd be happy to do that.
It's hard for me to provide a way to reproduce without a running Kubernetes cluster. I'd be happy to test this PR inside sloop (and run it against the cluster I used previously), but as I'm not a Go dev, I'm not sure how to produce a sloop binary that integrates the badger version associated with this PR. Any hints on the steps needed to do that?
@looztra, I can help with that. Please look at https://github.com/salesforce/sloop#build-from-source. Follow all the steps mentioned there, but before you run
This will update the badger version in sloop. If it runs successfully, you should see changes in the go.mod and go.sum files.
diff --git a/pkg/sloop/store/untyped/store.go b/pkg/sloop/store/untyped/store.go
index 7bb098e..eaa8e7a 100644
--- a/pkg/sloop/store/untyped/store.go
+++ b/pkg/sloop/store/untyped/store.go
@@ -9,11 +9,12 @@ package untyped
import (
"fmt"
+ "os"
+ "time"
+
badger "github.com/dgraph-io/badger/v2"
"github.com/golang/glog"
"github.com/salesforce/sloop/pkg/sloop/store/untyped/badgerwrap"
- "os"
- "time"
)
type Config struct {
@@ -51,10 +52,6 @@ func OpenStore(factory badgerwrap.Factory, config *Config) (badgerwrap.DB, error)
opts = badger.DefaultOptions(config.RootPath)
}
- if config.BadgerEnableEventLogging {
- opts = opts.WithEventLogging(true)
- }
-
if config.BadgerMaxTableSize != 0 {
opts = opts.WithMaxTableSize(config.BadgerMaxTableSize)
}

After this, you can run
@jarifibrahim I have tested the PR and it has reduced the memory consumption. Thanks for the pointer. I have noticed that the memory consumption is directly proportional to the rate of incoming data. I am going to try setting the badger-keep-l0-in-memory flag to false. Any other pointers that can help reduce memory? @looztra, try the following values for sloop's flags for lower memory consumption.
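For illustration only (these are not the exact values suggested above), here is a rough sketch of how knobs like these map onto badger v2 options; the path and numbers are placeholder assumptions:

package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v2"
	"github.com/dgraph-io/badger/v2/options"
)

func main() {
	// Memory-oriented badger v2 tuning sketch; all values are illustrative.
	opts := badger.DefaultOptions("/data/sloop").
		WithKeepL0InMemory(false).               // the badger-keep-l0-in-memory=false equivalent
		WithTableLoadingMode(options.FileIO).    // read SSTables from disk instead of mmap
		WithValueLogLoadingMode(options.FileIO). // same for value log files
		WithNumMemtables(2).                     // keep fewer memtables in RAM (default is 5)
		WithMaxTableSize(16 << 20)               // smaller memtables/SSTables (default is 64 MB)

	db, err := badger.Open(opts)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}

Most of these trade memory for extra disk I/O, so it is worth watching read/write latency after applying them.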
We are especially monitoring the value of container_memory_working_set_bytes, as it is the value watched by the OOM killer. Without the patch, that value was growing by about 300 MB every 2 hours, up to 6 GB. Now we observe values staying around 300 MB (with the same watchable update count), so we are pretty happy without having to play with the flags you mentioned. On the graph, container_memory_usage_bytes is the value that looks closest to process_resident_memory_bytes.
Hey @looztra and @sana-jawad, thank you for testing my PR. It was definitely helpful. However, my change shouldn't by itself cause any reduction in memory usage. The reduction in memory came from commit dgraph-io/badger@c3333a5, which disabled compression by default in badger. I would suggest you update badger in sloop and use the latest version of badger.
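For completeness, on badger releases that still enable compression by default, the opt-out can also be set explicitly rather than relying on the newer default; a minimal sketch, with an illustrative path:

package example

import (
	badger "github.com/dgraph-io/badger/v2"
	"github.com/dgraph-io/badger/v2/options"
)

// openWithoutCompression opens a store with block compression disabled, so
// reads never need to allocate decompression buffers.
func openWithoutCompression() (*badger.DB, error) {
	opts := badger.DefaultOptions("/data/sloop").WithCompression(options.None)
	return badger.Open(opts)
}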
Thanks @jarifibrahim. Yes, the upgrade to 2.0.2 was already in review. I will update it to move to 2.0.3.
For the record, the latest information in the README regarding memory tuning for the most recent published version was really useful: we can now run sloop within the memory limit we chose (1Gi) without having to lower the
That's great to know, @looztra!
We are currently experimenting with sloop.
We find it very useful, but we found that it is very greedy with memory.
After less than a full day, it is currently using 5 GB of memory :(
Is this the normal behaviour?
The last 3 hours
The last 24 hours
Here is our current configuration (no memory limits on purpose, to see what is needed without being OOM killed):