Memory ballooning issues with standby instance in v0.9.x #3798
What is the "still" part of the title here? This issue isn't ringing any bells.
From the v0.9.1 CHANGELOG:
I don't know if my client connections are classified as 'going away', but the memory ballooning seems only to happen when the instance is in standby and forwarding client requests to the active instance.
That's a separate issue where a connection would be made, fail to authenticate, and then drop, due to status checks being run against the port.
Understood! Should I close & re-open, or just fix the title?
Fixing the title is fine!
After looking into this a little, I believe it's an issue with the etcd v3 storage backend. @xiang90, I'm wondering if you have any ideas here. Every forwarded request first calls …
Is it true that every forwarded request needs a lock? I thought standby would only hold a long-lived lock. /cc @jefferai
@xiang90 It uses the code at lines 891 to 903 in f320f00.
The …
OK, I see. I think you are right. Would you like to get it fixed by lazily creating the session?
@xiang90 Yes please :-)
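For illustration, here is a minimal sketch of what lazily creating the etcd concurrency session could look like. This is not Vault's actual code: the `etcdLock` type and its fields are hypothetical, and it assumes the current `go.etcd.io/etcd/client/v3` import paths. The idea is that the session, with its lease and keep-alive goroutine, is only established the first time the lock is actually acquired, rather than every time a lock object is constructed for a forwarded request.

```go
// Package etcdha sketches lazy session creation for an etcd v3 HA lock.
package etcdha

import (
	"context"
	"sync"

	clientv3 "go.etcd.io/etcd/client/v3"
	"go.etcd.io/etcd/client/v3/concurrency"
)

// etcdLock is a hypothetical HA lock wrapper around the etcd v3
// concurrency primitives; the names are illustrative, not Vault's.
type etcdLock struct {
	client *clientv3.Client
	prefix string

	once    sync.Once
	session *concurrency.Session
	mutex   *concurrency.Mutex
	initErr error
}

// initSession creates the concurrency session on first use. Creating it
// eagerly whenever a lock object is constructed (e.g. once per forwarded
// request) would leak a lease and keep-alive goroutine per request.
func (l *etcdLock) initSession() error {
	l.once.Do(func() {
		l.session, l.initErr = concurrency.NewSession(l.client)
		if l.initErr != nil {
			return
		}
		l.mutex = concurrency.NewMutex(l.session, l.prefix)
	})
	return l.initErr
}

// Lock acquires the distributed lock, establishing the session lazily.
func (l *etcdLock) Lock(ctx context.Context) error {
	if err := l.initSession(); err != nil {
		return err
	}
	return l.mutex.Lock(ctx)
}

// Unlock releases the lock and closes the session so its lease is revoked.
func (l *etcdLock) Unlock(ctx context.Context) error {
	if l.mutex == nil {
		return nil // never locked, nothing to release
	}
	if err := l.mutex.Unlock(ctx); err != nil {
		return err
	}
	return l.session.Close()
}
```

With the eager variant, each forwarded request that constructs a lock object would leave behind a session whose lease and keep-alive goroutine are never released, which would match the steady memory growth observed on the standby instance.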
Environment:
Vault Config File:
Startup Log Output:
Standby instance:
Active instance:
Expected Behavior:
That an instance can run in standby mode without its memory consumption growing over time.
Actual Behavior:
Standby instances gradually consume more memory until they reach a preset limit and fail with OOM.
You can see that when the instance gets restarted due to memory consumption, it returns to a baseline level and doesn't increase. I believe this correlates with the standby instance forwarding requests while in standby mode, and refusing connections while sealed.
Memory use of the active instance over the same time period:
Steps to Reproduce:
Deploy an HA Vault cluster in Kubernetes.
Important Factoids:
Running on a Kubernetes cluster. Each instance has an imposed 200 MB memory limit, which is more than enough for the active instance to work.
Also observed (but not recorded) with v0.9.0.