Memory leak #2020
@vladiksun Yes, the VLE component does cache data for multiple contexts (graphs). It will only free them when the…
But it shouldn't crash. I will take a look.
@vladiksun I have not been able to reproduce a crash, but will keep trying. However, I am using the latest…
@vladiksun I do see the memory leak you are talking about and have narrowed it down to a few commands.
@vladiksun PR #2028 should address most of the issues with this memory leak, although there might be other, subtler leaks.
@jrgemignani Thanks for taking a look. We will wait until it is ready, then.
@vladiksun It is in the master branch right now. I'll be making PRs for the other branches today.
@vladiksun This fix is now in all branches. If you want to test the latest for the branch you are using, try…
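For anyone who wants to try the fix before a release, building AGE from the updated branch looks roughly like this (the branch name and `pg_config` path below are assumptions; adjust them to your PostgreSQL version and installation):

```
# Build and install Apache AGE from source
# (branch name and pg_config path are assumptions -- adjust as needed).
git clone https://github.com/apache/age.git
cd age
git checkout master
make PG_CONFIG=/usr/bin/pg_config install
```

After installing, load the extension in each session with `CREATE EXTENSION IF NOT EXISTS age;`, `LOAD 'age';`, and `SET search_path = ag_catalog, "$user", public;`.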
@jrgemignani I checked the fix. It works within one session. As I understand it, with more sessions there would be more memory consumption to keep the cache. For example, using 100 connections via pg_bouncer means we would run out of memory pretty soon. Is that right? In our case we have approx. 40 thousand vertices with hierarchical relations, so a typical hierarchical query before the cache takes about 5 seconds, which is too long anyway. I am curious why…
@vladiksun There are some other memory leaks, for example #2046, which I am currently working on, that could be contributing factors. That particular one highlights quite a few other areas that need corrective action as well. This week I will be focusing on getting patches out to deal with those. I am hesitant to suggest that this will, or will not, fix memory issues, due to the complexity of PostgreSQL's memory system (contexts and caches). However, I will note that in my debugging of these memory leaks, I have found that PostgreSQL doesn't like to give memory back even after it is freed. It seems that once it gets memory from the system, it may hold onto it for a while, just in case. Also, keep in mind that PostgreSQL itself will cache items if it feels caching will improve performance.
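One way to see where a backend's memory is actually going, as opposed to what the OS reports, is the `pg_backend_memory_contexts` view available in PostgreSQL 14 and later. Running a query like the sketch below in the same session as the leaking workload, before and after the Cypher queries, shows which contexts are growing:

```sql
-- Inspect the current backend's memory contexts (PostgreSQL 14+).
-- Compare the output before and after running the suspect queries.
SELECT name, parent, used_bytes, total_bytes
FROM pg_backend_memory_contexts
ORDER BY total_bytes DESC
LIMIT 10;
```

Note that this view only covers the current backend; memory held by other connections (e.g. a pg_bouncer pool) has to be inspected from each session separately.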
This issue is stale because it has been open 60 days with no activity. Remove the "Abondoned" label or comment, or this will be closed in 14 days.
This issue was closed because it has been stalled for a further 14 days with no activity.
Describe the bug
Potential memory leak
How are you accessing AGE (Command line, driver, etc.)?
What data setup do we need to do?
What is the command that caused the error?
Expected behavior
No server crash happens
Environment (please complete the following information):
Additional context
It looks like any Cypher script's result is being cached after the first run.
The second run and all subsequent runs are faster unless any data has been created/updated/deleted.
If a create/update/delete happens, the script executes slower again.
This leads to increased DB memory consumption that is never released.
For our real data (about 20 vertex labels with different connectedness, over 3 million edges), a script similar to the one below executes
in around 5 seconds the first time and around 300 ms the second time, until a create/update/delete happens.
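The first-run/second-run difference described above can be observed directly from psql with `\timing`; the graph name, label, and query below are placeholders, not the reporter's actual data:

```sql
-- In psql, with AGE installed; \timing prints elapsed time per statement.
LOAD 'age';
SET search_path = ag_catalog, "$user", public;
\timing on

-- First run: cold, populates the cache (slow).
SELECT * FROM cypher('test_graph', $$
    MATCH p = (a:Node)-[*1..5]->(b:Node)
    RETURN count(p)
$$) AS (cnt agtype);

-- Re-running the identical statement is served much faster,
-- until any create/update/delete invalidates the cache.
```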
We used a Docker container with a maximum of 512 MB of RAM available to it to reproduce this behaviour with the provided test case.
Please take a look at the attached screenshot.
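A container setup along these lines reproduces the constrained-memory environment described above (the image tag, container name, and password are assumptions; adjust to whatever image you use):

```
# Run an AGE container with a hard 512 MB memory cap
# (image tag and credentials are assumptions -- adjust as needed).
docker run --name age-memtest -m 512m \
    -e POSTGRES_PASSWORD=agens \
    -p 5432:5432 -d apache/age

# Watch the container's memory usage while re-running the test case.
docker stats age-memtest
```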