*127975840 [lua] responses.lua:107: do_authentication(): failed to get from node cache: could not write to lua_shared_dict: no memory #3105
Hello and thanks for reporting this. That message seems to indicate that you need to increase the value of the mem_cache_size variable. It is 128 MB by default.
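To make the suggestion above concrete, here is a minimal sketch of raising that value. It assumes the `mem_cache_size` property in `kong.conf` (the value shown is an arbitrary example, not a recommendation):

```
# kong.conf — raise the size of the in-memory node cache
# (can also be set via the KONG_MEM_CACHE_SIZE environment variable)
mem_cache_size = 256m
```

Restarting Kong is required for the new zone size to take effect.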
Because we do not use the "safe" setter in our shdict cache, we should expect the shdict to evict older items via its LRU mechanism, so I don't think this is related to the size of the cache. It seems to me @Zeous9 like the NGINX slab allocator failed to allocate more pages to the shared memory zone, and you might be running into this code path. Are you running NGINX in some peculiar environment: memory limitations, containerized, or anything along those lines?
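The distinction between the two setters mentioned above can be sketched as follows. This is illustrative OpenResty-only Lua (it will not run outside nginx/OpenResty), and the dict name and value are placeholders:

```lua
-- Assumes a zone declared in the nginx config, e.g.:
--   lua_shared_dict kong_cache 128m;
local dict = ngx.shared.kong_cache
local value = "some cached payload"  -- placeholder

-- set(): when the zone is full, forcibly evicts least-recently-used
-- entries to make room; `forcible` is true when eviction happened
local ok, err, forcible = dict:set("key", value)

-- safe_set(): never evicts valid entries; instead returns
-- nil plus the error string "no memory" when the slab
-- allocator cannot satisfy the request
local ok2, err2 = dict:safe_set("key", value)
```

Note that even the plain `set()` can report "no memory" when a single value is larger than what the zone can ever hold, or when the allocator cannot obtain pages at all, which is the scenario being discussed here.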
@Zeous9 Another possibility is that you may have a single value that is too large to fit in the shared memory zone. Could you also share the list of plugins you are using? That would be great!
plugin configuration:
This is a test environment, so there are lots of consumers coming and going as test users are added and removed.
@derrley We recently hit this issue internally as well, and we found out that in our case, it would happen when our VMs' memory was full, preventing NGINX from allocating any additional memory, as I was suspecting in my above comment. Could you investigate this possibility on your side as well? I am having a hard time making any sense of the graph you posted, or what it is supposed to mean.
@thibaultcha The Kong pod is holding steady at 549 MB of its allocated 576 MB. We'll try increasing the limit to see if that resolves the issue.
Is this related: #3124 ? I see you are using rate limiting as well.
@jeremyjpj0916 -- I'm not sure. We only use one worker process. Does that mitigate the leak issue?
I am curious whether your problem disappears if you temporarily turn off rate limiting.
@jeremyjpj0916 I've already lifted Kong's memory limit, which forced Kong to redeploy, which, itself, massively reduced its memory consumption. If the problem reproduces in our test environment I will turn off rate limiting and see if that fixes it.
@jeremyjpj0916 is the bug only in the shared memory rate limiting? If we used redis rate limiting, would that fix the issue?
@derrley shared memory is used for many purposes, so if you temporarily get away with using another rate limiting mechanism, the issue will return sooner or later in another spot. As such, I'd rather not classify it as a 'bug' just yet, when it might be an out-of-memory issue.
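For reference, switching the rate-limiting plugin's counters to Redis is a documented option (`config.policy = redis`), though as noted above it only moves the counters, not the node cache, out of shared memory. A hedged sketch against the 0.x-era Admin API used in this thread; the API name and Redis host are placeholders:

```
# Hypothetical example: Redis-backed counters for the rate-limiting plugin
curl -X POST http://localhost:8001/apis/my-api/plugins \
  --data "name=rate-limiting" \
  --data "config.minute=100" \
  --data "config.policy=redis" \
  --data "config.redis_host=redis.example.com"
```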
NOTE: GitHub issues are reserved for bug reports only.
Please read the CONTRIBUTING.md guidelines to learn on which channels you can
seek for help and ask general questions:
https://github.com/Kong/kong/blob/master/CONTRIBUTING.md#where-to-seek-for-help
Summary
Using the default settings produces a "no memory" error.
Steps To Reproduce
1. Create 4 APIs
2. Create 100 consumers
3. Have the consumers access those APIs
4. The following error appears:
*127975840 [lua] responses.lua:107: do_authentication(): failed to get from node cache: could not write to lua_shared_dict: no memory
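The steps above can be sketched against the Admin API of a Kong 0.x node listening on localhost:8001. This is an illustrative reproduction script, not the reporter's exact setup; the API names, URIs, and upstream URL are placeholders:

```
for i in 1 2 3 4; do
  curl -s -X POST http://localhost:8001/apis \
    --data "name=api-$i" \
    --data "uris=/api-$i" \
    --data "upstream_url=http://httpbin.org"
done

for i in $(seq 1 100); do
  curl -s -X POST http://localhost:8001/consumers \
    --data "username=user-$i"
done
```

Traffic from the consumers through the proxy port (8000 by default) would then exercise the authentication code path that logs the error.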
Additional Details & Logs
$ kong start --vv
<KONG_PREFIX>/logs/error.log