fix(cache): replace in-memory table with an lru-cache to limit memory consumption #2246
Conversation
How hard would it be to replace our own implementation of TTLs with lua-resty-lrucache's TTL mechanism?
@@ -2,12 +2,19 @@ local utils = require "kong.tools.utils"
local resty_lock = require "resty.lock"
local json_encode = require("cjson.safe").encode
local json_decode = require("cjson.safe").decode
local lrucache = require "resty.lrucache"
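For context, here is a minimal sketch of how `resty.lrucache` is typically used (not the PR's actual code); note that a per-item TTL can be passed to `set`, which is what the TTL question above refers to:

```lua
local lrucache = require "resty.lrucache"

-- create a worker-local cache holding at most 500 items;
-- least recently used items are evicted first
local cache, err = lrucache.new(500)
if not cache then
  error("failed to create lru cache: " .. (err or "unknown"))
end

-- store a value with a 60-second TTL; expired items behave as misses
cache:set("apis:example", { name = "example" }, 60)

-- get returns the live value, or nil plus any stale (expired) value
local value, stale = cache:get("apis:example")
```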
What do we expect the average cache hit ratio to look like? If we expect a lot of cache misses and purges, should we consider resty.lrucache.pureffi (or making it configurable)?
Depends on customer use. I'd expect it to be relatively high, based on "sessions": you'll most likely get multiple requests for a specific user, and in that case the APIs, consumers, credentials, etc. will be the same for that session. Only when that is done does the next session come up.
But as said, very speculative.
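As a hedged sketch of what making the LRU implementation selectable (as suggested above) might look like: both modules expose the same constructor and API, so the choice can be a single flag. The `use_pureffi` flag below is hypothetical, not an existing Kong option:

```lua
-- hypothetical flag: pick the FFI-based variant when a lot of cache
-- misses and purges (key churn) are expected
local use_pureffi = false

local lrucache = use_pureffi
  and require "resty.lrucache.pureffi"
  or  require "resty.lrucache"

-- same constructor signature for both implementations
local cache = assert(lrucache.new(500))
```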
Impossible without modifying the lru library. We need to share the TTL between the LRU cache and the shm cache, but if we get an existing entry from the shm, we cannot read its remaining TTL. Hence we must insert our own ttl field.
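A minimal sketch of that approach, for an OpenResty context: the expiry timestamp is serialized into the shm value alongside the data, so the remaining TTL can be recomputed when repopulating the worker-local LRU. The names (`ngx.shared.cache`, `set`/`get`) are illustrative, not the PR's exact code:

```lua
local json_encode = require("cjson.safe").encode
local json_decode = require("cjson.safe").decode

local shm = ngx.shared.cache                      -- assumed lua_shared_dict name
local lru = require("resty.lrucache").new(500)

local function set(key, value, ttl)
  local expires_at = ngx.now() + ttl
  -- keep our own expiry next to the value, since the shm API here does
  -- not let us read back an entry's remaining TTL
  shm:set(key, json_encode({ value = value, expires_at = expires_at }), ttl)
  lru:set(key, value, ttl)
end

local function get(key)
  local value = lru:get(key)
  if value ~= nil then
    return value
  end

  local raw = shm:get(key)
  if not raw then
    return nil
  end

  local entry = json_decode(raw)
  local remaining = entry.expires_at - ngx.now()
  if remaining <= 0 then
    return nil
  end

  -- repopulate the worker-local LRU with the *remaining* TTL
  lru:set(key, entry.value, remaining)
  return entry.value
end
```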
The size should still be configurable. We currently support …
How about using the same value as …
The value of …
Also, …
The same way we compute the number of items in the current patch with a hard-coded estimated item size, we could just as well do that by replacing this PR's … In any case, I'm even more in favor of simply hard-coding this value.
That's exactly why I picked the current size. The estimated item size of 1024 bytes is probably a lot bigger than what we actually use. The growth we suspect is mostly driven by db-cache misses (standard data does not grow; at its maximum it consists of the entire datastore contents, and never more).
So am I.
As the intent is to replace the caching altogether, I did not make its size configurable.
See the code comments for some rough calculations. Configurability can be added if required; feedback is welcome.
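For illustration, a sketch of the kind of rough calculation discussed above: deriving the LRU slot count from a memory budget and the 1024-byte estimated item size. The budget and names below are examples, not Kong's actual configuration:

```lua
-- example budget: 128 MB of worker-local cache memory (illustrative only)
local MEM_CACHE_SIZE = 128 * 1024 * 1024
-- rough, conservative estimate of the average serialized item size
local EST_ITEM_SIZE  = 1024

-- number of LRU slots that fits the budget under that estimate
local LRU_ITEMS = math.floor(MEM_CACHE_SIZE / EST_ITEM_SIZE)  -- 131072 items

local cache = require("resty.lrucache").new(LRU_ITEMS)
```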