
fix(cache): replace in-memory table with an lru-cache to limit memory consumption #2246

Merged
merged 1 commit on Mar 24, 2017
21 changes: 14 additions & 7 deletions kong/tools/database_cache.lua
@@ -2,12 +2,19 @@ local utils = require "kong.tools.utils"
local resty_lock = require "resty.lock"
local json_encode = require("cjson.safe").encode
local json_decode = require("cjson.safe").decode
+local lrucache = require "resty.lrucache"
Contributor

what do we expect the average cache hit ratio to look like? if we expect a lot of cache misses and purges, should we consider resty.lrucache.pureffi (or making it configurable?)

Member Author

Depends on customer use. I'd expect it to be relatively high, based on "sessions"; what I mean is that you'll most likely get multiple requests for a specific user. In that case the apis, consumers, credentials, etc. will be the same for that session. Only when it's done does the next session come up.

But as said, very speculative.

local cache = ngx.shared.cache
local ngx_log = ngx.log
local gettime = ngx.now
local pack = utils.pack
local unpack = utils.unpack

+-- Let's calculate our LRU cache size based on some memory assumptions...
+local ITEM_SIZE = 1024 -- estimated bytes used per cache entry (probably less)
+local MEM_SIZE = 500   -- megabytes to allocate for the maximum cache size
+-- with the defaults above: (500 * 1024 * 1024) / 1024 = 512000 entries
+local LRU_SIZE = math.floor((MEM_SIZE * 1024 * 1024) / ITEM_SIZE)

local TTL_EXPIRE_KEY = "___expire_ttl"

local CACHE_KEYS = {
@@ -65,7 +72,7 @@ end

-- Local Memory

-local DATA = {}
+local DATA = lrucache.new(LRU_SIZE)

function _M.set(key, value, exptime)
exptime = exptime or 0
@@ -77,7 +84,7 @@ function _M.set(key, value, exptime)
}
end

-DATA[key] = value
+DATA:set(key, value)

-- Save into Shared Dictionary
local _, err = _M.sh_set(key, json_encode(value), exptime)
@@ -90,7 +97,7 @@ function _M.get(key)
local now = gettime()

-- check local memory, and verify ttl
-local value = DATA[key]
+local value = DATA:get(key)
if value ~= nil then
if type(value) ~= "table" or not value[TTL_EXPIRE_KEY] then
-- found non-ttl value, just return it
@@ -100,7 +107,7 @@
return value.value
end
-- value with expired ttl, delete it
-DATA[key] = nil
+DATA:delete(key)
end

-- nothing found yet, get it from Shared Dictionary
@@ -110,7 +117,7 @@
return nil
end
value = json_decode(value)
-DATA[key] = value -- store in memory, so we don't need to deserialize next time
+DATA:set(key, value) -- store in memory, so we don't need to deserialize next time

if type(value) ~= "table" or not value[TTL_EXPIRE_KEY] then
-- found non-ttl value, just return it
@@ -122,12 +129,12 @@ end
end

function _M.delete(key)
-DATA[key] = nil
+DATA:delete(key)
_M.sh_delete(key)
end

function _M.delete_all()
-DATA = {}
+DATA = lrucache.new(LRU_SIZE)
_M.sh_delete_all()
end

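Since `resty.lrucache` only runs inside OpenResty, here is a rough standalone Python sketch of the pattern this patch adopts: a size-bounded LRU map whose values are wrapped with an expiry timestamp when a TTL is given, so lookups can evict expired entries on read. The class and method names are illustrative only, not Kong's or lua-resty-lrucache's API.

```python
import time
from collections import OrderedDict


class BoundedTTLCache:
    """Illustrative sketch (not Kong code) of a size-bounded LRU cache
    whose entries may carry their own TTL, checked on read."""

    TTL_KEY = "___expire_ttl"  # sentinel key, mirrors TTL_EXPIRE_KEY above

    def __init__(self, max_entries):
        self._max = max_entries
        self._data = OrderedDict()  # insertion/recency-ordered map

    def set(self, key, value, exptime=0):
        if exptime > 0:
            # wrap the value so get() can check expiry, as the Lua code does
            value = {self.TTL_KEY: time.time() + exptime, "value": value}
        self._data[key] = value
        self._data.move_to_end(key)  # mark as most recently used
        if len(self._data) > self._max:
            self._data.popitem(last=False)  # evict the least recently used

    def get(self, key):
        value = self._data.get(key)
        if value is None:
            return None
        if isinstance(value, dict) and self.TTL_KEY in value:
            if value[self.TTL_KEY] <= time.time():
                del self._data[key]  # expired: drop it and report a miss
                return None
            self._data.move_to_end(key)
            return value["value"]
        self._data.move_to_end(key)  # non-TTL value, return as-is
        return value
```

The key property the PR relies on is the hard size bound: once `max_entries` is reached, every insert evicts the least recently used entry instead of growing the table, so memory stays capped regardless of how many distinct keys pass through.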