feat(cache) in-memory and shared dict caching strategy #1688
Conversation
Force-pushed from ae34b3e to b816750.
Force-pushed from b816750 to 7ce8982.
Awesome improvement! One huge caveat 🚫: we used to get a COPY of the data from the cache. Now that it is cached in Lua land, we get the ACTUAL cached table. So whatever the code consuming the data does, it must no longer mutate that data, as it would actually be changing the data in the cache! We might want to set some metatables that enforce read-only behaviour on the cache data, maybe behind a debug flag, such that read-only behaviour is only checked when the debug flag is set.
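A minimal sketch of what such a guard could look like (the flag name and helper are hypothetical, not from this PR):

```lua
-- Hypothetical sketch: only enforce read-only semantics when a debug
-- flag is set, so production traffic pays no extra cost.
local CACHE_DEBUG = os.getenv("KONG_CACHE_DEBUG") ~= nil -- assumed flag

local function protect(value)
  if not CACHE_DEBUG or type(value) ~= "table" then
    return value
  end
  -- proxy table: reads are forwarded to the real value, writes raise
  return setmetatable({}, {
    __index = value,
    __newindex = function(_, k)
      error("attempt to write to cached value (key: " .. tostring(k) .. ")", 2)
    end,
  })
end
```

As the discussion below shows, a proxy like this has its own problems: it only guards the top level, and it hides the real contents from serializers.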
@thibaultcha Using:

```lua
local pl_tablex = require "pl.tablex"

local t = {
  hello = "world"
}

print(require("inspect")(t))

t = pl_tablex.readonly(t)

print(require("inspect")(t))
print(t.hello)
```

the second `inspect` call shows an empty table, while we would expect it to still show `{ hello = "world" }` (reads such as `t.hello` still work, since they go through the proxy's `__index`).

This behavior breaks the Admin API when we return a JSON encoded value at https://github.com/Mashape/kong/blob/feat/cache/kong/tools/responses.lua#L120 since it always returns an empty object.
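For illustration (not from the PR), the same effect can be reproduced directly with `cjson`:

```lua
local cjson = require "cjson"
local pl_tablex = require "pl.tablex"

local t = pl_tablex.readonly({ hello = "world" })

-- cjson iterates the proxy table itself, which holds no keys of its
-- own (__index is never consulted during iteration), so we get "{}"
print(cjson.encode(t))
```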
Force-pushed from 4dc230e to f0b3974.
I have disabled the readonly cache values because it turns out this is harder to implement right. The readonly guard wouldn't work for child tables (unless we iterate over every value, which is not performant on large tables), and it actually hides the content of the table, since it sets a new metatable on top of an empty proxy table. Another solution would be to deep copy the value, but again, on large tables that is not feasible.
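The child-table problem can be seen in a couple of lines (illustrative sketch, not from the PR):

```lua
local pl_tablex = require "pl.tablex"

local t = pl_tablex.readonly({ nested = { hello = "world" } })

-- t.foo = 1           -- would raise: the proxy guards the top level only
t.nested.hello = "bye" -- succeeds: the child table is returned as-is
print(t.nested.hello)  -- "bye": the "cached" data was mutated
```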
Unless you have a better solution, this PR is complete.
```lua
  wait_max = 0.5, -- max wait time before discarding event
}
if not ok then
  ngx.log(ngx.ERR, "failed to start event system: ", err)
```
this should be a hard error, not just logging. If this fails, Kong will not be stable, so it must error out and stop.
Unfortunately this is `init_worker`, and not `init`. There is no clean way to shut down a worker, and even less to shut down all of the other workers too. We could send a SIGTERM signal to the master process, but we're already out of the CLI at this point, so error handling might not be consistent with the CLI (shutting down other services) out of the box.
good point
Maybe we could still log this at a higher log level to give it more importance though, like `ngx.CRIT` or maybe even `ngx.ALERT`.
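Something along these lines (a sketch reconstructed from the diff above; the shared dict name is assumed):

```lua
local worker_events = require "resty.worker.events"

local ok, err = worker_events.configure {
  shm      = "kong_events", -- assumed shared dict name
  wait_max = 0.5,           -- max wait time before discarding event
}
if not ok then
  -- init_worker cannot cleanly abort the process, so be as loud as possible
  ngx.log(ngx.CRIT, "failed to start event system: ", err)
end
```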
This PR implements both a per-worker (in-memory Lua table) and a per-process (shared dict) caching strategy, as opposed to the previous per-process-only strategy.
Performance is much better because the JSON serialization/deserialization is no longer performed on every cache access.
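Conceptually, the lookup now works along these lines (a sketch with assumed names, not the PR's actual code): a hit in the worker-local Lua table costs nothing, and a shared-dict hit pays the JSON decode only once before the value is promoted.

```lua
local cjson = require "cjson"

local shm = ngx.shared.cache -- assumes "lua_shared_dict cache ...;" in nginx.conf
local lua_cache = {}         -- per-worker, Lua-land cache

local function cache_get(key)
  -- level 1: plain Lua table, no serialization involved
  local value = lua_cache[key]
  if value ~= nil then
    return value
  end

  -- level 2: shared dict, shared across workers, stores JSON strings
  local json = shm:get(key)
  if json == nil then
    return nil
  end

  value = cjson.decode(json)
  lua_cache[key] = value -- promote, so future hits skip the decode
  return value
end
```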
Full changelog
Benchmark
Benchmark of the proxy functionality for an API without any plugin, against a local upstream API that returns `hello world` (one worker):

- Plain nginx: ~524.81 req/s
- With this PR: ~520.87 req/s
- Kong 0.9.2: ~281.41 req/s