feature: added the "exptime" argument to the pure C function of the shdict:incr() API. #1226
Conversation
src/ngx_http_lua_shdict.c
Outdated
@@ -2623,11 +2623,12 @@ ngx_http_lua_ffi_shdict_get(ngx_shm_zone_t *zone, u_char *key,
 int
 ngx_http_lua_ffi_shdict_incr(ngx_shm_zone_t *zone, u_char *key,
     size_t key_len, double *value, char **err, int has_init, double init,
-    int *forcible)
+    int exptime, int *forcible)
Should be `init_ttl` to avoid any ambiguity here. Also, we should use the `long` type here: `int` is usually 32-bit and can easily overflow.
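For concreteness, this is the signature the suggestion would yield (a sketch assuming both changes are applied; the exact merged form may differ):

```c
/* Assumed revised signature: "exptime" renamed to "init_ttl" and
 * widened from int to long, so large TTL values cannot overflow
 * a 32-bit integer. */
int
ngx_http_lua_ffi_shdict_incr(ngx_shm_zone_t *zone, u_char *key,
    size_t key_len, double *value, char **err, int has_init, double init,
    long init_ttl, int *forcible);
```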
Sounds like a saner name to me 👍
Should we make the same change to this value's type in the already existing implementations, like `ngx_http_lua_ffi_shdict_set_expire`, `ngx_http_lua_ffi_shdict_store`, etc.? In another PR, of course.
@thibaultcha No, they do not have such ambiguities as the `incr()` method does.
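To show the `incr()`-specific ambiguity, a minimal standalone sketch (hypothetical names, not the actual ngx_lua code): for `incr()`, the TTL should only apply when the key is being initialized, not on every increment, hence `init_ttl` rather than `exptime`.

```c
/* Hypothetical slot type and incr(), for illustration only. */
typedef struct {
    double  value;
    long    ttl;      /* per-key TTL; 0 means "never expires" */
    int     present;
} kv_slot;

static double
kv_incr(kv_slot *slot, double step, double init, long init_ttl)
{
    if (!slot->present) {
        /* key absent: initialize it, and only now apply the TTL */
        slot->value = init + step;
        slot->ttl = init_ttl;
        slot->present = 1;

    } else {
        /* key exists: increment in place; its TTL is untouched */
        slot->value += step;
    }

    return slot->value;
}
```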
What I mean is that they do use the `int` type as well (I agree their naming is fine).
Initial comment addressed.
@thibaultcha Oh, right. Better in a separate PR?
@thibaultcha Thanks for the catch!
Force-pushed from 9626013 to 944d4a1
@thibaultcha I've already merged this. Will you create another PR to document this feature properly? Thanks!
Will do
This is part of a series of fixes:

- thibaultcha/lua-resty-mlcache#41
- thibaultcha/lua-resty-mlcache#42
- #3311
- #3341

Context
-------

In the `local` mode of the rate-limiting plugins, storing the rate-limiting counters in the same shm used by Kong's database cache is too invasive for the underlying shm, especially when the rate-limiting plugins are used with a `seconds` precision.

On top of exhausting the database cache slots, this approach also generates some fragmentation in the shm, due to the side-by-side storage of values whose sizes differ by orders of magnitude (JSON strings vs. an incremented double) combined with the LRU eviction mechanism. When the shm is full and LRU kicks in, it is highly probable that several rate-limiting counters will be evicted (due to their proliferation) without freeing enough space to store the retrieved data, causing the shm to report a `no memory` error.

Solution
--------

Declaring shms that are only used by some plugins is not very elegant: all users (even those not using the rate-limiting plugins) now have to pay a (small) memory cost. Unfortunately, in the absence of a more dynamic approach to shm configuration, such as a more flexible templating engine or a `configure_by_lua` phase, this is the safest solution.

Size rationale
--------------

Running a script that generates similar keys and stores similar values (doubles) indicates that a 12MB shm should be able to hold about ~48,000 of those values at once. Keep in mind that one Consumer/IP address might use more than one key (in fact, one per period configured on the plugin), and might use both the rate-limiting and response-ratelimiting plugins at once; both use the same shms.

Even so, ~48,000 keys per node seems reasonable, considering that keys with `second` precision will most likely fill up the shm and be candidates for LRU eviction. Our concern lies instead with long-lived limits (and thus, keys) set by the user.

Additionally, a future improvement will be to set the `init_ttl` argument for the rate-limiting keys, which will help **quite considerably** in reducing the plugins' footprint on the shm. As of this day, this feature has been contributed to ngx_lua but is not released yet: openresty/lua-nginx-module#1226

Again, this limit only applies to the **local** strategy, which also likely means that a load balancer is distributing traffic to a pool of Kong nodes using some sort of consistent load-balancing technique, considerably reducing the number of concurrent Consumers a given node needs to handle at once.

See also
--------

Another piece of the fix for the `no memory` errors resides in the behavior of the database caching module upon a full shm. See: thibaultcha/lua-resty-mlcache#41

This patch greatly reduces the likelihood of a full shm, but does not eliminate it. The above patch ensures somewhat sane behavior should the shm happen to be full again.

Fix #3124
Fix #3241

From #3311
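As a back-of-the-envelope check of that size rationale (this arithmetic is an editorial addition, not from the commit message):

$$ \frac{12 \times 2^{20}\ \text{bytes}}{48\,000\ \text{keys}} \approx 262\ \text{bytes per key} $$

which is plausible once the shared dict's per-node overhead (key copy, red-black tree node, LRU queue links) is added on top of the 8-byte double.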
I hereby grant the copyright of the changes in this pull request to the authors of this lua-nginx-module project.
Sister PR: openresty/lua-resty-core#165