While we do have a `TtlCache<Counter, i64>`, the `HashSet` of `Counter`s in `limits_for_namespace: RwLock<HashMap<Namespace, HashMap<Limit, HashSet<Counter>>>>` will grow unbounded. The `TtlCache<Counter, i64>`, limited to 1000 entries by default, evicts expired entries (when at capacity, I guess?), but the set of `Counter`s in our "main" datastructure is unbounded. Not necessarily a big issue, as it will "only" grow for "qualified" `Limit`s (probably not the best name), i.e. ones that have a counter per some arbitrary variable.
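To make this concrete, here is roughly how the two structures relate (a simplified sketch only, not the actual `InMemoryStore` definition; `Namespace`, `Limit` and `Counter` are the crate's own types):

```rust
use std::collections::{HashMap, HashSet};
use std::sync::RwLock;
use ttl_cache::TtlCache;

// Simplified sketch of the in-memory store as described above.
struct InMemoryStore {
    // Bounded: values expire and the cache is capped at 1000 entries by default.
    counters: RwLock<TtlCache<Counter, i64>>,
    // Unbounded: one HashSet<Counter> per Limit that only ever grows.
    limits_for_namespace: RwLock<HashMap<Namespace, HashMap<Limit, HashSet<Counter>>>>,
}
```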
If we only use a single set of counters (i.e. the one in the main datastructure), and once we have a non-blocking way of maintaining these, we should probably implement some eviction/expiry mechanism to reclaim the memory held by those counters.
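Something along these lines could do as a first cut (purely illustrative, assuming it lives in the store's `impl` block; `is_expired()` is a hypothetical check standing in for whatever TTL information a `Counter` actually carries):

```rust
// Illustrative sketch: walk the per-limit counter sets and drop the counters whose
// TTL has elapsed, so the "main" datastructure stops growing unbounded.
fn evict_expired_counters(&self) {
    let mut namespaces = self.limits_for_namespace.write().unwrap();
    for limits in namespaces.values_mut() {
        for counters in limits.values_mut() {
            // `is_expired()` is assumed here; any "TTL elapsed" check would do.
            counters.retain(|counter| !counter.is_expired());
        }
    }
}
```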
I've debugged this using the test below… while `rate_limiter.get_counters("foo".to_owned()).unwrap().len()` returns `0`, going deeper into the `InMemoryStore` you'd see that the value of the `<Limit, HashSet<Counter>>` mapping for that `Limit` is a `HashSet` of size 1000. It "merely" gets cleaned up using the TTL of the cache in:
```rust
for counter in self.counters_in_namespace(namespace) { // 1000 counters
    if let Some(counter_val) = self.counters.read().unwrap().get(&counter) {
        // which are all expired here
        let mut counter_with_val = counter.clone();
        counter_with_val.set_remaining(*counter_val);
        res.insert(counter_with_val);
    }
}
```
```rust
#[cfg(test)]
mod tests {
    use std::collections::HashMap;
    use std::thread;
    use std::time::Duration;

    use crate::{Limit, RateLimiter};

    #[test]
    fn test_leak() {
        let rate_limiter = RateLimiter::default();
        let nothing: Vec<String> = vec![];
        let limit = Limit::new("foo", 2, 1, nothing.clone(), vec!["id"]);
        rate_limiter.add_limit(&limit).unwrap();
        for o in 0..3 {
            for l in 0..3 {
                for i in 0..1000 {
                    // grow the store even bigger, by making this >1000...
                    let mut map = HashMap::new();
                    map.insert("id".to_owned(), i.to_string());
                    assert_eq!(
                        l == 2,
                        rate_limiter
                            // ... but that requires removing this assertion, as limits won't be
                            // enforced, despite us holding a complete datastructure to be able
                            // to do so... because the TtlCache used for the counters, when at
                            // capacity, considers these counters "expired".
                            .check_rate_limited_and_update("foo", &map, 1)
                            .unwrap()
                    );
                }
            }
            if o == 2 {
                thread::sleep(Duration::from_secs(2));
            } else {
                thread::sleep(Duration::from_secs(1));
            }
        }
        println!("Counters: {}", rate_limiter.get_counters("foo".to_owned()).unwrap().len());
        println!("Counters: {}", rate_limiter.get_counters("foo".to_owned()).unwrap().len());
    }
}
```