
Keeps all Limits in memory only #78

Merged (5 commits, Jun 16, 2022)
Conversation

@alexsnaps (Member) commented Jun 13, 2022:

Part of addressing #74

I think the "easiest" way of reviewing this is by starting with the changes to Storage and AsyncStorage here. The state of the Limits is now in the Storage itself… The InMemoryStore still holds somewhat of a copy of that state for now, as I can't just reduce it to the TtlCache usage…

I tried to keep the diff small… for some definition of small. There are other smaller improvements that can be made here and there, but I'll keep those for some other change; I thought it'd be better to keep this one "focused". So, a first step towards getting #74 done.

@alexsnaps mentioned this pull request Jun 13, 2022
@alexsnaps force-pushed the issue_74 branch 3 times, most recently from 9e5c17f to 25bb776 on June 13, 2022 at 18:42
@alexsnaps marked this pull request as ready for review June 13, 2022 18:56
@eguzki (Contributor) left a comment:

A small doc fix should be done:

//! Limitador can store the limits in memory or in Redis. Storing them in memory

@eguzki (Contributor) commented Jun 16, 2022:

Before merging this, I would tag the current HEAD of main branch. Optionally create a branch to support existing deployments.

Alternatively, we could use the existing v0.5.1 tag

@eguzki (Contributor) left a comment:

Looking super good.

I failed to understand every bit, but I got the main idea: Limits live in the Storage struct (not a trait any more) and the counters accessor is a trait (well, a smart pointer to an object implementing CounterStorage). Redis, InMemory and Infinispan all implement CounterStorage.
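To make that shape concrete, here is a minimal sketch of the described layout; the placeholder type definitions (Namespace, Limit, StorageErr) and field names are illustrative, not copied from the actual diff:

```rust
use std::collections::{HashMap, HashSet};
use std::sync::RwLock;

// Placeholder types standing in for Limitador's real ones.
#[derive(PartialEq, Eq, Hash)]
struct Namespace(String);

#[derive(PartialEq, Eq, Hash, Clone)]
struct Limit;

struct StorageErr;

// The counters accessor is a trait; the Redis, InMemory and Infinispan
// backends would each implement it.
trait CounterStorage: Send + Sync {
    fn delete_counters(&self, limits: HashSet<Limit>) -> Result<(), StorageErr>;
}

// Storage is a concrete struct: it owns the limits directly and holds
// a smart pointer to whichever CounterStorage backend is configured.
pub struct Storage {
    limits: RwLock<HashMap<Namespace, HashSet<Limit>>>,
    counters: Box<dyn CounterStorage>,
}
```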

```rust
}

pub struct AsyncStorage {
    limits: RwLock<HashMap<Namespace, HashSet<Limit>>>,
```
@eguzki (Contributor) commented:

Nice. HashSet enforces uniqueness for limits, doesn't it?

@rahulanand16nov (Contributor) commented:

@alexsnaps Looks like we're including the limit's name in the hash for a Limit. I think two limits with everything the same except the name should be considered the same as well, right?

@alexsnaps (Member, Author) replied:

@eguzki yes… but as it did before, since Limit was used as the key in a HashMap.
@rahulanand16nov I didn't change anything there really… but I tend to agree with you indeed. name doesn't semantically mean anything to the Limit; otoh the max_value is part of the PartialEq#eq impl, which I think is wrong. I'd rather not conflate those changes with this PR tho. But I think we should address these issues too!
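As an illustration of the concern, here is a hypothetical Limit whose derived Hash and Eq cover every field, including name, so a HashSet keeps two limits that differ only by name (field names are loosely modeled on Limitador's Limit, not exact):

```rust
use std::collections::HashSet;

#[derive(PartialEq, Eq, Hash)]
struct Limit {
    namespace: String,
    max_value: i64,          // arguably shouldn't take part in eq/hash
    conditions: Vec<String>,
    name: Option<String>,    // arguably shouldn't either
}

fn main() {
    let mut limits = HashSet::new();
    let limit = |name: &str| Limit {
        namespace: "ns".into(),
        max_value: 10,
        conditions: vec!["req.method == 'GET'".into()],
        name: Some(name.into()),
    };
    limits.insert(limit("a"));
    limits.insert(limit("b")); // only the name differs
    assert_eq!(limits.len(), 2); // both survive: name is part of Hash/Eq
}
```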

```rust
    }
}

impl AsyncStorage {
```
Contributor commented:

Just out of curiosity: does it make sense to keep both the async and sync storage versions? If I am not wrong, it is all about using the redis sync or async client. Shouldn't we choose one and stick to it?

@alexsnaps (Member, Author) commented Jun 16, 2022:

I don't think we can, no. Well... maybe we could, but we'd want to keep the async version for the server usage. When everything is in-memory there is no added value (more on that in a second tho), as all operations are sync. So when using it as a lib, you should be able to just use the sync version and not need a reactor to run this in.

That being said, there is an async RwLock in tokio that we could use in lieu of the blocking one we currently use. That'd make for different impls of the two Storages. So I guess I'm arguing not only to keep them, but to have them drift apart even further.
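A sketch of what that drift could look like, with the async storage on tokio's lock and the sync one on std's; the method bodies and String-keyed maps are simplified stand-ins, and the async version assumes the tokio crate is available:

```rust
use std::collections::{HashMap, HashSet};

pub struct AsyncStorage {
    // Awaiting this lock yields to the reactor instead of blocking a thread.
    limits: tokio::sync::RwLock<HashMap<String, HashSet<String>>>,
}

impl AsyncStorage {
    pub async fn get_limits(&self, namespace: &str) -> Option<HashSet<String>> {
        self.limits.read().await.get(namespace).cloned()
    }
}

pub struct Storage {
    // The sync version keeps the blocking lock and needs no reactor.
    limits: std::sync::RwLock<HashMap<String, HashSet<String>>>,
}

impl Storage {
    pub fn get_limits(&self, namespace: &str) -> Option<HashSet<String>> {
        self.limits.read().unwrap().get(namespace).cloned()
    }
}
```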

Contributor replied:

ok, got it... I think.

Thanks for the explanation.

```rust
        limits.insert(limit.clone());
        self.counters.delete_counters(limits)?;

        let mut limits = self.limits.write().unwrap();
```
@rahulanand16nov (Contributor) commented Jun 16, 2022:

I am always a little hesitant with so many unwraps, but I understand the likelihood is low. Do you think we should handle the Results instead of panicking? Maybe I am being too conservative.

@alexsnaps (Member, Author) replied:

Well… again, nothing new here. But these .unwrap() calls are on a LockResult&lt;RwLockReadGuard&gt;, which returns an Err only if the lock is "poisoned". That happens when a thread panicked while holding the exclusive lock, which indicates that whatever the lock was guarding is probably "corrupted". So it would mean: something went really bad in the past, we didn't cleanly recover from it, and some other thread is now trying to use that (possibly) corrupted state.
Until we find a strategy to recover from a poisoned state, I think panicking completely is probably the best option anyhow. One way would be to explicitly table-flip and process::exit(1) in such a situation… unsure how much I want to litter the code with these conditions tho.

Again, if we want to change that (which I'm certainly open to, if only to start the discussion on how to achieve that), I wouldn't include these functional changes in this PR.
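For reference, a hypothetical sketch of the explicit table-flip alexsnaps mentions: instead of letting .unwrap() propagate the panic, detect the poisoned lock and exit the process (the function name and u64 payload are illustrative only):

```rust
use std::collections::HashMap;
use std::process;
use std::sync::RwLock;

fn count_limits(limits: &RwLock<HashMap<String, u64>>) -> usize {
    let guard = match limits.read() {
        Ok(guard) => guard,
        // Err means another thread panicked while holding the write lock:
        // the guarded state may be corrupted, so bail out entirely.
        Err(_poisoned) => {
            eprintln!("limits lock poisoned; state may be corrupted");
            process::exit(1);
        }
    };
    guard.len()
}
```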

@rahulanand16nov (Contributor) left a comment:

Looks good; I've added my thoughts. I am not able to review the redis-related parts yet, but I will in the future as I get more accustomed to the code.

@alexsnaps (Member, Author) commented:

> Before merging this, I would tag the current HEAD of main branch. Optionally create a branch to support existing deployments.
>
> Alternatively, we could use the existing v0.5.1 tag

I'd rather use the v0.5.1 tag, or whatever version users were using, if we need to fork. And possibly cherry-pick further changes along the way. But I might be overly conservative.
