
Memory store clean command is a blocker #86

Closed
gadelkareem opened this issue Mar 27, 2020 · 9 comments · Fixed by #127

Comments

@gadelkareem
Contributor

Limiter is running on a high-traffic server, and while debugging some problems I saw high latency coming from the store cleaner. I know that we need to remove the expired records, but maybe there is a better way to do it?

[Screenshot, 2020-03-27: profiler output showing the latency attributed to the store cleaner]
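For context, here is a minimal sketch of the kind of cleaner loop that can cause this, assuming a mutex-guarded counter map. The names (store, entry) are illustrative and this is not the library's actual code:

package memstore // hypothetical package for the sketch

import (
    "sync"
    "time"
)

// store is a simplified, hypothetical in-memory counter store.
type store struct {
    mu       sync.Mutex
    counters map[string]entry
}

type entry struct {
    count      int64
    expiration time.Time
}

// clean scans the whole map under the lock and deletes expired entries.
// While it runs, every request that needs to read or increment a counter
// waits on the same mutex, so a large map turns the scan into visible
// request latency.
func (s *store) clean() {
    s.mu.Lock()
    defer s.mu.Unlock()

    now := time.Now()
    for key, e := range s.counters {
        if e.expiration.Before(now) {
            delete(s.counters, key)
        }
    }
}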

@novln
Contributor

novln commented Mar 30, 2020

Hello,

Do you have any suggestions, by any chance?
Also, have you tried using the Redis store? The in-memory store was more or less designed for debugging and/or development.

Cheers,

@novln
Contributor

novln commented Mar 30, 2020

Could I have an example of your expiration duration?

@gadelkareem
Contributor Author

gadelkareem commented Mar 30, 2020

> Hello,
> Do you have any suggestions, by any chance?
> Also, have you tried using the Redis store? The in-memory store was more or less designed for debugging and/or development.
> Cheers,

I think the Redis store would remove the bottleneck, since there is no cleaner running in the app, but for a single deployment it would add unneeded latency.
For the in-memory store, the deletion of expired records could be done like this https://github.com/gadelkareem/cachita/blob/master/memory.go#L82 because of this issue golang/go#20135, but that would still not be the best solution, since the cleaner would block again (see the sketch below). I am not sure how to tackle the in-memory cleaner, but maybe there could be a replica of the counters hash map that is used only for reading, with the writes done through channels in goroutines. I am not sure how to do increments that way, though.
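Roughly, the workaround associated with golang/go#20135 is to rebuild the map rather than delete in place, because Go maps never shrink after delete. Here is a sketch under the same hypothetical store/entry types as above; it is not the cachita or limiter code, and it still holds the lock for the whole scan, which is why it does not remove the blocking:

// cleanRebuild copies live entries into a fresh map and swaps it in, so the
// memory held by deleted buckets can eventually be reclaimed (Go maps never
// shrink after delete, per golang/go#20135). The whole scan still runs under
// the lock, so readers and writers remain blocked while it executes.
func (s *store) cleanRebuild() {
    s.mu.Lock()
    defer s.mu.Unlock()

    now := time.Now()
    fresh := make(map[string]entry, len(s.counters))
    for key, e := range s.counters {
        if e.expiration.After(now) {
            fresh[key] = e
        }
    }
    s.counters = fresh
}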

> Could I have an example of your expiration duration?

var (
    generalLimiter = newLimiter("4-S")  // 4 requests per second
    apiLimiter     = newLimiter("50-M") // 50 requests per minute
    searchLimiter  = newLimiter("30-M") // 30 requests per minute
)

// newLimiter builds a limiter backed by its own in-memory store,
// so each of the three limiters above runs its own cleaner.
func newLimiter(rate string) *limiter.Limiter {
    r, err := limiter.NewRateFromFormatted(rate)
    h.PanicOnError(err)
    return limiter.New(memory.NewStore(), r, limiter.WithTrustForwardHeader(true))
}

@novln
Contributor

novln commented Mar 31, 2020

I may have a solution using a ring buffer. Would you like to experiment with it?
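For readers following along: one way a ring buffer can be applied to expiration is to bucket keys by the time slot in which they expire, so the cleaner only drops one small bucket per tick instead of scanning the whole map. The sketch below is purely illustrative, reusing the hypothetical entry type from the earlier sketch; all names are assumptions, and the eventual fix in 3cd0e5a did not use a ring buffer, as noted later in the thread:

const slots = 60 // hypothetical: one bucket per second, assuming expirations at most a minute out

// ringStore groups keys by the second in which they expire, so cleanup only
// touches the bucket whose slot has just elapsed.
type ringStore struct {
    mu      sync.Mutex
    entries map[string]entry
    buckets [slots]map[string]struct{}
}

// add files the key into the bucket for its expiration slot.
func (r *ringStore) add(key string, e entry) {
    r.mu.Lock()
    defer r.mu.Unlock()

    slot := e.expiration.Unix() % slots
    if r.buckets[slot] == nil {
        r.buckets[slot] = make(map[string]struct{})
    }
    r.buckets[slot][key] = struct{}{}
    r.entries[key] = e
}

// cleanSlot is called once per second with the slot that just expired; the
// lock is only held long enough to drop a single bucket.
func (r *ringStore) cleanSlot(slot int64) {
    r.mu.Lock()
    defer r.mu.Unlock()

    for key := range r.buckets[slot] {
        delete(r.entries, key)
    }
    r.buckets[slot] = nil
}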

@gadelkareem
Contributor Author

Sure!

@novln
Contributor

novln commented Apr 14, 2020

@gadelkareem Could you give me an idea of how many keys you have in your store, so I can gather good telemetry on whether my idea improves things (or not)?

Thank you.

@gadelkareem
Contributor Author

@novln I am afraid that would be hard to calculate because it is production data, but we can estimate it from the number of requests, which is around 1.2 million requests per day.
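(For a rough sense of scale, 1.2 million requests per day averages out to about 14 requests per second: 1,200,000 / 86,400 ≈ 13.9. The number of live keys at any moment presumably depends on how many distinct client IPs appear within each rate period rather than on raw request volume.)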

@novln
Contributor

novln commented Apr 16, 2020

Hello,

Could you try this commit 3cd0e5a?

I haven't written a benchmark or used a ring buffer, in order to keep the initial architecture.
It was also quicker to try this and find a quick, good-enough solution for your problem.

@gadelkareem
Contributor Author

I tested it with StackImpact and I no longer see that blocker, but I made a PR so you can test it more thoroughly.
