A question about slots size #205
The numbers during the load look a bit strange to me. Unless you are shrinking the pool during the load, the … Could you please share the exact versions you're using and maybe your pool configuration?
With the … it is quite possible that there are ~70 workers waiting for an object from the pool. The workers currently in possession of an object get cancelled first and return the now broken objects. The waiting workers are cancelled as well, but not 100% in parallel, so a few might end up grabbing those just returned objects from the pool only to find them broken and discard them. When finally all workers are cancelled, most of the dead connections are gone and only a few connections remain in the pool - possibly broken, too. It's pure chance how many objects are left in the pool after such a load, most of them dead.

Are you load testing the service with …? I'm only concerned about the …
Oh, I'm sorry, I didn't mention the versions:

```toml
deadpool-redis = { version = "0.10.0", features = ["serde"] }
redis = { version = "0.21.3", features = ["cluster", "tokio-comp", "connection-manager", "streams"] }
actix-http = "=3.0.0"
actix-rt = "2"
actix-web = "=4.0.1"
tokio = { version = "1.12.0", features = ["full"] }
```

There is nothing special with the pool:

```rust
use deadpool_redis::{Config, Pool, Runtime};

let cfg = Config::from_url(address);
let pool = cfg.create_pool(Some(Runtime::Tokio1)).unwrap();

// usage like
let mut connection = pool.get().await.unwrap();
```

The test load was made with a simple tool, locust - it just sends HTTP requests like wrk and doesn't deal with connections directly.
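For reference, here is a minimal sketch of how the pool status could be logged during such a load test. It assumes this deadpool version exposes `Pool::status()` and that the returned status implements `Debug`; the Redis URL and the reporting interval below are placeholders.

```rust
use std::time::Duration;

use deadpool_redis::{Config, Runtime};

#[tokio::main]
async fn main() {
    // Same setup as above; the URL is a placeholder for the real Redis address.
    let cfg = Config::from_url("redis://127.0.0.1:6379");
    let pool = cfg.create_pool(Some(Runtime::Tokio1)).unwrap();

    // Periodically dump the pool status while the load test runs,
    // to watch how the reported size and availability change over time.
    let status_pool = pool.clone();
    tokio::spawn(async move {
        loop {
            println!("pool status: {:?}", status_pool.status());
            tokio::time::sleep(Duration::from_secs(1)).await;
        }
    });

    // ... the actix-web handlers would call `pool.get().await` as shown above ...
    tokio::time::sleep(Duration::from_secs(60)).await;
}
```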
After several hours of trying to reproduce this issue I finally managed to write a test reproducing the bug you ran into: 1b09998. Funnily enough, this bug was already fixed by a recent refactoring and clean-up of the pool internals (7ffa964). I just didn't publish a new version to crates.io as I considered it mainly a refactoring without fixing any known bugs. I just released … Could you please run … and check if that fixes it for you?
Yep, the bug is fixed, good job, thanks!
Btw, do you consider the option of keeping the full buffer with max_size elements to optimize allocations?
I need to explain how deadpool works so it makes more sense why it behaves like that:
Let's assume you have a pool with a … If you timed this perfectly, you can end up with a pool status claiming a size of …

There are multiple ways to solve this. One is changing the strategy to recycle connections when they are returned. This would require the pool to have one or more active workers that take returned connections, recycle them and push them back to the queue. This however causes more delay, and just because the connection was recycled when it was returned to the pool doesn't mean it is safe to use when it is retrieved again. In an application that only serves a few requests a day it could easily happen that the connection in the pool is dead long before it is retrieved again. My thinking here is: …
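For illustration, here is a heavily simplified sketch of the opposite strategy, where objects are only checked and recycled when they are retrieved: broken ones are discarded and fresh ones are created on demand. This is not deadpool's actual code; the `Connection` type, `create_connection` helper and `healthy` flag are made-up stand-ins.

```rust
use std::collections::VecDeque;

/// Stand-in for a pooled connection; `healthy` marks whether it is still usable.
struct Connection {
    healthy: bool,
}

fn create_connection() -> Connection {
    Connection { healthy: true }
}

struct Pool {
    max_size: usize,
    idle: VecDeque<Connection>,
}

impl Pool {
    fn new(max_size: usize) -> Self {
        Pool { max_size, idle: VecDeque::with_capacity(max_size) }
    }

    /// Recycle-on-retrieve: broken objects are only detected and discarded
    /// when a caller asks for one, not when they are returned.
    fn get(&mut self) -> Connection {
        while let Some(conn) = self.idle.pop_front() {
            if conn.healthy {
                return conn; // reuse an existing, still healthy object
            }
            // Broken object: drop it and try the next one in the queue.
        }
        // Queue exhausted: create a fresh object on demand.
        create_connection()
    }

    /// Returning an object just pushes it back, healthy or not.
    fn put(&mut self, conn: Connection) {
        if self.idle.len() < self.max_size {
            self.idle.push_back(conn);
        }
    }
}

fn main() {
    let mut pool = Pool::new(4);
    let mut conn = pool.get();
    conn.healthy = false;   // simulate the connection dying while in use
    pool.put(conn);         // the dead object still goes back into the queue
    let fresh = pool.get(); // only now is it discarded and replaced
    assert!(fresh.healthy);
}
```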
I opted for a pool implementation that favors the high-load scenario over the low-latency one. And as a nice side effect the implementation ended up simpler than other solutions. I do plan on adding a way to add some ahead-of-time object creation and recycling. I won't bake it into the core of deadpool but add some extension points so users can pick and choose what strategy they want to use: …
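Purely as an illustration of what such extension points could look like (none of this is an existing deadpool API; the trait name, its methods and the `KeepFilled` strategy are hypothetical):

```rust
/// Hypothetical extension trait: the pool core would call these hooks at
/// fixed points so users can plug in the strategy they want (ahead-of-time
/// creation, recycling on return, ...). Nothing here is an actual deadpool API.
trait PoolHooks<T> {
    /// Called after an object has been returned to the pool.
    fn post_return(&self, _obj: &mut T) {}
    /// Called periodically so a strategy can pre-create objects.
    fn maintain(&self, _current_size: usize, _max_size: usize) {}
}

/// Example strategy: try to keep at least `min_idle` objects around.
struct KeepFilled {
    min_idle: usize,
}

impl<T> PoolHooks<T> for KeepFilled {
    fn maintain(&self, current_size: usize, _max_size: usize) {
        if current_size < self.min_idle {
            // A real implementation would create `min_idle - current_size`
            // objects here and push them back into the pool's queue.
        }
    }
}

fn main() {
    // The pool core would own the hooks as a trait object and invoke them;
    // here we only show that such a strategy can be plugged in.
    let _hooks: Box<dyn PoolHooks<()>> = Box::new(KeepFilled { min_idle: 8 });
}
```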
Got it. I really appreciate the explanation. Thanks!
Hi, folks! Thanks for your contribution!
I have an actix-web service, and deadpool-redis shows strange values for the slots VecDeque size:
As I understand it, the allocated slots VecDeque is dropped inside `return_object`, and new connections to Redis will be allocated again afterwards. Is it possible not to deallocate them and keep the pool's size at max_size elements?