getting rid of a Func allocation #841
Conversation
You learn something new every day: you can't field-assign a lambda that calls an instance method. :)
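A minimal repro of the remark above (class and method names are invented for illustration, not from the PR): a C# field initializer runs before `this` is available, so it cannot reference an instance method, but assigning the delegate in the constructor works and still allocates it only once per instance.

```csharp
using System;
using System.Threading.Tasks;

public sealed class WorkService
{
    // Does not compile:
    // error CS0236: A field initializer cannot reference the non-static
    // field, method, or property 'WorkService.StartWork(string)'
    // private readonly Func<string, Task> _startWork = StartWork;

    private readonly Func<string, Task> _startWork;

    public WorkService()
    {
        // Assigning in the constructor is fine: 'this' exists here, and the
        // delegate is created once per instance instead of once per call.
        _startWork = StartWork;
    }

    private Task StartWork(string key) => Task.CompletedTask;
}
```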
Speaking of, read the "Remarks" here -
True, that's actually a new bug that you've fixed as well. 👍
@lukebakken Taking a quick look at this, I'm not sure that remark is really relevant here. It seems like it would only matter if we expect the same model instance to be starting work pools from different threads. We should definitely avoid adding a lock in this path, but it's not clear to me that we need one.
OK, I'll take your word for it, @bording ... I just traced the code a bit and it made my eyes cross. Only connections are supposed to be thread-safe, so people abusing
Of course, the question is why is a
This was just a quick look, so don't take what I said as gospel! I'm just trying to think of how you'd have the same model instance calling
We do need to protect against multiple models adding work at the same time, and writing to a regular dictionary from multiple threads would require a lock to avoid corrupting the collection.
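A short sketch of the distinction being made here (class and field names are invented): a plain `Dictionary` must have every mutation serialized by a lock, while `ConcurrentDictionary` synchronizes internally.

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;

public sealed class PoolRegistry
{
    private readonly object _gate = new object();
    private readonly Dictionary<string, object> _plain =
        new Dictionary<string, object>();
    private readonly ConcurrentDictionary<string, object> _concurrent =
        new ConcurrentDictionary<string, object>();

    public void AddPlain(string key, object pool)
    {
        // A plain Dictionary is not safe for concurrent writers: two threads
        // inserting (and possibly resizing) at once can corrupt its internal
        // buckets, so every mutation must take the lock.
        lock (_gate)
        {
            _plain[key] = pool;
        }
    }

    public void AddConcurrent(string key, object pool)
    {
        // ConcurrentDictionary handles the synchronization internally,
        // so no external lock is needed on this path.
        _concurrent[key] = pool;
    }
}
```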
You can see why that is confusing... "No need for a lock here, but we do need one in this other place in the same class."
It seems like we could look at just removing a shared work service as a concept and let each model have its own work pool, removing the need for a collection at all.
Where is there a lock already?
The use of a
I'll add the per-model work pools as a future idea.
So we removed some allocations but introduced a lock. @stebet can you please run your profiling workload and share if this is a net positive change in terms of CPU usage and allocations?
The lock is already gone!
Hmm, I wouldn't have classified that as a lock, but I see what you mean. The distinction here is that we need a
In this case, we'd end up creating more than one work pool, only one of which would be stored in the collection. The other one would never have work queued to it, so it doesn't seem like it could interfere in any way. It would await forever for the dequeue to return something.
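The race being described can be sketched as follows (names invented for illustration): `ConcurrentDictionary.GetOrAdd` does not run its value factory under a lock, so under contention the factory can run more than once, but only one result is ever stored, and every caller receives that stored instance.

```csharp
using System;
using System.Collections.Concurrent;

public static class GetOrAddRace
{
    private static readonly ConcurrentDictionary<string, object> Pools =
        new ConcurrentDictionary<string, object>();

    public static object GetPool(string modelKey)
    {
        return Pools.GetOrAdd(modelKey, key =>
        {
            // This factory is not run under a lock: two threads racing on the
            // same key can both reach it. Each allocates a value, GetOrAdd
            // stores whichever wins, and BOTH callers receive the stored one.
            // The losing instance is never published anywhere, so nothing can
            // queue work to it; with no remaining references it is simply
            // garbage collected (assuming it holds no unmanaged resources).
            return new object();
        });
    }
}
```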
Would the other work pool be leaked or, since there would be no reference to it, would it be cleaned up? My guess is it would be leaked, or the resources used by it leaked... emphasis on the word "guess" 🤷
Yeah, I've wondered about this code a few times. I like @bording's suggestion to just have each model keep its own work queue. I'll run the PR through the profiling tool now though.
This does indeed work and results in a reduction. Good spot there @bollhals. It wasn't apparent to me where that sneaky Func<> allocation was coming from.
And on that note, I was thinking for 7.0 that it'd be a good idea to get rid of these dispatchers, and just have dedicated Channel instances on the models that an async task would simply be reading from. That way message deliveries would always run asynchronously, and there would be no need for a collection to keep track of these instances.
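The 7.0 idea above could look something like this (a hypothetical sketch, not the actual implementation; class and member names are invented): each model owns its own `System.Threading.Channels` channel plus one dedicated reader task, so deliveries always run asynchronously and no shared collection of work pools is needed.

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public sealed class ModelWorkLoop
{
    private readonly Channel<Func<Task>> _channel =
        Channel.CreateUnbounded<Func<Task>>(
            new UnboundedChannelOptions { SingleReader = true });

    public ModelWorkLoop()
    {
        // One dedicated reader task per model: no dictionary keyed by model,
        // and therefore no lock or ConcurrentDictionary on the hot path.
        _ = Task.Run(ReadLoopAsync);
    }

    public void Schedule(Func<Task> work) => _channel.Writer.TryWrite(work);

    public void Complete() => _channel.Writer.Complete();

    private async Task ReadLoopAsync()
    {
        // Drains the channel until Complete() is called; each work item
        // runs asynchronously on the reader task.
        await foreach (Func<Task> work in _channel.Reader.ReadAllAsync())
        {
            await work();
        }
    }
}
```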
Thanks @bollhals and everyone else. Interesting discussion! |
Proposed Changes
The AsyncConsumerWorkService was allocating a new Func for each call to Schedule, due to this Roslyn bug (also spotted in the images of #824).
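A before/after sketch of the per-call allocation (names invented, not the PR's exact code): the C# compiler does not cache a delegate created from an instance method group, so passing the method directly converts it to a fresh `Func` on every call, while a delegate cached once in the constructor is reused.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public sealed class WorkScheduler
{
    private readonly ConcurrentDictionary<string, Task> _work =
        new ConcurrentDictionary<string, Task>();

    // Cached once per instance; reused by every Schedule call.
    private readonly Func<string, Task> _startWorkFunc;

    public WorkScheduler() => _startWorkFunc = StartWork;

    public void ScheduleBefore(string key)
    {
        // The method-group conversion allocates a fresh Func<string, Task>
        // delegate on every call.
        _work.GetOrAdd(key, StartWork);
    }

    public void ScheduleAfter(string key)
    {
        // Reuses the delegate cached in the constructor: no per-call allocation.
        _work.GetOrAdd(key, _startWorkFunc);
    }

    private Task StartWork(string key) => Task.CompletedTask;
}
```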
Further Comments
I haven't verified these changes myself, as I just stumbled upon this while looking for some other answer, and I quickly edited it in GitHub only.