
lock contention is extremely high when the request rate is high #129

Open · mikelchai opened this issue Nov 8, 2018 · 3 comments

mikelchai commented Nov 8, 2018

Timer is used by many functions in the library. What we found is that when the request rate is very high, around 30% of CPU time goes to lock contention. Is there a way to optimize this? I know the issue with Timer is fixed in .NET Core, and a HashedWheelTimer is implemented in DotNetty.
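For context on why this shows up as a single hot lock: on .NET Framework, System.Threading.Timer instances are all serviced from one global timer queue guarded by a single lock, so high-rate create/change/dispose calls contend on it (.NET Core 3.0+ partitions timers across multiple queues, which is the fix referenced above). Below is a minimal repro sketch, not the library's actual code, assuming the hot path arms a short-lived per-operation timeout timer:

```csharp
// Minimal sketch: each simulated request arms a short-lived timeout timer and then
// cancels it. On .NET Framework, every Timer add/remove goes through one global
// timer-queue lock, so at high request rates a large share of CPU time ends up in
// lock contention.
using System;
using System.Threading;
using System.Threading.Tasks;

class TimerContentionRepro
{
    static void Main()
    {
        const int workers = 32;
        const int requestsPerWorker = 200_000;

        Parallel.For(0, workers, _ =>
        {
            for (int i = 0; i < requestsPerWorker; i++)
            {
                // Arm a 30-second "request timeout" ...
                using (var timeout = new Timer(s => { /* timeout handling */ },
                                               null, 30_000, Timeout.Infinite))
                {
                    // ... simulated request work would complete here ...
                } // ... then Dispose removes the timer; both add and remove take the global lock.
            }
        });

        Console.WriteLine("done");
    }
}
```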

mikelchai changed the title from "contention lock is extremely high when the request rate is high" to "lock contention is extremely high when the request rate is high" on Nov 8, 2018
xinchen10 added this to the 3.0.0 milestone on Jun 7, 2019

miso-ms commented Nov 5, 2021

Are there any updates on this issue? In some of our scenarios we've observed similar lock contention from the AMQP producer when targeting .NET Framework 4.7.2.

[profiler screenshot showing lock contention]

Event Hubs SDK version: 5.4.1
Target platform: .NET Framework 4.7.2
Target Event Hubs: >500 partitions, high event rate, low latency


miso-ms commented Nov 12, 2021

Update: we learned about a .NET Framework patch that fixes the problematic timer call stack, and applying it gave some small improvements.

We are still observing (and investigating) lock contention in the amqp WriteBuffer call in this scenario, where a single connection is writing to many partitions (>500):
[profiler screenshot showing WriteBuffer lock contention]

xinchen10 (Member) commented

The I/O operations on a connection have to be synchronized. There may be some improvements we can make here, but can you create more connections and distribute the partitions among them? I assume you have 500 senders to the partitions, but they all share the same connection. If most of them are busy, the connection will eventually become the bottleneck, even if we reduce the lock contention in the I/O operations.
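For illustration, a hedged sketch of this workaround at the Event Hubs SDK level (the SDK referenced in the earlier comments), assuming Azure.Messaging.EventHubs: each EventHubProducerClient constructed this way maintains its own AMQP connection, so routing partitions across a handful of clients spreads the I/O, and its synchronization, across connections. The connectionCount, the modulo routing, and the SendToPartitionAsync helper are illustrative choices, not SDK APIs.

```csharp
// Sketch only: spread >500 partition senders across a few producer clients, each
// with its own AMQP connection, instead of funneling everything through one.
using System;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

class PartitionedProducers : IAsyncDisposable
{
    private readonly EventHubProducerClient[] _producers;

    public PartitionedProducers(string connectionString, string eventHubName, int connectionCount)
    {
        _producers = new EventHubProducerClient[connectionCount];
        for (int i = 0; i < connectionCount; i++)
        {
            // Each client created from the connection string owns a separate connection.
            _producers[i] = new EventHubProducerClient(connectionString, eventHubName);
        }
    }

    public Task SendToPartitionAsync(string partitionId, EventData eventData)
    {
        // Pin each partition to one producer (and therefore one connection).
        int index = Math.Abs(partitionId.GetHashCode() % _producers.Length);
        return _producers[index].SendAsync(
            new[] { eventData },
            new SendEventOptions { PartitionId = partitionId });
    }

    public async ValueTask DisposeAsync()
    {
        foreach (var producer in _producers)
        {
            await producer.DisposeAsync();
        }
    }
}
```

With, say, connectionCount = 8, roughly 60-70 partitions share each connection instead of all 500+ sharing one, so the per-connection write lock is contended far less.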
