Performance issues after upgrading from 1.0.4 -> 1.0.8 #2857
Comments
I guess you are scheduling a lot of work on the computation scheduler. We changed the tracking of tasks from synchronized to j.u.c. Lock because it gives better throughput according to our JMH benchmarks. It appears task addition takes longer while holding the lock, and the unsubscribe part spins and parks; most of the time, spinning should be enough. Did you measure the performance degradation?
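For illustration only, here is a minimal sketch of the kind of change being described: guarding a tracked-task list with a java.util.concurrent.locks.ReentrantLock rather than synchronized blocks. The class and field names are hypothetical and this is not the actual SubscriptionList code; the spin-and-park behavior of the real unsubscribe path mentioned above is not shown either.

```java
import java.util.LinkedList;
import java.util.List;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical illustration of lock-based task tracking; not RxJava's SubscriptionList.
final class TrackedTasks {
    private final ReentrantLock lock = new ReentrantLock();
    private final List<Runnable> tasks = new LinkedList<>();
    private volatile boolean unsubscribed;

    void add(Runnable task) {
        lock.lock();
        try {
            if (!unsubscribed) {
                // Time spent here is time every other scheduling thread must wait,
                // which is where contention shows up when submission rates are high.
                tasks.add(task);
            }
        } finally {
            lock.unlock();
        }
    }

    void remove(Runnable task) {
        lock.lock();
        try {
            tasks.remove(task);
        } finally {
            lock.unlock();
        }
    }
}
```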
We do not have formal performance benchmarks for our jobs. We autoscale our cluster based on resource utilization, and we saw our cluster size go up by about 30% to 100%, depending on the workload.
Sounds like your data rate reaches a critical frequency where the submission of new values in observeOn overlaps its drain and thus causes extra contention. The change from 1.0.4 to 1.0.8 consists of two parts: the lock in SubscriptionList, and the use of SubscriptionList instead of CompositeSubscription for non-timed tasks inside the computation scheduler. What Java version are you running, and can you name the virtualization environment?
Hey David, we are on Java 8, running in a Mesos container inside an AWS instance. Thanks!
Thanks. Not sure what the cause is, but you could try shifting the contention by batching before the observeOn and unbatching after it: source.batch(4).observeOn(Scheduler.computation()).concatMap(v -> Observable.from(v))
That would be buffer rather than batch, right? I was going to suggest the same.
@davidmoten Sure.
Hey David,
This is why I suggested using concatMap to flatten the batches again after the observeOn so your subsequent computation chain doesn't need to change. |
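For reference, a minimal runnable sketch of this workaround using RxJava 1.x operator names (buffer rather than batch, and Schedulers.computation()); the buffer size of 4 and the range source are just placeholder example values:

```java
import rx.Observable;
import rx.schedulers.Schedulers;

public class BufferBeforeObserveOn {
    public static void main(String[] args) throws InterruptedException {
        Observable.range(1, 16)
            .buffer(4)                                // group values into lists of 4 on the producer side
            .observeOn(Schedulers.computation())      // one queue hand-off per list instead of per value
            .concatMap(list -> Observable.from(list)) // flatten back to individual values, preserving order
            .subscribe(v -> System.out.println("got " + v));

        Thread.sleep(500); // give the computation thread time to finish in this demo
    }
}
```

Because the batching and flattening happen on either side of the observeOn, operators downstream of the concatMap still see plain values, so the rest of the chain does not need to change.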
Oops, hit the send button too soon. As I was saying, adding a batch or buffer would change our public API from T to List<T>. You mentioned earlier that you saw significant throughput improvement in the JMH benchmarks. Thanks!
Ah, OK, I see now. Let me try out your suggestion. Thanks!
There are 2 PRs that did perf enhancements:
I've run into this performance degradation too, and indeed, at some concurrency levels (4+ in my case) the degradation was enormous. Could you check whether PR #2912 fixes your case?
Hey David,
Hey David, looks like the performance is back to what it used to be with release 1.0.4 after we upgraded to RxJava 1.0.10! Looks like your fixes worked.
@neerajrj Hi, and thanks for confirming.
Hello,
We are seeing high levels of lock contention after our upgrade to RxJava 1.0.8.
Apologies for not having a unit test to reproduce this; we have a fairly complex system and are having trouble figuring out which areas to dig into to find a reproducible case.
This is a paste from a JMC (Java Mission Control) view. As far as we know, nothing should be getting unsubscribed in our application.
We would appreciate it if anyone could shed some light on what kind of behavior would trigger the stack below.