
Performance benefits? #419

Closed
jorenham opened this issue May 9, 2022 · 8 comments

Comments

@jorenham

jorenham commented May 9, 2022

So I've been testing the performance a bit, varying e.g.

  • 1:1, 16:1, 64:1 producer : consumer ratio
  • normal/lifo/priority queue etc
  • asyncio/uvloop event loop
  • async -> async, async -> sync, sync -> sync
  • (probably more)

I found that janus queues are ~5x slower in sync->sync, ~9x slower in sync->async, and ~15x slower in async->async. This is pretty much consistent across all parameter sets.
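For reference, a comparison like this can be set up with only the standard library. The sketch below is my reconstruction, not the original benchmark code: it times one producer thread and one consumer thread moving items through interchangeable queue classes, so alternatives (including a janus sync_q) can be swapped in.

```python
# Hypothetical reconstruction of a sync -> sync throughput test; the queue
# object is a parameter so different implementations (queue.Queue,
# queue.LifoQueue, queue.PriorityQueue, or janus.Queue().sync_q) can be
# swapped in and compared.
import queue
import threading
import time

def bench(q, n=100_000):
    """Time one producer thread and one consumer thread moving n items."""
    def producer():
        for i in range(n):
            q.put(i)

    def consumer():
        for _ in range(n):
            q.get()

    threads = [threading.Thread(target=producer),
               threading.Thread(target=consumer)]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.monotonic() - start

for cls in (queue.Queue, queue.LifoQueue, queue.PriorityQueue):
    print(f"{cls.__name__}: {bench(cls()):.3f}s")
```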

This confirmed my suspicion that the performance gain of parallel computation is often less than the cost of using e.g. threading.Lock a lot (the GIL certainly doesn't help either).

Right now, I can imagine that many users have incorrect expectations of janus. To avoid this, you could add an example that shows how janus can outperform single-threaded asyncio, by employing multiple threads. Additionally, a caveat about janus' performance would be helpful.

@willstott101
Contributor

Interesting, can you clarify what you're comparing to in each of these cases?

A cursory glance over Janus' code (which I haven't really looked at before) shows that there'd be some pretty easy performance gains from splitting Janus up into different queue types depending on the sender/receiver thread types. That'd be harder to both use and maintain, though, so it might be a hard sell.

@jettify
Member

jettify commented Mar 18, 2023

This library is about sync/async and async/sync communication, and there is a price to pay for the extra synchronization. Use the appropriate queue for your case.

(Also, without benchmarks it is hard to discuss this issue in the first place; it is not clear what you are measuring.)

@jettify jettify closed this as completed Mar 18, 2023
@jorenham
Author

I fully agree that it's very difficult to correctly bridge async <-> sync code, and that Janus attempts to solve this in the context of producers and consumers, with a queue as the medium.

The producer/consumer pattern is often used for performance reasons, e.g. map-reduce, wsgi, websockets, etc.

Janus is presented as a generic solution for bridging async/sync code using the producer/consumer pattern, and provides no specific use-cases, or one of those "what is this project / what this project is not" sections.

This makes it easy to think it's a good idea to use Janus in a performance-sensitive project.

But my quick-and-dirty benchmarks showed that, in a typical producer / consumer context, the Janus queue is significantly slower than conventional queues, likely overshadowing the performance that can be gained from employing the producer-consumer pattern altogether.

So, considering the number of users, I believe it is very important to describe in the readme:

  • the precise problem that Janus aims to solve,
  • a realistic situation where using Janus is better (cleaner code, no significant performance impact) than the non-Janus alternative,
  • when it is not a good idea, (e.g. high-throughput performance bottleneck), and
  • transparent benchmarks.

@jettify
Member

jettify commented Mar 19, 2023

The library was created exactly for the reason stated in the readme:

Mixed sync-async queue, supposed to be used for communicating between classic synchronous (threaded) code and asynchronous (in terms of asyncio) one.

I do not think we ever claimed any performance gains, or anything like that. I agree that the docs could be better, and we are happy to accept any contributions there.

@x42005e1f
Contributor

I created Culsans, which should be more suitable for performance-sensitive applications. I would be glad if you, @jorenham, could test the performance of my library on the same tests and tell me if it is acceptable to you. My queues also behave fairly: if you replace Queue() in the example with culsans.Queue().sync_q, the numbers in the output will alternate, which eliminates resource starvation.

However, I, and others, would be interested to know exactly what you are measuring, because if you are simply comparing async-aware queues with synchronous queues that block the event loop, such tests are meaningless and irrelevant to this project. But if you are comparing with naive implementations that use event loop methods, which are very popular on StackOverflow (and bad, because they have pitfalls), then such a comparison makes sense.
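For context, the naive StackOverflow-style bridge mentioned here usually looks something like the sketch below (an illustration, not code from any of the projects discussed): a worker thread feeds an asyncio.Queue by scheduling put_nowait on the event-loop thread. It works, but it shows one of the pitfalls alluded to: there is no backpressure, since the producer never blocks.

```python
# Illustrative sketch of the naive bridge pattern: a sync thread hands items
# to an async consumer via loop.call_soon_threadsafe. Note the pitfall:
# put_nowait never blocks, so a fast producer can grow the queue without bound.
import asyncio
import threading

async def main():
    loop = asyncio.get_running_loop()
    q = asyncio.Queue()

    def sync_producer():
        for i in range(5):
            # put_nowait must run on the event-loop thread, hence the hop
            loop.call_soon_threadsafe(q.put_nowait, i)

    t = threading.Thread(target=sync_producer)
    t.start()
    results = [await q.get() for _ in range(5)]
    t.join()
    return results

print(asyncio.run(main()))  # [0, 1, 2, 3, 4]
```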

@jorenham
Author

jorenham commented Nov 8, 2024

I believe that I was simply trying to figure out if janus could be a better alternative to the queue I was using at that moment, as I was dealing with hundreds of high-throughput websocket streams that needed to be synced and inserted in a database.

It was quite a while back, and I wasn't able to find the benchmark code I used back then. But if I remember correctly, the tests were very simple, and used a simple pub/sub pattern:

  • The "producer" function filled up the queue as fast as it could
  • The "consumer" function removed items from the queue as fast as it could

So as I explained in the issue here, I tested this with different producer-to-consumer ratios, and reported the differences between the different queue implementations.

I'm not involved in that project anymore for now, so hopefully you'll be able to replicate my results with this.
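Since the original benchmark code is lost, a plausible stdlib-only reconstruction of that pub/sub test might look like this (names and parameters are my own invention; other queue implementations can be swapped in to compare):

```python
# Hypothetical reconstruction of the described benchmark: n_producers threads
# fill the queue as fast as they can, n_consumers drain it, and the whole run
# is timed. A None sentinel per consumer signals shutdown.
import queue
import threading
import time

def bench(q, n_producers, n_consumers, items_per_producer=2_000):
    def producer():
        for i in range(items_per_producer):
            q.put(i)

    def consumer():
        while q.get() is not None:
            pass  # drain as fast as possible

    producers = [threading.Thread(target=producer) for _ in range(n_producers)]
    consumers = [threading.Thread(target=consumer) for _ in range(n_consumers)]
    start = time.monotonic()
    for t in producers + consumers:
        t.start()
    for t in producers:
        t.join()
    for _ in range(n_consumers):
        q.put(None)  # one stop sentinel per consumer
    for t in consumers:
        t.join()
    return time.monotonic() - start

for ratio in (1, 16, 64):  # the producer : consumer ratios from this issue
    print(f"{ratio}:1 -> {bench(queue.Queue(), ratio, 1):.3f}s")
```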

@x42005e1f
Contributor

Well, thank you for the information. From the description, it sounds like you were testing mostly non-blocking methods (whether explicitly or implicitly), since there is no active interaction between consumers and producers in this scenario. Janus is really bad in these tests, but Culsans is not, so I can assume I have solved this issue.

@x42005e1f
Contributor

x42005e1f commented Nov 8, 2024

Since such tests actually test the speed of put_nowait() and get_nowait() methods, let's represent the pure test as a regular loop with put-get in one thread. Below is the corresponding benchmark for the async -> async case.

janus & culsans benchmark
import asyncio
import time

import janus  # import culsans as janus


async def main():
    queue = janus.Queue()

    # Bind the no-wait methods once so attribute lookups don't skew the timing.
    put = queue.async_q.put_nowait
    get = queue.async_q.get_nowait

    start = time.monotonic()

    # Pure put/get loop: measures only the no-wait fast path.
    for _ in range(100000):
        put(42)
        get()

    print(time.monotonic() - start)

    queue.close()


asyncio.run(main())

I took CPython 3.10 to match the year of this issue and ran this benchmark on it. With Janus, this test runs for me in almost 15 seconds and also slows down the closing of the event loop. With Culsans it runs in 0.15 seconds, and the event loop closes without delay. These numbers are real; I only rounded them to the second significant digit. And on PyPy the difference is 10 times bigger (Culsans is faster than Janus by almost a thousand times).

Update: since 4a57895, no-wait tests do not call notification methods, so in those tests, Janus performance became comparable to Culsans performance. But Janus can still perform badly on blocking calls, so this issue is only half solved.
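To make that distinction concrete, here is what the blocking counterpart of the loop above can look like with the stdlib asyncio.Queue (an illustration only; my assumption is that janus or culsans queues would be exercised the same way): a maxsize-1 queue forces most put() and get() calls through the waiter machinery rather than the no-wait fast path, which is where the remaining overhead lives.

```python
# Sketch of a blocking-path benchmark: with maxsize=1, the producer's put()
# and the consumer's get() repeatedly suspend and wake each other, so the
# waiter/notification machinery is what gets measured.
import asyncio
import time

async def main(n=10_000):
    q = asyncio.Queue(maxsize=1)

    async def producer():
        for i in range(n):
            await q.put(i)

    async def consumer():
        for _ in range(n):
            await q.get()

    start = time.monotonic()
    await asyncio.gather(producer(), consumer())
    return time.monotonic() - start

print(f"{asyncio.run(main()):.3f}s")
```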
