
Performance overhead induced by tapir when using websockets #3048

Closed
kamilkloch opened this issue Jul 19, 2023 · 6 comments

Comments

kamilkloch (Contributor) commented Jul 19, 2023

Hello everyone,

I would like to bring up the topic of the performance overhead induced by the tapir interpreter.

Spoiler alert: the performance penalty is quite significant, much higher than our initial expectations. We began by measuring performance for REST requests and, using async-profiler, observed a substantial overhead in routing the requests. Additionally, we conducted a websocket test with a minimal server setup (a single route), and even then, http4s wrapped in tapir performed significantly worse than the manually created http4s server. You can find the results here: https://github.com/kamilkloch/websocket-benchmark. To view the overhead, please refer to the async-profiler flame graphs.
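
[Editorial note] For context, a minimal sketch of the kind of comparison described above, assuming a single echo-style websocket route. The object and value names, the route path and the echo logic are illustrative and are not taken from the benchmark repository; the tapir/http4s calls shown (`webSocketBody`, `toWebSocketRoutes`, `WebSocketBuilder2#build`) are the standard ones from tapir 1.x and http4s 0.23.

```scala
import cats.effect.IO
import fs2.{Pipe, Stream}
import org.http4s.HttpRoutes
import org.http4s.dsl.io._
import org.http4s.server.websocket.WebSocketBuilder2
import org.http4s.websocket.WebSocketFrame
import sttp.capabilities.fs2.Fs2Streams
import sttp.tapir._
import sttp.tapir.server.http4s.Http4sServerInterpreter

object WsBenchmarkSketch {
  // Plain http4s: a single "/ws" route that echoes websocket frames back unchanged.
  def vanillaRoutes(wsb: WebSocketBuilder2[IO]): HttpRoutes[IO] = {
    val echo: Pipe[IO, WebSocketFrame, WebSocketFrame] = identity
    HttpRoutes.of[IO] { case GET -> Root / "ws" => wsb.build(echo) }
  }

  // The same route described as a tapir endpoint and interpreted to http4s,
  // i.e. "http4s wrapped in tapir".
  val wsEndpoint =
    endpoint.get
      .in("ws")
      .out(webSocketBody[String, CodecFormat.TextPlain, String, CodecFormat.TextPlain](Fs2Streams[IO]))

  def tapirRoutes(wsb: WebSocketBuilder2[IO]): HttpRoutes[IO] =
    Http4sServerInterpreter[IO]()
      .toWebSocketRoutes(wsEndpoint.serverLogicSuccess[IO](_ => IO.pure((in: Stream[IO, String]) => in)))
      .apply(wsb)
}
```

Both functions produce `HttpRoutes[IO]` from a `WebSocketBuilder2[IO]`, so they can be mounted on the same server and load-tested against each other.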

adamw (Member) commented Jul 19, 2023

For the record, there's a similar issue - #2636 - with a different experience report, but probably your usage patterns and endpoints are different.

Are the performance problems you encountered strictly connected to web sockets, or are they more general?

adamw (Member) commented Jul 19, 2023

Do you have any clues or ideas as to what might be the causes? Looking at the flamegraphs it's mostly cats's run-loop, so maybe we could optimize our usage of that part.

kamilkloch (Contributor, Author) commented:

We could run similar tests for REST if that would help spot the bottleneck. I suspect a comparable performance penalty; at least, that was the picture suggested by the async-profiler results and the CPU usage spikes.

> Do you have any clues or ideas as to what might be the causes? Looking at the flamegraphs it's mostly cats's run-loop, so maybe we could optimize our usage of that part.

I wouldn't know :/ For REST I suspected that finding a matching route in the tapir interpreter was the cause of the overhead, but the overhead persists in the websocket case with a single route.

adamw (Member) commented Jul 20, 2023

I took a brief look at the async-profiler results, but didn't see any immediate culprits - do you have any clue as to what we might be doing wrong?

I think it's quite probable that results for HTTP endpoints would be different, at least from my experience - I did some performance tests using the akka interpreter vs vanilla akka, and while there was a performance overhead, it was far from dramatic. That would also be confirmed by the issue I linked.

Also, finding a matching endpoint should be at least as fast as doing it "natively": when you interpret multiple endpoints at once, a special filter is created for fast path matching.
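
[Editorial note] For illustration, a minimal sketch of interpreting several endpoints in a single call, which is the shape this remark refers to. The two endpoints (`hello`, `health`) are made up; the sketch only shows passing a list to the interpreter, not the internal filtering mechanism itself.

```scala
import cats.effect.IO
import org.http4s.HttpRoutes
import sttp.tapir._
import sttp.tapir.server.http4s.Http4sServerInterpreter

object MultiEndpointSketch {
  // Two made-up endpoints; what matters is handing them to the interpreter as
  // one list, rather than interpreting each endpoint separately and
  // concatenating the resulting HttpRoutes.
  val hello  = endpoint.get.in("hello").out(stringBody).serverLogicSuccess[IO](_ => IO.pure("hello"))
  val health = endpoint.get.in("health").out(stringBody).serverLogicSuccess[IO](_ => IO.pure("ok"))

  val routes: HttpRoutes[IO] = Http4sServerInterpreter[IO]().toRoutes(List(hello, health))
}
```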

So I'll change the title of the issue to mention websockets for now: first, because that's what the test project demonstrates (thanks, by the way, for providing a test case :) ), and second, because that differentiates the scope from #2636.

adamw changed the title from "Performance overhead induced by tapir" to "Performance overhead induced by tapir when using websockets" on Jul 20, 2023
kamilkloch (Contributor, Author) commented Jul 26, 2023

I am happy to share that #3068 seems to have helped: tapir-blaze is now head-to-head with http4s-blaze in the websocket test. See https://github.com/kamilkloch/websocket-benchmark#benchmarks.

There is still overhead in the endpoint interpreter for the REST part, but that is a separate matter.

adamw (Member) commented Jul 26, 2023

So that was it ... great work @kamilkloch, thanks for the PR! :)
