Explore the use of StackWalker in REST Client #42508
Comments
/cc @cescoffier (rest-client)
Gotcha, thanks.
We can also look into caching the part of the stacktrace we care about.
This is done by utilizing the StackWalker API and limiting the number of frames captured. Closes: quarkusio#42508
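For context, here is a minimal sketch of the idea (not the actual PR code; the frame cap and the names are illustrative): StackWalker walks frames lazily, so you can capture only the first few frames instead of materializing the thread's entire stack.

```java
// Illustrative sketch only; MAX_FRAMES and the class name are hypothetical.
public class LimitedStackCapture {

    private static final int MAX_FRAMES = 8; // hypothetical cap on how many frames to keep

    static StackTraceElement[] captureLimitedStack() {
        return StackWalker.getInstance().walk(frames -> frames
                .limit(MAX_FRAMES)                                // stop walking after a few frames
                .map(StackWalker.StackFrame::toStackTraceElement) // convert to the familiar representation
                .toArray(StackTraceElement[]::new));
    }
}
```

The resulting array can then be attached to the exception that is eventually thrown (e.g. via `Throwable.setStackTrace`), which keeps most of the debugging value while avoiding the cost of a full `Thread.getStackTrace()`.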
@johnaohara would you be able to test #42544? The reason I ask is that I wasn't able to reproduce the initial results, so I don't have a baseline.
Hi @geoand, I did not have a chance last week. It is a public holiday today; I can verify the patch tomorrow.
Thanks!
@johnaohara when you get a chance to look at #42544, I'd be happy to hear about the results :)
@geoand checking now
🙏🏽
@geoand quick update on the testing so far. I tried with your PR branch, but saw some extra call stacks that I was not expecting, so I backported to 3.12.2 (this was the version that produced the above baseline), and I am still seeing the unexpected stack. I am looking at the test now to see why these stacks are showing in the flamegraph.
@johnaohara are you talking about the ones from …? If so, I reported it here: #42858 and it should be fixed in current.
@gsmet yeah, that is what I was talking about. But I cherry-picked onto 3.12.2, so I expected it to disappear. I just realized what I did wrong; I will try running the test again.
This particular issue is in all 3.13.x and in 3.14.1. It should be fixed in 3.14.2.
@geoand sorry about the noise yesterday. The CPU time for processing the stack frames (in my particular test) has dropped approximately 10% with the new implementation (691 CPU samples -> 623 CPU samples). Application code went from spending 21.2% of CPU time processing the stack to 19.2% with the change. Although it is improved, there is still considerable overhead on each invocation. (Before/after flamegraphs attached.)
Thanks a lot @johnaohara!
Indeed... I wonder if we should change the default to not capture the stack... @cescoffier @gsmet WDYT?
That seems like the obvious setting to change; I don't know if the stack traces were intended to be used in a prod deployment or just for development.
They were intended for both.
How confusing is the stack trace without it? I wonder if we should have the following behavior:
Not sure if it's feasible, but that might help?
It's almost meaningless...
Right, I thought of that one as well, but I am still not convinced it's a good idea.
We can do that, yeah. Actually, we have the class and the method, so although there would be no real stacktrace, at least you would know which method is at fault...
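As a rough illustration of that point (a hypothetical helper, not the actual Quarkus mechanism, which already has the invoked client method available): even without capturing a stack, a single frame is enough to report which class and method are at fault.

```java
// Hypothetical helper, for illustration only.
public class CallerInfo {

    static String callerDescription() {
        return StackWalker.getInstance(StackWalker.Option.RETAIN_CLASS_REFERENCE)
                .walk(frames -> frames
                        .skip(1) // skip this helper's own frame
                        .findFirst()
                        .map(f -> f.getDeclaringClass().getName() + "#" + f.getMethodName())
                        .orElse("<unknown caller>"));
    }
}
```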
Yeah, better than nothing and better than slowing down the whole app. Now, that's not the only place where our stack traces are borderline useless, unfortunately :/
True, so if @cescoffier is also on board, I can make the change.
Actually, we already print the method that causes the failure, so nothing needs to be done on that front.
What's the gist of the work that needs to be done?
We would only change the default or the property that controls whether or not we capture the real stack.
Ah ok, makes sense.
I'll do it tomorrow.
#43037 changes the default
I'm going to close this as
Description
Currently in the REST Client, when quarkus.rest-client.capture-stacktrace is set to true (which is the default), we capture the entire stacktrace of the Thread in order to enhance the debugging experience when something goes wrong. @johnaohara has found, however, that in some cases up to 20% of the user's application CPU time is spent capturing the stack trace, which is not only a waste of CPU cycles but also forces the JVM to safepoint the mutating threads during every call to the API.
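For anyone affected in the meantime, the capture can already be turned off through configuration; a minimal application.properties example (the property name is taken from this issue, and true was the default at the time of writing):

```properties
# Disable capturing the calling thread's stack trace for REST Client error reporting
quarkus.rest-client.capture-stacktrace=false
```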
Implementation ideas
No response