
HTTP2 Plugin doesn't perform over large tests #24

Closed
ahlongas07 opened this issue Feb 11, 2022 · 13 comments
@ahlongas07

Hello, I'm currently testing an API using this plugin. Our goal is to reach 5000 VUs, but when the injector reaches 300 VUs it starts to face problems due to concurrency. Reviewing the jmeter.log I saw these errors:

QueuedThreadPool: QueuedThreadPool[HttpClient@ccf2232]@5c7b55fd{STOPPING,8<=0<=200,i=7,r=-1,q=0}[NO_TRY] Couldn't stop Thread[HttpClient@ccf2232-152214,5,main]

o.e.j.i.ManagedSelector: Could not create EndPoint java.nio.channels.SocketChannel[closed]: org.eclipse.jetty.io.RuntimeIOException: javax.net.ssl.SSLHandshakeException

To rule out an injector problem, I repeated the test using the native JMeter HTTP 1.1 sampler and followed the execution with VisualVM; the injector works properly and reaches up to 15000 req/sec.

To rule out a problem in the application, I repeated the tests using K6, which has native HTTP2 support, and the behavior was the same as JMeter with the native HTTP 1.1 sampler.

My understanding of the plugin is that Jetty works as a proxy and performs the requests, so I think it gets flooded trying to process the requests coming from the threads.

My environment is JMeter 5.4.3 and Java 17 running on a c5.12xlarge.

Any advice about the Jetty or ALPN libraries? Is there a way to tweak Jetty?
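
For reference, here is a minimal sketch of what "tweaking Jetty" could look like with the Jetty 11 client API. This is an illustration only, not the plugin's actual code path; the class name and values are hypothetical examples:

```java
// Hypothetical tuning sketch assuming the Jetty 11 HttpClient API; not the plugin's code.
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class JettyTuningSketch {
    public static HttpClient buildTunedClient() throws Exception {
        // Replace Jetty's default executor (8..200 threads, matching the pool in the log above)
        // with a larger, named pool.
        QueuedThreadPool pool = new QueuedThreadPool(400, 8);
        pool.setName("http2-load-client");

        HttpClient client = new HttpClient(new HttpClientTransportOverHTTP2(new HTTP2Client()));
        client.setExecutor(pool);
        // Raise per-destination limits so requests are not queued behind a small connection pool.
        client.setMaxConnectionsPerDestination(1024);
        client.setMaxRequestsQueuedPerDestination(4096);
        client.start();
        return client;
    }
}
```

Whether the plugin exposes these settings (for example via JMeter properties) is a separate question for the maintainers.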

@RicardoPoleo
Contributor

Hello @ahlongas07,

Thanks for taking the time to report this behaviour/inquiry and for doing some testing with alternatives to give us a comparison point (more issues like this, please 🎉).

I tested our implementation with some of our platforms and couldn't replicate the issue (multiple users, steps, embedded resources, and so on), but maybe it has to do with the configuration (maybe it's not Jetty).

Would you like to give us some information about the script so we can try it out? If the data is sensitive (and you don't want to share it with the community), you can also reach us by email ([email protected]).

Let us know what you think,

Regards

@syampol

syampol commented Sep 19, 2022

Hi,
I've observed a similar issue to the one originally posted.
Depending on the machine I used, the JVM kind of got stuck at ~150-300 rps (400-500 open connections; 1000-1500 threads).

Some observations:

  • JMeter opens a new connection on each new iteration, whether within the thread group or a Loop/While controller.
  • JMeter sends two 'User-Agent' headers: one manually defined and another 'Jetty/11.0.6'.
  • CPU usage is no more than 20-25%.
  • No blocked threads observed.
  • No GC issues observed.

Here is how the report for 1.5K threads looks: [screenshot of the report]
The main response degradation is observed on the requests where a new iteration started (i.e. a new connection was opened).

It's not the server's throughput limit, as I was able to reach much higher rps using several load generators.
However, there could be some throttling mechanism limiting traffic from a single client; I can't tell if this is true. At least it is definitely not IP-based, as I was able to get much higher RPS and open-connection counts with K6 from the same load generator.

Also, here is how the profiler looks when I start getting really high response times (up to several seconds and more): [screenshot of the profiler]
Only ~13% of CPU is spent on 'real' work; the vast majority is related to some native calls (overall CPU at that level is still no more than 20-25%).
Active thread count at this level is ~4-6K.
Xmx - 18 GB out of 23 GB total.
Xss - 512 KB
I also tried reducing Xmx to 8 GB and removing the Xss limit; this didn't change the situation much.

Note: K6 uses a single connection per VU and keeps using it even on a new thread iteration, while JMeter opens a new connection for each thread or loop iteration. (The only option I found for JMeter to reuse the connection is to use the default "Thread Group" and check 'Same user on each iteration' for it, but I can't run the test with such a configuration to see if this would solve the throughput issue itself.)
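
To make the comparison concrete, this is roughly what "one multiplexed connection per VU" means at the Jetty level. A minimal sketch assuming the plain Jetty 11 high-level client (not the plugin's internals); https://example.com/api is a placeholder endpoint:

```java
// Minimal sketch assuming Jetty 11's HttpClient over the HTTP/2 transport; not the plugin's code.
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;

public class Http2ReuseSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient(new HttpClientTransportOverHTTP2(new HTTP2Client()));
        client.start();
        try {
            // Because the client (and its connection pool) outlives the loop, these
            // "iterations" are multiplexed over the same HTTP/2 connection instead of
            // opening a new one each time.
            for (int i = 0; i < 3; i++) {
                ContentResponse response = client.GET("https://example.com/api"); // placeholder URL
                System.out.println("iteration " + i + " -> status " + response.getStatus());
            }
        } finally {
            client.stop(); // stopping the client is what actually closes the connection
        }
    }
}
```

If the sampler instead creates and stops a client per iteration, every iteration pays the TCP + TLS + HTTP/2 handshake again, which matches the degradation seen at iteration boundaries.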

@syampol

syampol commented Sep 20, 2022

Depending on which Thread Group is used, the problem may or may not occur. [screenshot]
With the default Thread Group and 'Same user on each iteration' selected, only one connection is opened per VU (main thread).
Otherwise (at least for the 'Ultimate Thread Group'), a new connection and a bunch of httpClient@... threads (10, if I'm not mistaken) are opened on each new iteration (within the Thread Group, a Loop Controller, a While Controller, etc.).
This additional complexity results in a higher amount of thread/connection creation. In our case Thread.start() took 60-70% of JMeter's processSampler. That results in really high response times and thus lower throughput, all with quite low CPU utilization.

Disabling the above-mentioned closeConnections() improves overall throughput a lot.

One more open question is the number of opened httpClient@... threads per VU thread. According to the profiler, those are mainly idle. I suppose this is some Jetty feature, but I don't see a reason to have 10 open threads per user.
In my case, 1.5K VUs results in 15K active threads on the JVM. 15K is quite a high number, at least from the memory usage perspective, so it would be nice to reduce that as well, if possible.
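
To illustrate the thread growth, here is a small sketch assuming plain Jetty 11 (not the plugin's code): every started HttpClient brings up its own executor and selector threads, so creating one per iteration or per VU multiplies the JVM thread count, while sharing a single started client keeps it roughly constant.

```java
// Sketch only, assuming Jetty 11; it just makes the per-client thread cost visible.
import org.eclipse.jetty.client.HttpClient;

public class ThreadGrowthSketch {
    public static void main(String[] args) throws Exception {
        for (int i = 0; i < 3; i++) {
            HttpClient perIteration = new HttpClient();
            perIteration.start(); // each start() spawns this client's own pool/selector threads
            System.out.println("after client " + i + ": " + Thread.activeCount() + " JVM threads");
            perIteration.stop();  // without an explicit stop, those threads linger
        }
    }
}
```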

@RicardoPoleo added the bug label on Sep 28, 2022
@3dgiordano
Contributor

Hi @ahlongas07 and @syampol

I made a pre-release with some changes in the plugin.
https://github.com/Blazemeter/jmeter-http2-plugin/releases/tag/v2.0.2

This pre-release solves some problems with connection, thread, and memory handling.
This version should work much better than the previous one.

Before the final release, I need your feedback.
Your analysis and feedback are very useful to us.
Thanks.

@3dgiordano self-assigned this on Nov 15, 2022
@ahlongas07
Author

ahlongas07 commented Nov 16, 2022 via email

@3dgiordano
Contributor

Thanks @ahlongas07 and @syampol for all the provided information.

The final release 2.0.2 is here: https://github.com/Blazemeter/jmeter-http2-plugin/releases/tag/v2.0.2
Within hours it will be available in the Plugin Manager.

I understand that @ahlongas07 will not be able to test the new version.
We leave the release documented here while awaiting a response from @syampol.

Regards

@syampol13

Hi @3dgiordano
Sorry, but I'm in about the same situation. I don't have the possibility to play with the new version, as I've switched to another project. Also, it's hard to plan any activity when you are under chaotic power outages...
I will try to check it once I have time.

@frale98

frale98 commented Dec 1, 2022

Hi! We're playing with the pre-release now. A rough comparison shows improvements.

@3dgiordano
Contributor

Thanks @frale98

The final release is already public in the Plugin Manager.
That version has some extra tweaks that the pre-release didn't have.

Thank you very much for sharing that you noticed improvements compared to the previous version.
Any findings you can share with us will be welcome.

@frale98

frale98 commented Dec 1, 2022

Very good news! We'll switch to the latest official release right away and keep you posted on any news (good or bad)!

@3dgiordano
Contributor

Hi, @frale98
Any news with the new version?

@RicardoPoleo
Contributor

Hello everyone,

We see no recent activity on this issue, so we are assuming all is good. I'll be closing the issue, but if you need more assistance regarding this behavior, please re-open it.

Once again, thanks for taking the time.

@frale98

frale98 commented Jan 24, 2023

Hi all, and apologies for the late reply. The plugin was extensively tested in our environment with no issues. We used a maximum of 100 threads, reaching a maximum throughput of 3000 TPS for a single JMeter instance. Our setup mimics a Telco Core Network, so there is no need for thousands of threads, since the nodes establish the minimum number of HTTP2 connections needed to reach the requested load and HTTP2 (+ TLS) was introduced mainly to reduce connection overhead.
