http.batch: socket: too many open files #296
Comments
There should definitely be a limit to how many requests are executed in parallel.
Note: the current solution doesn't do per-host limiting, because that adds a significant amount of complexity for something I honestly can't see being very useful. If you have a use case where a granular limit trumps a general one, please comment.
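For context, here is a minimal sketch of how such a global limit looks from a script author's point of view. The `batch` option name and its semantics follow later k6 releases and are an assumption in the context of this thread; the test.k6.io URLs are placeholders.

```javascript
import http from "k6/http";

export let options = {
  // Assumed global cap: at most 10 of the batched requests are in flight
  // at once per http.batch() call.
  batch: 10,
};

export default function () {
  // 20 requests queued in one batch; with batch: 10, at most 10 sockets
  // are open simultaneously and the rest wait in a queue.
  let requests = [];
  for (let i = 0; i < 20; i++) {
    requests.push(["GET", "https://test.k6.io/?page=" + i]);
  }
  http.batch(requests);
}
```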
What will the expected behavior be if, for example, the user sets the limit to 10 and the batch statement contains 20 requests?
All of them get run in parallel.
Wait, why would all 20 requests, in @ppcano's example above, run in parallel? Per-host limiting is needed as a tweakable knob to more realistically simulate the connection management of a browser (a common use case when testing websites that need to support browsers with varying configs for the number of connections per host). I guess this CL (as a solution to golang/go#13957) will make implementing this easy when it lands (hopefully in Go 1.10, Feb next year) :)
Oh, I misread it. At most 10 requests will be run in parallel; a new one is started every time a previous one finishes.
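To illustrate the browser-like per-host behavior requested above, a hedged sketch follows. The `batchPerHost` option name and its default mirror later k6 releases; at the time of this thread it was only being proposed, so treat the exact spelling as an assumption.

```javascript
import http from "k6/http";

export let options = {
  batch: 20,       // assumed global cap on parallel requests per http.batch() call
  batchPerHost: 6, // assumed per-host cap, mirroring a typical browser's ~6 connections per host
};

export default function () {
  // Requests spread over two hosts: no more than 6 run concurrently against
  // each host, and no more than 20 run concurrently overall.
  http.batch([
    ["GET", "https://test.k6.io/"],
    ["GET", "https://test.k6.io/static/css/site.css"],
    ["GET", "https://test-api.k6.io/public/crocodiles/"],
  ]);
}
```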
I'll toss in a
The http.batch documentation does not describe how many resources can be loaded in parallel in the same batch statement. @liclac, what could be a possible maximum number of requests in an http.batch statement? We need to know this to improve the HAR converter WIP #291. I have tested a little bit more, and the error is also related to the number of VUs and the sleep periods. For users experiencing the same error, could we help them with a better error message, or by avoiding the error altogether? Thoughts?
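Until a built-in limit lands, one way a script (for example, one produced by the HAR converter) can avoid exhausting file descriptors is to split a large request list into smaller http.batch calls. A rough sketch under those assumptions; the `chunk` helper is hypothetical and the chunk size of 10 is arbitrary:

```javascript
import http from "k6/http";
import { sleep } from "k6";

// Hypothetical helper: split a long request list into chunks so that only
// `size` sockets are opened by each http.batch() call.
function chunk(requests, size) {
  let chunks = [];
  for (let i = 0; i < requests.length; i += size) {
    chunks.push(requests.slice(i, i + size));
  }
  return chunks;
}

export default function () {
  let requests = [];
  for (let i = 0; i < 100; i++) {
    requests.push(["GET", "https://test.k6.io/?res=" + i]);
  }
  chunk(requests, 10).forEach(function (batch) {
    http.batch(batch); // at most 10 sockets per call with this chunking
    sleep(0.1);        // short pause lets finished connections be released
  });
}
```

Raising the OS open-file limit (e.g. via ulimit) also works around the error for a single run, but a proper per-batch limit as discussed above is the more robust fix.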