#1109 Make queue size of MetricJsonListener configurable #1114
Conversation
NetflixOSS » Hystrix » Hystrix-pull-requests #369 SUCCESS
MetricJsonListener jsonListener = new MetricJsonListener();
int queueSize = defaultMetricListenerQueueSize.get();
try {
    String q = request.getParameter("queueSize");
Why would you let the request set the queue size?
If a system talks to multiple dynamically scalable endpoints, it can be cumbersome to change the queue size only via a property. Providing a request parameter would give a visualization service (hystrix-dashboard) or an aggregation service (turbine) an easy way to reconfigure the queue size just by updating its request parameters.
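To make the idea concrete, here is a minimal sketch of the fallback being proposed. The helper name is hypothetical and not part of the actual diff; it only illustrates the "request parameter wins, property is the default" behavior discussed here.

```java
import javax.servlet.http.HttpServletRequest;

public final class QueueSizeResolver {
    // Hypothetical helper: prefer an explicit ?queueSize=<int> request parameter,
    // otherwise fall back to the Archaius-backed property default.
    static int resolveQueueSize(HttpServletRequest request, int propertyDefault) {
        String q = request.getParameter("queueSize");
        if (q != null) {
            try {
                return Integer.parseInt(q);
            } catch (NumberFormatException e) {
                // malformed value: silently keep the property default
            }
        }
        return propertyDefault;
    }
}
```

With something like this, a dashboard or turbine instance could request the stream with `?queueSize=<int>` when it expects a burst of new commands, without touching the property.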
IMO, the velocity of metrics is independent of who is consuming the metrics. What problem do you intend to solve by allowing the request to set the queue size?
As I mentioned in my previous comment, there doesn't seem to be a way of scaling the queue size if the number of backend endpoints is scaled up or down.
For instance, a user-facing email service may only need 25 instances of a backend email retrieval service during off-peak hours but auto-scale to 100-150 during peak periods. In such a case, we would need to estimate the peak load and set the queue size accordingly. If the volume of metrics exceeded that, the hystrix stream servlet would need to be reconfigured each time.
The consumer of metrics would be in a better position to anticipate the typical patterns in load, and to adjust accordingly if it noticed that stats collection was failing because the queue size cap was being hit.
@anuragw I don't follow your reasoning. The metrics consumption happens in the servlet thread and also has the same delay. It reads all values in the queue. So the queue shouldn't grow beyond roughly one entry per command/threadpool/collapser per delay interval, unless you're adding commands/threadpools/collapsers dynamically.
@mattrjacobs Your last line says it all: "unless you're adding commands/threadpools/collapsers dynamically". Our system does have this behavior. One other option might be to configure the queueSize property to something huge, say 100K or 1M, and then use a very tiny delay, say 1ms. But wouldn't such a tiny delay be very close to normal processing overheads in the servlet, making it hard to interpret the reported metrics?
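For reference, the "huge queue + tiny delay" workaround mentioned above would look roughly like this. The property name comes from this PR; the delay value maps to the stream's polling delay, which the servlet reads from the request.

```java
import com.netflix.config.ConfigurationManager;

public class StreamQueueWorkaround {
    public static void main(String[] args) {
        // Oversize the JVM-wide default queue via the Archaius property,
        // then have consumers poll with a very small delay,
        // e.g. GET /hystrix.stream?delay=1
        ConfigurationManager.getConfigInstance()
                .setProperty("hystrix.stream.defaultMetricListenerQueueSize", 1000000);
    }
}
```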
NetflixOSS » Hystrix » Hystrix-pull-requests #378 SUCCESS
NetflixOSS » Hystrix » Hystrix-pull-requests #379 SUCCESS
…tional queueSize=<int> parameter in stream query
…y hystrix.stream.defaultMetricListenerQueueSize (still defaults to 1000 if unspecified)
NetflixOSS » Hystrix » Hystrix-pull-requests #380 FAILURE
NetflixOSS » Hystrix » Hystrix-pull-requests #382 SUCCESS
I've fixed the issue I had on my forked branch and have pushed again. The PR got auto-closed when I reset to the base fork before replaying my two commits, so I've reopened it. I'm working on the build failure now and will update once that's done. Can we make a call soon on whether or not to include the queueSize in the request parameters? I wouldn't mind keeping the property as the sole way of setting the queueSize, but I do think the request parameter could be useful in some scenarios too.
I'm not sure what the issue is; the Gradle build passes on my system. Any ideas why this might be failing?
Even in the case where commands are being added dynamically, there's still a single set for the JVM. So I still don't see how defining a queueSize per request helps you. I'm trying to clean up a bunch of the unit tests to get rid of the flaky failures; they almost always have to do with Travis running things more slowly than our local machines. I hope to have that significantly cleaned up in the next couple of days.
@mattrjacobs Let's agree to disagree on whether including the queueSize in the request makes sense. I'll remove the request parameter logic and push out the new code today. Once the unit tests are fixed, is it safe to assume that the PR merge shouldn't take much longer?
NetflixOSS » Hystrix » Hystrix-pull-requests #408 SUCCESS
#1109 Make queue size of MetricJsonListener configurable
Thanks for the good discussion and contribution, @anuragw. I plan on getting a release with this change out early next week.
@mattrjacobs, I'm always looking forward to contributing! Thanks for your help too, and I'll keep an eye out for the next 1.4.x release.
The queue size of the MetricJsonListener used by the HystrixMetricsStreamServlet has been made configurable in two ways:
- an optional queueSize=<int> parameter in the stream query
- a property, hystrix.stream.defaultMetricListenerQueueSize (still defaults to 1000 if unspecified)
The changes have been made on the 1.4.x branch, but they can also be propagated to the 1.5.x RC branch. Please let me know if you need any further information or would like to suggest changes.
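A rough usage sketch of the two mechanisms, with the property name and parameter taken from the commits in this PR (note that per the discussion above, the per-request override was later dropped in favor of the property alone):

```java
import com.netflix.config.ConfigurationManager;

public class StreamQueueSizeConfig {
    public static void main(String[] args) {
        // 1) JVM-wide default for all stream connections, via the Archaius property:
        ConfigurationManager.getConfigInstance()
                .setProperty("hystrix.stream.defaultMetricListenerQueueSize", 5000);

        // 2) Per-request override, as originally proposed in this PR:
        //    GET /hystrix.stream?queueSize=5000
    }
}
```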