[Infra UI] Improve waffle map custom "group by" fields #31405
Pinging @elastic/infrastructure-ui
ping @simianhacker for input
Other questions that came to mind:
In a recent discussion the following querying procedure seemed feasible. As in the original proposal above, it relies on performing two queries, but reduces the risk of overly large request body sizes.

Procedure

For a waffle map of node type
Limitations
Advantages
@weltenwort I had planned to change this over to a composite aggregation soon, because it would simplify the code some and reduce the load on the ES cluster for the request. With some creative work on the aggregation, we could use the composite aggregation to get all the data in one pass instead of sending multiple requests. I would key off the node id. The aggregation tree would look something like:
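A minimal sketch of what such a request body might look like, with assumed example fields (`host.name` standing in for the node id, `service.type` for the custom group-by field, and `system.cpu.total.norm.pct` for the metric); the filter wraps only the grouping aggregation, not the metric:

```json
{
  "size": 0,
  "aggs": {
    "nodes": {
      "composite": {
        "size": 10,
        "sources": [
          { "node": { "terms": { "field": "host.name" } } }
        ]
      },
      "aggs": {
        "groups": {
          "filter": { "exists": { "field": "service.type" } },
          "aggs": {
            "group_by": { "terms": { "field": "service.type", "size": 10 } }
          }
        },
        "cpu": {
          "avg": { "field": "system.cpu.total.norm.pct" }
        }
      }
    }
  }
}
```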
If we don't apply the filter to the metric, the user won't be able to do things like

I would avoid moving these requests & processing to the client. Testing stuff in the browser is a total nightmare, IMHO. This stuff gets pretty complex, and dealing with that complexity on the server is just easier. Furthermore, if you have to make multiple round trips, which this requires, you usually have lower latency from the server to ES than you do from the client, since they typically sit on the same network.

@weltenwort What's the motivation to move this to the client?
Applying the filter to the metric would mean the results might be incorrect if the grouping field is not present on every metric field (which is exactly what causes the initial problem). So wrapping only the grouping aggregation into the filter makes sense 👍

You're right, there is no need to move it to the browser (unless we absolutely need different refresh rates or incremental rendering). I forgot to make the distinction between the browser and the "elasticsearch client" in my description.
That sample query in elasticsearch syntax (thanks @simianhacker):
I wonder whether the inner
I'm not sure repeating the whole query would be the way to go if we want to add pagination to a single group box. I don't have a better idea right now, but it feels wrong.
Also, with the query above, the
I was talking about pagination at the ES API level, not in the UI. There is a limit on the number of buckets ES will return per aggregation, so we might have to do multiple round trips if we exceed that number. It looks like we would have to perform three levels of pagination. Or we detect that case and somehow communicate to the user that we can't display all the data.
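As a hedged sketch, the composite part at least can be paged at the ES API level by echoing the `after_key` from the previous response back as `after` in the next request (field names and values here are illustrative):

```json
{
  "size": 0,
  "aggs": {
    "nodes": {
      "composite": {
        "size": 10,
        "sources": [
          { "node": { "terms": { "field": "host.name" } } },
        ],
        "after": { "node": "host-10" }
      }
    }
  }
}
```

The inner terms aggregations cannot be paged this way, which is where the additional pagination levels would come from.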
How likely is it that the grouping will have high enough cardinality that we need to paginate the groupings in ES to get all of them? I haven't seen anything with more than 5 to 10 values at each level, and typically documents only belong to one group at each level. High-cardinality grouping is a pretty rare edge case and would likely result in an unusable UX anyway. I think we could safely limit those terms aggs to 10 and be just fine.
I think it wouldn't work anyway; I get this when I try:
I can easily see someone wanting to group containers by host with more than 10 containers running on each host.
Then we should include the group by's in the
Both could be sent at once using an
Currently, it is possible to enter a custom value in the "Group by" input field. Depending on what the user enters, the returned documents might not contain the metrics selected for display in the "Metric" input field.
Example:

- the user enters `service.type` as the field to group by
- documents with a `service.type` other than "System" do not contain system metrics, so the waffle map is not displayed as expected

Proposed solution:

Caveats: