Show aggregated process metrics in containers/k8s pods #1257
According to @tomwilkie, we probably want to do this sort of aggregation through Prometheus.
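For context, doing the aggregation on the Prometheus side would presumably boil down to a `sum by (...)` query over per-process series. A minimal sketch, assuming a Prometheus server at localhost:9090 and a hypothetical per-process metric with a `pod` label (neither the metric name nor the label is something Scope is known to expose):

```go
// Hedged sketch only: shows what "doing this aggregation through Prometheus"
// could look like as a PromQL query issued from Go. The metric name
// process_open_fds and the pod label are hypothetical.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	v1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	client, err := api.NewClient(api.Config{Address: "http://localhost:9090"})
	if err != nil {
		panic(err)
	}
	promAPI := v1.NewAPI(client)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Sum a per-process metric over all processes that share a pod label,
	// giving one aggregate value per pod.
	result, warnings, err := promAPI.Query(ctx, `sum by (pod) (process_open_fds)`, time.Now())
	if err != nil {
		panic(err)
	}
	if len(warnings) > 0 {
		fmt.Println("warnings:", warnings)
	}
	fmt.Println(result)
}
```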
Aggregation Proposal

Proposal on how to implement data source aggregation in Scope.

Aggregation Architecture for Scope

To describe an aggregation type we need 2 attributes:
All the data source types that will allow aggregation must be extended to include these 2 attributes.
Allowing a list of AggregationType makes the aggregation more flexible: it is possible to specify several policies for the same level (or different policies for different levels) for the same data source, and as a result the UI could show multiple different aggregate values according to the specified policies.

List of data structures to extend (see the sketch after this list):
- scope/report/metadata_template.go, line 22 in b27dc99
- scope/report/metric_template.go, line 9 in 7b0f0cb
- line 83 in 7b0f0cb
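A minimal sketch of how those templates could be extended. The field names, and the assumption that the two attributes are a policy plus a topology level, are guesses based on this proposal and the comments below, not the actual Scope definitions:

```go
// Hedged sketch, not the actual Scope code: field names and the assumption
// that the two attributes are a policy and a topology level are guesses.
package report

// AggregationType describes how one data source is rolled up.
type AggregationType struct {
	Policy string `json:"policy"` // e.g. "sum"; max, min, average, list could follow later
	Level  string `json:"level"`  // name of a single level, e.g. "container" or "pod"
}

// MetricTemplate stands in for the existing template types listed above
// (metadata_template.go / metric_template.go); only Aggregations is new.
type MetricTemplate struct {
	ID           string            `json:"id"`
	Label        string            `json:"label,omitempty"`
	Priority     float64           `json:"priority,omitempty"`
	Aggregations []AggregationType `json:"aggregations,omitempty"` // a list allows several policies/levels per data source
}
```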
JSON Example

This example is based on the structure of the HTTP statistics report.
At first we can implement the "sum" policy, but the JSON types are expressive enough to be able to add more later without breaking compatibility (e.g. max, min, average, list, ...). /cc @fons, @alban, @tomwilkie
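To make the "sum" policy concrete, here is a hedged sketch of rolling per-process values up into one container- or pod-level value; the function and the sample numbers are purely illustrative:

```go
// Hedged sketch of what applying the "sum" policy could mean in practice:
// roll the latest per-process values of a data source up into one
// container- or pod-level value. Types and names are hypothetical.
package main

import "fmt"

func aggregate(policy string, values []float64) float64 {
	switch policy {
	case "sum":
		total := 0.0
		for _, v := range values {
			total += v
		}
		return total
	default:
		// further policies (max, min, average, list) could be added here
		// later without breaking the JSON format
		return 0
	}
}

func main() {
	openFDs := []float64{12, 3, 7} // e.g. open file descriptors of each process in a container
	fmt.Println(aggregate("sum", openFDs)) // 22
}
```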
That should be singular, right? i.e. the value is the name of a single level.
Yes, that is a typo. Sorry about that.
As a user I would like to see things such as open file descriptors (or plugin metrics) at the container level (and Kubernetes pod level) instead of clicking on each process inside the container to view them.
In fact, obtaining the CPU/memory consumption of the container by aggregating the CPU/memory of its processes would also fix #1133.