Create token metrics only when they are available #1092
base: main
Conversation
Codecov Report: Attention — Patch coverage is …
Force-pushed b059539 to d36281e
Rebased to main.
Why will this happen? Metrics are only updated when calling …
@Spycsh Because the Prometheus client will start providing metrics only after they've been created. In the current code, all metrics are created when Orchestrator / OrchestratorMetrics is instantiated: https://github.com/opea-project/GenAIComps/blob/main/comps/cores/mega/orchestrator.py#L33
Those methods only update the value of a metric; they do not create it. This PR delays Histogram metric creation until the first call of the update methods.
I dropped the pending-metric doc update & rebased to main. I'll have it in a separate PR where I fix additional issues I noticed, which require a pending-requests metric type / name change.
Has …
Could not find any good fix for it, so I just filed a ticket on it: #1121
OK, so what you mean is that the dummy metrics will show zeros after initialization and before the first request, and users should not see wrong request-count values. But you think Kubernetes will scrape the metrics even when there are no requests, which is resource-consuming, so you decided to delay the initialization until there are requests. I agree with this approach.
The dataprep microservice itself should not generate …
Technically the zero counts are not wrong, but the presence of token / LLM metrics is misleading for services that will never generate tokens (or use an LLM). That's the main reason for this PR.
Visibility: All OPEA originated services use HttpService, i.e. they provide HTTP access metrics [1]. To see those, …

Perf: I doubt skipping generation of the extra metrics has any noticeable perf impact on the service providing the metrics (currently …). Each Prometheus Histogram type provides about a dozen different metrics, and in larger clusters the amount of metrics needs to be reduced to keep telemetry-stack resource usage & perf reasonable. Telemetry-stack resource usage should be a significant concern only when there's a larger number of such pods, though.

[1] There's a large number of HTTP metrics, and some Python ones too. It would be good to have controls for limiting those in larger clusters, but I did not see any options for that in …
@Spycsh from your comment in the bug #1121 (comment) I realized that changing the method on first metric access is racy. It's possible that multiple threads end up in the create method before that method is changed to the update one, meaning that multiple identical metrics would be created, and Prometheus would barf on that. => I'll add a lock & check to handle that.
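The lock & check mentioned here amounts to double-checked creation. A minimal sketch (names hypothetical; `factory` stands in for whatever creates the real Prometheus metric):

```python
import threading


class LazyMetric:
    """Create the underlying metric exactly once, even when several
    request-handling threads hit the first observe() concurrently."""

    def __init__(self, factory):
        self._factory = factory  # e.g. lambda: Histogram(...)
        self._metric = None
        self._lock = threading.Lock()

    def observe(self, value):
        if self._metric is None:           # fast path, no lock taken
            with self._lock:
                if self._metric is None:   # re-check under the lock
                    self._metric = self._factory()
        self._metric.observe(value)
```

The re-check under the lock is what prevents two threads that both saw `None` on the fast path from each creating (and registering) an identical metric.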
Force-pushed 8655d3e to 7f81fff
This avoids generation of useless token/request histogram metrics for services that use Orchestrator class, but never call its token processing functionality. (Helps in differentiating frontend megaservice metrics from backend megaservice ones, especially when multiple OPEA applications run in the same cluster.) Also change Orchestrator CI test workaround to use unique prefix for each metric instance, instead of metrics being (singleton) class variables. Signed-off-by: Eero Tamminen <[email protected]>
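The unique-prefix workaround mentioned in the commit message could look roughly like this (a sketch; the counter and naming scheme are assumptions, not the PR's exact code):

```python
import itertools

# Per-process counter so each metrics instance gets its own name prefix,
# avoiding duplicate-timeseries registration errors when CI tests
# instantiate the metrics class repeatedly in the same process.
_instance_ids = itertools.count()


def unique_metric_prefix(base="megaservice"):
    """Return the bare base name for the first instance, then
    base + instance number for later ones."""
    n = next(_instance_ids)
    return base if n == 0 else f"{base}{n}"
```

With instance-scoped names, metrics no longer need to live in (singleton) class variables to avoid colliding in the Prometheus client's registry.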
As that could be called from multiple request handling threads. Signed-off-by: Eero Tamminen <[email protected]>
Force-pushed 23cd2c5 to 0a4e313
Description
This avoids generating useless token / request histogram metrics for services that use the Orchestrator class but never call its token-processing functionality. Such dummy metrics can confuse telemetry users.
(It also helps in differentiating frontend megaservice metrics from backend megaservice ones, especially when multiple OPEA applications with wrapper microservices run in the same cluster.)
Issues
n/a
Type of change
Dependencies
n/a
Tests
Manual testing with latest versions, to verify that: