[Telemetry] Collector telemetry not including opencensus metrics along with internal metrics when feature flag useOtelForInternalMetrics is true #5687
Comments
I can reproduce this. http://localhost:8888/metrics is empty with the feature flag enabled. @open-telemetry/collector-approvers does anyone know how the OTel-based metrics are supposed to work? With the feature flag enabled, are the metrics still supposed to be exposed on 8888? |
I have talked to Tigran and we will move to use OpenCensus. |
Reopening since the issue is still not resolved. |
I think this bug is due to #3816 and #2204. In #3816, @kirbyquerby started trying to export internal metrics via OpenTelemetry and made an excellent first step:
But there is still a fair amount of work left to do. All internal metrics are still instrumented with OC (OpenCensus), not OTel (OpenTelemetry), so when you start the collector with the feature gate enabled, the OTel pipeline has essentially nothing to export. In order to transfer internal metrics from OC to OTel, @dashpole has already provided a mature proposal: go through the OpenCensus bridge for OpenTelemetry.
But this proposal is blocked by issue #2204: the Prometheus exporter does not satisfy the requirements of the OpenCensus metric bridge for OpenTelemetry, which are described in https://pkg.go.dev/go.opentelemetry.io/otel/bridge/opencensus#readme-metrics |
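For readers unfamiliar with the bridge, the sketch below shows roughly how OpenCensus-recorded metrics can be pulled into the OTel SDK. It uses the bridge's current `NewMetricProducer`/`WithProducer` API, which may differ from what existed when this thread was written, and it is an illustration only, not the collector's actual wiring.

```go
package main

import (
	"context"

	ocbridge "go.opentelemetry.io/otel/bridge/opencensus"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
	"go.opentelemetry.io/otel/sdk/metric/metricdata"
)

func main() {
	// The bridge producer surfaces metrics recorded through the OpenCensus API
	// as OTel metric data.
	producer := ocbridge.NewMetricProducer()

	// A reader collects from the producer alongside native OTel instruments.
	reader := sdkmetric.NewManualReader(sdkmetric.WithProducer(producer))
	provider := sdkmetric.NewMeterProvider(sdkmetric.WithReader(reader))
	defer provider.Shutdown(context.Background())

	// A collection pass now yields both OC-bridged and OTel-native metrics.
	var rm metricdata.ResourceMetrics
	_ = reader.Collect(context.Background(), &rm)
}
```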
Howdy from the Go SIG. Has anyone attempted to register both the OC SDK and the OTel SDK with the same Prometheus registry? I ask because the Go SIG is exploring a few options to unblock your development in this area.
|
I like that idea (3), and it is totally possible, I just tried it. The one thing that must be noted is that OTel and OpenCensus have different ways to deal with resource attributes. If we went down that path, I think we would need to get the OTel Prometheus exporter to emit the resource attributes. Or we can just ignore this for now and continue adding these as constant labels to this unified registry. c.c. @bogdandrutu since this is related to our latest discussion. |
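To make the "same registry" idea concrete, here is a rough sketch of registering both exporters against one prometheus.Registry so that a single /metrics handler serves both. Package paths and options reflect current releases of the exporters rather than the versions in use at the time, and the port and handler setup are illustrative, not the collector's actual code.

```go
package main

import (
	"net/http"

	ocprom "contrib.go.opencensus.io/exporter/prometheus"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
	"go.opencensus.io/stats/view"
	"go.opentelemetry.io/otel"
	otelprom "go.opentelemetry.io/otel/exporters/prometheus"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	// One Prometheus registry shared by both SDKs.
	registry := prometheus.NewRegistry()

	// OpenCensus side: the OC exporter writes view data into the shared registry.
	ocExporter, err := ocprom.NewExporter(ocprom.Options{Registry: registry})
	if err != nil {
		panic(err)
	}
	view.RegisterExporter(ocExporter)

	// OpenTelemetry side: the OTel Prometheus exporter is a metric reader that
	// registers its collector with the same registry.
	otelExporter, err := otelprom.New(otelprom.WithRegisterer(registry))
	if err != nil {
		panic(err)
	}
	otel.SetMeterProvider(sdkmetric.NewMeterProvider(sdkmetric.WithReader(otelExporter)))

	// A single handler serves metrics produced by either instrumentation library.
	http.Handle("/metrics", promhttp.HandlerFor(registry, promhttp.HandlerOpts{}))
	_ = http.ListenAndServe(":8888", nil)
}
```

Whether resource attributes end up as constant labels or are emitted by the OTel exporter is exactly the open question raised in the comment above.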
To clarify what I was trying to get across at the SIG meeting today:
I have a few concerns/thoughts either way: Side-by-side:
Replacement:
|
That is a great point @dashpole, we need to make sure that we have everything protected by the feature gate like you described.
The idea behind doing side-by-side instrumentation is that we can start the migration and early adopters would not lose any metrics by doing so. It doesn't make much sense for users to enable the OTel feature gate if they're only getting the few metrics that have already been migrated. And with that we still have the guarantee that if something is not working as expected, they can always just disable the feature gate and use OC only, which is basically the reason for not going with replacement from the beginning. |
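As an illustration of "everything protected by the feature gate" (not the collector's actual registration code; the stage and description below are placeholders), a gate from the collector's featuregate package can guard the choice between the two instrumentation paths:

```go
package main

import (
	"fmt"

	"go.opentelemetry.io/collector/featuregate"
)

// Hypothetical registration for illustration; the real gate is registered
// inside the collector's service code.
var useOtelGate = featuregate.GlobalRegistry().MustRegister(
	"telemetry.useOtelForInternalMetrics",
	featuregate.StageAlpha,
	featuregate.WithRegisterDescription("controls whether the collector uses OpenTelemetry for internal metrics"),
)

func setupInternalMetrics() {
	if useOtelGate.IsEnabled() {
		// Side-by-side phase: set up the OTel SDK in addition to (not instead of)
		// the existing OpenCensus instrumentation, sharing one registry.
		fmt.Println("OTel + OC internal metrics")
		return
	}
	// Default path: unchanged OpenCensus-only behavior.
	fmt.Println("OC-only internal metrics")
}

func main() {
	setupInternalMetrics()
}
```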
This is based on Aaron's suggestion to use the same Prometheus registry for both SDKs. A caveat that was brought up during the SIG meeting: this blocks us from exporting the collector's telemetry in other formats for the moment, but that is OK since Prometheus is currently the only option for exporting metrics.
Migration Plan
Notes:
Diagrams
It was asked during the SIG meeting where this was discussed that people wanted some diagrams to help visualize the path and the data flow for the migration. @smithclay has done these to help:
Another important piece of the migration is how data gets generated. For core's migration, where we will have side-by-side instrumentation, both OC and OTel metrics are produced during the transition. For contrib's migration, each OC metric should be replaced with an equivalent OTel metric. |
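As a concrete (hypothetical) example of what "replace each OC metric with an equivalent OTel metric" means, the sketch below records the same logical counter once through the OpenCensus API and once through the OpenTelemetry API. The metric name, unit, and attributes are made up for illustration and are not the collector's real metric definitions.

```go
package main

import (
	"context"

	"go.opencensus.io/stats"
	"go.opencensus.io/stats/view"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

// OpenCensus style: a measure plus a registered view.
var ocAcceptedSpans = stats.Int64("receiver/accepted_spans", "Number of spans accepted", stats.UnitDimensionless)

func recordWithOpenCensus(ctx context.Context, n int64) {
	stats.Record(ctx, ocAcceptedSpans.M(n))
}

// OpenTelemetry style: a counter instrument from a named meter.
func recordWithOTel(ctx context.Context, n int64) error {
	meter := otel.Meter("receiver")
	counter, err := meter.Int64Counter("receiver_accepted_spans",
		metric.WithDescription("Number of spans accepted"))
	if err != nil {
		return err
	}
	counter.Add(ctx, n, metric.WithAttributes(attribute.String("receiver", "otlp")))
	return nil
}

func main() {
	_ = view.Register(&view.View{
		Name:        "receiver/accepted_spans",
		Measure:     ocAcceptedSpans,
		Description: "Number of spans accepted",
		Aggregation: view.Sum(),
	})
	ctx := context.Background()
	recordWithOpenCensus(ctx, 1)
	_ = recordWithOTel(ctx, 1)
}
```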
+1 on this
I think we should flip the feature gate as soon as a significant number of the metrics in core are switched to OTel, then wait in this state (continuing to migrate everything) for 2-3 releases. |
Following up on enabling otel metrics by default as per the comment open-telemetry#5687 (comment) Signed-off-by: Alex Boten <[email protected]>
The original issue was fixed in #8716, and the feature gate is now marked as beta. |
Describe the bug
Using the flag --feature-gates=telemetry.useOtelForInternalMetrics (with the demo or running the collector from code), calling http://localhost:8888/metrics does not display the metrics mentioned in the monitoring documentation.
Steps to reproduce
What did you expect to see?
I expected the metrics from the monitoring documentation, plus any internal metrics added using the OTel metrics SDK.
EXAMPLE:
What did you see instead?
empty 200 response
OR
What version did you use?
demo: docker image (fb056cba11cd)
code: Version (v0.48.0)
What config did you use?
Environment
OS: M1 Mac
Docker
Edit: define featuregate calls
Edit2: formatting