The current OpenCensus internal telemetry solution requires that some global state be kept in the process. This causes complex lifecycle management if a Collector instance in that process is stopped and a new one started: the global state must be cleared by the first instance shutting down and reinitialized by the second instance starting up. It also adds complexity in a config reload scenario, where the Collector creates a new service but keeps the same telemetry instance.
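The lifecycle problem described above can be sketched with a minimal, dependency-free example. This is not the Collector's actual code; the names (`registerGlobal`, `Collector`, `process_uptime`) are hypothetical and stand in for OpenCensus's process-wide view registry versus a registry owned by each instance:

```go
package main

import (
	"fmt"
	"sync"
)

// globalViews mimics a process-wide view registry (the OpenCensus-style
// global state): every Collector instance in the process shares it, so a
// second instance registering the same view collides unless the first
// instance cleared it on shutdown.
var (
	globalMu    sync.Mutex
	globalViews = map[string]bool{}
)

func registerGlobal(name string) error {
	globalMu.Lock()
	defer globalMu.Unlock()
	if globalViews[name] {
		return fmt.Errorf("view %q already registered", name)
	}
	globalViews[name] = true
	return nil
}

func unregisterGlobal(name string) {
	globalMu.Lock()
	defer globalMu.Unlock()
	delete(globalViews, name)
}

// Collector owns its telemetry registry instead of touching process
// globals: the registry's lifetime is tied to the instance, so a restart
// or config reload needs no explicit global cleanup.
type Collector struct {
	views map[string]bool
}

func NewCollector() *Collector { return &Collector{views: map[string]bool{}} }

func (c *Collector) Register(name string) { c.views[name] = true }

func main() {
	// Global-state path: the second registration fails unless the first
	// instance remembered to unregister during shutdown.
	_ = registerGlobal("process_uptime")
	if err := registerGlobal("process_uptime"); err != nil {
		fmt.Println("global:", err)
	}
	unregisterGlobal("process_uptime") // cleanup the first instance must perform

	// Instance-scoped path: each Collector gets a fresh registry, so
	// stopping one and starting another just works.
	first := NewCollector()
	first.Register("process_uptime")
	second := NewCollector() // no cleanup of `first` required
	second.Register("process_uptime")
	fmt.Println("instance-scoped: no collision")
}
```

The sketch only illustrates why tying telemetry state to the instance lifecycle, rather than to the process, removes the shutdown/startup ordering requirement.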
The global state that I found seems to be related to the Metric Registry and Views that are registered here.
I'd like to open this for discussion on possible ways to make the internal telemetry solution feel more natural with the collector instance lifecycle.
We will retire the OpenCensus approach and embrace the OpenTelemetry Go SDK moving forward. See #816 for reference and next steps. Using the Go SDK addresses some of your concerns, as no globals will be used. Closing as not planned.