Describe the bug
When the `newService` function returns an error during or after the execution of `srv.telemetryInitializer.init`, it does not clean up any telemetry views that were registered. This means that if a new collector instance is started within the same process, its attempt to register those same views will fail with a "failed to initialize telemetry" error.
Steps to reproduce
Have a binary that wraps the collector and starts an instance of it. Start the wrapped instance with an invalid config. This causes `newService` to return an error after initializing telemetry views. Without stopping the process, start another instance of the collector with any config. `newService` will return an error similar to:
failed to initialize telemetry: processor/batch/batch_send_size: cannot register view "processor/batch/batch_send_size"; a different view with the same name is already registered
processor/batch/batch_send_size_bytes: cannot register view "processor/batch/batch_send_size_bytes"; a different view with the same name is already registered
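The failure mode can be sketched with a simplified model of the process-global view registry (illustrative only; `registerView` and the string-based view definition are stand-ins, not the real go.opencensus.io API):

```go
package main

import "fmt"

// registered models the process-global view registry that
// telemetryInitializer.init writes into (simplified stand-in).
var registered = map[string]string{} // view name -> definition

func registerView(name, def string) error {
	if prev, ok := registered[name]; ok && prev != def {
		return fmt.Errorf("cannot register view %q; a different view with the same name is already registered", name)
	}
	registered[name] = def
	return nil
}

func main() {
	// First collector instance registers its views, then newService
	// fails on config validation without unregistering them.
	_ = registerView("processor/batch/batch_send_size", "distribution-v1")

	// A second instance in the same process builds fresh view
	// definitions and trips over the stale registration.
	err := registerView("processor/batch/batch_send_size", "distribution-v2")
	fmt.Println(err)
}
```

Because nothing removes the first instance's entries, every later instance in the same process is stuck with them.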
What did you expect to see?
A separate instance of the collector able to be started within the same process with a valid config.
What did you see instead?
Failure of the telemetry initialization
What version did you use?
v0.61.0
Environment
OS: macOS 12.5.1
Compiler(if manually compiled): go version go1.19.1 darwin/arm64
Additional context
This seems to be a side effect of #6138, which introduced a `telemetryInitializer` per instance of the collector. The problem is that `telemetryInitializer.init` registers views globally in the process. Before #6138, the `telemetryInitializer` was one-time use per process, so views were never registered more than once.
I think there are two paths to consider based on the intended use of the collector:
1. Is a single process meant to run several instances of the collector concurrently?
2. If only a single instance of the collector is meant to be used at a time in a process, should the `telemetryInitializer` act as if it is tied to the lifetime of that instance?
For number 1, I think this is not the intended use case. It seems undefined how multiple instances of the collector could coexist, especially if their internal telemetry configurations contradicted one another.
For number 2, from a user standpoint it seems reasonable that you could configure the first instance of the collector with specific internal telemetry settings, stop that instance, start a new one with different internal telemetry settings, and expect those settings to be respected and work.
As a proposed solution, I think the `telemetryInitializer` should be cleaned up if a collector startup failure occurs. It is already shut down cleanly as part of normal collector shutdown or on a runtime error.
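The proposed cleanup could look like the following sketch. The function names (`initTelemetry`, `shutdownTelemetry`) and the string-keyed registry are hypothetical stand-ins for the collector's internals, not its actual code:

```go
package main

import (
	"errors"
	"fmt"
)

// registry is a simplified stand-in for the process-global view registry.
var registry = map[string]bool{}

func initTelemetry(views []string) error {
	for _, v := range views {
		if registry[v] {
			return fmt.Errorf("failed to initialize telemetry: cannot register view %q; a different view with the same name is already registered", v)
		}
		registry[v] = true
	}
	return nil
}

// shutdownTelemetry unregisters the views, mirroring what the
// telemetryInitializer already does on a clean shutdown.
func shutdownTelemetry(views []string) {
	for _, v := range views {
		delete(registry, v)
	}
}

// newService sketches the proposed fix: if any startup step after a
// successful telemetry init fails, undo the init before returning.
func newService(views []string, cfgValid bool) error {
	if err := initTelemetry(views); err != nil {
		return err
	}
	if !cfgValid {
		shutdownTelemetry(views) // proposed cleanup on startup failure
		return errors.New("invalid configuration")
	}
	return nil
}

func main() {
	views := []string{"processor/batch/batch_send_size"}
	fmt.Println(newService(views, false)) // first instance fails on config
	fmt.Println(newService(views, true))  // second instance now starts
}
```

With the cleanup in place, a startup failure leaves the registry in the same state as a clean shutdown, so a later instance in the same process can register its views.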