[feature request] performance: otlp http json support jsoniter/sonic for better performance #4817
Comments
I think this code is owned by the upstream collector. https://github.com/open-telemetry/opentelemetry-collector/tree/main/receiver/otlpreceiver
We use a custom jsonpb.Marshaler. I'm not sure how this works, but if you want to take a look, go here. https://github.com/sethAmazon/opentelemetry-collector/blob/main/receiver/otlpreceiver/internal/marshal_jsonpb.go
@open-telemetry/collector-maintainers this is related to OTLP receiver perf - please comment.
I think better performance is always welcome; however, given that OTLP/JSON is unstable, I would personally prefer not to spend time on improving the performance just yet. I don't mind if someone else wants to work on this.
…etry#4817)
goos: darwin
goarch: amd64
pkg: go.opentelemetry.io/collector/model/otlp
cpu: Intel(R) Core(TM) i7-9750H CPU @ 2.60GHz
BenchmarkTracesJSONUnmarshal-12       10000   102486 ns/op   78030 B/op   1151 allocs/op
BenchmarkTracesJSONiterUnmarshal-12  512875     2431 ns/op    4002 B/op    185 allocs/op
Hi @sethAmazon @tigrannajaryan, I submitted PR #4986 to implement unmarshalling traces with jsoniter for better performance in the OTLP HTTP JSON case.
…4817) (#4986)
* Implement unmarshal traces with jsoniter, making it 40x faster than jsonpb. Signed-off-by: Jimmie Han <[email protected]>
* ptrace: JSON unmarshaller uses defer
* Use jsoniter unmarshaller as the default trace unmarshaler; update unit-test style design
* Add kvlist support
Hi @sethAmazon @tigrannajaryan, I submitted PR #5433 to implement unmarshalling metrics with jsoniter for better performance in the OTLP HTTP JSON case.
Is your feature request related to a problem? Please describe.
I found that the collector (which uses OTLP HTTP JSON) spends much of its CPU profile in JSON unmarshalling. I know protobuf has better performance, but JSON is friendlier to frontends.
Describe the solution you'd like
Could we get better performance via jsoniter or sonic, as the prometheus, loki, and elastic-apm projects do?
https://github.com/prometheus/prometheus/blob/main/web/api/v1/api.go#L196
https://github.com/grafana/loki/blob/main/pkg/storage/chunk/json_helpers.go#L13
https://github.com/elastic/apm-server/blob/master/model/modeldecoder/nullable/nullable.go#L30
If the feature request is reasonable, I think we could add a config field to otlpreceiver to choose another unmarshaller, or create a new component.
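If a config field were the chosen route, it might look roughly like this. The `JSONDecoder` field and its value strings are purely hypothetical, not an actual otlpreceiver option, and only the stdlib path is wired up so the sketch stays dependency-free:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Settings is a hypothetical receiver config; "json_decoder" is illustrative only.
type Settings struct {
	JSONDecoder string `mapstructure:"json_decoder"` // "jsonpb" (default), "jsoniter", "sonic"
}

type unmarshalFunc func(data []byte, v interface{}) error

// pickUnmarshaler selects the decode function from config. The third-party
// cases would return the corresponding Unmarshal (e.g.
// jsoniter.ConfigCompatibleWithStandardLibrary.Unmarshal); here they fall
// back to the stdlib as a placeholder.
func pickUnmarshaler(cfg Settings) unmarshalFunc {
	switch cfg.JSONDecoder {
	case "jsoniter", "sonic":
		return json.Unmarshal // placeholder for the third-party decoder
	default: // "jsonpb" or unset
		return json.Unmarshal
	}
}

func main() {
	unmarshal := pickUnmarshaler(Settings{JSONDecoder: "jsoniter"})
	var out map[string]interface{}
	if err := unmarshal([]byte(`{"ok":true}`), &out); err != nil {
		panic(err)
	}
	fmt.Println(out["ok"]) // prints "true"
}
```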