[exporter/splunkhec] fix: overcapacity error when MaxContentLength is 0 #17043

Merged · 2 commits · Dec 16, 2022
16 changes: 16 additions & 0 deletions .chloggen/splunkhec-over-capacity-error-fix.yaml
@@ -0,0 +1,16 @@
# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
change_type: bug_fix

# The name of the component, or a single word describing the area of concern, (e.g. filelogreceiver)
component: splunkhecexporter

# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: Fix issue where the splunkhec exporter always returns an over capacity error when compression is enabled and MaxContentLength is 0

# One or more tracking issues related to the change
issues: [17035]

# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
subtext:
6 changes: 3 additions & 3 deletions exporter/splunkhecexporter/README.md
@@ -30,13 +30,13 @@ The following configuration options can also be configured:
- `max_content_length_logs` (default: 2097152): Maximum log payload size in bytes. Log batches of bigger size will be
broken down into several requests. Default value is 2097152 bytes (2 MiB). Maximum allowed value is 838860800
(~ 800 MB). Keep in mind that Splunk Observability backend doesn't accept requests bigger than 2 MiB. This
configuration value can be raised only if used with Splunk Core/Cloud.
configuration value can be raised only if used with Splunk Core/Cloud. When set to 0, the payload size is treated as unlimited and only one request is created per batch.
- `max_content_length_metrics` (default: 2097152): Maximum metric payload size in bytes. Metric batches of bigger size
will be broken down into several requests. Default value is 2097152 bytes (2 MiB). Maximum allowed value is 838860800
(~ 800 MB).
(~ 800 MB). When set to 0, the payload size is treated as unlimited and only one request is created per batch.
- `max_content_length_traces` (default: 2097152): Maximum trace payload size in bytes. Trace batches of bigger size
will be broken down into several requests. Default value is 2097152 bytes (2 MiB). Maximum allowed value is 838860800
(~ 800 MB).
(~ 800 MB). When set to 0, the payload size is treated as unlimited and only one request is created per batch.
- `splunk_app_name` (default: "OpenTelemetry Collector Contrib") App name is used to track telemetry information for Splunk App's using HEC by App name.
- `splunk_app_version` (default: Current OpenTelemetry Collector Contrib Build Version): App version is used to track telemetry information for Splunk App's using HEC by App version.
- `log_data_enabled` (default: true): Specifies whether the log data is exported. Set it to `false` if you want the log
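The `max_content_length_*` behavior described above can be illustrated with a minimal collector config sketch. This is not part of the PR; the endpoint and token values are placeholders:

```yaml
# Hypothetical collector config illustrating the unlimited-length setting.
exporters:
  splunk_hec:
    endpoint: "https://splunk.example.com:8088/services/collector"  # placeholder
    token: "00000000-0000-0000-0000-000000000000"                   # placeholder
    # 0 disables the size limit: each batch is sent as a single request,
    # even with compression enabled (the case this PR fixes).
    max_content_length_logs: 0
    max_content_length_metrics: 0
    max_content_length_traces: 0
```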
4 changes: 4 additions & 0 deletions exporter/splunkhecexporter/client.go
@@ -107,6 +107,10 @@ func (b *bufferState) accept(data []byte) (bool, error) {
gzipWriterPool: b.gzipWriterPool,
}

if b.bufferMaxLen == 0 {
zipWriter.maxCapacity = 0
}

// the new data is so big, even with a zip writer, we are over the max limit.
// abandon and return false, so we can send what is already in our buffer.
if _, err2 := zipWriter.Write(b.buf.Bytes()); err2 != nil {
47 changes: 47 additions & 0 deletions exporter/splunkhecexporter/client_test.go
@@ -462,6 +462,21 @@ func TestReceiveTracesBatches(t *testing.T) {
numBatches: 2,
compressed: true,
},
}, {
name: "100 events, make sure that we produce only one compressed batch when MaxContentLengthTraces is 0",
traces: createTraceData(100),
conf: func() *Config {
cfg := NewFactory().CreateDefaultConfig().(*Config)
cfg.MaxContentLengthTraces = 0
return cfg
}(),
want: wantType{
batches: [][]string{
{`"start_time":1`, `"start_time":2`, `"start_time":3`, `"start_time":4`, `"start_time":7`, `"start_time":8`, `"start_time":9`, `"start_time":20`, `"start_time":40`, `"start_time":85`, `"start_time":98`, `"start_time":99`},
},
numBatches: 1,
compressed: true,
},
},
}

@@ -609,6 +624,22 @@ func TestReceiveLogs(t *testing.T) {
compressed: true,
},
},
{
name: "150 events, make sure that we produce only one compressed batch when MaxContentLengthLogs is 0",
logs: createLogData(1, 1, 150),
conf: func() *Config {
cfg := NewFactory().CreateDefaultConfig().(*Config)
cfg.MaxContentLengthLogs = 0
return cfg
}(),
want: wantType{
batches: [][]string{
{`"otel.log.name":"0_0_0"`, `"otel.log.name":"0_0_90"`, `"otel.log.name":"0_0_110"`, `"otel.log.name":"0_0_149"`},
},
numBatches: 1,
compressed: true,
},
},
}

for _, test := range tests {
@@ -745,6 +776,22 @@ func TestReceiveBatchedMetrics(t *testing.T) {
compressed: true,
},
},
{
name: "200 events, make sure that we produce only one compressed batch when MaxContentLengthMetrics is 0",
metrics: createMetricsData(100),
conf: func() *Config {
cfg := NewFactory().CreateDefaultConfig().(*Config)
cfg.MaxContentLengthMetrics = 0
return cfg
}(),
want: wantType{
batches: [][]string{
{`"time":1.001`, `"time":2.002`, `"time":3.003`, `"time":4.004`, `"time":5.005`, `"time":6.006`, `"time":85.085`, `"time":99.099`},
},
numBatches: 1,
compressed: true,
},
},
}

for _, test := range tests {