diff --git a/.chloggen/add-initial-buffer.yaml b/.chloggen/add-initial-buffer.yaml new file mode 100644 index 000000000000..c473c0ad5e9e --- /dev/null +++ b/.chloggen/add-initial-buffer.yaml @@ -0,0 +1,27 @@ +# Use this changelog template to create an entry for release notes. + +# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix' +change_type: enhancement + +# The name of the component, or a single word describing the area of concern, (e.g. filelogreceiver) +component: pkg/stanza + +# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`). +note: Allow users to configure initial buffer size + +# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists. +issues: [37786] + +# (Optional) One or more lines of additional information to render under the primary note. +# These lines will be padded with 2 spaces and then inserted directly into the document. +# Use pipe (|) for multiline entries. +subtext: + +# If your change doesn't affect end users or the exported elements of any package, +# you should instead start your pull request title with [chore] or use the "Skip Changelog" label. +# Optional: The change log or logs in which this entry should be included. +# e.g. '[user]' or '[user, api]' +# Include 'user' if the change is relevant to end users. +# Include 'api' if there is a change to a library API. +# Default: '[user]' +change_logs: [user] diff --git a/pkg/stanza/docs/operators/file_input.md b/pkg/stanza/docs/operators/file_input.md index 77be99e986f5..34f0deb8dbe7 100644 --- a/pkg/stanza/docs/operators/file_input.md +++ b/pkg/stanza/docs/operators/file_input.md @@ -4,35 +4,36 @@ The `file_input` operator reads logs from files. 
It will place the lines read in ### Configuration Fields -| Field | Default | Description | -|---------------------------------| --- |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| `id` | `file_input` | A unique identifier for the operator. | -| `output` | Next in pipeline | The connected operator(s) that will receive all outbound entries. | -| `include` | required | A list of file glob patterns that match the file paths to be read. | -| `exclude` | [] | A list of file glob patterns to exclude from reading. | -| `poll_interval` | 200ms | The duration between filesystem polls. | -| `multiline` | | A `multiline` configuration block. See below for details. | -| `force_flush_period` | `500ms` | Time since last read of data from file, after which currently buffered log should be send to pipeline. Takes `time.Time` as value. Zero means waiting for new data forever. | -| `encoding` | `utf-8` | The encoding of the file being read. See the list of supported encodings below for available options. | -| `include_file_name` | `true` | Whether to add the file name as the attribute `log.file.name`. | -| `include_file_path` | `false` | Whether to add the file path as the attribute `log.file.path`. | -| `include_file_name_resolved` | `false` | Whether to add the file name after symlinks resolution as the attribute `log.file.name_resolved`. | -| `include_file_path_resolved` | `false` | Whether to add the file path after symlinks resolution as the attribute `log.file.path_resolved`. | -| `include_file_owner_name` | `false` | Whether to add the file owner name as the attribute `log.file.owner.name`. Not supported for windows. | -| `include_file_owner_group_name` | `false` | Whether to add the file group name as the attribute `log.file.owner.group.name`. 
Not supported for windows. | -| `include_file_record_number` | `false` | Whether to add the record's record number in the file as the attribute `log.file.record_number`. | -| `preserve_leading_whitespaces` | `false` | Whether to preserve leading whitespaces. | -| `preserve_trailing_whitespaces` | `false` | Whether to preserve trailing whitespaces. | -| `start_at` | `end` | At startup, where to start reading logs from the file. Options are `beginning` or `end`. This setting will be ignored if previously read file offsets are retrieved from a persistence mechanism. | -| `fingerprint_size` | `1kb` | The number of bytes with which to identify a file. The first bytes in the file are used as the fingerprint. Decreasing this value at any point will cause existing fingerprints to forgotten, meaning that all files will be read from the beginning (one time). | -| `max_log_size` | `1MiB` | The maximum size of a log entry to read before failing. Protects against reading large amounts of data into memory |. -| `max_concurrent_files` | 1024 | The maximum number of log files from which logs will be read concurrently (minimum = 2). If the number of files matched in the `include` pattern exceeds half of this number, then files will be processed in batches. | -| `max_batches` | 0 | Only applicable when files must be batched in order to respect `max_concurrent_files`. This value limits the number of batches that will be processed during a single poll interval. A value of 0 indicates no limit. | -| `delete_after_read` | `false` | If `true`, each log file will be read and then immediately deleted. Requires that the `filelog.allowFileDeletion` feature gate is enabled. | -| `acquire_fs_lock` | `false` | Whether to attempt to acquire a filesystem lock before reading a file (Unix only). | -| `attributes` | {} | A map of `key: value` pairs to add to the entry's attributes. | -| `resource` | {} | A map of `key: value` pairs to add to the entry's resource. 
| -| `header` | nil | Specifies options for parsing header metadata. Requires that the `filelog.allowHeaderMetadataParsing` feature gate is enabled. See below for details. | +| Field | Default | Description | +|---------------------------------|--------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `id` | `file_input` | A unique identifier for the operator. | +| `output` | Next in pipeline | The connected operator(s) that will receive all outbound entries. | +| `include` | required | A list of file glob patterns that match the file paths to be read. | +| `exclude` | [] | A list of file glob patterns to exclude from reading. | +| `poll_interval` | 200ms | The duration between filesystem polls. | +| `multiline` | | A `multiline` configuration block. See below for details. | +| `force_flush_period` | `500ms` | Time since the last read of data from the file, after which the currently buffered log is sent to the pipeline. Takes a duration as value. Zero means waiting for new data forever. | +| `encoding` | `utf-8` | The encoding of the file being read. See the list of supported encodings below for available options. | +| `include_file_name` | `true` | Whether to add the file name as the attribute `log.file.name`. | +| `include_file_path` | `false` | Whether to add the file path as the attribute `log.file.path`. | +| `include_file_name_resolved` | `false` | Whether to add the file name after symlinks resolution as the attribute `log.file.name_resolved`. | +| `include_file_path_resolved` | `false` | Whether to add the file path after symlinks resolution as the attribute `log.file.path_resolved`. | +| `include_file_owner_name` | `false` | Whether to add the file owner name as the attribute `log.file.owner.name`. Not supported for Windows. 
| +| `include_file_owner_group_name` | `false` | Whether to add the file group name as the attribute `log.file.owner.group.name`. Not supported for Windows. | +| `include_file_record_number` | `false` | Whether to add the record number in the file as the attribute `log.file.record_number`. | +| `preserve_leading_whitespaces` | `false` | Whether to preserve leading whitespaces. | +| `preserve_trailing_whitespaces` | `false` | Whether to preserve trailing whitespaces. | +| `start_at` | `end` | At startup, where to start reading logs from the file. Options are `beginning` or `end`. This setting will be ignored if previously read file offsets are retrieved from a persistence mechanism. | +| `fingerprint_size` | `1kb` | The number of bytes with which to identify a file. The first bytes in the file are used as the fingerprint. Decreasing this value at any point will cause existing fingerprints to be forgotten, meaning that all files will be read from the beginning (one time). | +| `initial_buffer_size` | `16KiB` | The initial size of the buffer used to read headers and logs. The buffer grows as needed; a larger value may cause unnecessarily large allocations, while a smaller value may cause repeated copies as the buffer grows. | +| `max_log_size` | `1MiB` | The maximum size of a log entry to read before failing. Protects against reading large amounts of data into memory. | +| `max_concurrent_files` | 1024 | The maximum number of log files from which logs will be read concurrently (minimum = 2). If the number of files matched in the `include` pattern exceeds half of this number, then files will be processed in batches. | +| `max_batches` | 0 | Only applicable when files must be batched in order to respect `max_concurrent_files`. This value limits the number of batches that will be processed during a single poll interval. A value of 0 indicates no limit. 
| +| `delete_after_read` | `false` | If `true`, each log file will be read and then immediately deleted. Requires that the `filelog.allowFileDeletion` feature gate is enabled. | +| `acquire_fs_lock` | `false` | Whether to attempt to acquire a filesystem lock before reading a file (Unix only). | +| `attributes` | {} | A map of `key: value` pairs to add to the entry's attributes. | +| `resource` | {} | A map of `key: value` pairs to add to the entry's resource. | +| `header` | nil | Specifies options for parsing header metadata. Requires that the `filelog.allowHeaderMetadataParsing` feature gate is enabled. See below for details. | | `header.pattern` | required for header metadata parsing | A regex that matches every header line. | | `header.metadata_operators` | required for header metadata parsing | A list of operators used to parse metadata from the header. | diff --git a/pkg/stanza/fileconsumer/config.go b/pkg/stanza/fileconsumer/config.go index 648347add86a..e4744c04772b 100644 --- a/pkg/stanza/fileconsumer/config.go +++ b/pkg/stanza/fileconsumer/config.go @@ -59,6 +59,7 @@ func NewConfig() *Config { MaxConcurrentFiles: defaultMaxConcurrentFiles, StartAt: "end", FingerprintSize: fingerprint.DefaultSize, + InitialBufferSize: scanner.DefaultBufferSize, MaxLogSize: reader.DefaultMaxLogSize, Encoding: defaultEncoding, FlushPeriod: reader.DefaultFlushPeriod, @@ -77,6 +78,7 @@ type Config struct { MaxBatches int `mapstructure:"max_batches,omitempty"` StartAt string `mapstructure:"start_at,omitempty"` FingerprintSize helper.ByteSize `mapstructure:"fingerprint_size,omitempty"` + InitialBufferSize helper.ByteSize `mapstructure:"initial_buffer_size,omitempty"` MaxLogSize helper.ByteSize `mapstructure:"max_log_size,omitempty"` Encoding string `mapstructure:"encoding,omitempty"` SplitConfig split.Config `mapstructure:"multiline,omitempty"` @@ -154,7 +156,7 @@ func (c Config) Build(set component.TelemetrySettings, emit emit.Callback, opts TelemetrySettings: set, 
FromBeginning: startAtBeginning, FingerprintSize: int(c.FingerprintSize), - InitialBufferSize: scanner.DefaultBufferSize, + InitialBufferSize: int(c.InitialBufferSize), MaxLogSize: int(c.MaxLogSize), Encoding: enc, SplitFunc: splitFunc, diff --git a/pkg/stanza/fileconsumer/internal/reader/reader.go b/pkg/stanza/fileconsumer/internal/reader/reader.go index 4cd90a9d5703..4cd3e910c2a9 100644 --- a/pkg/stanza/fileconsumer/internal/reader/reader.go +++ b/pkg/stanza/fileconsumer/internal/reader/reader.go @@ -167,7 +167,6 @@ func (r *Reader) readHeader(ctx context.Context) (doneReadingFile bool) { } r.headerReader = nil r.HeaderFinalized = true - r.initialBufferSize = scanner.DefaultBufferSize // Reset position in file to r.Offest after the header scanner might have moved it past a content token. if _, err := r.file.Seek(r.Offset, 0); err != nil { diff --git a/receiver/filelogreceiver/README.md b/receiver/filelogreceiver/README.md index 354884266622..8bdc92bbaedc 100644 --- a/receiver/filelogreceiver/README.md +++ b/receiver/filelogreceiver/README.md @@ -37,6 +37,7 @@ Tails and parses logs from files. | `include_file_record_number` | `false` | Whether to add the record number in the file as the attribute `log.file.record_number`. | | `poll_interval` | 200ms | The [duration](#time-parameters) between filesystem polls. | | `fingerprint_size` | `1kb` | The number of bytes with which to identify a file. The first bytes in the file are used as the fingerprint. Decreasing this value at any point will cause existing fingerprints to forgotten, meaning that all files will be read from the beginning (one time) | +| `initial_buffer_size` | `16KiB` | The initial size of the buffer used to read headers and logs. The buffer grows as needed; a larger value may cause unnecessarily large allocations, while a smaller value may cause repeated copies as the buffer grows. | | `max_log_size` | `1MiB` | The maximum size of a log entry to read. 
A log entry will be truncated if it is larger than `max_log_size`. Protects against reading large amounts of data into memory. | | `max_concurrent_files` | 1024 | The maximum number of log files from which logs will be read concurrently. If the number of files matched in the `include` pattern exceeds this number, then files will be processed in batches. | | `max_batches` | 0 | Only applicable when files must be batched in order to respect `max_concurrent_files`. This value limits the number of batches that will be processed during a single poll interval. A value of 0 indicates no limit. | diff --git a/receiver/otlpjsonfilereceiver/file_test.go b/receiver/otlpjsonfilereceiver/file_test.go index dbb04dfdcc0c..896c1ddd722a 100644 --- a/receiver/otlpjsonfilereceiver/file_test.go +++ b/receiver/otlpjsonfilereceiver/file_test.go @@ -206,6 +206,7 @@ func testdataConfigYamlAsMap() *Config { Encoding: "utf-8", StartAt: "end", FingerprintSize: 1000, + InitialBufferSize: 16 * 1024, MaxLogSize: 1024 * 1024, MaxConcurrentFiles: 1024, FlushPeriod: 500 * time.Millisecond,