Update 8.13.0 queue documentation and release notes (#983) (#985)
* Port output changes from Beats.

* Sync queue docs from Beats.

* Update docs/en/ingest-management/elastic-agent/configuration/outputs/output-logstash.asciidoc

Co-authored-by: David Kilfoyle <[email protected]>

* Add backticks to other outputs.

---------

Co-authored-by: David Kilfoyle <[email protected]>
(cherry picked from commit ac080b9)

Co-authored-by: Craig MacKenzie <[email protected]>
mergify[bot] and cmacknz authored Mar 27, 2024
1 parent 82ff85f commit 32337df
Showing 5 changed files with 98 additions and 55 deletions.
@@ -486,24 +486,34 @@ exports the API under a custom prefix.

The memory queue keeps all events in memory.

The memory queue waits for the output to acknowledge or drop events. If
the queue is full, no new events can be inserted into the memory queue. Only
after the signal from the output will the queue free up space for more events to be accepted.

The memory queue is controlled by the parameters `queue.mem.flush.min_events` and `queue.mem.flush.timeout`. If
`queue.mem.flush.timeout` is `0s` or `queue.mem.flush.min_events` is `0` or `1` then events can be sent by the output as
soon as they are available. If the output supports a `bulk_max_size` parameter it controls the
maximum batch size that can be sent.

If `queue.mem.flush.min_events` is greater than `1` and `queue.mem.flush.timeout` is greater than `0s`, events will only
be sent to the output when the queue contains at least `queue.mem.flush.min_events` events or the
`queue.mem.flush.timeout` period has expired. In this mode, the maximum batch size that can be sent by the
output is `queue.mem.flush.min_events`. If the output supports a `bulk_max_size` parameter, values of
`bulk_max_size` greater than `queue.mem.flush.min_events` have no effect. The value of `queue.mem.flush.min_events`
should be evenly divisible by `bulk_max_size` to avoid sending partial batches to the output.

This sample configuration forwards events to the output if 512 events are available or the oldest
available event has been waiting for 5s in the queue:
The memory queue waits for the output to acknowledge or drop events. If the queue is full, no new
events can be inserted into the memory queue. Only after the signal from the output will the queue
free up space for more events to be accepted.

The memory queue is controlled by the parameters `flush.min_events` and `flush.timeout`.
`flush.min_events` gives a limit on the number of events that can be included in a single batch, and
`flush.timeout` specifies how long the queue should wait to completely fill an event request. If the
output supports a `bulk_max_size` parameter, the maximum batch size will be the smaller of
`bulk_max_size` and `flush.min_events`.
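
For illustration only, here is a minimal sketch (values and placement are assumptions, not taken
from this page) in which `bulk_max_size` is the effective limit because it is smaller than
`flush.min_events`:

[source,yaml]
------------------------------------------------------------------------------
  bulk_max_size: 512                # output batch size; the effective limit here
  queue.mem.events: 4096            # queue capacity, evenly divisible by 512
  queue.mem.flush.min_events: 1600  # larger than bulk_max_size, so it adds no further limit
  queue.mem.flush.timeout: 10s      # wait up to 10s for a full batch
------------------------------------------------------------------------------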

`flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size
with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with
`flush.min_events` instead of `bulk_max_size`.

In synchronous mode, an event request is always filled as soon as events are available, even if
there are not enough events to fill the requested batch. This is useful when latency must be
minimized. To use synchronous mode, set `flush.timeout` to `0`.

For backwards compatibility, synchronous mode can also be activated by setting `flush.min_events` to `0`
or `1`. In this case, batch size is capped at half the queue capacity.
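
For example, a latency-sensitive configuration might look like the following sketch (illustrative
values and placement, not taken from this page):

[source,yaml]
------------------------------------------------------------------------------
  queue.mem.events: 4096       # queue capacity
  queue.mem.flush.timeout: 0s  # synchronous mode: forward events as soon as they are available
------------------------------------------------------------------------------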

In asynchronous mode, an event request waits up to the specified timeout to try to fill the
requested batch completely. If the timeout expires, the queue returns a partial batch with all
available events. To use asynchronous mode, set `flush.timeout` to a positive duration, for example `5s`.
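
For example, an asynchronous configuration that waits up to `5s` to assemble full batches might
look like this sketch (illustrative values only):

[source,yaml]
------------------------------------------------------------------------------
  bulk_max_size: 512           # target batch size for the output
  queue.mem.events: 4096       # queue capacity
  queue.mem.flush.timeout: 5s  # asynchronous mode: wait up to 5s to fill each batch
------------------------------------------------------------------------------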

This sample configuration forwards events to the output when there are enough events to fill the
output's request (usually controlled by `bulk_max_size`, and limited to at most 512 events by
`flush.min_events`), or when events have been waiting for 5s without filling the requested size:

[source,yaml]
------------------------------------------------------------------------------
@@ -149,24 +149,34 @@ output, {agent} can use SSL/TLS. For a list of available settings, refer to

The memory queue keeps all events in memory.

The memory queue waits for the output to acknowledge or drop events. If
the queue is full, no new events can be inserted into the memory queue. Only
after the signal from the output will the queue free up space for more events to be accepted.

The memory queue is controlled by the parameters `queue.mem.flush.min_events` and `flush.timeout`. If
`flush.timeout` is `0s` or `queue.mem.flush.min_events` is `0` or `1` then events can be sent by the output as
soon as they are available. If the output supports a `bulk_max_size` parameter it controls the
maximum batch size that can be sent.

If `queue.mem.flush.min_events` is greater than `1` and `flush.timeout` is greater than `0s`, events will only
be sent to the output when the queue contains at least `queue.mem.flush.min_events` events or the
`flush.timeout` period has expired. In this mode, the maximum batch size that can be sent by the
output is `queue.mem.flush.min_events`. If the output supports a `bulk_max_size` parameter, values of
`bulk_max_size` greater than `queue.mem.flush.min_events` have no effect. The value of `queue.mem.flush.min_events`
should be evenly divisible by `bulk_max_size` to avoid sending partial batches to the output.

This sample configuration forwards events to the output if 512 events are available or the oldest
available event has been waiting for 5s in the queue:
The memory queue waits for the output to acknowledge or drop events. If the queue is full, no new
events can be inserted into the memory queue. Only after the signal from the output will the queue
free up space for more events to be accepted.

The memory queue is controlled by the parameters `flush.min_events` and `flush.timeout`.
`flush.min_events` gives a limit on the number of events that can be included in a single batch, and
`flush.timeout` specifies how long the queue should wait to completely fill an event request. If the
output supports a `bulk_max_size` parameter, the maximum batch size will be the smaller of
`bulk_max_size` and `flush.min_events`.
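
For illustration only, here is a minimal sketch (values and placement are assumptions, not taken
from this page) in which `bulk_max_size` is the effective limit because it is smaller than
`flush.min_events`:

[source,yaml]
------------------------------------------------------------------------------
  bulk_max_size: 512                # output batch size; the effective limit here
  queue.mem.events: 4096            # queue capacity, evenly divisible by 512
  queue.mem.flush.min_events: 1600  # larger than bulk_max_size, so it adds no further limit
  queue.mem.flush.timeout: 10s      # wait up to 10s for a full batch
------------------------------------------------------------------------------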

`flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size
with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with
`flush.min_events` instead of `bulk_max_size`.

In synchronous mode, an event request is always filled as soon as events are available, even if
there are not enough events to fill the requested batch. This is useful when latency must be
minimized. To use synchronous mode, set `flush.timeout` to `0`.

For backwards compatibility, synchronous mode can also be activated by setting `flush.min_events` to `0`
or `1`. In this case, batch size is capped at half the queue capacity.
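
For example, a latency-sensitive configuration might look like the following sketch (illustrative
values and placement, not taken from this page):

[source,yaml]
------------------------------------------------------------------------------
  queue.mem.events: 4096       # queue capacity
  queue.mem.flush.timeout: 0s  # synchronous mode: forward events as soon as they are available
------------------------------------------------------------------------------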

In asynchronous mode, an event request waits up to the specified timeout to try to fill the
requested batch completely. If the timeout expires, the queue returns a partial batch with all
available events. To use asynchronous mode, set `flush.timeout` to a positive duration, for example `5s`.
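
For example, an asynchronous configuration that waits up to `5s` to assemble full batches might
look like this sketch (illustrative values only):

[source,yaml]
------------------------------------------------------------------------------
  bulk_max_size: 512           # target batch size for the output
  queue.mem.events: 4096       # queue capacity
  queue.mem.flush.timeout: 5s  # asynchronous mode: wait up to 5s to fill each batch
------------------------------------------------------------------------------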

This sample configuration forwards events to the output when there are enough events to fill the
output's request (usually controlled by `bulk_max_size`, and limited to at most 512 events by
`flush.min_events`), or when events have been waiting for 5s without filling the requested size:

[source,yaml]
------------------------------------------------------------------------------
@@ -169,23 +169,34 @@ For more information, refer to <<secure-logstash-connections>>.

The memory queue keeps all events in memory.

The memory queue waits for the output to acknowledge or drop events. If
the queue is full, no new events can be inserted into the memory queue. Only
after the signal from the output will the queue free up space for more events to be accepted.

The memory queue is controlled by the parameters `queue.mem.flush.min_events` and `queue.mem.flush.timeout`. If
`queue.mem.flush.timeout` is `0s` or `queue.mem.flush.min_events` is `0` or `1` then events can be sent by the output as
soon as they are available. If the output supports a `bulk_max_size` parameter it controls the
maximum batch size that can be sent.

If `queue.mem.flush.min_events` is greater than `1` and `queue.mem.flush.timeout` is greater than `0s`, events will only
be sent to the output when the queue contains at least `queue.mem.flush.min_events` events or the
`queue.mem.flush.timeout` period has expired. In this mode, the maximum batch size that can be sent by the
output is `queue.mem.flush.min_events`. If the output supports a `bulk_max_size` parameter, values of
`bulk_max_size` greater than `queue.mem.flush.min_events` have no effect. The value of `queue.mem.flush.min_events`
should be evenly divisible by `bulk_max_size` to avoid sending partial batches to the output.

This sample configuration forwards events to the output if 512 events are available or the oldest
available event has been waiting for 5s in the queue:
The memory queue waits for the output to acknowledge or drop events. If the queue is full, no new
events can be inserted into the memory queue. Only after the signal from the output will the queue
free up space for more events to be accepted.

The memory queue is controlled by the parameters `flush.min_events` and `flush.timeout`.
`flush.min_events` gives a limit on the number of events that can be included in a single batch, and
`flush.timeout` specifies how long the queue should wait to completely fill an event request. If the
output supports a `bulk_max_size` parameter, the maximum batch size will be the smaller of
`bulk_max_size` and `flush.min_events`.
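
For illustration only, here is a minimal sketch (values and placement are assumptions, not taken
from this page) in which `bulk_max_size` is the effective limit because it is smaller than
`flush.min_events`:

[source,yaml]
------------------------------------------------------------------------------
  bulk_max_size: 512                # output batch size; the effective limit here
  queue.mem.events: 4096            # queue capacity, evenly divisible by 512
  queue.mem.flush.min_events: 1600  # larger than bulk_max_size, so it adds no further limit
  queue.mem.flush.timeout: 10s      # wait up to 10s for a full batch
------------------------------------------------------------------------------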

`flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size
with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with
`flush.min_events` instead of `bulk_max_size`.

In synchronous mode, an event request is always filled as soon as events are available, even if
there are not enough events to fill the requested batch. This is useful when latency must be
minimized. To use synchronous mode, set `flush.timeout` to `0`.

For backwards compatibility, synchronous mode can also be activated by setting `flush.min_events` to `0`
or `1`. In this case, batch size is capped at half the queue capacity.
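
For example, a latency-sensitive configuration might look like the following sketch (illustrative
values and placement, not taken from this page):

[source,yaml]
------------------------------------------------------------------------------
  queue.mem.events: 4096       # queue capacity
  queue.mem.flush.timeout: 0s  # synchronous mode: forward events as soon as they are available
------------------------------------------------------------------------------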

In asynchronous mode, an event request waits up to the specified timeout to try to fill the
requested batch completely. If the timeout expires, the queue returns a partial batch with all
available events. To use asynchronous mode, set `flush.timeout` to a positive duration, for example `5s`.
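
For example, an asynchronous configuration that waits up to `5s` to assemble full batches might
look like this sketch (illustrative values only):

[source,yaml]
------------------------------------------------------------------------------
  bulk_max_size: 512           # target batch size for the output
  queue.mem.events: 4096       # queue capacity
  queue.mem.flush.timeout: 5s  # asynchronous mode: wait up to 5s to fill each batch
------------------------------------------------------------------------------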

This sample configuration forwards events to the output when there are enough events to fill the
output's request (usually controlled by `bulk_max_size`, and limited to at most 512 events by
`flush.min_events`), or when events have been waiting for 5s without filling the requested size:

[source,yaml]
@@ -48,7 +48,7 @@ escaping.
[id="{type}-queue.mem.events-setting"]
`queue.mem.events`

| The number of events the queue can store. This value should be evenly divisible by `queue.mem.flush.min_events` to avoid sending partial batches to the output.
| The number of events the queue can store. This value should be evenly divisible by the smaller of `queue.mem.flush.min_events` or `bulk_max_size` to avoid sending partial batches to the output.

*Default:* `3200 events`
// end::queue.mem.events-setting[]
@@ -60,7 +60,7 @@
[id="{type}-queue.mem.flush.min_events-setting"]
`queue.mem.flush.min_events`

| The minimum number of events required for publishing. If this value is set to 0 or 1, events are available to the output immediately. If this value is greater than 1 the output must wait for the queue to accumulate this minimum number of events or for `queue.mem.flush.timeout` to expire before publishing. When greater than 1 this value also defines the maximum possible batch that can be sent by the output.
| `flush.min_events` is a legacy parameter, and new configurations should prefer to control batch size with `bulk_max_size`. As of 8.13, there is never a performance advantage to limiting batch size with `flush.min_events` instead of `bulk_max_size`.

*Default:* `1600 events`
// end::queue.mem.flush.min_events-setting[]
@@ -66,6 +66,17 @@ Managed content relating to specific visualization editors such as Lens, TSVB, a
For more information, refer to ({kibana-pull}172393[#172393]).
====

// copied from Beats release notes: https://github.com/elastic/beats/pull/37795
[discrete]
[[breaking-37795]]
.The behavior of `queue.mem.flush.min_events` has been simplified.
[%collapsible]
====
*Details* +
The behavior of `queue.mem.flush.min_events` has been simplified. It now serves as a simple maximum on the size of all event batches. There are no longer performance implications in its relationship to `bulk_max_size`.
For more information, refer to ({beats-pull}37795[#37795]).
====

//[discrete]
//[[known-issues-8.13.0]]
@@ -97,6 +108,7 @@ The 8.13.0 release added the following new and notable features.
* Add a postrm script to {agent} DEB and RPM packages. {agent-pull}4334[#4334] {agent-issue}3784[#3784] {agent-issue}4267[#4267]
* Kubernetes secrets provider has been improved to update a Kubernetes secret when the secret value changes. {agent-pull}4371[#4371] {agent-issue}4168[#4168]
* Upgrade link:https://github.com/elastic/elastic-agent-system-metrics[elastic-agent-system-metrics] to version 0.9.2. {agent-pull}4383[#4383]
* Allow users to configure the number of output workers (for outputs that support workers) with either `worker` or `workers`. {beats-pull}38257[#38257]

[discrete]
[[enhancements-8.13.0]]