From 1471521ffa8e1b81783707033ad71853d78904f9 Mon Sep 17 00:00:00 2001
From: Craig MacKenzie
Date: Tue, 26 Mar 2024 17:07:36 -0400
Subject: [PATCH] Remove references to min_events in bulk_max_size docs.
 (#38634)

* Remove references to min_events in bulk_max_size docs.

As of https://github.com/elastic/beats/pull/37795/files in 8.13.0
queue.flush.min_events is no longer relevant.

* Fix whitespace

Co-authored-by: Pierre HILBERT

---------

Co-authored-by: Pierre HILBERT
(cherry picked from commit 989d36fceb5a509b93e7dc508ce87bdce4ed4310)
---
 libbeat/outputs/elasticsearch/docs/elasticsearch.asciidoc | 6 ++----
 libbeat/outputs/logstash/docs/logstash.asciidoc           | 6 ++----
 libbeat/outputs/redis/docs/redis.asciidoc                 | 6 ++----
 3 files changed, 6 insertions(+), 12 deletions(-)

diff --git a/libbeat/outputs/elasticsearch/docs/elasticsearch.asciidoc b/libbeat/outputs/elasticsearch/docs/elasticsearch.asciidoc
index 3b010b7ed368..0f7d73649855 100644
--- a/libbeat/outputs/elasticsearch/docs/elasticsearch.asciidoc
+++ b/libbeat/outputs/elasticsearch/docs/elasticsearch.asciidoc
@@ -666,10 +666,8 @@ endif::[]
 The maximum number of events to bulk in a single Elasticsearch bulk API index request.
 The default is 1600.

-Events can be collected into batches. When using the memory queue with `queue.mem.flush.min_events`
-set to a value greater than `1`, the maximum batch is is the value of `queue.mem.flush.min_events`.
-{beatname_uc} will split batches read from the queue which are larger than `bulk_max_size` into
-multiple batches.
+Events can be collected into batches. {beatname_uc} will split batches read from the queue which are
+larger than `bulk_max_size` into multiple batches.

 Specifying a larger batch size can improve performance by lowering the overhead of sending events.
 However big batch sizes can also increase processing times, which might result in
diff --git a/libbeat/outputs/logstash/docs/logstash.asciidoc b/libbeat/outputs/logstash/docs/logstash.asciidoc
index 5fa2fc5a0285..d5e2e2741a6a 100644
--- a/libbeat/outputs/logstash/docs/logstash.asciidoc
+++ b/libbeat/outputs/logstash/docs/logstash.asciidoc
@@ -381,10 +381,8 @@ endif::[]
 The maximum number of events to bulk in a single {ls} request.
 The default is 2048.

-Events can be collected into batches. When using the memory queue with `queue.mem.flush.min_events`
-set to a value greater than `1`, the maximum batch is is the value of `queue.mem.flush.min_events`.
-{beatname_uc} will split batches read from the queue which are larger than `bulk_max_size` into
-multiple batches.
+Events can be collected into batches. {beatname_uc} will split batches read from the queue which are
+larger than `bulk_max_size` into multiple batches.

 Specifying a larger batch size can improve performance by lowering the overhead of sending events.
 However big batch sizes can also increase processing times, which might result in
diff --git a/libbeat/outputs/redis/docs/redis.asciidoc b/libbeat/outputs/redis/docs/redis.asciidoc
index 0b758e524cb8..366d3cb832a4 100644
--- a/libbeat/outputs/redis/docs/redis.asciidoc
+++ b/libbeat/outputs/redis/docs/redis.asciidoc
@@ -216,10 +216,8 @@ endif::[]
 The maximum number of events to bulk in a single Redis request or pipeline.
 The default is 2048.

-Events can be collected into batches. When using the memory queue with `queue.mem.flush.min_events`
-set to a value greater than `1`, the maximum batch is is the value of `queue.mem.flush.min_events`.
-{beatname_uc} will split batches read from the queue which are larger than `bulk_max_size` into
-multiple batches.
+Events can be collected into batches. {beatname_uc} will split batches read from the queue which are
+larger than `bulk_max_size` into multiple batches.

 Specifying a larger batch size can improve performance by lowering the overhead of sending events.
 However big batch sizes can also increase processing times, which might result in
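
For context on the setting these docs describe: `bulk_max_size` lives under the relevant output
section of a Beat's YAML configuration. A minimal sketch, assuming the Elasticsearch output and an
illustrative host value:

[source,yaml]
----
output.elasticsearch:
  hosts: ["localhost:9200"]  # illustrative host, not from the patch
  bulk_max_size: 1600        # batches read from the queue larger than this are split
----

The same knob exists as `output.logstash.bulk_max_size` and `output.redis.bulk_max_size`, where the
default is 2048 rather than 1600.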