Doc: Add attribute for kafka client versions #48

Merged 3 commits on Aug 13, 2020
9 changes: 7 additions & 2 deletions docs/index.asciidoc
@@ -1,6 +1,7 @@
:plugin: kafka
:type: integration
:no_codec:
+:kafka_client: 2.4

///////////////////////////////////////////
START - GENERATED VARIABLES, DO NOT EDIT!
@@ -21,11 +22,15 @@ include::{include_path}/plugin_header.asciidoc[]

==== Description

-The Kafka Integration Plugin provides integrated plugins for working with the https://kafka.apache.org/[Kafka] distributed streaming platform.
+The Kafka Integration Plugin provides integrated plugins for working with the
+https://kafka.apache.org/[Kafka] distributed streaming platform.

- {logstash-ref}/plugins-inputs-kafka.html[Kafka Input Plugin]
- {logstash-ref}/plugins-outputs-kafka.html[Kafka Output Plugin]

-This plugin uses Kafka Client 2.4. For broker compatibility, see the official https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka compatibility reference]. If the linked compatibility wiki is not up-to-date, please contact Kafka support/community to confirm compatibility.
+This plugin uses Kafka Client {kafka_client}. For broker compatibility, see the official
+https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka
+compatibility reference]. If the linked compatibility wiki is not up-to-date,
+please contact Kafka support/community to confirm compatibility.

:no_codec!:
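
The substitution above is the point of this PR: define the client version once per file, then reference it everywhere. A minimal sketch of how AsciiDoc resolves these attributes (the `:kafka_client:` and `:kafka_client_doc:` names are the ones this diff introduces; the sample sentence is illustrative):

[source,asciidoc]
:kafka_client: 2.4
:kafka_client_doc: 24
This plugin uses Kafka Client {kafka_client}.
See https://kafka.apache.org/{kafka_client_doc}/documentation.html for details.

Here `{kafka_client}` renders as `2.4` and `{kafka_client_doc}` as `24`, so a future client upgrade should only need to touch the attribute definitions, not every URL.
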
25 changes: 18 additions & 7 deletions docs/input-kafka.asciidoc
@@ -1,6 +1,8 @@
:plugin: kafka
:type: input
:default_codec: plain
+:kafka_client: 2.4
+:kafka_client_doc: 24

///////////////////////////////////////////
START - GENERATED VARIABLES, DO NOT EDIT!
@@ -23,9 +25,14 @@ include::{include_path}/plugin_header.asciidoc[]

This input will read events from a Kafka topic.

-This plugin uses Kafka Client 2.4. For broker compatibility, see the official https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka compatibility reference]. If the linked compatibility wiki is not up-to-date, please contact Kafka support/community to confirm compatibility.
+This plugin uses Kafka Client {kafka_client}. For broker compatibility, see the
+official
+https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka
+compatibility reference]. If the linked compatibility wiki is not up-to-date,
+please contact Kafka support/community to confirm compatibility.

-If you require features not yet available in this plugin (including client version upgrades), please file an issue with details about what you need.
+If you require features not yet available in this plugin (including client
+version upgrades), please file an issue with details about what you need.

This input supports connecting to Kafka over:

@@ -46,9 +53,9 @@ the same `group_id`.
Ideally you should have as many threads as the number of partitions for a perfect balance --
more threads than partitions means that some threads will be idle

-For more information see https://kafka.apache.org/24/documentation.html#theconsumer
+For more information see https://kafka.apache.org/{kafka_client_doc}/documentation.html#theconsumer

-Kafka consumer configuration: https://kafka.apache.org/24/documentation.html#consumerconfigs
+Kafka consumer configuration: https://kafka.apache.org/{kafka_client_doc}/documentation.html#consumerconfigs
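
To make the threads-to-partitions guidance above concrete (this example is not part of the diff; the server, topic, and counts are hypothetical), a four-partition topic read by a single Logstash instance could be configured as:

[source,ruby]
input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["logs"]        # hypothetical topic with 4 partitions
    group_id => "logstash"    # consumers sharing a group_id split the partitions
    consumer_threads => 4     # one thread per partition -- the "perfect balance" above
  }
}

Starting a second instance with the same `group_id` would trigger a rebalance, spreading the four partitions across both instances' threads.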

==== Metadata fields

@@ -59,7 +66,11 @@ The following metadata from Kafka broker are added under the `[@metadata]` field
* `[@metadata][kafka][partition]`: Partition info for this message.
* `[@metadata][kafka][offset]`: Original record offset for this message.
* `[@metadata][kafka][key]`: Record key, if any.
-* `[@metadata][kafka][timestamp]`: Timestamp in the Record. Depending on your broker configuration, this can be either when the record was created (default) or when it was received by the broker. See more about property log.message.timestamp.type at https://kafka.apache.org/10/documentation.html#brokerconfigs
+* `[@metadata][kafka][timestamp]`: Timestamp in the Record.
+Depending on your broker configuration, this can be
+either when the record was created (default) or when it was received by the
+broker. See more about property log.message.timestamp.type at
+https://kafka.apache.org/{kafka_client_doc}/documentation.html#brokerconfigs
Contributor Author:

Yikes! Here's an oldie!


Metadata is only added to the event if the `decorate_events` option is set to true (it defaults to false).
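
A hedged sketch of the `decorate_events` usage just described (values are illustrative; the metadata field names are the ones listed above):

[source,ruby]
input {
  kafka {
    topics => ["logs"]
    decorate_events => true   # adds the [@metadata][kafka][...] fields
  }
}
filter {
  mutate {
    # @metadata fields are not emitted by outputs, so copy what you need into the event
    add_field => { "kafka_offset" => "%{[@metadata][kafka][offset]}" }
  }
}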

@@ -73,7 +84,7 @@ This plugin supports these configuration options plus the <<plugins-{type}s-{plu

NOTE: Some of these options map to a Kafka option. Defaults usually reflect the Kafka default setting,
and might change if Kafka's consumer defaults change.
-See the https://kafka.apache.org/24/documentation for more details.
+See the https://kafka.apache.org/{kafka_client_doc}/documentation for more details.

[cols="<,<,<",options="header",]
|=======================================================================
@@ -421,7 +432,7 @@ partition ownership amongst consumer instances, supported options are:
* `sticky`
* `cooperative_sticky`

-These map to Kafka's corresponding https://kafka.apache.org/24/javadoc/org/apache/kafka/clients/consumer/ConsumerPartitionAssignor.html[`ConsumerPartitionAssignor`]
+These map to Kafka's corresponding https://kafka.apache.org/{kafka_client_doc}/javadoc/org/apache/kafka/clients/consumer/ConsumerPartitionAssignor.html[`ConsumerPartitionAssignor`]
implementations.
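
For instance, an illustrative setting (assuming this collapsed section documents the input's `partition_assignment_strategy` option, as in the published plugin docs -- the option name itself is hidden by the diff view):

[source,ruby]
input {
  kafka {
    topics => ["logs"]
    partition_assignment_strategy => "cooperative_sticky"   # one of the four strategies listed above
  }
}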

[id="plugins-{type}s-{plugin}-poll_timeout_ms"]
30 changes: 21 additions & 9 deletions docs/output-kafka.asciidoc
Original file line number Diff line number Diff line change
@@ -1,6 +1,8 @@
:plugin: kafka
:type: output
:default_codec: plain
+:kafka_client: 2.4
+:kafka_client_doc: 24

///////////////////////////////////////////
START - GENERATED VARIABLES, DO NOT EDIT!
@@ -23,9 +25,14 @@ include::{include_path}/plugin_header.asciidoc[]

Write events to a Kafka topic.

-This plugin uses Kafka Client 2.4. For broker compatibility, see the official https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka compatibility reference]. If the linked compatibility wiki is not up-to-date, please contact Kafka support/community to confirm compatibility.
+This plugin uses Kafka Client {kafka_client}. For broker compatibility, see the
+official
+https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka
+compatibility reference]. If the linked compatibility wiki is not up-to-date,
+please contact Kafka support/community to confirm compatibility.

-If you require features not yet available in this plugin (including client version upgrades), please file an issue with details about what you need.
+If you require features not yet available in this plugin (including client
+version upgrades), please file an issue with details about what you need.

This output supports connecting to Kafka over:

@@ -36,9 +43,12 @@ By default security is disabled but can be turned on as needed.

The only required configuration is the topic_id.

-The default codec is plain. Logstash will encode your events with not only the message field but also with a timestamp and hostname.
+The default codec is plain. Logstash will encode your events with not only the
+message field but also with a timestamp and hostname.

-If you want the full content of your events to be sent as json, you should set the codec in the output configuration like this:
+If you want the full content of your events to be sent as json, you should set
+the codec in the output configuration like this:
[source,ruby]
output {
kafka {
Expand All @@ -47,9 +57,11 @@ If you want the full content of your events to be sent as json, you should set t
}
}

-For more information see https://kafka.apache.org/24/documentation.html#theproducer
+For more information see
+https://kafka.apache.org/{kafka_client_doc}/documentation.html#theproducer

-Kafka producer configuration: https://kafka.apache.org/24/documentation.html#producerconfigs
+Kafka producer configuration:
+https://kafka.apache.org/{kafka_client_doc}/documentation.html#producerconfigs

[id="plugins-{type}s-{plugin}-options"]
==== Kafka Output Configuration Options
@@ -58,7 +70,7 @@ This plugin supports the following configuration options plus the <<plugins-{typ

NOTE: Some of these options map to a Kafka option. Defaults usually reflect the Kafka default setting,
and might change if Kafka's producer defaults change.
-See the https://kafka.apache.org/24/documentation for more details.
+See the https://kafka.apache.org/{kafka_client_doc}/documentation for more details.

[cols="<,<,<",options="header",]
|=======================================================================
@@ -328,9 +340,9 @@ Kafka down, etc).
A value less than zero is a configuration error.

Starting with version 10.5.0, this plugin will only retry exceptions that are a subclass of
-https://kafka.apache.org/25/javadoc/org/apache/kafka/common/errors/RetriableException.html[RetriableException]
+https://kafka.apache.org/{kafka_client_doc}/javadoc/org/apache/kafka/common/errors/RetriableException.html[RetriableException]
and
-https://kafka.apache.org/25/javadoc/org/apache/kafka/common/errors/InterruptException.html[InterruptException].
+https://kafka.apache.org/{kafka_client_doc}/javadoc/org/apache/kafka/common/errors/InterruptException.html[InterruptException].
Contributor Author:

This link using /25/ is from a recent PR. Seems like we should standardize it, too. WDYT?

Contributor:

👍 most definitely - for correctness these should point to /24 till we upgrade the client

If producing a message throws any other exception, an error is logged and the message is dropped without retrying.
This prevents the Logstash pipeline from hanging indefinitely.
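
A sketch of the retry behavior described above (the `retries` option is the one this collapsed section documents; the value is illustrative):

[source,ruby]
output {
  kafka {
    topic_id => "logs"
    retries => 3   # cap retry attempts; if unset, retriable errors are retried indefinitely
  }
}

With a cap like this, a RetriableException (say, a transient broker outage) is retried up to three times, while any non-retriable exception is logged and the event dropped immediately -- which is what keeps the pipeline from hanging.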
