
Connector periodically issues warnings: "Commit of offsets timed out" #272

Open · ngc4579 opened this issue Mar 16, 2024 · 2 comments

ngc4579 commented Mar 16, 2024

We are in the process of evaluating and establishing a CDC system streaming change events from our databases into an Opensearch deployment.

The OpenSearch sink connector, which pushes the aggregated topics into OpenSearch, periodically issues warnings like this:

...

2024-03-16 22:38:59,742 WARN [connector-opensearch|task-0] WorkerSinkTask{id=connector-opensearch-0} Commit of offsets timed out (org.apache.kafka.connect.runtime.WorkerSinkTask) [task-thread-connector-opensearch-0]

2024-03-16 22:40:01,936 WARN [connector-opensearch|task-0] WorkerSinkTask{id=connector-opensearch-0} Commit of offsets timed out (org.apache.kafka.connect.runtime.WorkerSinkTask) [task-thread-connector-opensearch-0]

2024-03-16 22:41:03,366 WARN [connector-opensearch|task-0] WorkerSinkTask{id=connector-opensearch-0} Commit of offsets timed out (org.apache.kafka.connect.runtime.WorkerSinkTask) [task-thread-connector-opensearch-0]

...

These messages occur roughly every minute.

Our connector config basically looks like this:

spec:
  class: io.aiven.kafka.connect.opensearch.OpensearchSinkConnector
  config:
    batch.size: 1000
    behavior.on.malformed.documents: warn
    behavior.on.null.values: delete
    behavior.on.version.conflict: warn
    connection.password: ${secrets:debezium/opensearch-credentials:password}
    connection.url: https://debezium-opensearch-nodes.debezium.svc:9200
    connection.username: ${secrets:debezium/opensearch-credentials:username}
    errors.deadletterqueue.context.headers.enable: true
    errors.deadletterqueue.topic.name: myjack.index.dl
    errors.deadletterqueue.topic.replication.factor: 3
    errors.tolerance: all
    flush.timeout.ms: 30000
    index.write.method: upsert
    key.converter: org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable: false
    key.ignore: false
    key.ignore.id.strategy: record.key
    max.buffered.records: 20000
    schema.ignore: "true"
    topics: <redacted>
    transforms: extractKey
    transforms.extractKey.field: id
    transforms.extractKey.type: org.apache.kafka.connect.transforms.ExtractField$Key
    type.name: _doc
    value.converter: org.apache.kafka.connect.json.JsonConverter
    value.converter.schemas.enable: false

Is this noteworthy? Does it indicate an issue, and what causes these periodic messages?

reta commented Mar 18, 2024

Seems to be Kafka-related: confluentinc/kafka-connect-jdbc#846

ngc4579 commented Mar 18, 2024

@reta Thanks for the hint. I had actually come across that issue before but disregarded it, since it was reported against a different connector and the comments were ambiguous. I'll keep an eye on it anyway and meanwhile experiment with a higher offset.flush.timeout.ms setting (currently at the default value).
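
For reference, offset.flush.timeout.ms is a Kafka Connect worker-level property rather than part of the connector config shown above. A minimal sketch of the worker settings involved, assuming a standard connect-distributed.properties (the values are illustrative; the stock defaults are noted in the comments):

# Kafka Connect worker config (e.g. connect-distributed.properties), illustrative values.
# Interval at which the worker asks the sink task to commit consumer offsets;
# the default of 60000 ms matches the roughly one-minute cadence of the warnings above.
offset.flush.interval.ms=60000
# Time the worker allows for that commit before logging "Commit of offsets timed out";
# the default is 5000 ms. Raising it, as mentioned above, is one thing to experiment with.
offset.flush.timeout.ms=10000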
