[Bug]: BigQuery streaming at-least-once job missing records #31545

Open
Amar3tto opened this issue Jun 7, 2024 · 2 comments
Amar3tto (Collaborator) commented Jun 7, 2024

What happened?

During the execution of a BigQuery streaming job using the STORAGE_API_AT_LEAST_ONCE write method with SyntheticUnboundedSource, a discrepancy was observed between the number of generated messages and the number of rows actually persisted in the BigQuery table.
Some records may not be fully processed or persisted before the job is marked as "Succeeded".
Further investigation is required to pinpoint the root cause.

The issue was found during the BigQueryIOST stress test (testJsonStorageAPIAtLeastOnce).
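As a hypothetical illustration (this is not the actual BigQueryIOST code, and the function name is invented), the consistency check such a stress test performs boils down to comparing the generated-record count against the row count in the destination table:

```python
def check_at_least_once(num_records: int, row_count: int) -> bool:
    # At-least-once delivery allows duplicates but forbids loss, so every
    # generated record should appear in the table at least once.
    return row_count >= num_records

# A run that drops records fails the check (the symptom in this issue):
print(check_at_least_once(1_000_000, 999_998))    # False: records were lost
print(check_at_least_once(1_000_000, 1_000_123))  # True: duplicates are fine
```

The comments below discuss whether this expectation is actually sound for the at-least-once path.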

Issue Priority

Priority: 2 (default / most bugs should be filed as P2)

Issue Components

  • Component: Python SDK
  • Component: Java SDK
  • Component: Go SDK
  • Component: Typescript SDK
  • Component: IO connector
  • Component: Beam YAML
  • Component: Beam examples
  • Component: Beam playground
  • Component: Beam katas
  • Component: Website
  • Component: Spark Runner
  • Component: Flink Runner
  • Component: Samza Runner
  • Component: Twister2 Runner
  • Component: Hazelcast Jet Runner
  • Component: Google Cloud Dataflow Runner
Amar3tto (Collaborator, Author) commented Jun 7, 2024

@Abacn, could you please provide additional details on this issue, since you are familiar with the context of the experiments that helped us identify it?

Abacn (Contributor) commented Oct 15, 2024

We had issues with this assert before. It appears that even after #31241 this assert is still flaky, which means the expectation

rowCount >= numRecords

is not always true.

Actually, in both exactly-once and at-least-once modes, the records processed on the fly in a Dataflow job can always contain duplicates. For exactly-once, the sink performs deduplication, so

rowCount <= numRecords

is expected. For AT_LEAST_ONCE, I suspect duplicate records may be introduced both before and after the Beam counter, so there is no definite relationship between rowCount and numRecords.
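A toy model (my own sketch, not Beam or Dataflow code) of why the inequality direction depends on where the duplicates arise relative to the counter: a processing attempt that bumps the counter but fails before its write commits inflates numRecords, while a sink-side retry that commits twice inflates rowCount. The parameter names here are invented for illustration:

```python
def simulate_counts(generated: int, retries_before_write: int, duplicate_writes: int):
    """Toy model of an at-least-once pipeline stage.

    Assumes the counter increments once per processing attempt, and the sink
    write happens after the counter:
      - retries_before_write: attempts that bumped the counter but failed
        before their write committed (the retry bumps the counter again)
      - duplicate_writes: sink writes that committed more than once
    """
    num_records = generated + retries_before_write  # counter value
    row_count = generated + duplicate_writes        # rows in the table
    return num_records, row_count

# Duplicates before the counter: rowCount < numRecords is possible.
print(simulate_counts(100, 5, 0))  # (105, 100)

# Duplicates after the counter: rowCount > numRecords is possible.
print(simulate_counts(100, 0, 5))  # (100, 105)
```

Under this model, a non-deduplicating at-least-once sink can land on either side of the `rowCount >= numRecords` assertion, which would explain the observed flakiness even when no records are actually lost.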
