What would you like to happen?
The current implementation of bigqueryio in Go is rudimentary. It always falls back to running a query and emitting records sequentially. This has the downside that subsequent ParDo steps are not autoscaled.
Add a bigqueryio.UseDirectRead option, or the like, which consumes the table through multiple parallel sources, as the Java and Python SDKs do.
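A rough sketch of how this might look from user code. UseDirectRead and the variadic option slot are hypothetical; today's bigqueryio.Read takes no options:

```go
// Hypothetical API sketch: none of the option plumbing below exists yet.
// The intent is to opt in to the BigQuery Storage Read API so the table
// is split into multiple read streams that workers consume in parallel,
// mirroring Method.DIRECT_READ in the Java SDK.
rows := bigqueryio.Read(s,
	"my-project",               // billing project (placeholder)
	"my-project:dataset.table", // fully qualified table (placeholder)
	reflect.TypeOf(Row{}),
	bigqueryio.UseDirectRead(), // proposed option, does not exist today
)
```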
Issue Priority
Priority: 2 (default / most feature requests should be filed as P2)
Issue Components
Component: Python SDK
Component: Java SDK
Component: Go SDK
Component: Typescript SDK
Component: IO connector
Component: Beam YAML
Component: Beam examples
Component: Beam playground
Component: Beam katas
Component: Website
Component: Infrastructure
Component: Spark Runner
Component: Flink Runner
Component: Samza Runner
Component: Twister2 Runner
Component: Hazelcast Jet Runner
Component: Google Cloud Dataflow Runner
While the implementation may take a while, what is the current strategy for dealing with a sequentially emitting source like the one linked above? bigqueryio reads records in a loop and emits them sequentially, without implementing a progress method.
This leads to pipelines on Dataflow never being autoscaled.
Can something be done in a subsequent step so that downstream processing is redistributed across multiple workers?
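One common mitigation, sketched below under the assumption that the read itself stays sequential: insert a beam.Reshuffle between the source and the expensive ParDo. Reshuffle breaks fusion with the source, so downstream bundles can be spread across workers even though the read runs on one. The project ID, table name, and Row type here are placeholders.

```go
package main

import (
	"context"
	"flag"
	"log"
	"reflect"

	"github.com/apache/beam/sdks/v2/go/pkg/beam"
	"github.com/apache/beam/sdks/v2/go/pkg/beam/io/bigqueryio"
	"github.com/apache/beam/sdks/v2/go/pkg/beam/x/beamx"
)

// Row mirrors the BigQuery table schema; the fields are placeholders.
type Row struct {
	Name  string `bigquery:"name"`
	Count int64  `bigquery:"count"`
}

// processRow stands in for expensive per-element work.
func processRow(r Row) Row {
	// ... heavy processing ...
	return r
}

func main() {
	flag.Parse()
	beam.Init()

	p := beam.NewPipeline()
	s := p.Root()

	// The read itself is still sequential: a single bundle emits all rows.
	rows := bigqueryio.Read(s, "my-project", "my-project:dataset.table",
		reflect.TypeOf(Row{}))

	// Reshuffle materializes the collection and breaks fusion with the
	// source, so the ParDo below runs in parallel bundles on many workers
	// instead of being fused onto the single reading bundle.
	redistributed := beam.Reshuffle(s, rows)

	beam.ParDo(s, processRow, redistributed)

	if err := beamx.Run(context.Background(), p); err != nil {
		log.Fatalf("pipeline failed: %v", err)
	}
}
```

This does not speed up the read itself (that is what a storage-API based direct read would address), but it should at least let Dataflow scale the downstream stages, since the work after the fusion barrier can be rebalanced across workers.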