[Bug]: Inconsistent written values for LocalTime types using BQ FileLoads vs StorageWrite API #34038
Comments
cc @Abacn, I'm worried this might be related to recent changes to https://github.com/apache/beam/blob/master/sdks/java/io/google-cloud-platform/src/main/java/org/apache/beam/sdk/io/gcp/bigquery/AvroGenericRecordToStorageApiProto.java.
could this be related? https://github.com/apache/beam/pull/33422/files#r1891535677 cc: @RustedBones
hmm maybe, I notice that that branch doesn't do any …
What happened?
On Beam 2.63.0, when writing GenericRecords with `time-millis` logical types to BQ via the Java SDK, a different value is written depending on whether I use the `STORAGE_WRITE_API` method or the `FILE_LOADS` method.

Repro setup (sorry for Scio code, will work on reproducing in Beam as well):
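The original Scio snippet is not reproduced here; as a stand-in, below is a rough Beam Java sketch of an equivalent pipeline (an untested approximation, not the reporter's code). The class name, project, table names, and the column name `t` are placeholders, and it assumes both destination tables already exist with a single TIME column so `CREATE_NEVER` can be used.

```java
import org.apache.avro.LogicalTypes;
import org.apache.avro.Schema;
import org.apache.avro.SchemaBuilder;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.extensions.avro.coders.AvroCoder;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.CreateDisposition;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.Method;
import org.apache.beam.sdk.transforms.Create;
import org.apache.beam.sdk.values.PCollection;

public class TimeMillisWriteRepro {
  public static void main(String[] args) {
    // Avro record schema with a single time-millis field "t".
    Schema timeMillis = LogicalTypes.timeMillis().addToSchema(Schema.create(Schema.Type.INT));
    Schema schema =
        SchemaBuilder.record("Row").fields().name("t").type(timeMillis).noDefault().endRecord();
    String schemaJson = schema.toString();

    // 12:34:56.789 expressed as milliseconds since midnight.
    GenericRecord row = new GenericData.Record(schema);
    row.put("t", ((12 * 60 + 34) * 60 + 56) * 1000 + 789);

    Pipeline p = Pipeline.create();
    PCollection<GenericRecord> rows =
        p.apply(Create.of(row).withCoder(AvroCoder.of(schema)));

    // Write the same records twice, once per insertion method, so the two
    // destination tables can be compared afterwards. Tables are placeholders
    // and are assumed to already exist with a single TIME column "t".
    rows.apply(
        "WriteFileLoads",
        BigQueryIO.<GenericRecord>write()
            .to("my-project:my_dataset.time_millis_file_loads")
            .withAvroFormatFunction(request -> request.getElement())
            .withAvroSchemaFactory(ignoredTableSchema -> new Schema.Parser().parse(schemaJson))
            .useAvroLogicalTypes()
            .withCreateDisposition(CreateDisposition.CREATE_NEVER)
            .withMethod(Method.FILE_LOADS));

    rows.apply(
        "WriteStorageApi",
        BigQueryIO.<GenericRecord>write()
            .to("my-project:my_dataset.time_millis_storage_write")
            .withAvroFormatFunction(request -> request.getElement())
            .withAvroSchemaFactory(ignoredTableSchema -> new Schema.Parser().parse(schemaJson))
            .withCreateDisposition(CreateDisposition.CREATE_NEVER)
            .withMethod(Method.STORAGE_WRITE_API));

    p.run().waitUntilFinish();
  }
}
```

Running this once and then querying both destination tables gives the comparison described below.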
The `time-millis` value for the table written using `FILE_LOADS` does not match the value written using `STORAGE_WRITE_API`:
storage_write_api data:

file_loads data:

Tbh, the only value that looks correct is the file-loads micros value 😬. I'm extremely concerned about this, as potentially incorrect data is being written using BQ's Avro interface.
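For illustration only (an assumption on my part, not something confirmed in this thread): Avro `time-millis` counts milliseconds since midnight, while BigQuery TIME has microsecond precision, so a missing or extra factor of 1000 on one write path would produce exactly this kind of divergence. A minimal sketch of the arithmetic:

```java
public class TimeUnitMismatch {
  public static void main(String[] args) {
    // 12:34:56.789 as milliseconds since midnight (the Avro time-millis representation).
    int timeMillis = ((12 * 60 + 34) * 60 + 56) * 1000 + 789; // 45_296_789 ms

    // Scaled to microseconds (BigQuery TIME precision), the value stays 12:34:56.789000.
    long scaledMicros = timeMillis * 1000L; // 45_296_789_000

    // If the millis value were passed through unscaled and read as microseconds,
    // it would decode to a completely different clock time: 00:00:45.296789.
    long unscaledMicros = timeMillis; // 45_296_789

    System.out.println("scaled micros   = " + scaledMicros);
    System.out.println("unscaled micros = " + unscaledMicros);
  }
}
```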
Issue Priority
Priority: 2 (default / most bugs should be filed as P2)
Issue Components