Is your feature request related to a problem? Please describe.
For casting a string in a specific time zone to a timestamp, Spark first parses the string into segments (year, month, day, hour, minute, second, ...), then creates a local time from the segments, then converts the local time in the specified time zone to UTC time. Refer to: the Spark code.
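The three steps above can be sketched host-side with `java.time` (the input string, pattern, and zone below are illustrative assumptions, not Spark's actual defaults):

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class LocalToUtcDemo {
    static long localToUtcMicros() {
        // Step 1: parse the string into date/time segments
        LocalDateTime local = LocalDateTime.parse("2023-01-15 08:30:00",
                DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss"));
        // Step 2: interpret the segments as a local time in the session zone
        // Step 3: convert that zoned local time to UTC microseconds since epoch
        return local.atZone(ZoneId.of("America/Los_Angeles"))
                .toInstant().getEpochSecond() * 1_000_000L;
    }

    public static void main(String[] args) {
        System.out.println(localToUtcMicros()); // 2023-01-15 16:30:00 UTC in micros
    }
}
```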
Currently, TimeZoneDB APIs are:
They are not enough; we should implement a new API like:
parseLocalTimestampString(string_column, zoneId)
Describe the solution you'd like
In the parseLocalTimestampString kernel
First parse the string into segments (refer to this JNI PR), then transform the segments in the specified time zone to UTC time. The transform needs to query the time zone transition/transition-rule tables. The kernel should provide:
def transform_from_local_time_segments_to_utc(zoneId, int year, int month, ..., int nanoSeconds): Long
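As a reference for the proposed helper, here is a host-side Java sketch with an assumed name and signature; `java.time`'s `ZoneRules` lookup stands in for the kernel's query of the transition/transition-rule tables:

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.zone.ZoneRules;

public class SegmentsToUtc {
    // Hypothetical mirror of the proposed kernel API: convert local
    // date/time segments in a given zone to UTC microseconds since epoch.
    static long transformFromLocalTimeSegmentsToUtc(String zoneId, int year,
            int month, int day, int hour, int minute, int second, int nanos) {
        LocalDateTime local =
                LocalDateTime.of(year, month, day, hour, minute, second, nanos);
        // Query the zone's transition rules for the offset that applies to
        // this local time (DST gaps/overlaps are resolved by the rules).
        ZoneRules rules = ZoneId.of(zoneId).getRules();
        ZoneOffset offset = rules.getOffset(local);
        return local.toEpochSecond(offset) * 1_000_000L + nanos / 1_000L;
    }

    public static void main(String[] args) {
        // Asia/Shanghai is UTC+8 with no DST, so the result is unambiguous
        System.out.println(transformFromLocalTimeSegmentsToUtc(
                "Asia/Shanghai", 2023, 1, 15, 8, 30, 0, 0));
    }
}
```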
So the overall logic will be:
parse the string into segments (year, month, day, ...)
transform the segments into a UTC timestamp long value
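Putting the two steps together, a host-side sketch of the end-to-end logic might look like the following (the regex pattern and function name are assumptions for illustration; the real kernel parser would handle many more formats):

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ParseLocalTimestamp {
    private static final Pattern TS = Pattern.compile(
            "(\\d{4})-(\\d{2})-(\\d{2}) (\\d{2}):(\\d{2}):(\\d{2})");

    // Step 1: parse the string into segments; step 2: transform the
    // segments in the given zone to a UTC timestamp in microseconds.
    static long parseLocalTimestampString(String s, String zoneId) {
        Matcher m = TS.matcher(s.trim());
        if (!m.matches()) {
            throw new IllegalArgumentException("cannot parse: " + s);
        }
        LocalDateTime local = LocalDateTime.of(
                Integer.parseInt(m.group(1)), Integer.parseInt(m.group(2)),
                Integer.parseInt(m.group(3)), Integer.parseInt(m.group(4)),
                Integer.parseInt(m.group(5)), Integer.parseInt(m.group(6)));
        return local.atZone(ZoneId.of(zoneId))
                .toInstant().getEpochSecond() * 1_000_000L;
    }

    public static void main(String[] args) {
        System.out.println(parseLocalTimestampString("2023-01-15 08:30:00", "UTC"));
    }
}
```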
Note:
- The parser should provide an option to handle special strings (EPOCH, NOW, ...) for Spark before 3.2.0.
- For ANSI mode, we should investigate how to handle this mode efficiently.
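The special-string option in the first note could be a pre-parse step along these lines (the method name is hypothetical, and the value list here is illustrative rather than exhaustive):

```java
import java.time.Instant;

public class SpecialStrings {
    // Hypothetical pre-parse hook: when enabled, named values such as
    // "epoch" and "now" are mapped directly to timestamps; returning null
    // lets the caller fall through to the regular segment parser.
    static Long parseSpecial(String s) {
        switch (s.trim().toLowerCase()) {
            case "epoch": return 0L;  // 1970-01-01T00:00:00Z
            case "now":   return Instant.now().getEpochSecond() * 1_000_000L;
            default:      return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(parseSpecial("EPOCH"));
    }
}
```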