Add tz_convert method to convert between timestamps #13328
Conversation
rerun tests
@@ -589,6 +601,18 @@ def as_string_column(
    ) -> "cudf.core.column.StringColumn":
        return self._local_time.as_string_column(dtype, format, **kwargs)

    def __repr__(self):
This is really unrelated to the rest of the PR, but a quality-of-life thing for debugging. Pandas always prints the local timestamps when looking at a tz-aware column and pyarrow always prints the UTC timestamps.
    def __repr__(self):
        # Arrow prints the UTC timestamps, but we want to print the
        # local timestamps:
        arr = self._local_time.to_arrow().cast(
Maybe a silly question. Why convert to arrow and then attempt to mirror pandas repr conventions instead of just converting it to pandas and using that repr?
Mainly because pyarrow has the convenient to_string() method that lets us assemble the repr with a custom class name.
In [7]: print(dti._column.to_arrow().to_string())
[
2001-01-01 05:00:00.000000000,
2001-01-01 06:00:00.000000000,
2001-01-01 07:00:00.000000000,
2001-01-01 08:00:00.000000000,
2001-01-01 09:00:00.000000000,
2001-01-01 10:00:00.000000000,
2001-01-01 11:00:00.000000000,
2001-01-01 12:00:00.000000000,
2001-01-01 13:00:00.000000000,
2001-01-01 14:00:00.000000000
]
Parameters
----------
tz: str
    Time zone for time. Corresponding timestamps would be converted
The pandas docstring is worded in a weird way. However, I would leave this as-is to match pandas.
python/cudf/cudf/core/index.py
    '2018-03-03 14:00:00+00:00'],
    dtype='datetime64[ns, Europe/London]')
"""
from cudf.core._internals.timezones import convert, localize
I forgot, is there a reason we defer this import in each function rather than doing it once at the top?
Circular imports :-(
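The deferred-import workaround being referenced looks roughly like this. The sketch below uses the standard-library `zoneinfo` as a stand-in dependency (so it runs anywhere); in cudf the deferred module is `cudf.core._internals.timezones`:

```python
from datetime import datetime


def localize(values, tz):
    # Importing inside the function body means the dependency is only
    # resolved at call time, after both modules have finished loading,
    # which is what breaks an import cycle between two modules.
    from zoneinfo import ZoneInfo  # deferred import

    return [v.replace(tzinfo=ZoneInfo(tz)) for v in values]
```

The trade-off is a small per-call import lookup (cheap after the first call, since modules are cached in `sys.modules`) in exchange for not having to restructure the modules.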
"to_tz", ["Europe/London", "America/Chicago", "UTC", None] | ||
) | ||
def test_convert(from_tz, to_tz): | ||
ps = pd.Series(pd.date_range("2023-01-01", periods=3, freq="H")) |
I’d love it if we could add some complexity to our test inputs. Maybe a data fixture that has some times on either side of a DST change, ambiguous times, pre-1900 times, etc. Include some times that we know have raised issues in the past (issue tracker has a few).
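The kind of edge-case inputs being suggested might look like this. The specific values are illustrative (US DST dates for 2023 and one pre-1900 time), not taken from cudf's test suite:

```python
import pandas as pd

# Candidate edge-case timestamps for tz_localize/tz_convert tests:
DST_EDGE_CASES = [
    "2023-03-12 01:30",  # just before the US spring-forward gap
    "2023-03-12 03:30",  # just after the nonexistent 02:00-03:00 hour
    "2023-11-05 01:30",  # ambiguous: occurs twice at the fall-back
    "1899-01-01 12:00",  # pre-1900: offsets were local mean time
]


def edge_case_series():
    # Naive timestamps; a test would tz_localize and then tz_convert
    # these and compare against pandas.
    return pd.Series(pd.to_datetime(DST_EDGE_CASES))
```

A data fixture built from a list like this could be parametrized across the same `from_tz`/`to_tz` combinations used in `test_convert` above.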
I agree. @mroeschke does Pandas do something like this? Just wondering if there's tooling we can borrow/steal/vendor from Pandas
Unfortunately not. The only related fixture we use is a fixed set of timezones, applied where applicable.
OK - I threw the problem at ChatGPT and it generated some edge case tests that I added here.
pre-1900 times
I did find that we return a result different from Pandas for this pre-1900 example:
>>> pd.Series(["1899-01-01 12:00"], dtype="datetime64[s]").dt.tz_localize("Europe/Paris").dt.tz_convert("America/New_York")
0 1899-01-01 06:55:00-04:56
dtype: datetime64[ns, America/New_York]
>>> cudf.Series(["1899-01-01 12:00"], dtype="datetime64[s]").dt.tz_localize("Europe/Paris").dt.tz_convert("America/New_York")
0 1899-01-01 06:50:39-04:56
dtype: datetime64[s, America/New_York]
However, our result is the same as you would get with zoneinfo:
>>> datetime(1899, 1, 1, 12, 0, tzinfo=ZoneInfo("Europe/Paris")).astimezone(ZoneInfo("America/New_York"))
datetime.datetime(1899, 1, 1, 6, 50, 39, tzinfo=zoneinfo.ZoneInfo(key='America/New_York'))
@mroeschke I'm curious if this aligns with your experience with the difference between Pandas (pytz) and ZoneInfo?
@shwina If you want to add pre-1900 times in a later PR, that's fine. I think you hit a decent number of edge cases for now. But if we know we disagree with pandas for this specific case, I'd like to document that in an issue. I would consider that a bug.
Co-authored-by: Bradley Dice <[email protected]>
Suggestion
Co-authored-by: Ashwin Srinath <[email protected]>
/merge
Description
Closes #13329
Checklist