
BUG: transform("cumcount") returns a Series instead of a DataFrame. #60551

ClaudioSalvatoreArcidiacono opened this issue Dec 12, 2024 · 10 comments
Labels
API - Consistency (Internal Consistency of API/Behavior), Bug, Groupby, Needs Discussion (Requires discussion from core team before further action)

Comments

@ClaudioSalvatoreArcidiacono

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import pandas as pd

df = pd.DataFrame({
    "cat": ["a", "b", "c"] * 30,
    "target": [1, 2, 3] * 30,
})

# Cumcount returns a Series
assert isinstance(df.groupby("cat").transform("cumcount"), pd.Series)

# Other cumulative operations return a DataFrame
for transform in ["cumsum", "cumprod", "cummax", "cummin"]:
    assert isinstance(df.groupby("cat").transform(transform), pd.DataFrame)

Issue Description

The operation transform("cumcount") returns a Series, whereas all other cumulative operations return a DataFrame.

Expected Behavior

I would expect all cumulative operations to return a DataFrame.
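
For reference, a minimal workaround sketch (illustrative only, not part of the original report; the counts_frame name is made up): compute the group-wise count once and broadcast it to the non-grouping columns to get a DataFrame-shaped result today.

import pandas as pd

df = pd.DataFrame({
    "cat": ["a", "b", "c"] * 30,
    "target": [1, 2, 3] * 30,
})

# Compute the group-wise row number once, then broadcast it across the
# non-grouping columns to mimic a DataFrame-returning transform.
counts = df.groupby("cat").cumcount()
counts_frame = pd.DataFrame({col: counts for col in df.columns.drop("cat")})
assert isinstance(counts_frame, pd.DataFrame)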

Installed Versions

INSTALLED VERSIONS

commit : 0691c5c
python : 3.11.1
python-bits : 64
OS : Darwin
OS-release : 24.1.0
Version : Darwin Kernel Version 24.1.0: Thu Oct 10 21:03:15 PDT 2024; root:xnu-11215.41.3~2/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.UTF-8

pandas : 2.2.3
numpy : 2.1.3
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : None
IPython : 8.30.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.4
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : 8.3.4
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None

@MarcoGorelli
Member

thanks for the report! looks like a duplicate of #5608?

@ClaudioSalvatoreArcidiacono
Author

Thanks for spotting it, I had only checked among open issues.

I see that issue #5608 is closed and quite a lot has changed in the pandas API since it was opened, so it might still be worthwhile to reconsider that decision in order to align the cumulative operations.

@MarcoGorelli
Member

MarcoGorelli commented Dec 12, 2024

sure, going to ping @rhshadrach on this one then (personally I don't think it's worth changing at this point)

@rhshadrach
Member

With its current behavior, I agree with @MarcoGorelli that this is not worth changing. Namely, cumcount does not actually use data from the columns; its result is independent of the values. So it would be inefficient to produce the same result many times unnecessarily. I agree it's a bit odd considering the behavior of other cumulative methods, but because of that trade-off I also think it is not worth it.
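
A quick illustration of that point (added for clarity; the toy frames below are not from the original comment): cumcount depends only on group membership, never on the column values.

import pandas as pd

df1 = pd.DataFrame({"cat": ["a", "a", "b"], "x": [1.0, 2.0, 3.0]})
df2 = pd.DataFrame({"cat": ["a", "a", "b"], "x": [9.0, 9.0, 9.0]})

# Different column values, identical result: cumcount only reflects each
# row's position within its group.
assert df1.groupby("cat").cumcount().equals(df2.groupby("cat").cumcount())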

However, should the count in cumcount behave similarly to count elsewhere in pandas? pandas defaults to not counting NA values, but cumcount does not do this.

import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, 1, 1, 2], "b": [1, np.nan, np.nan, np.nan]}).set_index("a")
print(df.groupby("a").cumcount())
# a
# 1    0
# 1    1
# 1    2
# 2    0
# dtype: int64

print(df.groupby("a").count())
#    b
# a
# 1  1
# 2  0

If we were to change cumcount to align better with other counts throughout pandas, it would necessarily need to act on a per-column basis.
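
To sketch why it would have to be per column (an emulation using only existing operations, not a proposed implementation; the extra column c is illustrative): two columns with different NA patterns yield different cumulative counts.

import numpy as np
import pandas as pd

df = pd.DataFrame(
    {"a": [1, 1, 1, 2], "b": [1, np.nan, np.nan, np.nan], "c": [1, 2, 3, 4]}
).set_index("a")

# Cumulative count of non-NA values within each group, computed per column.
# NA rows do not advance the count, mirroring how count() skips NA.
na_aware = df.notna().astype("int64").groupby("a").cumsum()
print(na_aware)
#    b  c
# a
# 1  1  1
# 1  1  2
# 1  1  3
# 2  0  1

Note that this emulation starts at 1 for the first non-NA value in each group, which differs from the current 0-based cumcount (see the discussion that follows).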

@MarcoGorelli - curious if you think this would be more worth the change.

@rhshadrach added the Groupby, Needs Discussion, and API - Consistency labels and removed the Needs Triage label on Dec 12, 2024
@MarcoGorelli
Member

Thanks!

My initial feeling is that indeed, if count skips missing values, then so should cumcount.

And then it could also return a DataFrame.

@rhshadrach
Member

rhshadrach commented Dec 14, 2024

I was implementing this and noticed that cumcount currently has an odd behavior where it does not really count the first row. This is as documented:

Number each item in each group from 0 to the length of that group - 1.

To me, this is not really counting, especially if we are to introduce skipna=True. If we're going to go this route, then I think we should also fix this. For backwards compatibility:

  1. Introduce skipna defaulting to False. When True, we will count starting at 1 when the first element of the group is not NA.
  2. Introduce future=True, where skipna=False will start counting at 1 and return a DataFrame.
  3. Switch the default of skipna to True and remove the future argument after deprecation.

To get current behavior, df.groupby(keys).cumcount() will become df.groupby(keys)["some_column"].cumcount(skipna=False) - 1.
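
For illustration only (the skipna and future arguments above are proposals and do not exist in released pandas; the toy frame and names here are made up), the backwards-compatibility mapping can be made concrete with existing operations:

import pandas as pd

df = pd.DataFrame({"key": [1, 1, 2], "x": [10, 20, 30]})

# Current behavior: 0-based numbering within each group.
current = df.groupby("key").cumcount()        # [0, 1, 0]

# Proposed skipna=False behavior, emulated: count every row starting at 1.
one_based = df.groupby("key").cumcount() + 1  # [1, 2, 1]

# As described above, subtracting 1 from the proposed count recovers today's result.
assert current.equals(one_based - 1)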

@rhshadrach
Member

Would like to get any thoughts on the approach above. cc @pandas-dev/pandas-core

@MarcoGorelli
Member

MarcoGorelli commented Dec 20, 2024

Sounds good, I just think the future= keyword could be annoying for users

I think I'd advocate for just making a breaking change in 3.0 here: it would be a loud change anyway (as the type of the return object is completely changing from Series to DataFrame), and so is unlikely to silently catch anyone by surprise.

I remember discussing in #49912 whether to make a breaking change or to introduce something like future=. We went with the breaking change and, by the looks of it, nobody complained; I can't see any angry issues linked. And I think this method (.transform('cumcount')) is probably far less used than value_counts.

No objections to going with the future= option and deprecating; I just think this is a case where breaking outright would be acceptable.

@rhshadrach
Member

I just think the future= keyword could be annoying for users

As a maintainer of a large codebase, it's the opposite for me. Upgrading a version of a dependency and seeing a series of test failures (either an outright error or an unexpected result, possibly far from the line that is the root cause) is far more painful than a warning message that points to the exact line telling me what I have to do.

Still, perhaps many users of pandas write code that only needs to work for a short time and do not maintain it across major versions. For this, I agree that future= would be annoying.

I opened #60593 as a way to perhaps satisfy both cases.

@jbrockmendel
Member

I'm on board with the behavior discussed here. For the deprecation/change path I tried to form an opinion and all I came up with was "I trust Richard and Marco"
