Support args in groupby apply #10682

Merged
16 changes: 9 additions & 7 deletions python/cudf/cudf/core/groupby/groupby.py
@@ -516,7 +516,7 @@ def pipe(self, func, *args, **kwargs):
"""
return cudf.core.common.pipe(self, func, *args, **kwargs)

-    def apply(self, function):
+    def apply(self, function, *args):
Contributor:

Is there a reason we can't support **kwargs in the same way?

Contributor Author:

Good question. I considered this, and the reason I did not add it was really roadmap considerations. I know we want to rethink how this API works in general, and if we start supporting this now, we could back ourselves into a corner where users are reliant upon it and we're limited in the avenues we can pursue to replace this pipeline without complicating things at the outset. The same thing makes me a little uneasy about even adding *args, but at least there's a direct feature request for that and precedent for it elsewhere (DataFrame.apply and Series.apply both support args=).

As you have observed, it would probably not take much work to add so I'm happy to add it if there's strong motivation to do so.
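The precedent mentioned above looks like this (a minimal sketch of the pandas `args=` parameter, which cudf's `Series.apply` mirrors):

```python
import pandas as pd

s = pd.Series([1, 2, 3])

def add_k(x, k):
    return x + k

# `args` is forwarded positionally after the element itself
out = s.apply(add_k, args=(10,))
print(out.tolist())  # [11, 12, 13]
```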

Contributor:

The motivation is to make this API align with pandas. https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.GroupBy.apply.html#pandas.core.groupby.GroupBy.apply

What are we wanting to rethink? The public API, or the internal implementation? I would say that *args and **kwargs go together for this feature and we should implement both or neither -- not just *args.

Contributor:

Similarly, we should enable **kwargs for Series.apply and DataFrame.apply if the numba machinery is no more complicated than it is here. Parameters-by-keyword and parameters-by-position serve the same purpose and I don't see a fundamental distinction in complexity between them.

Contributor Author:

It's the internal implementation that I expect to change, since this implementation does a loop over chunks which is what makes it potentially slow. Right now this API isn't backed by Numba at all, which is why we can pass through args and theoretically kwargs rather easily.

kwargs in a numba implementation is not trivial. For context, numba does not support kwargs at all in compiled functions. This is why we don't have it in the other apply APIs. Since we could end up exploring some kind of JIT compiled implementation of this API as well, it might be closing the door on some solutions for us that could otherwise cover a broad swath of use cases.

I would say that *args and **kwargs go together for this feature and we should implement both or neither -- not just *args.

I empathize with the feeling of asymmetry that comes along with having *args and not **kwargs, but not with the notion that we should not have *args unless we can also have **kwargs. For reasons similar to what you noted about there being little fundamental distinction, supporting *args unlocks a space of functions including many logical equivalents to **kwargs functions while still allowing us to tiptoe around this API a bit.

What do you think, @bdice ?
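The positional-coverage point can be illustrated with a small sketch (shown with pandas, whose `GroupBy.apply` forwards extra arguments the same way): a function declared with a keyword parameter is still reachable through `*args`, because keyword parameters in Python are keyword-or-positional by default.

```python
import pandas as pd

df = pd.DataFrame({"g": [0, 0, 1], "v": [1.0, 2.0, 3.0]})

def scale(s, factor=1.0):
    # `factor` reads like a keyword argument, but it is
    # keyword-or-positional, so it is reachable via *args alone
    return s * factor

# apply(scale, factor=2.0) would require **kwargs support;
# the positional spelling expresses the same call:
out = df.groupby("g")["v"].apply(scale, 2.0)
```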

Contributor:

Okay, that's the context I was missing. 😄 Let's move forward with this as-is, then. I'll review the rest of the PR.

"""Apply a python transformation function over the grouped chunk.

Parameters
@@ -594,17 +594,19 @@ def mult(df):
        chunks = [
            grouped_values[s:e] for s, e in zip(offsets[:-1], offsets[1:])
        ]
-       chunk_results = [function(chk) for chk in chunks]
-
+       chunk_results = [function(chk, *args) for chk in chunks]
        if not len(chunk_results):
            return self.obj.head(0)

        if cudf.api.types.is_scalar(chunk_results[0]):
            result = cudf.Series(chunk_results, index=group_names)
            result.index.names = self.grouping.names
        elif isinstance(chunk_results[0], cudf.Series):
-           result = cudf.concat(chunk_results, axis=1).T
-           result.index.names = self.grouping.names
+           if isinstance(self.obj, cudf.DataFrame):
+               result = cudf.concat(chunk_results, axis=1).T
+               result.index.names = self.grouping.names
+           else:
+               result = cudf.concat(chunk_results)
        else:
            result = cudf.concat(chunk_results)

@@ -1581,8 +1583,8 @@ def agg(self, func):

return result

-    def apply(self, func):
-        result = super().apply(func)
+    def apply(self, func, *args):
+        result = super().apply(func, *args)

# apply Series name to result
result.name = self.obj.name
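End to end, the change enables calls like the following (a sketch of the intended usage, shown with pandas, whose `GroupBy.apply` this now matches):

```python
import pandas as pd

pdf = pd.DataFrame({"key": [0, 0, 1, 1], "val": [1, 2, 3, 4]})

def shifted_total(s, k):
    return s.sum() + k

# the extra positional argument is forwarded to each per-group call
result = pdf.groupby("key")["val"].apply(shifted_total, 2)
print(result.loc[0], result.loc[1])  # 5 9
```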
40 changes: 40 additions & 0 deletions python/cudf/cudf/tests/test_groupby.py
@@ -291,6 +291,14 @@ def foo(df):
got = got_grpby.apply(foo)
assert_groupby_results_equal(expect, got)

+    def foo_args(df, k):
Contributor:

I would prefer to see this in a separate test function rather than combined with tests for functions that lack *args. Maybe the data generation can be shared via a helper function (a fixture could work but would be evaluated at collection time, which is undesirable). Similarly for the other tests.

Contributor Author:
The reason will be displayed to describe this comment to others. Learn more.

tbh I agree with you here. Let me rework these a bit.

Contributor Author:

What do you think of doing it kind of like 7b27cc5 ?

Contributor:

I'm happy with that. 👍

+        df["out"] = df["val1"] + df["val2"] + k
+        return df
+
+    expect = expect_grpby.apply(foo_args, 2)
+    got = got_grpby.apply(foo_args, 2)
+    assert_groupby_results_equal(expect, got)


def test_groupby_apply_grouped():
np.random.seed(0)
@@ -1626,6 +1634,17 @@ def custom_map_func(x):

assert_groupby_results_equal(expected, actual)

+    def custom_map_func_args(x, k):
+        x = x[~x["B"].isna()]
+        ticker = x.shape[0]
+        full = ticker / 10 + k
+        return full + 1.8 / k
+
+    expected = pdf.groupby("A").apply(custom_map_func_args, 2)
+    actual = gdf.groupby("A").apply(custom_map_func_args, 2)
+
+    assert_groupby_results_equal(expected, actual)


@pytest.mark.parametrize(
"cust_func",
@@ -1643,6 +1662,21 @@ def test_groupby_apply_return_series_dataframe(cust_func):
assert_groupby_results_equal(expected, actual)


+def test_groupby_apply_return_series_dataframe_args():
+    pdf = pd.DataFrame(
+        {"key": [0, 0, 1, 1, 2, 2, 2], "val": [0, 1, 2, 3, 4, 5, 6]}
+    )
+    gdf = cudf.from_pandas(pdf)
+
+    def cust_func(x, k):
+        return x - x.min() + k
+
+    expected = pdf.groupby(["key"]).apply(cust_func, 2)
+    actual = gdf.groupby(["key"]).apply(cust_func, 2)
+
+    assert_groupby_results_equal(expected, actual)


@pytest.mark.parametrize(
"pdf",
[pd.DataFrame(), pd.DataFrame({"a": []}), pd.Series([], dtype="float64")],
@@ -2212,6 +2246,12 @@ def foo(x):

assert_groupby_results_equal(expect, got)

+    def foo_args(x, k):
+        return x.sum() + k
+
+    got = make_frame(DataFrame, 100).groupby("x").y.apply(foo_args, 2)
+    expect = make_frame(pd.DataFrame, 100).groupby("x").y.apply(foo_args, 2)
+
+    assert_groupby_results_equal(expect, got)

@pytest.mark.parametrize("label", [None, "left", "right"])
@pytest.mark.parametrize("closed", [None, "left", "right"])