Temporarily disable integration test - test_hash_reduction_pivot_without_nans #4534

Merged Jan 14, 2022 (3 commits)
26 changes: 14 additions & 12 deletions integration_tests/src/main/python/hash_aggregate_test.py
@@ -533,18 +533,20 @@ def fetch_pivot_values(spark):
"PivotFirst",
conf=_nans_float_conf)

@approximate_float
@ignore_order(local=True)
@incompat
@pytest.mark.parametrize('data_gen', _init_list_no_nans, ids=idfn)
@pytest.mark.parametrize('conf', get_params(_confs_with_nans, params_markers_for_confs_nans), ids=idfn)
def test_hash_reduction_pivot_without_nans(data_gen, conf):
assert_gpu_and_cpu_are_equal_collect(
lambda spark: gen_df(spark, data_gen, length=100)
.groupby()
.pivot('b')
.agg(f.sum('c')),
conf=conf)
# https://github.com/NVIDIA/spark-rapids/issues/4514
Contributor:
It would be better to skip the test rather than comment it out:

@pytest.mark.skip(reason='https://github.com/NVIDIA/spark-rapids/issues/4514')

Member:

xfail seems appropriate here, as this is code we fully expect to pass but is failing due to known issues. skip should be used for tests that are never expected to pass if attempted, even after bugfixes (e.g.: tests for a feature that a particular Spark version does not support, etc.)
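The xfail marker discussed above works roughly like this (a hedged sketch; the test name and body are illustrative, not taken from the PR):

```python
import pytest

# Mark a test that is expected to fail due to a known, tracked issue.
@pytest.mark.xfail(reason='https://github.com/NVIDIA/spark-rapids/issues/4514')
def test_known_issue():
    # Expected to fail until the linked issue is fixed; if it happens to pass,
    # pytest reports XPASS rather than an error (unless xfail_strict is enabled).
    assert False
```

Unlike skip, the test still runs, so the result flips to XPASS automatically once the underlying bug is fixed.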

Collaborator:

It doesn't seem to be failing in github CI, so we probably want to skip?

Member:

xfail is what we want. xfail is fine if the test actually passes (it reports it as an XPASS result).

Collaborator:

ah ok, sorry I thought xfail actually failed the tests when it passed. Thanks

Collaborator:

There is a config for that. I wish we had done a good enough job with things that it would fail, but that is not how it is right now :(
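The config alluded to here is presumably pytest's `xfail_strict` ini option (an assumption; the commenter does not name it), which makes an unexpected XPASS fail the run:

```ini
# pytest.ini — assumed option: xfail_strict turns an XPASS result into a failure
[pytest]
xfail_strict = true
```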

# Disabling below test temporarily until we have a fix. Disabling fixes the CI pipeline failures.
#@approximate_float
#@ignore_order(local=True)
#@incompat
#@pytest.mark.parametrize('data_gen', _init_list_no_nans, ids=idfn)
#@pytest.mark.parametrize('conf', get_params(_confs_with_nans, params_markers_for_confs_nans), ids=idfn)
#def test_hash_reduction_pivot_without_nans(data_gen, conf):
# assert_gpu_and_cpu_are_equal_collect(
# lambda spark: gen_df(spark, data_gen, length=100)
# .groupby()
# .pivot('b')
# .agg(f.sum('c')),
# conf=conf)

_repeat_agg_column_for_collect_op = [
RepeatSeqGen(BooleanGen(), length=15),
Expand Down