Reduce execution time of Python ORC tests #14776
Conversation
# Generate a boolean column longer than a single row group
fail_df = cudf.DataFrame({"col": gen_rand_series("bool", 20000)})
# Invalidate a row in the first row group
fail_df["col"][500000] = None
the old comment was incorrect, the test file had a single stripe
# Generate a boolean column longer than a single row group
fail_df = cudf.DataFrame({"col": gen_rand_series("bool", 600000)})
# Invalidate the first row in the second stripe to break encoding
fail_df["col"][500000] = None
Modified this test based on the actual checks we perform on bool columns - all row groups except for the last one in each stripe need to have the number of valid elements divisible by 8. The row group size is 10k, so a single null fails this check and the writer should throw.
I have no idea what I meant with the original comments; they don't match the code at all 🤷‍♂️
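The constraint described above can be sketched in plain Python. This is only an illustration of the described check, not the actual libcudf code; the 10,000-row group size is taken from the comment, and the stripe handling and function name are assumptions:

```python
ROW_GROUP_SIZE = 10_000

def bool_column_writable(valid_mask, stripe_size=1_000_000):
    """Sketch of the check described above: within each stripe, every
    row group except the last one must have a number of valid
    (non-null) elements divisible by 8."""
    n = len(valid_mask)
    for stripe_start in range(0, n, stripe_size):
        stripe = valid_mask[stripe_start:stripe_start + stripe_size]
        groups = [stripe[i:i + ROW_GROUP_SIZE]
                  for i in range(0, len(stripe), ROW_GROUP_SIZE)]
        for group in groups[:-1]:  # last row group in the stripe is exempt
            if sum(group) % 8 != 0:
                return False
    return True

# 20,000 all-valid rows: the first row group holds 10,000 valid values,
# which is divisible by 8, so the column is writable.
mask = [True] * 20_000
print(bool_column_writable(mask))   # True

# A single null in the first row group drops its valid count to 9,999,
# which is not divisible by 8, so the writer should throw.
mask[500] = False
print(bool_column_writable(mask))   # False
```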
@@ -1130,7 +1131,7 @@ def test_pyspark_struct(datadir):
     assert_eq(pdf, gdf)


-def gen_map_buff(size=10000):
+def gen_map_buff(size):
default value was unused
Seems like a reasonable set of changes - I have a couple questions about the underlying issues that are commented in these tests.
python/cudf/cudf/tests/test_orc.py
Outdated
@@ -604,13 +604,13 @@ def normalized_equals(value1, value2):


 @pytest.mark.parametrize("stats_freq", ["STRIPE", "ROWGROUP"])
-@pytest.mark.parametrize("nrows", [1, 100, 6000000])
+@pytest.mark.parametrize("nrows", [1, 100, 100000])
 def test_orc_write_statistics(tmpdir, datadir, nrows, stats_freq):
     from pyarrow import orc

     supported_stat_types = supported_numpy_dtypes + ["str"]
     # Can't write random bool columns until issue #6763 is fixed
It's sad that we don't fully support bool columns, but we haven't had any users ask for this (that I know of).
If there's demand, I'll gladly add it to the ~~pile~~ backlog. Not sure if the issue conveys this, but it's not a trivial feature.
Can we explicitly disable writing bool columns? This seems like we're writing bad data, and silent corruption isn't something I feel comfortable waiting for users to discover and report.
Already done #7261 ;)
Ahhhhh. That totally changes my perspective. Can you update the comments to say something like "Writing bool columns exceeding one row group are disabled in libcudf until #6763 is fixed"?
Also we should update this test to check that an error is raised in this case, rather than removing the column!
This test is for writing statistics, so we really want it to write a table with multiple stripes and verify the written statistics. Letting a test case throw does not contribute to this.
We do have a separate test for throwing with bool columns (as opposed to silent corruption).
Thanks. Apologies, I should have looked first. I have no other concerns.
 def test_orc_write_statistics(tmpdir, datadir, nrows, stats_freq):
     from pyarrow import orc

     supported_stat_types = supported_numpy_dtypes + ["str"]
     # Can't write random bool columns until issue #6763 is fixed
-    if nrows == 6000000:
+    if nrows == 100000:
Why does this work for `nrows=1` or `nrows=100`?
We can write a single row group of random bools, just not multiple (at least not in a way that avoids issues with other readers). So anything below 10k rows is fine. I know this is very hacky :(
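A minimal sketch of the row-group arithmetic behind this answer (the 10,000-row group size comes from the discussion above; `needs_bool_workaround` is a hypothetical helper for illustration, not part of the test suite):

```python
ROW_GROUP_SIZE = 10_000

def needs_bool_workaround(nrows):
    # A random bool column is only safe to write when it fits in a
    # single row group; spanning multiple groups hits issue #6763.
    num_row_groups = -(-nrows // ROW_GROUP_SIZE)  # ceiling division
    return num_row_groups > 1

for nrows in (1, 100, 100_000):
    print(nrows, needs_bool_workaround(nrows))
# 1 and 100 fit in a single row group, so only the 100_000-row case
# needs the bool column dropped from the test.
```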
So how much faster is it now?
Down from 114s to 55s on my system.
Co-authored-by: Bradley Dice <[email protected]>
/merge
Description
Reduced the size of the excessively large tests while preserving code coverage.
Also fixed a few tests to provide better coverage (the original intent was unclear).
Checklist