
Add a test for ORC write with more than one stripe #11743

Open · wants to merge 2 commits into base: branch-25.02
Conversation

ustcfy
Collaborator

@ustcfy ustcfy commented Nov 21, 2024

closes #11735

This new test function ensures that enough rows are written to generate more than one stripe during the ORC write process, allowing us to catch the "ORC writes don't fully support Booleans with nulls" bug.

@ustcfy ustcfy self-assigned this Nov 21, 2024
Collaborator

@thirtiseven thirtiseven left a comment


Some questions

This new test function ensures that enough rows are written to generate more than one stripe during the ORC write process, allowing us to catch the #11736 bug.

This bug hasn't been fixed yet, so if this test case can catch it, why does no test seem to fail after it is added?

assert_gpu_and_cpu_writes_are_equal_collect(
# Generate a large enough dataframe to produce more than one stripe (typically 64 MB)
# Preferably use only one partition to avoid splitting the data
lambda spark, path: gen_df(spark, gen_list, 12800, num_slices=1).write.orc(path),
Collaborator
Question: Where does the 12800 number come from? Do we know the data will be greater than 64 MB (the ORC stripe size) for all the datagens you tested?

Collaborator Author

This number comes from my experiment.

Collaborator

In general CUDF will split the data by rows and by size.

https://github.com/rapidsai/cudf/blob/f54c1a5ad34133605d3b5b447d9717ce7eb6dba0/cpp/include/cudf/io/orc.hpp#L585-L587

https://github.com/rapidsai/cudf/blob/f54c1a5ad34133605d3b5b447d9717ce7eb6dba0/cpp/include/cudf/io/orc.hpp#L41-L42

In Parquet the row split is 20,000 rows, but for ORC it is 1,000,000. I am not sure how 12,800 boolean values produce more than one stripe. I would really like to understand this better, because I would expect that to be nowhere close to the row-group count we expect to cause multiple slices.
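A rough back-of-the-envelope check supports this skepticism. The per-row byte cost below is an assumption for illustration, not a measured cuDF buffer size:

```python
# Back-of-the-envelope estimate: rows needed to exceed cuDF's default
# ORC stripe size limit (64 MiB) versus its default row limit (1,000,000 rows).
# The per-row byte cost is a rough assumption, not a measured value.
DEFAULT_STRIPE_SIZE_BYTES = 64 * 1024 * 1024
DEFAULT_STRIPE_SIZE_ROWS = 1_000_000

def rows_to_fill_stripe(bytes_per_row):
    """Rows needed before the size limit (not the row limit) splits a stripe."""
    return DEFAULT_STRIPE_SIZE_BYTES // bytes_per_row

# A single nullable boolean column costs on the order of a byte or two per row,
# so the 1,000,000-row limit should trigger long before the 64 MiB size limit:
assert rows_to_fill_stripe(2) > DEFAULT_STRIPE_SIZE_ROWS

# 12,800 rows is far below both limits, which is why that number is surprising:
assert 12_800 < DEFAULT_STRIPE_SIZE_ROWS
```

(The test generates many columns at once via `orc_gens`, so the effective bytes per row are larger, but even a wide row would need thousands of bytes to cross 64 MiB in 12,800 rows.)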

@pytest.mark.parametrize('orc_gens', orc_write_gens_list, ids=idfn)
@pytest.mark.parametrize('orc_impl', ["native", "hive"])
@allow_non_gpu(*non_utc_allow)
def test_write_more_than_one_stripe_round_trip(spark_tmp_path, orc_gens, orc_impl):
Collaborator

If the difference is just to generate more data than test_write_round_trip, why not just make this case generate more?

Another question is whether we want to generate more than one stripe for the other cases in this file, or just this one.

Collaborator

@thirtiseven thirtiseven Nov 21, 2024


If the difference is just to generate more data than test_write_round_trip, why not just make this case generate more?

Well, if the new case will fail I think it makes sense to add it and keep the previous one so we can test the two behaviours at the same time.

Another option might be to add the row count to the @pytest.mark.parametrize too, but I'm not sure if that would introduce some tricky if/else logic over orc_gens to xfail the failing cases.
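One way that option could look — a hedged sketch only; `make_params`, the generator names, and the row counts are invented for illustration and are not from the PR:

```python
# Hypothetical sketch: parametrize the row count alongside the generator, and
# xfail only the combination known to hit issue #11736 (booleans with enough
# rows to span more than one stripe). All names and counts are illustrative.
import pytest

ROW_COUNTS = [2048, 12800]  # small case vs. a count assumed to span stripes

def make_params(gen_names, row_counts):
    params = []
    for gen in gen_names:
        for n in row_counts:
            marks = []
            # Mark only the known-bad combination as an expected failure.
            if gen == 'boolean_gen' and n == max(row_counts):
                marks.append(pytest.mark.xfail(
                    reason='https://github.com/NVIDIA/spark-rapids/issues/11736'))
            params.append(pytest.param(gen, n, marks=marks))
    return params

# Would be used roughly as:
# @pytest.mark.parametrize('orc_gen,num_rows', make_params(gens, ROW_COUNTS), ids=idfn)
```

This keeps a single test body while isolating the xfail to the boolean/large-row-count cell of the matrix.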

@ustcfy ustcfy marked this pull request as draft November 21, 2024 06:53
@ustcfy ustcfy requested a review from jlowe November 21, 2024 09:46
@ustcfy ustcfy marked this pull request as ready for review November 21, 2024 09:48
@@ -91,6 +91,20 @@ def test_write_round_trip(spark_tmp_path, orc_gens, orc_impl):
data_path,
conf={'spark.sql.orc.impl': orc_impl, 'spark.rapids.sql.format.orc.write.enabled': True})

@pytest.mark.parametrize('orc_gen', [pytest.param(boolean_gen, marks=pytest.mark.xfail(reason='https://github.com/NVIDIA/spark-rapids/issues/11736'))], ids=idfn)
Collaborator

In my understanding, we also need to test other kinds of data, not just the ones that failed?

Collaborator

I would like to see more data types so that we are not concerned about what other errors we might be seeing. I would also like to see Parquet tests with more than one row group. I am fine if that is a follow-on issue too.

Collaborator

@revans2 revans2 left a comment


My main concern is the time this is going to take to run and the memory requirements to run it. But a lot of that is going to depend on what we learn about how to get multiple slices to happen in CUDF, and why they are acting this way.

The slowness comes down to how we validate the data: a single Python thread compares the CPU and GPU results row by row.

The memory issues come from the fact that we slice the data very small on the GPU, and I am concerned that if we have too much data the GPU will run out of memory, and the Python process doing the comparison might also hit limits in CI.

It might be better to expose the row and size limits from CUDF as configs in the plugin which we could use for testing.
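If such configs existed, a test could shrink the limits so even a small dataframe spans multiple stripes. A sketch with invented config keys — the two `stripeSize` keys below do not exist in the plugin today; only the first two entries are the real configs already used in this file:

```python
# Hypothetical: if the plugin exposed cuDF's ORC writer limits as configs,
# a test could lower them and keep the dataframe small. The two
# 'spark.rapids.sql.test.orc.*' keys are INVENTED for illustration.
conf = {
    'spark.sql.orc.impl': 'native',
    'spark.rapids.sql.format.orc.write.enabled': True,
    # Invented test-only knobs mirroring cuDF's defaults (64 MiB / 1,000,000 rows):
    'spark.rapids.sql.test.orc.stripeSizeBytes': str(64 * 1024),  # 64 KiB instead of 64 MiB
    'spark.rapids.sql.test.orc.stripeSizeRows': '1000',           # 1,000 instead of 1,000,000
}
```

That would let the test assert multi-stripe behavior without the data volume (and CI time/memory cost) the current approach needs.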


@sameerz sameerz added the test Only impacts tests label Nov 22, 2024
@ustcfy ustcfy changed the base branch from branch-24.12 to branch-25.02 November 25, 2024 15:46
Successfully merging this pull request may close these issues.

[BUG] GPU file writes only test writing a single row group or stripe
4 participants