[BUG] distributed CI failed in test_read_case_col_name
#8403
Comments
This test was recently added here: #8371
This repros against a standalone cluster without the multi-threaded shuffle.
test_read_case_col_name
While trying to come up with a smaller repro, I ended up with this: #8411
Here is a related (but not the whole) repro: reading json_test_repro, a folder containing a JSON file laid out in a partitioned directory structure. Note that the schema passed to cuDF has columns v0 and v3 that don't exist in the JSON itself. The issue is that we get back a table from cuDF with 2 columns instead of 4 columns (after fixing #8411).
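A minimal sketch of that scenario, assuming a hypothetical file layout, column names, and dtypes (the original repro's exact contents are not reproduced here), and using the Python cuDF API for illustration even though the plugin actually goes through the JNI layer:

```python
import cudf

# Hypothetical repro layout: the JSON-lines file only ever defines v1 and v2,
# e.g. rows like {"v1": 1, "v2": 2}, while the schema also asks for v0 and v3.
schema = {"v0": "int64", "v1": "int64", "v2": "int64", "v3": "int64"}

tbl = cudf.read_json("json_test_repro/part-0.json", lines=True, dtype=schema)

# Behavior described in this issue: only the columns physically present in
# the file (v1, v2) come back, not the full 4-column schema.
print(tbl.columns)
```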
Since the files can have any number of columns (if all rows are null, Spark can omit the column), we need to make sure cuDF follows suit.
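As an illustration of that write-side behavior, here is a small sketch (the output path and column names are assumptions) showing that Spark's JSON writer skips null fields by default, so a column that is null in every row never appears in the written file at all:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, LongType

spark = SparkSession.builder.getOrCreate()

schema = StructType([
    StructField("v0", LongType()),  # null in every row
    StructField("v1", LongType()),
])
df = spark.createDataFrame([(None, 1), (None, 2)], schema)

# Spark's JSON generator drops null fields by default, so the part file only
# contains lines like {"v1":1} and the v0 column is omitted entirely.
df.coalesce(1).write.mode("overwrite").json("/tmp/json_nulls_repro")
```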
We need to file a cuDF bug for this. My guess is that the integration test failure comes down to an odd combination of contents in the source table: if the file in question contains data for every column in the schema, we get back 4 columns and all seems well. The CPU consistently returns the full table with all columns, and nulls where appropriate.
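For contrast, a sketch of the CPU path (schema, dtypes, and path are assumptions carried over from the repro above): reading the same partial-column files with the full user schema on CPU Spark fills the missing columns with nulls instead of dropping them:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, LongType

spark = SparkSession.builder.getOrCreate()

# Assumed layout: the files under json_test_repro only ever contain v1 and v2.
schema = StructType([StructField(n, LongType()) for n in ["v0", "v1", "v2", "v3"]])

cpu_df = spark.read.schema(schema).json("json_test_repro")

# On the CPU, Spark honors the user schema: v0 and v3 come back as
# all-null columns rather than being dropped.
cpu_df.show()
```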
Yes, this is a bug in our cuDF JNI layer. If we cannot find a column that matches the name, we return the original data unchanged. We really should be moving columns over to a new table and generating columns of nulls when we cannot find the column we want.
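A sketch of that intended behavior, written against the Python cuDF API purely for illustration (the actual fix belongs in the JNI/C++ layer; the function name and the dict-shaped schema here are made up): for each requested column name, either take the matching column from the table that was read or materialize an all-null column of the requested type:

```python
import cudf

def conform_to_schema(df: cudf.DataFrame, schema: dict) -> cudf.DataFrame:
    """Return a DataFrame with exactly the columns in `schema`, in order.

    Columns missing from `df` come back as all-null columns of the requested
    dtype instead of being silently dropped.
    """
    out = {}
    for name, dtype in schema.items():
        if name in df.columns:
            out[name] = df[name].astype(dtype)
        else:
            # Generate a column of nulls for a name the file never contained.
            out[name] = cudf.Series([None] * len(df), dtype=dtype)
    return cudf.DataFrame(out)
```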
From a performance/efficiency standpoint I think we want to modify where we reorder the columns: return a TableWithMeta and let the Java code handle the column names, so we can avoid copying the table multiple times and possibly making multiple copies of the same columns.
Crap. I don't know if we can fix it without pushing some of this back to the plugin or having cuDF actually return nulls for columns that were not in the file. I'll file an issue and see what the cuDF team has to say about it.
I filed rapidsai/cudf#13473 for the long-term fix. In the short term I am going to do my best to make it work without it. We will not be able to make the corner case of a file with no columns, only rows, work.
I see these failures in the multi-threaded shuffle CI job, but I am also seeing the same failure locally against a standalone Spark.
I also saw #8400.