
Prepare dask_cudf test_parquet.py for upcoming API changes #10709

Merged
merged 67 commits on Apr 28, 2022
Changes from 61 commits
Commits
67 commits
19f5174
Merge pull request #4714 from rapidsai/branch-0.13
raydouglass Mar 30, 2020
a2804c3
REL v0.13.0 release
GPUtester Mar 31, 2020
fef2a2b
REL v0.13.0 CHANGELOG Updates
mike-wendt Apr 1, 2020
ab00eb0
Merge pull request #5310 from rapidsai/branch-0.14
raydouglass Jun 3, 2020
b34b838
REL v0.14.0 release
GPUtester Jun 3, 2020
9ff9cdb
update master references
ajschmidt8 Jul 14, 2020
789d19b
REL DOC Updates for main branch switch
mike-wendt Jul 16, 2020
819f514
Merge pull request #6079 from rapidsai/branch-0.15
raydouglass Aug 26, 2020
3a0f214
REL v0.15.0 release
GPUtester Aug 26, 2020
f947393
Merge pull request #6101 from rapidsai/branch-0.15
raydouglass Aug 27, 2020
71cb8c0
REL v0.15.0 release
GPUtester Aug 27, 2020
7ef8174
Merge pull request #6547 from rapidsai/branch-0.16
raydouglass Oct 21, 2020
2b8298f
REL v0.16.0 release
GPUtester Oct 21, 2020
d72b1eb
Merge pull request #6935 from rapidsai/branch-0.17
ajschmidt8 Dec 10, 2020
f56ef85
REL v0.17.0 release
GPUtester Dec 10, 2020
b7e1a85
Merge pull request #7405 from rapidsai/branch-0.18
raydouglass Feb 24, 2021
20778e5
REL v0.18.0 release
GPUtester Feb 24, 2021
042c20f
Merge pull request #7585 from rapidsai/branch-0.18
raydouglass Mar 15, 2021
999be56
REL v0.18.1 release
raydouglass Mar 15, 2021
2391864
Merge pull request #7969 from rapidsai/branch-0.18
raydouglass Apr 15, 2021
3341561
REL v0.18.2 release
raydouglass Apr 15, 2021
6573759
Merge pull request #7626 from rapidsai/branch-0.19
raydouglass Apr 21, 2021
f07b251
REL v0.19.0 release
GPUtester Apr 21, 2021
61e5a20
REL Changelog update
ajschmidt8 Apr 21, 2021
a13e8dc
Merge pull request #8037 from rapidsai/branch-0.19
raydouglass Apr 22, 2021
a9f3453
REL v0.19.1 release
GPUtester Apr 22, 2021
2089fc9
Merge pull request #8100 from rapidsai/branch-0.19
raydouglass Apr 28, 2021
ab3b3f6
REL v0.19.2 release
GPUtester Apr 28, 2021
f9d5e2e
Merge pull request #8418 from rapidsai/branch-21.06
raydouglass Jun 9, 2021
ae44046
REL v21.06.00 release
GPUtester Jun 9, 2021
3b831c3
Merge pull request #8488 from rapidsai/branch-21.06
ajschmidt8 Jun 10, 2021
d56ac1d
Merge pull request #8542 from rapidsai/branch-21.06
raydouglass Jun 17, 2021
cddc64f
REL v21.06.01 release
GPUtester Jun 17, 2021
101fc0f
REL Merge pull request #8544 from rapidsai/branch-21.06
raydouglass Jun 17, 2021
e9dabf8
Merge pull request #8840 from rapidsai/branch-21.08
raydouglass Aug 4, 2021
106039c
REL v21.08.00 release
GPUtester Aug 4, 2021
8055721
Merge pull request #8986 from rapidsai/branch-21.08
raydouglass Aug 6, 2021
e0a8114
REL v21.08.01 release
GPUtester Aug 6, 2021
a7391e6
Merge pull request #8990 from rapidsai/branch-21.08
raydouglass Aug 6, 2021
f6d31fa
REL v21.08.02 release
GPUtester Aug 6, 2021
dff45e5
Merge pull request #9116 from rapidsai/branch-21.08
ajschmidt8 Sep 16, 2021
e4313b6
REL v21.08.03 release
GPUtester Sep 16, 2021
5638329
Merge pull request #9301 from rapidsai/branch-21.10
ajschmidt8 Oct 6, 2021
072fd86
REL v21.10.00 release
GPUtester Oct 6, 2021
8cfb8e5
Merge pull request #9420 from rapidsai/branch-21.10
raydouglass Oct 12, 2021
a1d2d13
REL v21.10.01 release
GPUtester Oct 12, 2021
3ceb0c0
Merge pull request #9689 from rapidsai/branch-21.12
raydouglass Dec 3, 2021
f1ef2d2
REL v21.12.00 release
GPUtester Dec 3, 2021
fd04831
Merge pull request #9880 from rapidsai/branch-21.12
raydouglass Dec 9, 2021
a0a0a3a
REL v21.12.01 release
GPUtester Dec 9, 2021
c74e24f
Merge pull request #9924 from rapidsai/branch-21.12
raydouglass Dec 16, 2021
06540b9
REL v21.12.02 release
GPUtester Dec 16, 2021
f39f559
Merge pull request #10101 from rapidsai/branch-22.02
raydouglass Feb 2, 2022
774d859
REL v22.02.00 release
GPUtester Feb 2, 2022
803c42a
Merge pull request #10512 from rapidsai/branch-22.04
raydouglass Apr 6, 2022
8bf0520
REL v22.04.00 release
GPUtester Apr 6, 2022
0363197
REL Merge pull request #10633 from rapidsai/branch-22.04
raydouglass Apr 11, 2022
f92b0bb
remove row_groups_per_part and clean up divisions and split_row_group…
rjzamora Apr 21, 2022
8b4e11c
clarify comment
rjzamora Apr 21, 2022
bd5b692
add file-size and memory checks to better-inform the user of split_ro…
rjzamora Apr 22, 2022
0b6c7f8
Merge remote-tracking branch 'upstream/main' into remove-row_groups_p…
rjzamora Apr 25, 2022
d0423aa
simplify error and warning messages
rjzamora Apr 26, 2022
3798cf3
improve docstring for read_parquet to discuss split_row_groups argument
rjzamora Apr 26, 2022
d6eb7a6
another tweak
rjzamora Apr 26, 2022
3791b19
inform the user that setting split_row_groups will silence the file-s…
rjzamora Apr 26, 2022
17b0867
Merge remote-tracking branch 'upstream/branch-22.06' into remove-row_…
rjzamora Apr 27, 2022
0c775dc
Merge remote-tracking branch 'upstream/branch-22.06' into remove-row_…
rjzamora Apr 28, 2022
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -1408,6 +1408,7 @@ Please see https://github.com/rapidsai/cudf/releases/tag/v22.06.00a for the late
- Fixing empty null lists throwing explode_outer for a loop. ([#7649](https://github.com/rapidsai/cudf/pull/7649)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Fix internal compiler error during JNI Docker build ([#7645](https://github.com/rapidsai/cudf/pull/7645)) [@jlowe](https://github.com/jlowe)
- Fix Debug build break with device_uvectors in grouped_rolling.cu ([#7633](https://github.com/rapidsai/cudf/pull/7633)) [@mythrocks](https://github.com/mythrocks)
- Parquet reader: Fix issue when using skip_rows on non-nested columns containing nulls ([#7627](https://github.com/rapidsai/cudf/pull/7627)) [@nvdbaranec](https://github.com/nvdbaranec)
- Fix ORC reader for empty DataFrame/Table ([#7624](https://github.com/rapidsai/cudf/pull/7624)) [@rgsl888prabhu](https://github.com/rgsl888prabhu)
- Fix specifying GPU architecture in JNI build ([#7612](https://github.com/rapidsai/cudf/pull/7612)) [@jlowe](https://github.com/jlowe)
@@ -1507,6 +1508,7 @@ Please see https://github.com/rapidsai/cudf/releases/tag/v22.06.00a for the late
- Add groupby scan operations (sort groupby) ([#7387](https://github.com/rapidsai/cudf/pull/7387)) [@karthikeyann](https://github.com/karthikeyann)
- Add cudf::explode_position ([#7376](https://github.com/rapidsai/cudf/pull/7376)) [@hyperbolic2346](https://github.com/hyperbolic2346)
- Add string conversion to/from decimal values libcudf APIs ([#7364](https://github.com/rapidsai/cudf/pull/7364)) [@davidwendt](https://github.com/davidwendt)
- Add groupby SUM_OF_SQUARES support ([#7362](https://github.com/rapidsai/cudf/pull/7362)) [@karthikeyann](https://github.com/karthikeyann)
- Add `Series.drop` api ([#7304](https://github.com/rapidsai/cudf/pull/7304)) [@isVoid](https://github.com/isVoid)
- get_json_object() implementation ([#7286](https://github.com/rapidsai/cudf/pull/7286)) [@nvdbaranec](https://github.com/nvdbaranec)
@@ -1515,6 +1517,7 @@ Please see https://github.com/rapidsai/cudf/releases/tag/v22.06.00a for the late
- Add support for special tokens in nvtext::subword_tokenizer ([#7254](https://github.com/rapidsai/cudf/pull/7254)) [@davidwendt](https://github.com/davidwendt)
- Fix inplace update of data and add Series.update ([#7201](https://github.com/rapidsai/cudf/pull/7201)) [@galipremsagar](https://github.com/galipremsagar)
- Implement `cudf::group_by` (hash) for `decimal32` and `decimal64` ([#7190](https://github.com/rapidsai/cudf/pull/7190)) [@codereport](https://github.com/codereport)
- Adding support to specify "level" parameter for `Dataframe.rename` ([#7135](https://github.com/rapidsai/cudf/pull/7135)) [@skirui-source](https://github.com/skirui-source)

## 🛠️ Improvements
@@ -1655,6 +1658,7 @@ Please see https://github.com/rapidsai/cudf/releases/tag/v22.06.00a for the late
- Adding Interval Dtype ([#6984](https://github.com/rapidsai/cudf/pull/6984)) [@marlenezw](https://github.com/marlenezw)
- Cleaning up `for` loops with `make_(counting_)transform_iterator` ([#6546](https://github.com/rapidsai/cudf/pull/6546)) [@codereport](https://github.com/codereport)


# cuDF 0.18.0 (24 Feb 2021)

## Breaking Changes 🚨
184 changes: 107 additions & 77 deletions python/dask_cudf/dask_cudf/io/parquet.py
@@ -177,65 +177,99 @@ def read_partition(
strings_to_cats = kwargs.get("strings_to_categorical", False)
read_kwargs = kwargs.get("read", {})
read_kwargs.update(open_file_options or {})

# Assume multi-piece read
paths = []
rgs = []
last_partition_keys = None
dfs = []

for i, piece in enumerate(pieces):

(path, row_group, partition_keys) = piece
row_group = None if row_group == [None] else row_group

if i > 0 and partition_keys != last_partition_keys:
dfs.append(
cls._read_paths(
paths,
fs,
columns=read_columns,
row_groups=rgs if rgs else None,
strings_to_categorical=strings_to_cats,
partitions=partitions,
partitioning=partitioning,
partition_keys=last_partition_keys,
**read_kwargs,
check_file_size = read_kwargs.pop("check_file_size", None)

        # Wrap the reading logic in a `try` block so that we can
        # inform the user when the `read_parquet` partition size
        # is too large for the available memory
try:

# Assume multi-piece read
paths = []
rgs = []
last_partition_keys = None
dfs = []

for i, piece in enumerate(pieces):

(path, row_group, partition_keys) = piece
row_group = None if row_group == [None] else row_group

                # File-size check to help "protect" users from changes
                # to the upstream `split_row_groups` default. We only
                # check the file size if this partition corresponds
                # to a full file and `check_file_size` is defined.
if check_file_size and len(pieces) == 1 and row_group is None:
file_size = fs.size(path)
if file_size > check_file_size:
warnings.warn(
f"A large parquet file ({file_size}B) is being "
f"used to create a DataFrame partition in "
f"read_parquet. Did you mean to use the "
f"split_row_groups argument?"
)

if i > 0 and partition_keys != last_partition_keys:
dfs.append(
cls._read_paths(
paths,
fs,
columns=read_columns,
row_groups=rgs if rgs else None,
strings_to_categorical=strings_to_cats,
partitions=partitions,
partitioning=partitioning,
partition_keys=last_partition_keys,
**read_kwargs,
)
)
                    paths = []
                    rgs = []
last_partition_keys = None
paths.append(path)
rgs.append(
[row_group]
if not isinstance(row_group, list)
and row_group is not None
else row_group
)
            paths = []
            rgs = []
last_partition_keys = None
paths.append(path)
rgs.append(
[row_group]
if not isinstance(row_group, list) and row_group is not None
else row_group
)
last_partition_keys = partition_keys
last_partition_keys = partition_keys

dfs.append(
cls._read_paths(
paths,
fs,
columns=read_columns,
row_groups=rgs if rgs else None,
strings_to_categorical=strings_to_cats,
partitions=partitions,
partitioning=partitioning,
partition_keys=last_partition_keys,
**read_kwargs,
dfs.append(
cls._read_paths(
paths,
fs,
columns=read_columns,
row_groups=rgs if rgs else None,
strings_to_categorical=strings_to_cats,
partitions=partitions,
partitioning=partitioning,
partition_keys=last_partition_keys,
**read_kwargs,
)
)
)
df = cudf.concat(dfs) if len(dfs) > 1 else dfs[0]
df = cudf.concat(dfs) if len(dfs) > 1 else dfs[0]

        # Re-set "object" dtypes to align with the pa schema
set_object_dtypes_from_pa_schema(df, schema)

if index and (index[0] in df.columns):
df = df.set_index(index[0])
elif index is False and df.index.names != (None,):
# If index=False, we shouldn't have a named index
df.reset_index(inplace=True)
            # Re-set "object" dtypes to align with the pa schema
set_object_dtypes_from_pa_schema(df, schema)

if index and (index[0] in df.columns):
df = df.set_index(index[0])
elif index is False and df.index.names != (None,):
# If index=False, we shouldn't have a named index
df.reset_index(inplace=True)

        except MemoryError as err:
            raise MemoryError(
                "Parquet data was larger than the available GPU memory!\n\n"
                "Please try `split_row_groups=True` or set this option "
                "to a smaller integer (if applicable).\n\n"
                "If you are using dask-cuda workers, this may indicate "
                "that the current `device_memory_limit` is too high. "
                "If you are not using dask-cuda workers, this may indicate "
                "that your workflow requires dask-cuda spilling.\n\n"
                "Original Error: " + str(err)
            ) from err

return df
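
The loop above batches contiguous pieces that share partition keys into a single `_read_paths` call. A minimal, library-free sketch of that grouping logic follows; `group_pieces` is an illustrative name, not part of the dask_cudf API:

```python
def group_pieces(pieces):
    # Group contiguous (path, row_group, partition_keys) tuples that share
    # partition keys, mirroring the batching loop in `read_partition`.
    groups = []
    paths, rgs, last_keys = [], [], None
    for i, (path, row_group, keys) in enumerate(pieces):
        row_group = None if row_group == [None] else row_group
        if i > 0 and keys != last_keys:
            # Flush the current batch before starting a new key group
            groups.append((paths, rgs if rgs else None, last_keys))
            paths, rgs, last_keys = [], [], None
        paths.append(path)
        rgs.append(
            [row_group]
            if not isinstance(row_group, list) and row_group is not None
            else row_group
        )
        last_keys = keys
    groups.append((paths, rgs if rgs else None, last_keys))
    return groups

# Two pieces with the same partition keys collapse into one batch;
# a key change starts a new batch.
print(group_pieces([("a.pq", 0, "k1"), ("b.pq", 1, "k1"), ("c.pq", None, "k2")]))
```

Each resulting `(paths, row_groups, partition_keys)` batch corresponds to one `cls._read_paths` call in the real code.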

@@ -349,13 +383,7 @@ def set_object_dtypes_from_pa_schema(df, schema):
df._data[col_name] = col.astype(typ)


def read_parquet(
path,
columns=None,
split_row_groups=None,
row_groups_per_part=None,
**kwargs,
):
def read_parquet(path, columns=None, **kwargs):
"""Read parquet files into a Dask DataFrame

    Calls ``dask.dataframe.read_parquet`` to coordinate the execution of
@@ -376,22 +404,24 @@ def read_parquet(
if isinstance(columns, str):
columns = [columns]

if row_groups_per_part:
warnings.warn(
"row_groups_per_part is deprecated. "
"Pass an integer value to split_row_groups instead.",
FutureWarning,
)
if split_row_groups is None:
split_row_groups = row_groups_per_part

return dd.read_parquet(
path,
columns=columns,
split_row_groups=split_row_groups,
engine=CudfEngine,
**kwargs,
)
# Set "check_file_size" option to determine whether we
# should check the parquet-file size. This check is meant
# to "protect" users from `split_row_groups` default changes
check_file_size = kwargs.pop("check_file_size", 2_000_000_000)
if (
check_file_size
and ("split_row_groups" not in kwargs)
and ("chunksize" not in kwargs)
):
# User is not specifying `split_row_groups` or `chunksize`,
# so we should warn them if/when a file is ~>2GB on disk.
        # They will be able to set `split_row_groups` explicitly to
# silence/skip this check
if "read" not in kwargs:
kwargs["read"] = {}
kwargs["read"]["check_file_size"] = check_file_size

return dd.read_parquet(path, columns=columns, engine=CudfEngine, **kwargs)
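
The gating above only forwards the file-size check to the engine when the user has not already made an explicit partitioning choice. A standalone sketch of that kwargs plumbing (the function name here is illustrative, not part of the library):

```python
def thread_check_file_size(kwargs, default=2_000_000_000):
    # Pop the user-facing option; only forward it to the engine's
    # "read" options when neither `split_row_groups` nor `chunksize`
    # was given, since either of those silences the check.
    check_file_size = kwargs.pop("check_file_size", default)
    if (
        check_file_size
        and "split_row_groups" not in kwargs
        and "chunksize" not in kwargs
    ):
        kwargs.setdefault("read", {})["check_file_size"] = check_file_size
    return kwargs

# Default: the ~2GB check is forwarded to the engine options
print(thread_check_file_size({}))
# Explicit split_row_groups: the check is skipped entirely
print(thread_check_file_size({"split_row_groups": True}))
```

Passing `check_file_size=0` (or any falsy value) disables the warning without choosing a `split_row_groups` value.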


to_parquet = partial(dd.to_parquet, engine=CudfEngine)