controls row groups and empty tables #1782
Conversation
✅ Deploy Preview for dlt-hub-docs canceled.
Reviewing the docs first to make sure I understood the topic.
> Mind that we must hold the tables in memory. 1 000 000 rows in example above may take quite large amount of it.
>
> `row_group_size` has limited utility with `pyarrow` writer. It will split large tables into many groups if set below item buffer size.
This sentence is not clear to me yet. Is the instruction to the user something like the following?
Suggested change:

```diff
-`row_group_size` has limited utility with `pyarrow` writer. It will split large tables into many groups if set below item buffer size.
+For the `pyarrow` parquet writer, ensure to have `row_group_size >= buffer_max_items`. Otherwise, your destination might have more row groups than optimal.
```
Ohhh, so actually the reverse is true: `row_group_size < buffer_max_items` is required for the setting to have any effect. This is the core of the problem I'm fixing here. pyarrow will create a row group of the size of the parquet table being written, or smaller. Btw, other, well-designed implementations allow writing batches into the same group. Not here.
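For context, a minimal sketch with plain `pyarrow` (not part of this PR; the file name and row counts are made up) showing that each `write_table` call opens at least one new row group, and that `row_group_size` only splits a write that is larger than the given value:

```python
import pyarrow as pa
import pyarrow.parquet as pq

schema = pa.schema([("id", pa.int64())])
table = pa.table({"id": list(range(1000))}, schema=schema)

with pq.ParquetWriter("example.parquet", schema) as writer:
    # two separate writes -> two row groups, even though row_group_size is huge
    writer.write_table(table, row_group_size=50_000)
    writer.write_table(table, row_group_size=50_000)
    # row_group_size only matters when it is smaller than the table being written:
    # this single write is split into 4 groups of 250 rows each
    writer.write_table(table, row_group_size=250)

print(pq.ParquetFile("example.parquet").metadata.num_row_groups)  # 6
```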
Co-authored-by: Willi Müller <[email protected]>
Great fix! I left some ideas that would make the code easier to read, plus some docs refactoring.
Just one blocking comment on naming. Otherwise fine!
dlt/common/data_writers/buffered.py (outdated)
```python
# flush if max buffer exceeded, the second path of the expression prevents empty data frames to pile up in the buffer
if (
    self._buffered_items_count >= self.buffer_max_items
    or len(self._buffered_items) >= self.buffer_max_items
):
    self._flush_items()
```
Could we eliminate the comment by refactoring it to a method?
Suggested change:

```diff
-# flush if max buffer exceeded, the second path of the expression prevents empty data frames to pile up in the buffer
-if (
-    self._buffered_items_count >= self.buffer_max_items
-    or len(self._buffered_items) >= self.buffer_max_items
-):
-    self._flush_items()
+self.flush_if_max_buffer_exceeded()
```
Similarly, `_update_row_count(item)` might be neat.
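A rough sketch of what those two helpers might look like, assuming the attribute names visible in the diff (`_buffered_items_count`, `_buffered_items`, `buffer_max_items`, `_flush_items`); not the code that actually landed:

```python
def _update_row_count(self, item) -> None:
    # arrow tables and record batches know their own length; other items count as one row
    self._buffered_items_count += getattr(item, "num_rows", 1)

def flush_if_max_buffer_exceeded(self) -> None:
    # the second condition prevents empty data frames from piling up in the buffer
    if (
        self._buffered_items_count >= self.buffer_max_items
        or len(self._buffered_items) >= self.buffer_max_items
    ):
        self._flush_items()
```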
> Mind that `dlt` holds the tables in memory. Thus, 1,000,000 rows in the example above may consume a significant amount of RAM.
>
> `row_group_size` has limited utility with `pyarrow` writer. It will split large tables into many groups if set below item buffer size.
Got it now, thanks! Maybe we can give a recommendation like this:
Suggested change:

```diff
-`row_group_size` has limited utility with `pyarrow` writer. It will split large tables into many groups if set below item buffer size.
+Setting `row_group_size` has limited utility with the `pyarrow` parquet writer because large source tables can end up fragmented into too many groups.
+Thus, we recommend setting `row_group_size < buffer_max_items` only when the write_disposition is `"replace"`.
+For all other write dispositions, we recommend the default `row_group_size` to avoid fragmentation.
```
Row groups do not map to write dispositions like this... I think this is only relevant to advanced users who optimize their parquet files for a particular query engine...
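For those advanced users, a quick way to inspect how fragmented a parquet file actually is, using plain `pyarrow` (the file path is a placeholder):

```python
import pyarrow.parquet as pq

# path to a parquet file produced by the pipeline; adjust to your load package location
meta = pq.ParquetFile("load_file.parquet").metadata
print("row groups:", meta.num_row_groups)
for i in range(meta.num_row_groups):
    print(f"group {i}: {meta.row_group(i).num_rows} rows")
```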
dlt/common/data_writers/writers.py (outdated)
```diff
-elif isinstance(row, pyarrow.RecordBatch):
-    self.writer.write_batch(row, row_group_size=self.parquet_row_group_size)
-self.items_count += row.num_rows
+if isinstance(row, pyarrow.RecordBatch):
```
Blocking: I find this surprising. How can a `row` be a RecordBatch or Table? How can a row have `num_rows`? Could we call it `item` like in the docs?
Right! This is how the class evolved: we started with lists of values to insert, and now we deal with tables and other objects. I can rename it to `items` and type it properly.
Full typing requires using generic classes, which is a good idea, but we have no time to do it now.
dlt/common/data_writers/writers.py (outdated)
```diff
+# concat batches and tables into a single one, preserving order
+# pyarrow writer starts a row group for each item it writes (even with 0 rows)
+# it also converts batches into tables internally. by creating a single table
+# we allow the user rudimentary control over row group size via max buffered items
+batches = []
+tables = []
 for row in rows:
     if not self.writer:
         self.writer = self._create_writer(row.schema)
-    if isinstance(row, pyarrow.Table):
-        self.writer.write_table(row, row_group_size=self.parquet_row_group_size)
-    elif isinstance(row, pyarrow.RecordBatch):
-        self.writer.write_batch(row, row_group_size=self.parquet_row_group_size)
-    self.items_count += row.num_rows
+    if isinstance(row, pyarrow.RecordBatch):
+        batches.append(row)
+    elif isinstance(row, pyarrow.Table):
+        if batches:
+            tables.append(pyarrow.Table.from_batches(batches))
+            batches = []
+        tables.append(row)
+    else:
+        raise ValueError(f"Unsupported type {type(row)}")
+    # count rows that got written
+    self.items_count += row.num_rows
+if batches:
+    tables.append(pyarrow.Table.from_batches(batches))
+
+table = pyarrow.concat_tables(tables, promote_options="none")
```
I would find it easier to understand if we extracted this into a method: `self._concat_items(items)`.
good idea, moved that to libs
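For reference, roughly what such a helper could look like once moved to the pyarrow libs module; the function name and placement are guesses based on the diff above, not necessarily the exact code that landed:

```python
from typing import Sequence, Union

import pyarrow


def concat_batches_and_tables_in_order(
    tables_or_batches: Sequence[Union[pyarrow.Table, pyarrow.RecordBatch]]
) -> pyarrow.Table:
    """Concatenate a mix of tables and record batches into one table, preserving order."""
    batches = []
    tables = []
    for item in tables_or_batches:
        if isinstance(item, pyarrow.RecordBatch):
            batches.append(item)
        elif isinstance(item, pyarrow.Table):
            if batches:
                tables.append(pyarrow.Table.from_batches(batches))
                batches = []
            tables.append(item)
        else:
            raise ValueError(f"Unsupported type {type(item)}")
    if batches:
        tables.append(pyarrow.Table.from_batches(batches))
    # promote_options="none" requires all items to share the same schema
    return pyarrow.concat_tables(tables, promote_options="none")
```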
@willi-mueller I fixed most of the suggested code changes and some docs changes... I think we are good for now. Thanks, good review :)
Description
See commit list and docs