[MINOR] Avoid over-allocating a large buffer in the async reader #2537

Merged 1 commit on Aug 20, 2022
4 changes: 4 additions & 0 deletions parquet/src/arrow/arrow_reader/mod.rs
@@ -115,7 +115,11 @@ impl<T> ArrowReaderBuilder<T> {
}

    /// Set the size of [`RecordBatch`] to produce. Defaults to 1024.
    /// If `batch_size` is greater than the file's row count, the row count is used instead.
pub fn with_batch_size(self, batch_size: usize) -> Self {
        // Avoid allocating a buffer larger than the number of rows in the file
let batch_size =
batch_size.min(self.metadata.file_metadata().num_rows() as usize);
Self { batch_size, ..self }
}

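For context, a minimal usage sketch of how the clamp behaves through the synchronous builder that shares this code path. This is not part of the PR; the file path is an assumption, and `alltypes_plain.parquet` from the parquet test data holds only 8 rows:

use std::fs::File;

use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumed path to a copy of the 8-row test file.
    let file = File::open("alltypes_plain.parquet")?;
    let builder = ParquetRecordBatchReaderBuilder::try_new(file)?;
    // Request a batch size far larger than the file's row count; with this
    // change it is clamped, so no oversized buffer is allocated up front.
    let reader = builder.with_batch_size(1 << 20).build()?;
    for batch in reader {
        // Every produced batch holds at most the file's 8 rows.
        assert!(batch?.num_rows() <= 8);
    }
    Ok(())
}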
36 changes: 35 additions & 1 deletion parquet/src/arrow/async_reader.rs
@@ -236,6 +236,10 @@ impl<T: AsyncFileReader + Send + 'static> ArrowReaderBuilder<AsyncReader<T>> {
None => (0..self.metadata.row_groups().len()).collect(),
};

        // Avoid allocating a buffer larger than the number of rows in the file
let batch_size = self
.batch_size
.min(self.metadata.file_metadata().num_rows() as usize);
let reader = ReaderFactory {
input: self.input.0,
filter: self.filter,
@@ -245,7 +249,7 @@ impl<T: AsyncFileReader + Send + 'static> ArrowReaderBuilder<AsyncReader<T>> {

Ok(ParquetRecordBatchStream {
metadata: self.metadata,
batch_size: self.batch_size,
batch_size,
row_groups,
projection: self.projection,
selection: self.selection,
@@ -914,4 +918,34 @@ mod tests {

assert_eq!(&requests[..], &expected_page_requests)
}

#[tokio::test]
async fn test_batch_size_overallocate() {
let testdata = arrow::util::test_util::parquet_test_data();
        // `alltypes_plain.parquet` contains only 8 rows
let path = format!("{}/alltypes_plain.parquet", testdata);
let data = Bytes::from(std::fs::read(path).unwrap());

let metadata = parse_metadata(&data).unwrap();
let file_rows = metadata.file_metadata().num_rows() as usize;
let metadata = Arc::new(metadata);

let async_reader = TestReader {
data: data.clone(),
metadata: metadata.clone(),
requests: Default::default(),
};

let builder = ParquetRecordBatchStreamBuilder::new(async_reader)
.await
.unwrap();

let stream = builder
.with_projection(ProjectionMask::all())
.with_batch_size(1024)
.build()
.unwrap();
assert_ne!(1024, file_rows);
        assert_eq!(stream.batch_size, file_rows);
}
}
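A corresponding sketch of the async path this PR targets, mirroring the test above. This is not part of the PR; the file path and tokio setup are assumptions, and `tokio::fs::File` is usable here because the parquet crate provides a blanket `AsyncFileReader` impl for async readers:

use futures::TryStreamExt;
use parquet::arrow::ParquetRecordBatchStreamBuilder;
use tokio::fs::File;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumed path to a copy of the 8-row test file.
    let file = File::open("alltypes_plain.parquet").await?;
    let stream = ParquetRecordBatchStreamBuilder::new(file)
        .await?
        .with_batch_size(1024) // clamped to the file's row count (8 here)
        .build()?;
    let batches: Vec<_> = stream.try_collect().await?;
    // The stream yields batches no larger than the file itself.
    assert!(batches.iter().all(|b| b.num_rows() <= 8));
    Ok(())
}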