Ballista: Implement map-side shuffle #543
Conversation
Codecov Report
@@ Coverage Diff @@
## master apache/arrow-datafusion#543 +/- ##
==========================================
+ Coverage 76.08% 76.17% +0.08%
==========================================
Files 156 156
Lines 27035 27174 +139
==========================================
+ Hits 20570 20699 +129
- Misses 6465 6475 +10
Continue to review full report at Codecov.
@edrevo fyi
        .into_array(input_batch.num_rows()))
    })
    .collect::<Result<Vec<_>>>()?;
hashes_buf.clear();
Maybe we could find a way to reuse this code at some point?
LGTM
Co-authored-by: QP Hou <[email protected]>
    })
    .collect::<Result<Vec<_>>>()?;
hashes_buf.clear();
hashes_buf.resize(arrays[0].len(), 0);
noob question: is there a guarantee that all RecordBatches have at least one element?
There needs to be at least one column, based on the expressions in hash repartitioning, which I think should be a prerequisite when doing hash repartitioning. I am not sure whether DataFusion checks that explicitly when constructing it.
Err(DataFusionError::NotImplemented(
    "Shuffle partitioning not implemented yet".to_owned(),
))
Some(Partitioning::Hash(exprs, n)) => {
Just thinking out loud without any data to back me up, but maybe it is worth special-casing n == 1 so we don't actually hash everything, since all of the data is going to end up in the same partition anyway.
That makes sense. I filed https://github.com/apache/arrow-datafusion/issues/626 for this. I'd like to get the basic end-to-end shuffle mechanism working before we start optimizing too much.
// we won't necessarily produce output for every possible partition, so we
// create writers on demand
let mut writers: Vec<Option<Arc<Mutex<ShuffleWriter>>>> = vec![];
Looks like Arc + Mutex is unnecessary if you use .iter_mut() (or indexing) to get mutable access when needed.
I tried changing this but ran into ownership issues. I'll go ahead and merge and perhaps someone can help me with fixing this as a follow up PR.
Which issue does this PR close?
Closes #456
Rationale for this change
Another step towards implementing full shuffle support.
What changes are included in this PR?
Are there any user-facing changes?
The result metadata from executing a query stage now has an additional column with a partition number.