Introduce JIT code generation #1849
Conversation
```rust
        )?;
        Ok(())
    },
)?;
```
This is how we create an iterative version of the Fibonacci calculation with the introduced FunctionBuilder API.
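For reference, here is a plain-Rust sketch of the same iterative Fibonacci logic — the loop the generated function encodes, not the actual FunctionBuilder calls:

```rust
// Iterative Fibonacci, written as ordinary Rust for comparison with the
// JIT-generated version. fib(0) = 0, fib(1) = 1.
fn iterative_fib(n: i64) -> i64 {
    let (mut a, mut b) = (0i64, 1i64);
    for _ in 0..n {
        let next = a + b;
        a = b;
        b = next;
    }
    a
}

fn main() {
    assert_eq!(iterative_fib(10), 55);
}
```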
datafusion/src/row/reader.rs

```rust
}

#[cfg(feature = "jit")]
fn gen_read_row(schema: &Arc<Schema>, assembler: &Assembler) -> Result<*const u8> {
```
This is the other example of how we generate code based on the schema to create a row-to-record-batch deserializer.
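The payoff of schema-driven generation can be sketched in plain Rust (toy types, not DataFusion's actual API): an interpreted reader matches on the data type for every single value, while a generated reader resolves each column's read function once, up front.

```rust
// Toy illustration of interpreted vs. schema-generated readers.
// Both functions count how many times a type-dispatch branch executes.
#[derive(Clone, Copy)]
enum DataType { Int64, Float64 }

fn read_interpreted(schema: &[DataType], rows: usize) -> usize {
    let mut branches = 0;
    for _row in 0..rows {
        for dt in schema {
            // one type-dispatch branch per value, every row
            match dt {
                DataType::Int64 => branches += 1,
                DataType::Float64 => branches += 1,
            }
        }
    }
    branches
}

fn read_generated(schema: &[DataType], rows: usize) -> usize {
    // resolve the per-column readers once, "compiling" the schema away
    let readers: Vec<fn() -> ()> = schema
        .iter()
        .map(|dt| match dt {
            DataType::Int64 => (|| ()) as fn() -> (),
            DataType::Float64 => (|| ()) as fn() -> (),
        })
        .collect();
    let branches = readers.len(); // dispatch happened only here
    for _row in 0..rows {
        for r in &readers {
            r(); // direct call, no per-value type dispatch
        }
    }
    branches
}

fn main() {
    let schema = [DataType::Int64, DataType::Float64];
    assert_eq!(read_interpreted(&schema, 1000), 2000);
    assert_eq!(read_generated(&schema, 1000), 2);
}
```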
```rust
    output.output().map_err(DataFusionError::ArrowError)
}

/// Read `data` of raw-bytes rows starting at `offsets` out to a record batch
#[cfg(feature = "jit")]
pub fn read_as_batch_jit(
```
And this is the example usage pattern: generate code and compile once, and run repeatedly.
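The pattern in miniature (illustrative names only, not the actual datafusion-jit API): pay the compilation cost once, then call the resulting function for every batch.

```rust
// Stand-in for JIT compilation: expensive setup once, cheap calls after.
fn compile(expr: &str) -> impl Fn(i64) -> i64 {
    // imagine this step lowers `expr` to machine code; here we just
    // derive a constant from it ("x + 3".len() == 5)
    let offset = expr.len() as i64;
    move |x| x + offset
}

fn main() {
    let f = compile("x + 3"); // compile once
    assert_eq!(f(10), 15);    // run repeatedly without recompiling
    assert_eq!(f(20), 25);
}
```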
This looks awesome @yjshen -- I will try and review it soon, though given its size and that I will be out of the office for the next few days it may take me some time.
Epic work @yjshen 👍 Solid foundation for WSCG (whole-stage code generation) :D
That's a great idea @yjshen; have you measured the effects on performance?
Yes, I am working on a bench. Will post the results later.
For the record batch to row conversion case, I saw a 15% performance boost while using JIT.
```rust
0 => write!(f, "nil"),
0x70 => write!(f, "bool"),
0x76 => write!(f, "i8"),
0x77 => write!(f, "i16"),
0x78 => write!(f, "i32"),
0x79 => write!(f, "i64"),
0x7b => write!(f, "f32"),
0x7c => write!(f, "f64"),
0x7e => write!(f, "small_ptr"),
0x7f => write!(f, "ptr"),
_ => write!(f, "unknown"),
```
Hmm, can we match with the defined const, e.g. `BOOL.code`, instead of the actual code value here?
It seems I cannot put `BOOL.code` in a pattern here. Introducing another const may be too complex?
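For context, the limitation is that Rust match patterns accept literals and named constants but not field accesses like `BOOL.code`. Lifting the field into a free-standing const (hypothetical names below) does work:

```rust
// A field access can't appear in pattern position, but a const can.
struct TypeCode { code: u8 }
const BOOL: TypeCode = TypeCode { code: 0x70 };
const BOOL_CODE: u8 = BOOL.code; // lift the field into a const

fn name(code: u8) -> &'static str {
    match code {
        BOOL_CODE => "bool", // `BOOL.code` here would be a compile error
        _ => "unknown",
    }
}

fn main() {
    assert_eq!(name(0x70), "bool");
    assert_eq!(name(0x00), "unknown");
}
```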
```rust
}

#[cfg(feature = "jit")]
pub fn bench_write_batch_jit_dummy(schema: Arc<Schema>) -> Result<()> {
```
Where is this `bench_write_batch_jit_dummy` function used? If it isn't used, should it be removed?
It's used in the JIT bench now. I'll check in the benchmark in a later commit.
@viirya @alamb @yordan-pavlov do you want to take a final look before the merge?
Looks good to me, thanks @yjshen
I just had a chance to review this module. Very cool stuff @yjshen 🏅 .. I would be (very) interested in helping push the JIT feature along.
I think using this JIT and the row format to speed up sorting / merging would be a very interesting project and quite relevant to IOx (and thus I could justify spending non trivial time on it). Perhaps I can take a swag at creating some benchmarks or something to kick off the process?
I don't want to start working on anything if you are already doing so
```rust
let code_fn = unsafe {
    std::mem::transmute::<_, fn(&RowReader, &mut MutableRecordBatch)>(code_ptr)
};
```
over time it would be good to try and encapsulate the unsafe code into a smaller number of places (e.g. perhaps have an interface that creates a row comparator)
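One possible shape for that encapsulation, sketched with an ordinary Rust function standing in for the JIT'd code pointer (the wrapper names are hypothetical): the transmute lives in a single unsafe constructor, and callers only see a typed `call` method.

```rust
// Confine the unsafe cast to one place behind a typed wrapper.
struct CompiledFn {
    f: fn(i64, i64) -> i64,
}

impl CompiledFn {
    /// Safety: `ptr` must point to code with the `fn(i64, i64) -> i64` ABI.
    unsafe fn from_ptr(ptr: *const u8) -> Self {
        CompiledFn {
            f: unsafe { std::mem::transmute::<*const u8, fn(i64, i64) -> i64>(ptr) },
        }
    }

    fn call(&self, a: i64, b: i64) -> i64 {
        (self.f)(a, b)
    }
}

// Stands in for a pointer returned by the JIT compiler.
fn add(a: i64, b: i64) -> i64 { a + b }

fn main() {
    let ptr = add as *const u8;
    let compiled = unsafe { CompiledFn::from_ptr(ptr) };
    assert_eq!(compiled.call(2, 3), 5);
}
```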
Yes, please go ahead. I have just done some code research on DuckDB's radix sort based on sort keys in raw-bytes format. I think it would be great to implement and benchmark the performance here. 1, 2, and 3 are worth checking if you want to try out the DuckDB way we've discussed in #1708 (comment)
Sounds good -- I think I need to spend some more time studying the new JIT code and figuring out how to structure these changes. At least for merging I need to think about how best to use the row format. Will keep you updated on my thinking.
BTW I have not forgotten (or reduced my interest in) working on JIT related code. However, I have not had any time to devote to it yet -- most of my time has been spent reviewing code and moving the arrow release along. I'll try and get some time in this next week.
No need to hurry. I've turned my attention to aggregate for now and will do some benchmarks to evaluate the impact of the row format on performance under different key cardinalities.
Which issue does this PR close?
Closes #1850.
Rationale for this change
With JIT codegen, we can generate query-specific code to reduce the branching overhead of the generalized, interpreted execution mode. Furthermore, we can reduce the memory footprint during execution by chaining multiple Arrow compute kernels together and reusing the intermediate vectors.
For the row format recently introduced in #1782, much of the branching can be eliminated once the row <-> record batch conversion code is generated from schema information.
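A toy illustration of the kernel-chaining point (plain Rust, nothing to do with the actual Arrow kernels): computing `(a + b) * 2` kernel-by-kernel materializes an intermediate vector, while a fused loop, which generated code can emit, does not.

```rust
// Separate "kernels": each pass allocates its own output vector.
fn add(a: &[i64], b: &[i64]) -> Vec<i64> {
    a.iter().zip(b).map(|(x, y)| x + y).collect()
}

fn mul2(v: &[i64]) -> Vec<i64> {
    v.iter().map(|x| x * 2).collect()
}

// Fused: one loop, no intermediate allocation.
fn fused(a: &[i64], b: &[i64]) -> Vec<i64> {
    a.iter().zip(b).map(|(x, y)| (x + y) * 2).collect()
}

fn main() {
    let (a, b) = (vec![1, 2, 3], vec![10, 20, 30]);
    let unfused = mul2(&add(&a, &b)); // allocates the intermediate a+b
    assert_eq!(unfused, fused(&a, &b));
    assert_eq!(fused(&a, &b), vec![22, 44, 66]);
}
```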
What changes are included in this PR?
- `datafusion-jit` module and its feature gate `jit` (off-by-default)

Are there any user-facing changes?
No.