Add ColumnStatistics::Sum
#14074
Conversation
Thank you @gatesn -- I think this is a nice addition.
It looks like the cargo fmt test is failing.
Ideally we would add unit test coverage for Precision::multiply, Precision::sub, and Precision::cast_to before we merge (see the sketch below).
Thanks again -- excited to see this working.
FYI @suremarc @berkaysynnada / @ozankabak as this changes statistics and I think you are already working on things related to that.
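For illustration (not part of the PR), a sketch of the kind of unit tests being requested, using the usize specialization of Precision; the exact/inexact/absent propagation rules assumed here mirror the existing arithmetic helpers, and the ScalarValue variants would be exercised the same way:

```rust
use datafusion_common::stats::Precision;

#[test]
fn precision_arithmetic_propagates_exactness() {
    // Two exact inputs produce an exact result.
    assert_eq!(
        Precision::Exact(3_usize).multiply(&Precision::Exact(5_usize)),
        Precision::Exact(15_usize)
    );
    // An inexact input degrades the result to inexact.
    assert_eq!(
        Precision::Exact(7_usize).sub(&Precision::Inexact(2_usize)),
        Precision::Inexact(5_usize)
    );
    // A missing input makes the result absent.
    assert_eq!(
        Precision::Exact(3_usize).multiply(&Precision::Absent),
        Precision::Absent
    );
}
```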
@@ -436,6 +492,8 @@ pub struct ColumnStatistics {
     pub max_value: Precision<ScalarValue>,
     /// Minimum value of column
     pub min_value: Precision<ScalarValue>,
+    /// Sum value of a column
+    pub sum_value: Precision<ScalarValue>,
As I think we mentioned in #13736, my only real concern with this addition is that it will make ColumnStatistics even bigger (each ScalarValue is quite large already, and ColumnStatistics values are copied a bunch).
However, I think the "right" fix for that is to move to using a different statistics representation (e.g. Arc<ColumnStatistics>), so I don't see this as a blocker.
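To make the size concern concrete, a quick check one could run; the exact numbers depend on the DataFusion version and target platform, so none are asserted here:

```rust
use datafusion_common::{ColumnStatistics, ScalarValue};

fn main() {
    // Each statistics field wraps a ScalarValue in a Precision enum, so every
    // new field grows ColumnStatistics by at least the size of a ScalarValue,
    // and ColumnStatistics values are copied around during planning.
    println!("ScalarValue:      {} bytes", std::mem::size_of::<ScalarValue>());
    println!("ColumnStatistics: {} bytes", std::mem::size_of::<ColumnStatistics>());
}
```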
            (_, _) => Precision::Absent,
        }
    }

    /// Casts the value to the given data type, propagating exactness information.
    pub fn cast_to(&self, data_type: &DataType) -> Result<Precision<ScalarValue>> {
@alamb one question I have is whether this should return a Result, or whether we should assume that a failed cast implies overflow and therefore return Precision::Absent?
The caller (currently in the cross join) unwraps to Absent; I just didn't know whether to internalize that here.
Edit: I decided it was better to propagate the error and allow the caller to decide. It was more useful in a couple of places.
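A minimal sketch of the propagate-the-error approach described above, written as a free function rather than the actual method on Precision; it assumes ScalarValue::cast_to behaves like Arrow's cast and returns an error on failure:

```rust
use arrow::datatypes::DataType;
use datafusion_common::{stats::Precision, Result, ScalarValue};

/// Cast the wrapped scalar to `data_type`, keeping exactness information and
/// letting cast failures surface as errors for the caller to handle.
fn cast_precision(
    value: &Precision<ScalarValue>,
    data_type: &DataType,
) -> Result<Precision<ScalarValue>> {
    match value {
        Precision::Exact(v) => v.cast_to(data_type).map(Precision::Exact),
        Precision::Inexact(v) => v.cast_to(data_type).map(Precision::Inexact),
        Precision::Absent => Ok(Precision::Absent),
    }
}
```

A caller that prefers the old "failed cast means unknown" behavior can still collapse errors with something like `cast_precision(&stats.sum_value, &data_type).unwrap_or(Precision::Absent)`.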
Added some tests, and hopefully that appeases the linter!
We've started to refactor. The design is complete, and the implementation is in progress. I've taken a look at this and have some questions. For example, are we planning to add many types of functions to statistics, or is there a defined list of statistics that can be inferred from the sources or have meaningful applications in optimizer rules? If we agree that these kinds of extensions to column statistics are indeed useful and obtainable, then we can proceed with merging this. We would also ensure it is included in the new setup. FYI @ozankabak
Thanks! Is there anywhere I can follow along, @berkaysynnada? (I am particularly interested in what the final API / representation looks like.)
I've reached out to you via Discord.
For anyone else who is interested, the draft PR in the synnada fork is here:
Looks like I got hit by some new ColumnStatistics tests on main. Should be fixed now 🤞
@berkaysynnada can you expand on the rationale for the V2 stats? I understand that it's more expressive, but I can't see in the PR or Notion how those distributions might actually be used. Is this for join planning? My understanding is that I would no longer define a "min" or a "max" for a column, but there doesn't seem to be a place to define a null count or a sum?
You can still define min or max. We are not replacing Statistics with Statistics_v2; it is actually replacing the Precision and Interval objects. We plan to rename the API of the execution plan from … and ….
What we are trying to address is that indeterminate quantities are currently handled in a target-dependent way. For example, if there is a possibility of indeterminate statistics, it is stored as the mean value when the caller requires an estimate; however, if bounds are required, that indeterminism is stored as an interval. Our goal is to consolidate all forms of indeterminism and structure them on a strong mathematical foundation, so that every user can utilize the statistics in their intended way. We aim to preserve all possible helpful quantities wherever feasible. We are also constructing a robust evaluation and back-propagation mechanism (similar to interval arithmetic, evaluate_bounds, and propagate_constraints). With this mechanism, any kind of expression, whether projection-based (evaluation only) or filter-based (evaluation followed by propagation), will automatically resolve using the new statistics.
@berkaysynnada can we merge this PR now? Or shall we wait for the statistics revamp that is underway?
No need to wait for the underway PR, as it does not depend on which statistics an operator has; it is about how those statistics are stored, computed, and used. Still, I wonder whether we're planning to support a wide variety of statistical quantities, like sum, or whether there is a specific set of statistics that can be inferred from the sources or has practical applications in optimizer rules. If we agree that extending column statistics in this way is both useful and feasible for any user, we can move forward with merging this. We'll also make sure it's integrated into the new setup.
I can't think of any other statistical quantities that would immediately help operators, so from our perspective it's only "sum" (we may also use sum to mean true-count for booleans). If this lands I can follow up with a PR to start using it in the SUM and AVG operators. I guess the more contentious API change was adding …. @berkaysynnada, is this something that would also remain compatible with the V2 API? I believe it is.
What I know is that the whole statistics concept was created and is used to help with optimization decisions, informing the optimizer rules about the data that reaches any execution plan node. What I couldn't understand is how "sum" information is helpful in any kind of optimization process.
Please correct me if I'm reading your intention wrongly: within this and #13736, do you propose to add this "sum" info in order to obtain a result from it as if it were normal batch data? Why can you not just use an AggregateExec with a sum accumulator? As I said, the V2 API says nothing about which kinds of statistics will be preserved in the Statistics{} struct; it is more about consolidating the Precision and Interval objects to represent and compute any kind of statistical quantity.
Statistics can be helpful for optimizer rules, but they also allow short-circuiting computations. For example, min/max can be used to avoid evaluating a filter over a record batch and quickly throw away the whole thing; sum statistics help with short-circuiting aggregation functions in the same way (see the sketch below).
As for why we can't just use an AggregateExec with a sum accumulator: because our file format already stores a pre-computed sum.
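As a rough sketch of that short-circuiting idea (the helper and its caller are hypothetical, not part of this PR): an aggregation over a column whose exact sum is already known from statistics can return that value without scanning any batches.

```rust
use datafusion_common::{stats::Precision, ColumnStatistics, ScalarValue};

/// Answer a SUM aggregation from statistics alone when the scan reports an
/// exact pre-computed sum; otherwise fall back to the normal accumulator path.
fn try_answer_sum(stats: &ColumnStatistics) -> Option<ScalarValue> {
    match &stats.sum_value {
        Precision::Exact(sum) => Some(sum.clone()),
        // Inexact or absent sums cannot short-circuit the aggregation.
        _ => None,
    }
}
```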
Thanks for the explanation. I see the reason now, and it makes sense when you have such pre-computed values.
I merged this branch up from main and triggered the CI again. If there are no additional concerns I hope to merge this in a day or two.
Any other blockers, @alamb? Thanks for hustling this through.
LGTM
I am somewhat overwhelmed with …, and I haven't had a chance to fully think through the downstream implications of this PR, or had the bandwidth yet to pull the trigger and potentially add some other issues to the 45 release. So no blockers from me, I just hadn't gotten up the guts to merge it yet.
WFT, let's do it and keep things moving.
And I broke the build 🤦. Fix PR:
Which issue does this PR close?
This PR adds a sum statistic to DataFusion.
Future use will include optimizing aggregation functions (sum, avg, count), see https://github.com/apache/datafusion/pull/13736/files for examples.
Are there any user-facing changes?
The ColumnStatistics struct has an extra public sum_value field.
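A sketch of what populating the new field could look like; sum_value comes from this PR, while the other fields shown (null_count, min_value, max_value, distinct_count) are assumed to be the pre-existing ColumnStatistics fields, and callers that do not know the sum can leave it as Precision::Absent:

```rust
use datafusion_common::{stats::Precision, ColumnStatistics, ScalarValue};

fn main() {
    // Statistics for an Int64 column containing the values 1..=9.
    let stats = ColumnStatistics {
        null_count: Precision::Exact(0),
        min_value: Precision::Exact(ScalarValue::Int64(Some(1))),
        max_value: Precision::Exact(ScalarValue::Int64(Some(9))),
        // The new field added by this PR.
        sum_value: Precision::Exact(ScalarValue::Int64(Some(45))),
        distinct_count: Precision::Absent,
    };
    assert!(matches!(&stats.sum_value, Precision::Exact(_)));
}
```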