opt: predict future statistics based on historical stats #79872
Labels
A-sql-table-stats
Table statistics (and their automatic refresh).
C-enhancement
Solution expected to add code/behavior + preserve backward-compat (pg compat issues are exception)
T-sql-queries
SQL Queries Team
Comments
rytaft added the C-enhancement label on Apr 13, 2022.
michae2 added a commit to michae2/cockroach that referenced this issue on Jun 6, 2022:
The predicted histograms in statistics forecasts will often have buckets with NumEq = 0, and some predicted histograms will have _all_ buckets with NumEq = 0. This wasn't possible before forecasting, because the histograms produced by `EquiDepthHistogram` never have any buckets with NumEq = 0. If `adjustCounts` is called on such a histogram, `rowCountEq` and `distinctCountEq` will be zero. `adjustCounts` should still be able to fix such a histogram to have sum(NumRange) = rowCountTotal and sum(DistinctRange) = distinctCountTotal. This patch teaches `adjustCounts` to handle these histograms.

(Similarly, predicted histograms could have all buckets with NumRange = 0, but this is already possible for histograms produced by `EquiDepthHistogram`, so `adjustCounts` already handles these.)

Also, add a few more comments to `adjustCounts`.

Assists: cockroachdb#79872

Release note: None
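For illustration, here is a minimal Go sketch of the kind of rescaling described above: when every bucket has NumEq = 0, only the range counts can be scaled to hit the target totals. The `bucket` struct and `rescaleRanges` helper are hypothetical simplifications, not the actual `adjustCounts` code.

```go
package main

import "fmt"

// bucket mirrors the shape of a histogram bucket for illustration only; the
// real type in sql/stats has more fields (e.g. an upper-bound datum).
type bucket struct {
	NumEq, NumRange, DistinctRange float64
}

// rescaleRanges shows the idea behind the fix: with all NumEq == 0, the
// equality counts contribute nothing, so the range counts alone are scaled
// proportionally to match the target totals.
func rescaleRanges(buckets []bucket, rowCountTotal, distinctCountTotal float64) {
	var rowSum, distinctSum float64
	for _, b := range buckets {
		rowSum += b.NumRange
		distinctSum += b.DistinctRange
	}
	if rowSum == 0 || distinctSum == 0 {
		return // degenerate histogram; the real code treats this separately
	}
	for i := range buckets {
		buckets[i].NumRange *= rowCountTotal / rowSum
		buckets[i].DistinctRange *= distinctCountTotal / distinctSum
	}
}

func main() {
	h := []bucket{{NumRange: 10, DistinctRange: 5}, {NumRange: 30, DistinctRange: 15}}
	rescaleRanges(h, 400, 100) // target: 400 rows, 100 distinct values
	fmt.Println(h)             // [{0 100 25} {0 300 75}]
}
```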
craig bot pushed a commit that referenced this issue on Jun 7, 2022:
82474: sql/stats: support rowCountEq = 0 in histogram.adjustCounts r=rytaft,mgartner,msirek a=michae2

The predicted histograms in statistics forecasts will often have buckets with NumEq = 0, and some predicted histograms will have _all_ buckets with NumEq = 0. This wasn't possible before forecasting, because the histograms produced by `EquiDepthHistogram` never have any buckets with NumEq = 0. If `adjustCounts` is called on such a histogram, `rowCountEq` and `distinctCountEq` will be zero. `adjustCounts` should still be able to fix such a histogram to have sum(NumRange) = rowCountTotal and sum(DistinctRange) = distinctCountTotal. This patch teaches `adjustCounts` to handle these histograms.

(Similarly, predicted histograms could have all buckets with NumRange = 0, but this is already possible for histograms produced by `EquiDepthHistogram`, so `adjustCounts` already handles these.)

Also, add a few more comments to `adjustCounts`.

Assists: #79872

Release note: None

82501: sql/storageparam: break builtins dep on tabledesc r=ajwerner a=ajwerner

The TableStorageParamObserver meant that paramparse, and transitively builtins, depended on tabledesc. This was unfortunate because seqexpr is depended on by builtins and we want to use seqexpr in tabledesc, so we have to make sure that builtins does not depend on tabledesc. This commit achieves that goal by splitting paramparse into three new packages:

* sql/storageparam: defines the Setter interface, contains functions to drive the setting and resetting of params, and has some shared functionality.
* sql/storageparam/indexstorageparam: implementation of storageparam.Setter for `descpb.IndexDescriptor`.
* sql/storageparam/tablestorageparam: implementation of storageparam.Setter for `*tabledesc.Mutable`.

This allows the `builtins` package to use the `indexstorageparam` package cleanly without depending on `*tabledesc.Mutable`. It also recognizes that lots of utility methods in `paramparse` aren't about `storageparam`s.

Release note: None

Co-authored-by: Michael Erickson <[email protected]>
Co-authored-by: Andrew Werner <[email protected]>
michae2 added a commit to michae2/cockroach that referenced this issue on Jun 21, 2022:
To predict histograms in statistics forecasts, we will use linear regression over quantile functions. (Quantile functions are another representation of histogram data, in a form more amenable to statistical manipulation.)

The conversion of histograms to quantile functions will require conversion of histogram bounds (datums) to quantile values (float64s). And likewise, the inverse conversion from quantile functions back to histograms will require the inverse conversion of float64 quantile values back to datums. These conversions are a little different from our usual SQL conversions in `eval.PerformCast`, so we add them to a new quantile file in the `sql/stats` module.

This code was originally part of cockroachdb#77070 but has been pulled out to simplify that PR. A few changes have been made:

- `histogramValue` has been renamed to `FromQuantileValue`.
- Support for `DECIMAL`, `TIME`, `TIMETZ`, and `INTERVAL` has been dropped. Clamping these types in `FromQuantileValue` was too complex for the first iteration of statistics forecasting. We expect the overwhelming majority of ascending keys to use `INT` or `TIMESTAMP` types.
- Bugs in `FLOAT4`, `TIMESTAMP`, and `TIMESTAMPTZ` conversions have been fixed.
- We're now clamping timestamps to slightly tighter bounds to avoid the problems with infinite timestamps (see cockroachdb#41564).

Assists: cockroachdb#79872

Release note: None
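As a rough sketch of what such a conversion might look like for `INT`, the Go snippet below maps a bound to a float64 and clamps the reverse mapping so regression output can never overflow the type. The helper names and signatures are hypothetical; the actual quantile conversions live in `sql/stats` (e.g. `FromQuantileValue`) and operate on typed datums.

```go
package main

import (
	"fmt"
	"math"
)

// intToQuantileValue maps an INT bound to a float64 for use in a quantile
// function. Very large integers lose precision, which is acceptable for
// statistics purposes.
func intToQuantileValue(v int64) float64 {
	return float64(v)
}

// intFromQuantileValue maps a float64 back to an INT bound, clamping to the
// representable range of int64.
func intFromQuantileValue(q float64) int64 {
	q = math.Round(q)
	if q <= math.MinInt64 {
		return math.MinInt64
	}
	if q >= math.MaxInt64 {
		return math.MaxInt64
	}
	return int64(q)
}

func main() {
	fmt.Println(intFromQuantileValue(intToQuantileValue(42) * 1.5)) // 63
	fmt.Println(intFromQuantileValue(1e30))                         // 9223372036854775807 (clamped)
}
```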
michae2 added a commit to michae2/cockroach that referenced this issue on Jul 6, 2022:
To predict histograms in statistics forecasts, we will use linear regression over quantile functions. (Quantile functions are another representation of histogram data, in a form more amenable to statistical manipulation.) This commit defines quantile functions and adds methods to convert between histograms and quantile functions.

This code was originally part of cockroachdb#77070 but has been pulled out to simplify that PR. A few changes have been made:

- Common code has been factored into closures.
- More checks have been added for positive values.
- In `makeQuantile` we now trim leading empty buckets as well as trailing empty buckets.
- The logic in `quantile.toHistogram` to steal from `NumRange` if `NumEq` is zero now checks that `NumRange` will still be >= 1.
- More tests have been added.

Assists: cockroachdb#79872

Release note: None
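The following is a simplified, hypothetical sketch of the quantile-function idea: cumulative row fraction mapped to the histogram's upper bounds, with numeric bounds only. The real `makeQuantile` in `sql/stats` works on typed datum bounds and handles the edge cases listed above; every name below is illustrative only.

```go
package main

import "fmt"

// quantilePoint is one point of a piecewise-linear quantile function:
// cumulative row fraction p in [0, 1] mapped to column value v.
type quantilePoint struct {
	p, v float64
}

// histBucket is a simplified histogram bucket with a numeric upper bound.
type histBucket struct {
	NumEq, NumRange float64
	UpperBound      float64
}

// makeQuantileSketch converts histogram buckets into quantile points by
// accumulating row counts and normalizing by the total row count.
func makeQuantileSketch(buckets []histBucket, rowCount float64) []quantilePoint {
	pts := make([]quantilePoint, 0, len(buckets))
	var seen float64
	for _, b := range buckets {
		seen += b.NumRange + b.NumEq
		pts = append(pts, quantilePoint{p: seen / rowCount, v: b.UpperBound})
	}
	return pts
}

func main() {
	h := []histBucket{
		{NumEq: 10, NumRange: 0, UpperBound: 100},
		{NumEq: 10, NumRange: 30, UpperBound: 200},
		{NumEq: 10, NumRange: 40, UpperBound: 400},
	}
	fmt.Println(makeQuantileSketch(h, 100)) // [{0.1 100} {0.5 200} {1 400}]
}
```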
michae2 added a commit to michae2/cockroach that referenced this issue on Aug 12, 2022:
Add function to forecast table statistics based on observed statistics. These forecasts are based on linear regression models over time. For each set of columns with statistics, we construct a linear regression model over time for each statistic (row count, null count, distinct count, average row size, and histogram). If all models are good fits then we produce a statistics forecast for the set of columns.

Assists: cockroachdb#79872

Release note: None
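A minimal Go sketch of the overall idea follows: fit an ordinary least-squares line through (collection time, statistic value) observations and evaluate it at a future time, but only trust the result when the fit is very good. The function below is hypothetical and much simpler than the real forecasting code, which works per column set and per statistic and forecasts histograms via quantile functions.

```go
package main

import (
	"fmt"
	"time"
)

// forecastAtTime fits a least-squares line through the observations and
// evaluates it at the requested time, returning false when the fit explains
// too little of the variance (low R-squared) to be trustworthy.
func forecastAtTime(times []time.Time, values []float64, at time.Time, minR2 float64) (float64, bool) {
	if len(times) < 2 || len(times) != len(values) {
		return 0, false
	}
	// Regress against hours elapsed since the first observation.
	xs := make([]float64, len(times))
	for i, t := range times {
		xs[i] = t.Sub(times[0]).Hours()
	}
	n := float64(len(xs))
	var sumX, sumY, sumXY, sumXX float64
	for i := range xs {
		sumX += xs[i]
		sumY += values[i]
		sumXY += xs[i] * values[i]
		sumXX += xs[i] * xs[i]
	}
	denom := n*sumXX - sumX*sumX
	if denom == 0 {
		return 0, false // all observations at the same time
	}
	slope := (n*sumXY - sumX*sumY) / denom
	intercept := (sumY - slope*sumX) / n

	// R-squared: how much of the variance the fitted line explains.
	meanY := sumY / n
	var ssRes, ssTot float64
	for i := range xs {
		resid := values[i] - (intercept + slope*xs[i])
		ssRes += resid * resid
		ssTot += (values[i] - meanY) * (values[i] - meanY)
	}
	if ssTot > 0 && 1-ssRes/ssTot < minR2 {
		return 0, false // poor fit: do not forecast this statistic
	}
	return intercept + slope*at.Sub(times[0]).Hours(), true
}

func main() {
	base := time.Date(2022, 8, 1, 0, 0, 0, 0, time.UTC)
	times := []time.Time{base, base.Add(24 * time.Hour), base.Add(48 * time.Hour)}
	rowCounts := []float64{1000, 1100, 1200} // growing ~100 rows/day
	fc, ok := forecastAtTime(times, rowCounts, base.Add(72*time.Hour), 0.95)
	fmt.Println(fc, ok) // ~1300 true
}
```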
michae2 added a commit to michae2/cockroach that referenced this issue on Aug 12, 2022:
Add a new WITH FORECAST option to SHOW STATISTICS which calculates and displays forecasted statistics along with the existing table statistics. Also, forbid injecting forecasted stats.

Assists: cockroachdb#79872

Release note (sql change): Add a new WITH FORECAST option to SHOW STATISTICS which calculates and displays forecasted statistics along with the existing table statistics.
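For example, assuming a table named `t`, running `SHOW STATISTICS FOR TABLE t WITH FORECAST` would display the forecasted statistics rows alongside the collected ones.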
craig bot pushed a commit that referenced this issue on Aug 13, 2022:
77070: sql: add SHOW STATISTICS WITH FORECAST r=michae2 a=michae2

**sql/stats: replace eval.Context with tree.CompareContext**

Most uses of eval.Context in the sql/stats package can actually be tree.CompareContext instead, so make the replacement.

Release note: None

**sql/stats: bump histogram version to 2**

In 22.2 as of 963deb8 we support multiple histograms for trigram-indexed strings. Let's bump the histogram version for this change, as we may want to know whether multiple histograms are possible for a given row in system.table_statistics. (I suspect that during upgrades to 22.2 the 22.1 statistics builder will choke on these statistics, so maybe we should also backport a version check to 22.1.) Also update avgRefreshTime to work correctly in multiple-histogram cases.

Release note: None

**sql/stats: teach histogram.adjustCounts to remove empty buckets**

Sometimes when adjusting counts down we end up with empty buckets in the histogram. They don't hurt anything, but they take up some memory (and some brainpower when examining test results). So, teach adjustCounts to remove them.

Release note: None

**sql/stats: always use non-nil buckets for empty-table histograms**

After 82b5926 I've been using the convention that nil histogram buckets = no histogram, and non-nil zero-length histogram buckets = histogram on an empty table. This is mostly useful for testing but is also important for forecasting histograms. Fix a spot that wasn't following this convention. Also, add some empty-table testcases and some other testcases for histogram.adjustCounts.

Release note: None

**sql/stats: make ordering of SHOW STATISTICS more deterministic**

Make two changes to produce more deterministic SHOW STATISTICS output:

1. Sort column IDs when creating statistics. We already use `FastIntSet` in both create_stats.go and statistics_builder.go to ignore ordering when gathering statistics by column set, but the column ordering from CREATE STATISTICS leaks into `system.table_statistics` and can affect SQL on that table, such as SHOW STATISTICS and various internal DELETE statements.
2. Order by column IDs and statistic IDs when reading from `system.table_statistics` in both SHOW STATISTICS and the stats cache. This will ensure SHOW STATISTICS always produces the same output, and shows us rows in the same order as the stats cache sees them (well, reverse order of the stats cache).

Release note (sql change): Make SHOW STATISTICS output more deterministic.

**sql/stats: forecast table statistics**

Add function to forecast table statistics based on observed statistics. These forecasts are based on linear regression models over time. For each set of columns with statistics, we construct a linear regression model over time for each statistic (row count, null count, distinct count, average row size, and histogram). If all models are good fits then we produce a statistics forecast for the set of columns.

Assists: #79872

Release note: None

**sql: add SHOW STATISTICS WITH FORECAST**

Add a new WITH FORECAST option to SHOW STATISTICS which calculates and displays forecasted statistics along with the existing table statistics. Also, forbid injecting forecasted stats.

Assists: #79872

Release note (sql change): Add a new WITH FORECAST option to SHOW STATISTICS which calculates and displays forecasted statistics along with the existing table statistics.

85673: storage: Incrementally calculate range key stats in CheckSSTConflicts r=erikgrinaker a=itsbilal

This change updates CheckSSTConflicts to incrementally calculate stats in the presence of range keys in the SST being ingested. This avoids expensive stats recomputation after AddSSTable, as previously we were marking stats as estimates if an SST with range keys was added.

Fixes #83405.

Release note: None.

85794: sql: add not visible index to optimizer r=wenyihu6 a=wenyihu6

This commit adds the logic of the invisible index feature to the optimizer. After this commit has been merged, the invisible index feature should be fully functional with `CREATE INDEX` and `CREATE TABLE`.

Assists: #72576
See also: #85239

Release note (sql change): creating a not visible index using `CREATE TABLE …(INDEX … NOT VISIBLE)` or `CREATE INDEX … NOT VISIBLE` is now supported.

85974: cluster-ui: update active execution and sessions details r=xinhaoz a=xinhaoz

Fixes #85968
Closes #85912
Closes #85973

This commit adds new details to the active execution details pages: full scan (both stmt and txn), priority (txn only), and last retry reason (txn only). New information is also added to the sessions table and details pages: transaction count, active duration, recent txn fingerprint ids (cache size comes from a cluster setting).

This commit also fixes a bug in the sessions overview UI where the duration for closed sessions was incorrectly calculated based on the current time instead of the session end time.

Release note (ui change): the following fields have been added to the active stmt/txn details pages:

- Full Scan: indicates if the execution contains a full scan
- Last Retry Reason (txn page only): the last recorded reason the txn was retried
- Priority (txn page only): the txn priority

The following fields have been added to the sessions table and page:

- Transaction count: the number of txns executed by the session
- Session active duration: the time a session spent executing txns
- Session most recent fingerprint ids

Retry reason populated example (screenshot): https://user-images.githubusercontent.com/20136951/184201435-6d585d9b-13a9-4e87-86dd-718f03f9e92a.png
Demo recording: https://www.loom.com/share/e396d5aa7dda4d5995227154c6b5076b

Co-authored-by: Michael Erickson <[email protected]>
Co-authored-by: Bilal Akhtar <[email protected]>
Co-authored-by: wenyihu3 <[email protected]>
Co-authored-by: Xin Hao Zhang <[email protected]>
michae2 added a commit to michae2/cockroach that referenced this issue on Aug 15, 2022:
When forecasting table statistics, we don't need a full *eval.Context. We can simply use a nil *eval.Context as a tree.CompareContext. This means we don't have to plumb an eval.Context into the stats cache.

Assists: cockroachdb#79872

Release note: None
michae2 added a commit to michae2/cockroach that referenced this issue on Aug 15, 2022:
As of this commit, we now try to generate statistics forecasts for every column of every table. This happens whenever statistics are loaded into or refreshed in the stats cache. We use only the forecasts that fit the historical collected statistics very well, meaning we have high confidence in their accuracy.

Fixes: cockroachdb#79872

Release note (performance improvement): Enable table statistics forecasts, which predict future statistics based on historical collected statistics. Forecasts help the optimizer produce better plans for queries that read data modified after the latest statistics collection. We use only the forecasts that fit the historical collected statistics very well, meaning we have high confidence in their accuracy. Forecasts can be viewed using `SHOW STATISTICS FOR TABLE ... WITH FORECAST`.
michae2 added a commit to michae2/cockroach that referenced this issue on Aug 15, 2022:
When using statistics forecasts, add the forecast time (which could be in the future) to EXPLAIN output. This both indicates that forecasts are in use, and gives us an idea of how up-to-date / ahead they are.

Assists: cockroachdb#79872

Release note: None
michae2 added a commit to michae2/cockroach that referenced this issue on Aug 15, 2022:
Add a few simple testcases for usage of statistics forecasts by the optimizer.

Assists: cockroachdb#79872

Release note: None
craig bot pushed a commit that referenced this issue on Aug 16, 2022:
86078: sql/stats: generate statistics forecasts r=rytaft,yuzefovich a=michae2

**sql/stats: use nil eval.Context as CompareContext when forecasting**

When forecasting table statistics, we don't need a full *eval.Context. We can simply use a nil *eval.Context as a tree.CompareContext. This means we don't have to plumb an eval.Context into the stats cache.

Assists: #79872

Release note: None

**sql/stats: generate statistics forecasts in the stats cache**

As of this commit, we now try to generate statistics forecasts for every column of every table. This happens whenever statistics are loaded into or refreshed in the stats cache. We use only the forecasts that fit the historical collected statistics very well, meaning we have high confidence in their accuracy.

Fixes: #79872

Release note (performance improvement): Enable table statistics forecasts, which predict future statistics based on historical collected statistics. Forecasts help the optimizer produce better plans for queries that read data modified after the latest statistics collection. We use only the forecasts that fit the historical collected statistics very well, meaning we have high confidence in their accuracy. Forecasts can be viewed using `SHOW STATISTICS FOR TABLE ... WITH FORECAST`.

**sql: show forecasted stats time in EXPLAIN**

When using statistics forecasts, add the forecast time (which could be in the future) to EXPLAIN output. This both indicates that forecasts are in use, and gives us an idea of how up-to-date / ahead they are.

Assists: #79872

Release note: None

**sql/opt: add tests for statistics forecasts**

Add a few simple testcases for usage of statistics forecasts by the optimizer.

Assists: #79872

Release note: None

Release justification: Enable feature before we get too far into stability period.

86137: sql: use DelRange with tombstone in `force_delete_table_data` r=ajwerner a=ajwerner

Fixes #85754

Release justification: minor change needed to adopt MVCC bulk ops fully

Release note: None

86160: colexecerror: do not annotate the context canceled error r=yuzefovich a=yuzefovich

This commit makes it so that the context canceled error doesn't get annotated with an assertion failure when it doesn't have a valid PG code. This makes sure that the sentry issues don't get filed for the context canceled errors - they are expected to occur.

Fixes: #82947

Release note: None

Release justification: bug fix.

86164: sql: deflake TestRoleOptionsMigration15000User r=ajwerner a=RichardJCai

Previously it was flaky because we always assumed the first user created had ID 100; however, this is not the case due to transaction failures.

Release note: None

Release justification: test only

86173: opt: fix error due to unsupported comparison for partitioned secondary index r=rytaft a=rytaft

This commit fixes a bug where we were attempting to find the locality of the partitions in a secondary index, but we passed the incorrect index ordinal to the function `IndexPartitionLocality`.

Fixes #86168

Release justification: Category 3: Fixes for high-priority or high-severity bugs in existing functionality

Release note (bug fix): Fixed a bug that existed on v22.1.0-v22.1.5, where attempting to select data from a table that had different partitioning columns used for the primary and secondary indexes could cause an error. This occurred if the primary index had zone configurations applied to the index partitions with different regions for different partitions, and the secondary index had a different column type than the primary index for its partitioning column(s).

Co-authored-by: Michael Erickson <[email protected]>
Co-authored-by: Andrew Werner <[email protected]>
Co-authored-by: Yahor Yuzefovich <[email protected]>
Co-authored-by: richardjcai <[email protected]>
Co-authored-by: Rebecca Taft <[email protected]>
Is your feature request related to a problem? Please describe.
Currently, we only support collecting table statistics on the entire table, and statistics are only automatically refreshed when ~20% of rows have changed. This is problematic for very large tables where only a portion of the table is regularly updated or queried. As stats become more stale, there is greater likelihood that the optimizer will not choose optimal query plans.
For example, if only the rows most recently inserted into a table are regularly queried, stats on these rows will often be stale. This also gets worse over time: As the table increases in size, the 20% trigger for automatic refreshes will happen less and less frequently, and therefore stats on the recent rows will become more and more stale.
Describe the solution you'd like
We could take advantage of the fact that we store 4-5 historical stats for every column. We could use these historical stats to build a simple regression model (or a more complex model) to predict how the stats have changed since they were last collected. Predictions are only possible for column types where a rate of change can be determined between two values, such as DATE, TIME[TZ], TIMESTAMP[TZ], INT[2|4|8], FLOAT[4|8], and DECIMAL. This prediction would not be stored on disk, but instead would be calculated on the fly, either inside the stats cache or the statisticsBuilder.
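For instance, if the last few collections observed row counts of 1.0M, 1.1M, and 1.2M on consecutive days, a least-squares line through those (collection time, row count) points would predict roughly 1.3M rows one day after the latest collection, and the same idea applied to the maximum value of an ascending TIMESTAMP column would extend the histogram's upper bound forward accordingly.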
Describe alternatives you've considered
There are a number of alternatives described in #75625 (this proposal is one of them). That RFC proposes a more comprehensive solution, but it is heavier-weight.
Additional context
This solution is already in progress in #77070.
Epic CRDB-13963
Jira issue: CRDB-15884