Allow forcing delta lake to recalculate all statistics #16634
Conversation
@JsonCreator
public AnalyzeHandle(
        @JsonProperty("initialAnalyze") boolean initialAnalyze,
        @JsonProperty("startTime") Optional&lt;Instant&gt; filesModifiedAfter,
-       @JsonProperty("columns") Optional&lt;Set&lt;String&gt;&gt; columns)
+       @JsonProperty("columns") Optional&lt;Set&lt;String&gt;&gt; columns,
+       @JsonProperty("recalculateAllStatistics") Optional&lt;Boolean&gt; recalculateAllStatistics)
Can we use initialAnalyze instead of adding a new field?
Could we have an enum mode with INITIAL, INCREMENTAL, and FULL_REFRESH options?
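A minimal sketch of what the suggested enum could look like (the helper method and its name are illustrative assumptions, not the code the PR finally merged):

```java
// Sketch of the suggested analyze-mode enum. The readExistingStatistics()
// helper is an assumption added here for illustration.
enum AnalyzeType
{
    INITIAL,        // first ANALYZE of a table without extended statistics
    INCREMENTAL,    // only consider files modified since the last ANALYZE
    FULL_REFRESH;   // recompute statistics for every file

    // One enum value replaces the boolean initialAnalyze plus the proposed
    // Optional<Boolean> recalculateAllStatistics flag.
    boolean readExistingStatistics()
    {
        return this != FULL_REFRESH;
    }
}
```

A single three-valued mode also rules out the impossible combination of initialAnalyze = true together with a full-refresh flag.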
@@ -139,7 +139,8 @@ private Stream&lt;DeltaLakeSplit&gt; getSplits(
     // per file.
     boolean splittable = tableHandle.getWriteType().isEmpty();
     AtomicInteger remainingInitialSplits = new AtomicInteger(maxInitialSplits);
-    Optional&lt;Instant&gt; filesModifiedAfter = tableHandle.getAnalyzeHandle().flatMap(AnalyzeHandle::getFilesModifiedAfter);
+    Optional&lt;Instant&gt; recalculateAllStatistics = tableHandle.getAnalyzeHandle().flatMap(AnalyzeHandle::getFilesModifiedAfter);
Misleading name change: the variable still holds the filesModifiedAfter timestamp, not a recalculation flag.
@@ -165,7 +166,7 @@ private Stream&lt;DeltaLakeSplit&gt; getSplits(
         return Stream.empty();
     }

-    if (filesModifiedAfter.isPresent() &amp;&amp; addAction.getModificationTime() &lt;= filesModifiedAfter.get().toEpochMilli()) {
+    if ((!ignoreOldStats.orElse(false)) &amp;&amp; recalculateAllStatistics.isPresent() &amp;&amp; addAction.getModificationTime() &lt;= recalculateAllStatistics.get().toEpochMilli()) {
There should be no need to check ignoreOldStats. filesModifiedAfter should be empty for recalculation.
Yeah. This part doesn't seem right, besides the parentheses around the first item being redundant:
(!ignoreOldStats.orElse(false)) &amp;&amp; recalculateAllStatistics.isPresent()
@@ -2108,8 +2109,12 @@ public ConnectorAnalyzeMetadata getStatisticsCollectionMetadata(ConnectorSession
     MetadataEntry metadata = handle.getMetadataEntry();

     Optional&lt;Instant&gt; filesModifiedAfterFromProperties = getFilesModifiedAfterProperty(analyzeProperties);
+    Optional&lt;Boolean&gt; ignoreOldStats = getRecalculateAllStatisticsProperty(analyzeProperties);
Let's name this boolean consistently throughout, rather than recalculateAll in some places and ignoreOld elsewhere. Personally I prefer affirmative names like recalculateAll or forceFullRefresh.
@@ -28,17 +28,20 @@
     private final boolean initialAnalyze;
     private final Optional&lt;Instant&gt; filesModifiedAfter;
     private final Optional&lt;Set&lt;String&gt;&gt; columns;
+    private final Optional&lt;Boolean&gt; recalculateAllStatistics;
Does this need to be Optional? Looks like it just always defaults to false?
removed
Force-pushed 3461cc3 to d82fc5d.
         @JsonProperty("startTime") Optional&lt;Instant&gt; filesModifiedAfter,
         @JsonProperty("columns") Optional&lt;Set&lt;String&gt;&gt; columns)
     {
-        this.initialAnalyze = initialAnalyze;
+        this.analyzeType = analyzeType;
rnn
plugin/trino-delta-lake/src/main/java/io/trino/plugin/deltalake/AnalyzeHandle.java
plugin/trino-delta-lake/src/main/java/io/trino/plugin/deltalake/DeltaLakeSplitManager.java
@@ -160,8 +161,10 @@ private Stream&lt;DeltaLakeSplit&gt; getSplits(

     return validDataFiles.stream()
             .flatMap(addAction -&gt; {
The following check doesn't need to be computed for every element from the stream:
tableHandle.getAnalyzeHandle().isPresent() &amp;&amp; !(tableHandle.getAnalyzeHandle().get().getAnalyzeType() == FULL_REFRESH)
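The hoisting the reviewer asks for can be sketched like this (the types are simplified stand-ins; the real code streams Delta Lake add-file entries, not strings):

```java
import java.util.List;
import java.util.stream.Stream;

class SplitPlanner
{
    // The table-level condition depends only on the handle, so compute it
    // once before the stream instead of re-evaluating it per element.
    static Stream<String> planSplits(boolean analyzePresent, boolean fullRefresh, List<String> files)
    {
        boolean skipOldFiles = analyzePresent && !fullRefresh;   // hoisted out of the lambda
        return files.stream()
                .filter(file -> !skipOldFiles || !file.startsWith("old-"))
                .map(file -> "split:" + file);
    }
}
```

The JIT may well fold the repeated calls anyway, but hoisting also makes the intent readable: the mode is a property of the table, not of each file.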
    "('comment', 3764.0, 50.0, 0.0, null, null, null)," +
    "('name', 379.0, 50.0, 0.0, null, null, null)," +
    "(null, null, null, null, 50.0, null, null)";
assertUpdate(format("ANALYZE %s WITH(recalculate_all_statistics = true)", tableName));
While testing, I tried the combination:
ANALYZE %s WITH(recalculate_all_statistics = true, columns = ARRAY['nationkey', 'regionkey'])
Is it a bit confusing to you as well to use "recalculate_all_statistics", but actually do it only for a specific set of columns? Just thinking out loud whether the name chosen for the new ANALYZE property is sane.
ok, I will try to rename it
@@ -2284,7 +2291,14 @@ private void updateTableStatistics(
         Optional&lt;Instant&gt; maxFileModificationTime,
         Collection&lt;ComputedStatistics&gt; computedStatistics)
 {
-    Optional&lt;ExtendedStatistics&gt; oldStatistics = statisticsAccess.readExtendedStatistics(session, location);
+    boolean recalculateAllStatistics = analyzeHandle.map(AnalyzeHandle::getAnalyzeType)
Suggested change:
-boolean recalculateAllStatistics = analyzeHandle.map(AnalyzeHandle::getAnalyzeType)
+boolean recalculateAllStatistics = analyzeHandle
+        .map(AnalyzeHandle::getAnalyzeType)
+        .map(analyzeType -&gt; analyzeType == FULL_REFRESH)
+        .orElse(false);
I agree with putting map on a new line; the rest is similar.
@@ -2125,8 +2128,12 @@ public ConnectorAnalyzeMetadata getStatisticsCollectionMetadata(ConnectorSession
     MetadataEntry metadata = handle.getMetadataEntry();

     Optional&lt;Instant&gt; filesModifiedAfterFromProperties = getFilesModifiedAfterProperty(analyzeProperties);
+    Optional&lt;Boolean&gt; recalculateAll = getRecalculateAllStatisticsProperty(analyzeProperties);
recalculateAll -&gt; recalculateAllStatistics
renamed anyway
-Optional&lt;ExtendedStatistics&gt; statistics = statisticsAccess.readExtendedStatistics(session, handle.getLocation());
+Optional&lt;ExtendedStatistics&gt; statistics = Optional.empty();
+if (!recalculateAll.orElse(false)) {
!foo.orElse(false) takes a bit too much time to reason about as a reader of the code.
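A sketch of the readability point: unwrapping the Optional into a plainly named local before negating makes the branch easier to scan (method and variable names here are illustrative only, not Trino's API):

```java
import java.util.Optional;

class StatsDecision
{
    // As reviewed: negation through Optional, two mental hops.
    static boolean readExistingStatsTerse(Optional<Boolean> recalculateAll)
    {
        return !recalculateAll.orElse(false);
    }

    // Unwrap once into a named local; the negation then reads as
    // "not a full refresh" rather than "not or-else false".
    static boolean readExistingStatsClear(Optional<Boolean> recalculateAll)
    {
        boolean fullRefresh = recalculateAll.orElse(false);
        return !fullRefresh;
    }
}
```

Later revisions in this thread move to an AnalyzeType enum, which avoids the Optional&lt;Boolean&gt; entirely.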
@@ -2284,7 +2291,14 @@ private void updateTableStatistics(
         Optional&lt;Instant&gt; maxFileModificationTime,
         Collection&lt;ComputedStatistics&gt; computedStatistics)
 {
-    Optional&lt;ExtendedStatistics&gt; oldStatistics = statisticsAccess.readExtendedStatistics(session, location);
+    boolean recalculateAllStatistics = analyzeHandle.map(AnalyzeHandle::getAnalyzeType)
Do we need to convert AnalyzeType to boolean? Maybe we could check whether it's FULL_REFRESH in the following if?
We could check it there, but we would still have to get rid of the Optional around AnalyzeHandle, so it also won't be nice; and at the end of the if you need a boolean anyway.
@@ -357,19 +357,20 @@ public void testAnalyzeSomeColumns()
     "('name', null, null, 0.0, null, null, null)," +
     "(null, null, null, null, 50.0, null, null)");

+    String expectedFullStats = "VALUES " +
Please add tests for the following:
1. statsOnWrite is disabled and the initial data is inserted
2. statsOnWrite is enabled and some data is added
3. SHOW STATS before and after ANALYZE should show only stats from the second data addition
4. ANALYZE with recalculation should update statistics to cover all data
Maybe also test that modifications using UPDATE and DELETE can now be reflected in STATS when using recalculation.
1 and 2 are not really related; why would I test whether statsOnWrite works well in this PR? The situations where it is disabled and enabled are covered in TestDeltaLakeAnalyze.
3 is covered in several places in TestDeltaLakeAnalyze.
4 is covered by the piece of test code I added.
&gt; Maybe also test that modifications using UPDATE and DELETE can now be reflected in STATS when using recalculation.
ok
1-4 are steps for one test case and cover the situation described in the ticket which this PR resolves.
hmm, I must have misread the description
Force-pushed d82fc5d to df51254.
@@ -39,6 +40,7 @@
 {
     public static final String FILES_MODIFIED_AFTER = "files_modified_after";
     public static final String COLUMNS_PROPERTY = "columns";
+    public static final String FORCE_RECALCULATE_STATISTICS = "force_recalculate_statistics";
What about using full_refresh?
Wouldn't it have the same problem as you mentioned with recalculate_all_stats?
I'm assuming you were referring to #16634 (comment).
I find that full_refresh is appropriate even though the user may choose to analyze only specific columns. While looking for alternatives, I came across fullscan in Transact-SQL. However, I find full_refresh better suited to the purpose of the functionality you are exposing.
ok, no problem
ok let's go with full_refresh
@@ -39,6 +40,7 @@
 {
     public static final String FILES_MODIFIED_AFTER = "files_modified_after";
     public static final String COLUMNS_PROPERTY = "columns";
+    public static final String FORCE_RECALCULATE_STATISTICS = "force_recalculate_statistics";
Please create a follow-up docs task if you don't intend to document this new option now.
I forgot about docs; I will add them here.
-Optional&lt;ExtendedStatistics&gt; statistics = statisticsAccess.readExtendedStatistics(session, handle.getLocation());
+Optional&lt;ExtendedStatistics&gt; statistics = Optional.empty();
+if (!forceRecalculate) {
Shouldn't we initially drop the existing stats in case we do a full refresh? In the unlikely case that the ANALYZE statement fails, the stats for the table will still be present.
I don't know; is it bad to have the previous stats there when ANALYZE fails?
cc @findepi
    "('name', 177.0, 25.0, 0.0, null, null, null)," +
    "(null, null, null, null, 25.0, null, null)");

assertUpdate("DELETE FROM " + tableName + " WHERE nationkey = 1", 1);
It is not clear from the test why the table stats are not updated when doing UPDATE / DELETE. Could you please add a comment in this regard?
They are updated, but not correctly; there is a difference in expected values. I am not 100% sure about the reason for this.
NDV is updated because of normalization (for example, it cannot be higher than the row count). With different data it should not change:
if (distinctValuesCount &gt; outputRowCount) {
After UPDATE, size and NDV have strange values because of estimation based on the null count. If the changed value were non-null, size and NDV would not change:
rowValues.add(toDoubleLiteral(symbolStatistics.getAverageRowSize() * planNodeStatsEstimate.getOutputRowCount() * (1 - symbolStatistics.getNullsFraction())));
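The two quoted lines boil down to simple formulas; the sketch below restates them with illustrative names (not Trino's actual API):

```java
class StatsEstimates
{
    // NDV normalization: the distinct-values count can never exceed the
    // row count, so it is capped. This is why ANALYZE after DELETE can
    // lower the reported NDV even without any new data.
    static double normalizeDistinctValues(double distinctValuesCount, double outputRowCount)
    {
        return Math.min(distinctValuesCount, outputRowCount);
    }

    // Data size estimate: average row size times the number of non-null rows.
    static double estimateDataSize(double averageRowSize, double outputRowCount, double nullsFraction)
    {
        return averageRowSize * outputRowCount * (1 - nullsFraction);
    }
}
```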
Yes, but I wanted them to change so I can show the diff later; I hope that makes sense.
"VALUES " +
"('nationkey', null, 24.0, 0.0, null, 0, 24)," +
"('regionkey', null, 5.0, 0.0, null, 0, 4)," +
"('comment', 3638.0, 24.0, 0.0, null, null, null)," +
The size metric after UPDATE and ANALYZE gets totally off. I believe it's because we collect statistics on rewritten data and treat them as new entries. But it's not your changes that cause that.
Yep, exactly. I just added it here to show the diff after running ANALYZE.
I created a separate task to tackle it: #17096
thanks!
Force-pushed df51254 to 884c04c.
Force-pushed ba36e2b to dabda92.
@@ -757,6 +757,11 @@ To collect statistics for a table, execute the following statement::

     ANALYZE table_schema.table_name;

+Because Delta Lake connector can also update stats on writes they can drift away from
The drift from proper values happens because we don't update statistics in some situations, not the opposite.
@@ -757,6 +757,11 @@ To collect statistics for a table, execute the following statement::

     ANALYZE table_schema.table_name;

+Because Delta Lake connector can also update stats on writes they can drift away from
Suggested change:
-Because Delta Lake connector can also update stats on writes they can drift away from
+Because Delta Lake connector can't update stats on all writes they can drift away from
I will follow Marius' advice and delete the explanation.
Force-pushed dabda92 to b480958.
plugin/trino-delta-lake/src/main/java/io/trino/plugin/deltalake/DeltaLakeSplitManager.java
-Optional&lt;ExtendedStatistics&gt; statistics = statisticsAccess.readExtendedStatistics(session, handle.getLocation());
+Optional&lt;ExtendedStatistics&gt; statistics = Optional.empty();
+if (!forceRecalculate) {
cc @findepi
@@ -757,6 +757,11 @@ To collect statistics for a table, execute the following statement::

     ANALYZE table_schema.table_name;

+Because Delta Lake connector can also update stats on writes they can drift away from
+proper values. To recalculate all stats for the table use additional parameter ``force_recalculate_statistics``.
"To compute the table statistics from scratch, dismissing the existing table statistics, use the parameter ..."
Let's leave aside the explanations about stats drifting away from the reality of the data. There are multiple reasons why the stats may drift from reality.
    "('comment', 3764.0, 50.0, 0.0, null, null, null)," +
    "('name', 379.0, 50.0, 0.0, null, null, null)," +
    "(null, null, null, null, 50.0, null, null)";
assertUpdate(format("ANALYZE %s WITH(force_recalculate_statistics = true)", tableName));
Let's actually showcase that recalculate_statistics does work with the columns which, when used previously in the test without this option, caused a statement failure:
ANALYZE %s WITH(force_recalculate_statistics = true, columns = ARRAY['nationkey', 'regionkey', 'name'])
This can be a separate test case; here I want to show that using full_refresh (or however we will name it) results in exactly the same stats as you get after dropping the existing stats.
Force-pushed b480958 to 8a877e2.
plugin/trino-delta-lake/src/main/java/io/trino/plugin/deltalake/DeltaLakeAnalyzeProperties.java
Force-pushed 5915acc to 6c6516e.
@@ -757,6 +757,10 @@ To collect statistics for a table, execute the following statement::

     ANALYZE table_schema.table_name;

+To recalculate all stats for the table use additional parameter ``full_refresh``.
Suggested change:
-To recalculate all stats for the table use additional parameter ``full_refresh``.
+Use additional parameter ``full_refresh`` to recalculate from scratch the statistics for the table.
Can be a follow-up.
@ebyhr could you please have a look?
Force-pushed 6c6516e to 3a5eb99.
LGTM except maybe the naming. Not all the statistics are recomputed: what is actually being computed from scratch with the new functionality are the extended stats (the file stats stay the same). There is probably a better parameter name for ANALYZE in this context.
So please suggest something; I followed your last suggestion ;)
Force-pushed 3a5eb99 to ee14ba8.
Rebased on master to resolve conflicts.
plugin/trino-delta-lake/src/main/java/io/trino/plugin/deltalake/AnalyzeHandle.java
plugin/trino-delta-lake/src/main/java/io/trino/plugin/deltalake/DeltaLakeAnalyzeProperties.java
plugin/trino-delta-lake/src/test/java/io/trino/plugin/deltalake/TestDeltaLakeAnalyze.java
Force-pushed 2df4121 to 06e6d91.
plugin/trino-delta-lake/src/main/java/io/trino/plugin/deltalake/DeltaLakeAnalyzeProperties.java
Force-pushed 06e6d91 to 5532e07.
/test-with-secrets sha=5532e071209fbefb3111fb25527db54fbd154a9e
The CI workflow run with tests that require additional secrets finished as failure: https://github.com/trinodb/trino/actions/runs/5242991620
Force-pushed 5532e07 to e7f32e7.
/test-with-secrets sha=e7f32e7205adaddf942cc22a2ca10bd85f03883f
plugin/trino-delta-lake/src/main/java/io/trino/plugin/deltalake/DeltaLakeAnalyzeProperties.java
Force-pushed e7f32e7 to 0bcc679.
Description
fixes: #15968
Release notes
(x) Release notes are required, with the following suggested text: