
Allow forcing delta lake to recalculate all statistics #16634

Conversation

Member

@homar homar commented Mar 20, 2023

Description

fixes: #15968

Release notes

[x] Release notes are required, with the following suggested text:

# Delta Lake
* Add support for recalculating all statistics in `ANALYZE` statement. ({issue}`15968`)

@cla-bot cla-bot bot added the cla-signed label Mar 20, 2023
@github-actions github-actions bot added the delta-lake Delta Lake connector label Mar 20, 2023
@homar homar requested a review from findepi March 20, 2023 14:06
@findepi findepi requested review from findinpath and alexjo2144 and removed request for findepi March 21, 2023 09:32

@JsonCreator
public AnalyzeHandle(
@JsonProperty("initialAnalyze") boolean initialAnalyze,
@JsonProperty("startTime") Optional<Instant> filesModifiedAfter,
-        @JsonProperty("columns") Optional<Set<String>> columns)
+        @JsonProperty("columns") Optional<Set<String>> columns,
+        @JsonProperty("recalculateAllStatistics") Optional<Boolean> recalculateAllStatistics)
Member

Can we use initialAnalyze instead of adding new field?

Member

Could we have an enum mode with INITIAL, INCREMENTAL, and FULL_REFRESH options?
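The suggested enum could be sketched roughly as follows. This is a minimal illustration, not the PR's actual code; later revisions of the PR do introduce an `AnalyzeType` with a `FULL_REFRESH` constant, so the names below mirror that, but the exact shape is an assumption.

```java
public class AnalyzeTypeSketch
{
    // Sketch of the suggested mode enum replacing both the initialAnalyze
    // boolean and the Optional<Boolean> recalculateAllStatistics flag.
    enum AnalyzeType
    {
        INITIAL,      // first ANALYZE of a table that has no extended statistics yet
        INCREMENTAL,  // keep existing statistics, scan only files newer than the last run
        FULL_REFRESH  // recompute extended statistics over every active file
    }

    public static void main(String[] args)
    {
        // A single mode value makes intent explicit at call sites.
        AnalyzeType mode = AnalyzeType.FULL_REFRESH;
        System.out.println(mode == AnalyzeType.FULL_REFRESH);
    }
}
```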

@@ -139,7 +139,8 @@ private Stream<DeltaLakeSplit> getSplits(
// per file.
boolean splittable = tableHandle.getWriteType().isEmpty();
AtomicInteger remainingInitialSplits = new AtomicInteger(maxInitialSplits);
-        Optional<Instant> filesModifiedAfter = tableHandle.getAnalyzeHandle().flatMap(AnalyzeHandle::getFilesModifiedAfter);
+        Optional<Instant> recalculateAllStatistics = tableHandle.getAnalyzeHandle().flatMap(AnalyzeHandle::getFilesModifiedAfter);
Member

Misleading name change

@@ -165,7 +166,7 @@ private Stream<DeltaLakeSplit> getSplits(
return Stream.empty();
}

-        if (filesModifiedAfter.isPresent() && addAction.getModificationTime() <= filesModifiedAfter.get().toEpochMilli()) {
+        if ((!ignoreOldStats.orElse(false)) && recalculateAllStatistics.isPresent() && addAction.getModificationTime() <= recalculateAllStatistics.get().toEpochMilli()) {
Member

There should be no need to check ignoreOldStats. filesModifiedAfter should be empty for recalculation.

Member

Yeah. This part doesn't seem right, besides the parentheses around the first item being redundant.

(!ignoreOldStats.orElse(false)) && recalculateAllStatistics.isPresent()

@@ -2108,8 +2109,12 @@ public ConnectorAnalyzeMetadata getStatisticsCollectionMetadata(ConnectorSession
MetadataEntry metadata = handle.getMetadataEntry();

Optional<Instant> filesModifiedAfterFromProperties = getFilesModifiedAfterProperty(analyzeProperties);
Optional<Boolean> ignoreOldStats = getRecalculateAllStatisticsProperty(analyzeProperties);
Member

Let's name this boolean consistently throughout, rather than recalculateAll some and ignoreOld elsewhere. Personally I prefer affirmative names like recalculateAll or forceFullRefresh.

@@ -28,17 +28,20 @@
private final boolean initialAnalyze;
private final Optional<Instant> filesModifiedAfter;
private final Optional<Set<String>> columns;
private final Optional<Boolean> recalculateAllStatistics;
Member

Does this need to be Optional? Looks like it just always defaults to false?

Member Author

removed



@homar homar requested review from pajaks and alexjo2144 April 11, 2023 10:48
@homar homar force-pushed the homar_allow_forcing_delta_lake_to_recalculate_all_stats branch from 3461cc3 to d82fc5d April 11, 2023 10:48
@JsonProperty("startTime") Optional<Instant> filesModifiedAfter,
@JsonProperty("columns") Optional<Set<String>> columns)
{
this.initialAnalyze = initialAnalyze;
this.analyzeType = analyzeType;
Contributor

rnn

@@ -160,8 +161,10 @@ private Stream<DeltaLakeSplit> getSplits(

return validDataFiles.stream()
.flatMap(addAction -> {
Contributor

The following check doesn't need to be computed for every element from the stream:

tableHandle.getAnalyzeHandle().isPresent() &&
                            !(tableHandle.getAnalyzeHandle().get().getAnalyzeType() == FULL_REFRESH)
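The review point above can be sketched as hoisting the invariant check out of the per-element lambda. The `record` and the fake modification times below are illustrative stand-ins, not the connector's real classes:

```java
import java.util.Optional;
import java.util.stream.Stream;

public class HoistInvariantCheck
{
    // Illustrative stand-in for the connector's AnalyzeHandle.
    record AnalyzeHandle(boolean fullRefresh) {}

    public static void main(String[] args)
    {
        Optional<AnalyzeHandle> analyzeHandle = Optional.of(new AnalyzeHandle(false));
        long cutoffMillis = 100L;

        // Evaluate the handle check once, before building the stream,
        // instead of re-evaluating it for every element inside flatMap.
        boolean skipFilesBeforeCutoff = analyzeHandle.isPresent()
                && !analyzeHandle.get().fullRefresh();

        long keptFiles = Stream.of(50L, 150L, 250L) // fake file modification times
                .filter(modificationTime -> !skipFilesBeforeCutoff || modificationTime > cutoffMillis)
                .count();
        System.out.println(keptFiles);
    }
}
```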

"('comment', 3764.0, 50.0, 0.0, null, null, null)," +
"('name', 379.0, 50.0, 0.0, null, null, null)," +
"(null, null, null, null, 50.0, null, null)";
assertUpdate(format("ANALYZE %s WITH(recalculate_all_statistics = true)", tableName));
Contributor

While testing, I tried the combination

ANALYZE %s WITH(recalculate_all_statistics = true, columns = ARRAY['nationkey', 'regionkey']

Is it a bit confusing to you as well to use "recalculate_all_statistics" , but actually do it only for a specific set of columns?
Just thinking out loud whether the name chosen for the new ANALYZE property is sane.

Member Author

ok i will try to rename

@@ -2284,7 +2291,14 @@ private void updateTableStatistics(
Optional<Instant> maxFileModificationTime,
Collection<ComputedStatistics> computedStatistics)
{
Optional<ExtendedStatistics> oldStatistics = statisticsAccess.readExtendedStatistics(session, location);
boolean recalculateAllStatistics = analyzeHandle.map(AnalyzeHandle::getAnalyzeType)
Contributor

Suggested change
boolean recalculateAllStatistics = analyzeHandle.map(AnalyzeHandle::getAnalyzeType)
boolean recalculateAllStatistics = analyzeHandle
.map(AnalyzeHandle::getAnalyzeType)
.map(analyzeType -> analyzeType == FULL_REFRESH)
.orElse(false);

Member Author

I agree with map on a new line; the rest is similar.

@@ -2125,8 +2128,12 @@ public ConnectorAnalyzeMetadata getStatisticsCollectionMetadata(ConnectorSession
MetadataEntry metadata = handle.getMetadataEntry();

Optional<Instant> filesModifiedAfterFromProperties = getFilesModifiedAfterProperty(analyzeProperties);
Optional<Boolean> recalculateAll = getRecalculateAllStatisticsProperty(analyzeProperties);
Contributor

recalculateAll -> recalculateAllStatistics

Member Author

renamed anyway


-        Optional<ExtendedStatistics> statistics = statisticsAccess.readExtendedStatistics(session, handle.getLocation());
+        Optional<ExtendedStatistics> statistics = Optional.empty();
+        if (!recalculateAll.orElse(false)) {
Contributor

!foo.orElse(false) takes a bit too much time to reason about as a reader of the code
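A small illustration of the readability point: unwrapping the `Optional<Boolean>` into a named flag first (or dropping the Optional entirely, as another review thread suggests) avoids negating a defaulted Optional in one expression. Variable names here are illustrative:

```java
import java.util.Optional;

public class OrElseReadability
{
    public static void main(String[] args)
    {
        Optional<Boolean> recalculateAll = Optional.empty();

        // Hard to scan: a negation wrapped around a defaulted Optional.
        boolean readOldStats = !recalculateAll.orElse(false);

        // Easier: name the unwrapped flag once, then negate a plain boolean.
        boolean fullRefresh = recalculateAll.orElse(false);
        boolean readOldStatsClearer = !fullRefresh;

        System.out.println(readOldStats && readOldStatsClearer);
    }
}
```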

@@ -2284,7 +2291,14 @@ private void updateTableStatistics(
Optional<Instant> maxFileModificationTime,
Collection<ComputedStatistics> computedStatistics)
{
Optional<ExtendedStatistics> oldStatistics = statisticsAccess.readExtendedStatistics(session, location);
boolean recalculateAllStatistics = analyzeHandle.map(AnalyzeHandle::getAnalyzeType)
Member

Do we need to convert AnalyzeType to boolean? Maybe we could check whether it's FULL_REFRESH in the following if?

Member Author

we could check it there, but we would still have to get rid of the Optional around AnalyzeHandle, so it also wouldn't be nice; and in the end of the IF you need a boolean

@@ -357,19 +357,20 @@ public void testAnalyzeSomeColumns()
"('name', null, null, 0.0, null, null, null)," +
"(null, null, null, null, 50.0, null, null)");

String expectedFullStats = "VALUES " +
Member

Please add tests where:

  1. statsOnWrite is disabled and initial data is inserted
  2. statsOnWrite is enabled and some data is added
  3. SHOW STATS before and after ANALYZE shows only stats from the second data addition
  4. ANALYZE with recalculation updates statistics to cover all data.

Maybe also test that modifications using UPDATE and DELETE can now be reflected in STATS when using recalculation.

Member Author

1 and 2 are not really related; why would I test whether statsOnWrite works well in this PR? Situations where it is disabled and enabled are covered in TestDeltaLakeAnalyze.
3 is covered in several places in TestDeltaLakeAnalyze.
4 is covered by the piece of test code I added.

Maybe also test that modifications using UPDATE and DELETE can now be reflected in STATS when using recalculation.

ok

Member

Steps 1-4 are for one test case and cover the situation described in the ticket which this PR resolves.

Member Author

hmm, I must have misread the description

@homar homar force-pushed the homar_allow_forcing_delta_lake_to_recalculate_all_stats branch from d82fc5d to df51254 April 15, 2023 10:37
@@ -39,6 +40,7 @@
{
public static final String FILES_MODIFIED_AFTER = "files_modified_after";
public static final String COLUMNS_PROPERTY = "columns";
public static final String FORCE_RECALCULATE_STATISTICS = "force_recalculate_statistics";
Contributor

What about using full_refresh ?

Member Author

wouldn't it have the same problem as you mentioned with recalculate_all_stats?

Contributor

I'm assuming you were referring to #16634 (comment)

I find that full_refresh is appropriate even though the user may choose to analyze only specific columns.
While looking for alternatives, I came across fullscan in Transact-SQL.
However, I find full_refresh better suited for the purpose of the functionality you are exposing.

Member Author

ok, no problem

Member Author

ok let's go with full_refresh

@@ -39,6 +40,7 @@
{
public static final String FILES_MODIFIED_AFTER = "files_modified_after";
public static final String COLUMNS_PROPERTY = "columns";
public static final String FORCE_RECALCULATE_STATISTICS = "force_recalculate_statistics";
Contributor

Please create a follow-up docs task if you don't intend to document this new option now.

Member Author

I forgot about docs, I will add them here


-        Optional<ExtendedStatistics> statistics = statisticsAccess.readExtendedStatistics(session, handle.getLocation());
+        Optional<ExtendedStatistics> statistics = Optional.empty();
+        if (!forceRecalculate) {
Contributor

Shouldn't we initially drop the existing stats in case we do a full refresh?
In the unlikely case that the ANALYZE statement fails, the stats for the table will still be present.
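The failure-safety question could be sketched as delete-before-recompute. `StatsStore` below is a hypothetical stand-in for the connector's statistics storage, not its real API; the sketch only shows the trade-off being discussed, namely that dropping first leaves no stats (rather than stale ones) if ANALYZE fails midway:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class FullRefreshDropsStatsFirst
{
    // Hypothetical stand-in for persisted extended statistics, keyed by table location.
    static class StatsStore
    {
        private final Map<String, String> statsByLocation = new HashMap<>();

        void write(String location, String stats) { statsByLocation.put(location, stats); }
        void delete(String location) { statsByLocation.remove(location); }
        Optional<String> read(String location) { return Optional.ofNullable(statsByLocation.get(location)); }
    }

    public static void main(String[] args)
    {
        StatsStore store = new StatsStore();
        store.write("s3://bucket/table", "stale stats");

        // Delete first, then recompute. If the recomputation fails, the table
        // is left with no stats instead of stale ones, which is the behavior
        // the review comment asks about.
        store.delete("s3://bucket/table");
        boolean analyzeSucceeded = false; // simulate a failure mid-recomputation
        if (analyzeSucceeded) {
            store.write("s3://bucket/table", "fresh stats");
        }
        System.out.println(store.read("s3://bucket/table").isEmpty());
    }
}
```

The author's counterpoint below is that keeping the old stats on failure may actually be preferable.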

Member Author

I don't know, but is it bad to have previous stats there when ANALYZE fails?


"('name', 177.0, 25.0, 0.0, null, null, null)," +
"(null, null, null, null, 25.0, null, null)");

assertUpdate("DELETE FROM " + tableName + " WHERE nationkey = 1", 1);
Contributor
@findinpath findinpath Apr 17, 2023

It is not clear from the test why the table stats are not updated when doing UPDATE / DELETE. Could you please add a comment in this regard?

Member Author

They are updated, but not correctly; there is a difference in expected values. I am not 100% sure about the reason for this.

Member
@pajaks pajaks Apr 17, 2023

NDV is updated because of normalization (for example, it cannot be higher than the row count). With different data it should not change:

if (distinctValuesCount > outputRowCount) {

After an update, size and NDV have strange values because of the estimation based on null count. If the changed value were non-null, size and NDV would not change:

rowValues.add(toDoubleLiteral(symbolStatistics.getAverageRowSize() * planNodeStatsEstimate.getOutputRowCount() * (1 - symbolStatistics.getNullsFraction())));

Member Author

yes, but I wanted them to change so I can show the diff later, I hope that makes sense

"VALUES " +
"('nationkey', null, 24.0, 0.0, null, 0, 24)," +
"('regionkey', null, 5.0, 0.0, null, 0, 4)," +
"('comment', 3638.0, 24.0, 0.0, null, null, null)," +
Member

Size metric after UPDATE and ANALYZE gets totally off. I believe it's because we collect statistics on rewritten data and treat them as new entries.
But it's not your changes that cause that.

Member Author

yep, exactly, I just added it here to show the diff after running analyze

Member

I created separate task to tackle it: #17096

Member Author

thanks!

@homar homar force-pushed the homar_allow_forcing_delta_lake_to_recalculate_all_stats branch from df51254 to 884c04c April 19, 2023 17:19
@homar homar requested review from pajaks and findinpath April 20, 2023 08:45
@homar homar force-pushed the homar_allow_forcing_delta_lake_to_recalculate_all_stats branch from ba36e2b to dabda92 April 20, 2023 09:08
@github-actions github-actions bot added the docs label Apr 20, 2023
@@ -757,6 +757,11 @@ To collect statistics for a table, execute the following statement::

ANALYZE table_schema.table_name;

Because Delta Lake connector can also update stats on writes they can drift away from
Member

Drifting from proper values happens because we don't update statistics in some situations, not the opposite.

@@ -757,6 +757,11 @@ To collect statistics for a table, execute the following statement::

ANALYZE table_schema.table_name;

Because Delta Lake connector can also update stats on writes they can drift away from
Member

Suggested change
Because Delta Lake connector can also update stats on writes they can drift away from
Because Delta Lake connector can't update stats on all writes they can drift away from

Member Author

I will follow Marius' advice and delete the explanation

@homar homar force-pushed the homar_allow_forcing_delta_lake_to_recalculate_all_stats branch from dabda92 to b480958 April 20, 2023 18:14
docs/src/main/sphinx/connector/delta-lake.rst (thread outdated and resolved)


@@ -757,6 +757,11 @@ To collect statistics for a table, execute the following statement::

ANALYZE table_schema.table_name;

Because Delta Lake connector can also update stats on writes they can drift away from
proper values. To recalculate all stats for the table use additional parameter ``force_recalculate_statistics``.
Contributor
@findinpath findinpath Apr 21, 2023

To compute the table statistics from scratch, dismissing the existing table statistics, use the parameter ...

Let's leave aside the explanations about stats drifting away from the reality of the data.
There are multiple reasons why the stats may drift from reality.

"('comment', 3764.0, 50.0, 0.0, null, null, null)," +
"('name', 379.0, 50.0, 0.0, null, null, null)," +
"(null, null, null, null, 50.0, null, null)";
assertUpdate(format("ANALYZE %s WITH(force_recalculate_statistics = true)", tableName));
Contributor

Let's actually showcase that force_recalculate_statistics does work with the columns which, when used previously in the test without this option, caused a statement failure.

ANALYZE %s WITH(force_recalculate_statistics = true, columns = ARRAY['nationkey', 'regionkey', 'name'])

Member Author

This can be a separate test case; here I want to show that using full_refresh, or however we name it, results in exactly the same stats as you get after dropping existing stats.

@homar homar force-pushed the homar_allow_forcing_delta_lake_to_recalculate_all_stats branch from b480958 to 8a877e2 April 22, 2023 17:56
@homar homar force-pushed the homar_allow_forcing_delta_lake_to_recalculate_all_stats branch from 5915acc to 6c6516e April 24, 2023 09:59
@@ -757,6 +757,10 @@ To collect statistics for a table, execute the following statement::

ANALYZE table_schema.table_name;

To recalculate all stats for the table use additional parameter ``full_refresh``.
Contributor
@findinpath findinpath Apr 25, 2023

Suggested change
To recalculate all stats for the table use additional parameter ``full_refresh``.
Use additional parameter ``full_refresh`` to recalculate from scratch the statistics for the table.

Can be a follow-up

@findinpath
Contributor

@ebyhr could you please have a look?

@homar homar force-pushed the homar_allow_forcing_delta_lake_to_recalculate_all_stats branch from 6c6516e to 3a5eb99 April 25, 2023 22:54
Contributor
@findinpath findinpath left a comment

LGTM except maybe the naming

Not all the statistics are recomputed.
What is actually being computed from scratch now with the new functionality are the extended stats (the file stats stay the same).

There is probably a better parameter name for ANALYZE in this context.

@homar
Member Author

homar commented Apr 26, 2023

LGTM except maybe the naming

Not all the statistics are recomputed. What is actually being computed from scratch now with the new functionality are the extended stats (the file stats stay the same).

There is probably a better parameter name for ANALYZE in this context.

So please suggest something; I followed your last suggestion ;)

@ebyhr ebyhr force-pushed the homar_allow_forcing_delta_lake_to_recalculate_all_stats branch from 3a5eb99 to ee14ba8 May 9, 2023 00:55
@ebyhr
Member

ebyhr commented May 9, 2023

Rebased on master to resolve conflicts.

@homar homar force-pushed the homar_allow_forcing_delta_lake_to_recalculate_all_stats branch 2 times, most recently from 2df4121 to 06e6d91 May 9, 2023 21:56
@findinpath findinpath requested a review from ebyhr May 10, 2023 04:26
@homar homar force-pushed the homar_allow_forcing_delta_lake_to_recalculate_all_stats branch from 06e6d91 to 5532e07 June 12, 2023 10:29
@ebyhr
Member

ebyhr commented Jun 12, 2023

/test-with-secrets sha=5532e071209fbefb3111fb25527db54fbd154a9e

@github-actions

The CI workflow run with tests that require additional secrets finished as failure: https://github.com/trinodb/trino/actions/runs/5242991620

@homar homar force-pushed the homar_allow_forcing_delta_lake_to_recalculate_all_stats branch from 5532e07 to e7f32e7 June 12, 2023 13:57
@hashhar
Member

hashhar commented Jun 12, 2023

/test-with-secrets sha=e7f32e7205adaddf942cc22a2ca10bd85f03883f

https://github.com/trinodb/trino/actions/runs/5247027640

@homar homar force-pushed the homar_allow_forcing_delta_lake_to_recalculate_all_stats branch from e7f32e7 to 0bcc679 June 13, 2023 08:52
@homar homar requested a review from ebyhr June 13, 2023 20:59
@ebyhr ebyhr merged commit f79196f into trinodb:master Jun 14, 2023
@github-actions github-actions bot added this to the 420 milestone Jun 14, 2023

Successfully merging this pull request may close these issues.

Allow forcing Delta Lake analyze to ignore previous analysis time
6 participants