Fault tolerant scheduler 2.0 #14205

Merged
3 commits merged into trinodb:master on Oct 20, 2022

Conversation

@arhimondr (Contributor) commented Sep 20, 2022

Description

This PR lays down the foundation for future advancements in fault-tolerant execution. The proposed structure of the scheduler is aimed at making it possible to implement:

  • Adaptive replanning, by adjusting the scheduler to allow mutable plans
  • Speculative execution, by allowing scheduling without requiring a full barrier between stages
  • Prioritized scheduling, by maintaining a single task queue per query
  • Advanced autoscaling, by exposing task queue statistics that can be taken into account when dynamically deciding the optimal cluster size

The scheduler is implemented as an event loop to minimize the need for synchronization and to let developers reason about scheduling as a single-threaded process.
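
For intuition, a minimal sketch of the event-loop shape described above (hypothetical names, not the actual classes in this PR): callbacks from other threads only enqueue events, and a single thread drains the queue and mutates scheduler state, so no locking is required.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

final class EventLoopSchedulerSketch
{
    interface Event {}
    static final class WakeUp implements Event {}
    static final class Abort implements Event {}

    private final BlockingQueue<Event> eventQueue = new LinkedBlockingQueue<>();

    // Callbacks running on other threads only enqueue events; they never touch scheduler state.
    public void submit(Event event)
    {
        eventQueue.add(event);
    }

    // The only thread that reads or writes scheduler state, hence no synchronization.
    public void run()
            throws InterruptedException
    {
        while (true) {
            Event event = eventQueue.take();
            if (event instanceof Abort) {
                return;
            }
            // a WakeUp (or any other event) simply triggers another scheduling pass
            schedulingPass();
        }
    }

    private void schedulingPass()
    {
        // assign splits, create or update tasks, react to task state changes, etc.
    }
}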

Non-technical explanation

N/A

TODO

Release notes

(x) This is not user-visible or docs only and no release notes are required.
( ) Release notes are required, please propose a release note for me.
( ) Release notes are required, with the following suggested text:

# Section
* Fix some things. ({issue}`issuenumber`)

@cla-bot cla-bot bot added the cla-signed label Sep 20, 2022
@arhimondr
Contributor Author

On top of #14072, still WIP

@arhimondr arhimondr force-pushed the event-driven-scheduler branch 7 times, most recently from 4482925 to 03f90be Compare September 27, 2022 16:36
@arhimondr arhimondr force-pushed the event-driven-scheduler branch from 03f90be to aa023f0 Compare September 27, 2022 21:16
@arhimondr arhimondr changed the title [WIP] Fault tolerant scheduler 2.0 Fault tolerant scheduler 2.0 Sep 27, 2022
@arhimondr
Contributor Author

Benchmark results:

+----------------------+-----------------------+----------------------+-----------------------+----------+-----------+
| base_cpu_time_millis | base_wall_time_millis | test_cpu_time_millis | test_wall_time_millis | cpu_diff | wall_diff |
+----------------------+-----------------------+----------------------+-----------------------+----------+-----------+
|          18184316267 |              77292978 |          18536131298 |              76952203 |  1.01935 |   0.99559 |
+----------------------+-----------------------+----------------------+-----------------------+----------+-----------+

+-------------------------------+----------------------+-----------------------+----------------------+-----------------------+----------+-----------+
| suite                         | base_cpu_time_millis | base_wall_time_millis | test_cpu_time_millis | test_wall_time_millis | cpu_diff | wall_diff |
+-------------------------------+----------------------+-----------------------+----------------------+-----------------------+----------+-----------+
| tpcds_sf10000_partitioned     |           2151027639 |              10666623 |           2192108379 |              10325315 |  1.01910 |   0.96800 |
| tpcds_sf10000_partitioned_etl |          12575752138 |              45907876 |          12749500471 |              45367419 |  1.01382 |   0.98823 |
| tpcds_sf100_partitioned       |             25285611 |               1298684 |             24642584 |               1285296 |  0.97457 |   0.98969 |
| tpcds_sf100_partitioned_etl   |            112666635 |               5309658 |            116431690 |               5588347 |  1.03342 |   1.05249 |
| tpch_sf10000_bucketed         |            833789592 |               3341612 |            860208652 |               3195566 |  1.03169 |   0.95629 |
| tpch_sf10000_bucketed_etl     |           2459310174 |               9295363 |           2565149397 |               9585205 |  1.04304 |   1.03118 |
| tpch_sf100_bucketed           |              6506475 |                168296 |              5905512 |                160160 |  0.90764 |   0.95166 |
| tpch_sf100_bucketed_etl       |             19978003 |               1304866 |             22184613 |               1444895 |  1.11045 |   1.10731 |
+-------------------------------+----------------------+-----------------------+----------------------+-----------------------+----------+-----------+

Detailed: https://gist.github.com/arhimondr/02ccc06a4f145fcd5f47b91ff068c013

@arhimondr
Contributor Author

Still on top of several PRs (#14328, #14329, #14330, #14320), but ready for review.

@arhimondr
Contributor Author

Rebased

@arhimondr arhimondr force-pushed the event-driven-scheduler branch 4 times, most recently from a3474c9 to 6f3f2e7 Compare October 7, 2022 05:58
@arhimondr
Contributor Author

Ready for review

@arhimondr arhimondr force-pushed the event-driven-scheduler branch from 6f3f2e7 to 56deff7 Compare October 7, 2022 19:07
@losipiuk (Member) left a comment

Partial review


import static java.util.Objects.requireNonNull;

public interface EventDrivenTaskSource
Member

It does not look like you need this interface. You only need the Callback for tests.
Also, it looks awkward to have Callback internal to EventDrivenTaskSource, as at the interface level those two are not related at all.
I would suggest just leaving Callback as an interface; it can be moved to the top level or to the StageTaskSource implementation.

Contributor Author

Initially I was trying to model it after an existing TaskSource. But indeed, I don't think the EventDrivenTaskSourceFactory and EventDrivenTaskSource interfaces are needed. Going to remove them and rename:

  • StageEventDrivenTaskSourceFactory -> EventDrivenTaskSourceFactory
  • StageTaskSource -> EventDrivenTaskSource

Also going to move the Callback to the EventDrivenTaskSource and move the EventDrivenTaskSource out of the StageEventDrivenTaskSourceFactory (basically reducing the number of nested classes).

partitionUpdates = ImmutableList.copyOf(requireNonNull(partitionUpdates, "partitionUpdates is null"));
}

void update(EventDrivenTaskSource.Callback callback)
Member

nit: I'd suggest marking the constructor and update as public to clearly indicate what the public interface of the class is

Member

(relevant to other classes too)

@arhimondr (Contributor, Author), Oct 11, 2022

I think that's the only relevant class after moving classes around and removing unnecessary interfaces. Please let me know if I missed anything.

PartitionAssignment partitionAssignment = openAssignments.get(hostRequirement);
long splitSizeInBytes = getSplitSizeInBytes(split);
if (partitionAssignment != null && partitionAssignment.getAssignedDataSizeInBytes() + splitSizeInBytes > targetPartitionSizeInBytes) {
partitionAssignment.setFull(true);
Member

verify(partitionAssignment.getAssignedDataSizeInBytes() > 0)? Or maybe there is a chance it would not be true if a split reports an empty size?
Maybe for connectors which misbehave and report zero-size splits we should also mark the assignment as full based on split count?

Contributor Author

Maybe for connectors which misbehave and report zero-size splits we should also mark the assignment as full based on split count?

Totally forgot to implement that. We do indeed have the fault_tolerant_execution_max_task_split_count property. Implemented.

Another, less straightforward problem is actually around another session property I forgot about: fault_tolerant_execution_min_task_split_count.

This property is there to ensure that enough splits are assigned to a single task for it to utilize thread-level parallelism. Usually, when the file format is "splittable", it doesn't really matter. However, for non-splittable formats, where only a single split is generated per file, it seems like a good idea to provide enough splits for a single task to utilize all available threads.

One goal I was trying to achieve with ArbitraryDistributionSplitAssigner was to remove the Arbitrary / Source distribution duality (the two are essentially the same). However, fault_tolerant_execution_min_task_split_count only makes sense for table scan splits and doesn't make much sense for RemoteSplits (which can provide parallelism even within a single split). Currently in ArbitraryDistributionSplitAssigner I'm trying to make as little difference as possible between a RemoteSplit and a ConnectorSplit. Implementing fault_tolerant_execution_min_task_split_count would most certainly make that more difficult. I'm a little on the fence about whether we really want fault_tolerant_execution_min_task_split_count, or whether we should consider non-splittable formats a niche use case.
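
For the split-count part, a rough, self-contained sketch of the kind of count-based cap I mean (the names here are illustrative, not the exact fields in ArbitraryDistributionSplitAssigner):

final class AssignmentFullnessSketch
{
    // Illustrative only: an assignment becomes "full" either by data size or by split count,
    // the latter driven by fault_tolerant_execution_max_task_split_count to handle zero-size splits.
    static boolean isFullAfterAdding(
            long assignedDataSizeInBytes,
            int assignedSplitCount,
            long splitSizeInBytes,
            long targetPartitionSizeInBytes,
            int maxTaskSplitCount)
    {
        if (assignedDataSizeInBytes + splitSizeInBytes > targetPartitionSizeInBytes) {
            return true;
        }
        return assignedSplitCount + 1 >= maxTaskSplitCount;
    }
}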

}

@Override
public synchronized void start()
Member

startIfNotStarted?

Member

Also maybe split the method into two: one for creating the SplitLoaders and the other for starting them up with top-level exception handling.
The indentation depth is below my comfort level right now.

Contributor Author

startIfNotStarted?

We usually call it just start in other places. Do you think this one should be different?

@losipiuk (Member), Oct 12, 2022

I think (maybe this is not really the case) that typically a start() method would throw if the object being started is already started, not just ignore the call.

Contributor Author

Hmm, we can do that. I don't know why I implemented it to simply ignore the call. I don't think it is ever called more than once.

import static org.testng.Assert.assertEquals;
import static org.testng.Assert.assertTrue;

public class TestArbitraryDistributionSplitAssigner
Member

Hmm, the test logic is not simpler than the logic in the tested code. I wonder if it is possible to make the assertions more explicit without bloating this?

Contributor Author

While the assignment algorithm is relatively straightforward, the different sequences of interaction are what I wanted to test. The idea is to implement the algorithm in a straightforward way and make sure that different sequences of interaction with the assigner produce the same result.

@losipiuk (Member) left a comment

Lots of dumb questions. Sorry

this.plan.set(requireNonNull(plan, "plan is null"));
}

public StageInfo getStageInfo()
Member

Should this be called getRootStageInfo? (I know the current naming matches the QueryScheduler interface.)

Contributor Author

It is a tree internally, including all stage infos for all stages. It is more of a QueryInfo at this point. Not sure if getRootStageInfo wouldn't be interpreted as "stage info for the root stage only"

}

@Override
public BasicStageStats getBasicStageStats()
Member

getBasicRootStageStats()?

Contributor Author

Those stats are aggregate stats across all stages

}
}

public Optional<RemoteTask> schedule(int partitionId, InternalNode node)
Member

I am too dumb to follow this one.

Contributor Author

Yeah, probably the beefiest method of the entire scheduler. Essentially what it does is schedule a task.

And to schedule a task you need to have splits and output selectors.

This method obtains splits either from an open descriptor (if a descriptor is still being built) or from a sealed descriptor stored in the task descriptor storage. It also merges in output selectors.

Open to suggestions on how to make it more readable.

Member

Actually it is not that bad on the second go. What you could do is extract:

        private Set<PlanNodeId> getRemoteSourceIds()
        {
            // this can be cached
            Set<PlanNodeId> remoteSourceIds = new HashSet<>();
            for (RemoteSourceNode remoteSource : stage.getFragment().getRemoteSourceNodes()) {
                remoteSourceIds.add(remoteSource.getId());
            }
            return remoteSourceIds;
        }

        private Map<PlanNodeId, ExchangeSourceOutputSelector> getMergedSourceOutputSelectors()
        {
            ImmutableMap.Builder<PlanNodeId, ExchangeSourceOutputSelector> outputSelectors = ImmutableMap.builder();
            for (RemoteSourceNode remoteSource : stage.getFragment().getRemoteSourceNodes()) {
                ExchangeSourceOutputSelector mergedSelector = null;
                for (PlanFragmentId sourceFragmentId : remoteSource.getSourceFragmentIds()) {
                    ExchangeSourceOutputSelector sourceFragmentSelector = sourceOutputSelectors.get(sourceFragmentId);
                    if (sourceFragmentSelector == null) {
                        continue;
                    }
                    if (mergedSelector == null) {
                        mergedSelector = sourceFragmentSelector;
                    }
                    else {
                        mergedSelector = mergedSelector.merge(sourceFragmentSelector);
                    }
                }
                if (mergedSelector != null) {
                    outputSelectors.put(remoteSource.getId(), mergedSelector);
                }
            }
            return outputSelectors.buildOrThrow();
        }

Then you will have just

            Set<PlanNodeId> remoteSourceIds = getRemoteSourceIds();
            Map<PlanNodeId, ExchangeSourceOutputSelector> outputSelectors = getMergedSourceOutputSelectors();

in schedule().

Maybe you could also build some common interface adapter over TaskDescriptor and OpenTaskDescriptor with methods:

  public ListMultimap<PlanNodeId, Split> getSplits();
  public boolean wasNoMoreSplits(PlanNodeId remoteSourcePlanNodeId); 

Then you could mostly unify the handling of both. But not sure if that is worth the fuss.

Contributor Author

Great suggestions. Refactored, it seems to be way more readable now. Please take a look.

Member

Thanks


updateOutputSize(outputStats);

// task tescriptor has been created
Member

typo tescriptor

Member

Also, what is the case where a task completes but the descriptor is not created yet? LIMIT?

@arhimondr (Contributor, Author), Oct 13, 2022

In this implementation we don't wait for the descriptor to be created before scheduling. It is possible that a task finishes (as you mentioned, in the LIMIT case) before the task descriptor is sealed. In such a case we don't need to store it, since the task is already finished and there will be no retry.

Member

In such a case we don't need to store it, since the task is already finished and there will be no retry

Makes sense. Do we also drop sealed task descriptors from storage when a task completes successfully?

Contributor Author

Yes, see StagePartition#taskFinished

Member

👍

@arhimondr
Contributor Author

Thanks for the review. Went through the first section of comments. Going to continue tomorrow.

@arhimondr arhimondr force-pushed the event-driven-scheduler branch from 56deff7 to cc44e88 Compare October 12, 2022 00:44
@arhimondr arhimondr force-pushed the event-driven-scheduler branch from cc44e88 to 756a430 Compare October 13, 2022 15:46
@arhimondr
Contributor Author

@losipiuk Updated

@@ -95,6 +95,7 @@
private DataSize faultTolerantExecutionTaskDescriptorStorageMaxMemory = DataSize.ofBytes(Math.round(AVAILABLE_HEAP_MEMORY * 0.15));
private int faultTolerantExecutionPartitionCount = 50;
private boolean faultTolerantPreserveInputPartitionsInWriteStage = true;
private boolean faultTolerantExecutionEventDriverSchedulerEnabled = true;
Member

Not sure about this one. Can we get some adoption while keeping it false for a release or two?

Contributor Author

I ran multiple rounds of full-scale testing and it seems to work fine. I would probably leave it on by default while keeping the old implementation as a fallback option in case something goes wrong.

Member

👍

}
}

for (ExchangeSourceHandleSource handleSource : handleSources) {
Member

bail out quickly if failure != null?

Contributor Author

These checks are important. I'm trying to verify that sources are getting closed in case of a failure.

@arhimondr arhimondr force-pushed the event-driven-scheduler branch from 756a430 to d2c7e1e Compare October 14, 2022 20:44
@arhimondr
Contributor Author

Updated

Comment on lines +84 to +87
result
.addPartition(new Partition(0, new NodeRequirements(Optional.empty(), hostRequirement)))
.sealPartition(0)
.setNoMorePartitions();
Member

Nit: formatting

Contributor Author

That's what auto-format does for me. Is it different on your end?

Comment on lines +185 to +196
PriorityQueue<PartitionAssignment> assignments = new PriorityQueue<>();
assignments.add(new PartitionAssignment(new TaskPartition(), 0));
for (int outputPartitionId = 0; outputPartitionId < partitionCount; outputPartitionId++) {
long outputPartitionSize = mergedEstimate.getPartitionSizeInBytes(outputPartitionId);
if (assignments.peek().assignedDataSizeInBytes() + outputPartitionSize > targetPartitionSizeInBytes
&& assignments.size() < partitionCount) {
assignments.add(new PartitionAssignment(new TaskPartition(), 0));
}
PartitionAssignment assignment = assignments.poll();
result.put(outputPartitionId, assignment.taskPartition());
assignments.add(new PartitionAssignment(assignment.taskPartition(), assignment.assignedDataSizeInBytes() + outputPartitionSize));
}
Member

I'm lost in this part of the logic. Can you elaborate?

Contributor Author

When HashDistributionSplitAssigner is created, an input size estimate is provided (see Map<PlanNodeId, OutputDataSizeEstimate> outputDataSizeEstimates). Based on the provided estimates, the HashDistributionSplitAssigner assigns output partitions to tasks (to avoid small tasks). In the previous version (with a full barrier) this was done based on the information obtained from ExchangeSourceHandle. However, if speculative execution is allowed, this must be done based on "estimates", as ExchangeSourceHandles may not yet be available.

Member

My question is more like: if we have a fixed number of partitions, shouldn't we simply try to distribute data evenly among the assignments? What's the sense of trying to respect targetPartitionSizeInBytes?

Member

It could work, but you still need the same input size statistics. Otherwise you do not know how many partitions you should group together to be handled by a single task.

Member

@losipiuk: in the final result there is no information about size stats. It's a mapping from outputPartitionId to TaskPartitions. So I'm just more confused now...

Contributor Author

The targetPartitionSizeInBytes is needed to avoid creating tiny partitions. For example, if the total data size is only 1GB, it should be enough to create a single task partition, mapping all the output partitions to a single task.
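
To make that concrete, here is a tiny self-contained rendition of the grouping loop from the diff above, with made-up sizes (the real code works with OutputDataSizeEstimate and TaskPartition; the plain arrays here are only for illustration):

import java.util.PriorityQueue;

final class GroupingIllustration
{
    public static void main(String[] args)
    {
        long targetPartitionSizeInBytes = 4L << 30; // 4GB target
        long[] estimatedOutputPartitionSizes = {3L << 30, 2L << 30, 2L << 30, 1L << 30};

        // queue of {taskIndex, assignedBytes}, ordered by assigned bytes, mirroring the PriorityQueue in the diff
        int nextTaskIndex = 0;
        PriorityQueue<long[]> assignments = new PriorityQueue<>((a, b) -> Long.compare(a[1], b[1]));
        assignments.add(new long[] {nextTaskIndex++, 0});

        for (int outputPartitionId = 0; outputPartitionId < estimatedOutputPartitionSizes.length; outputPartitionId++) {
            long size = estimatedOutputPartitionSizes[outputPartitionId];
            if (assignments.peek()[1] + size > targetPartitionSizeInBytes
                    && assignments.size() < estimatedOutputPartitionSizes.length) {
                assignments.add(new long[] {nextTaskIndex++, 0});
            }
            long[] assignment = assignments.poll();
            System.out.println("output partition " + outputPartitionId + " -> task partition " + assignment[0]);
            assignment[1] += size;
            assignments.add(assignment);
        }
        // prints two task partitions of roughly 4GB each instead of four small ones;
        // with a 1GB total everything would land in a single task partition
    }
}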

Member

What I see is that we output a map from partitionIds to a new TaskPartition(), so I'm not sure we are achieving the purposes stated above. Am I missing something obvious?

I have added the following code to print out the returned result:

Map<Integer, TaskPartition> resultToBePrinted = result.buildOrThrow();
resultToBePrinted.forEach((partitionId, taskPartition) -> {
    System.out.println(partitionId);
    if (taskPartition.isIdAssigned()) {
        System.out.println(taskPartition.getId());
    }
    else {
        System.out.println("Not assigned");
    }
});

I'm seeing:

0
Not assigned
1
Not assigned
2
Not assigned

So I'm not sure this code piece has any effect at all.

Contributor Author

Oh, I see.

TaskPartition is a placeholder. Basically, what this algorithm does is group certain output partitions to be processed by certain task partitions.

For example:

  • Output partitions 1, 2, 3 must be processed by a separate task
  • Output partitions 4, 5 must be processed by a different task
  • Output partitions 6, 7 must be processed by yet another task

However, we are trying to avoid assigning a concrete numeric id to a task at this step. The problem is that the output data size is only an estimate, and in reality not all of the tasks may have data to process.

For example, when reading a bucketed table, what we know is that there are 1000 buckets. So we assign 1000 task partitions, one for each bucket. But then it is possible that data is missing for a certain bucket, which would create a confusing hole in the task ids (you may end up with tasks 1.0.0, 1.1.0, 1.5.0, missing the 1.2.0, 1.3.0, 1.4.0 tasks). Assigning numeric ids lazily allows us to avoid such gaps.
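
A minimal sketch of that lazy-id idea (a hypothetical class, not the actual TaskPartition in this PR): the placeholder only receives a numeric id once a task is actually created for it, so empty buckets never leave gaps in the id sequence.

import java.util.OptionalInt;

final class LazyTaskPartitionSketch
{
    private OptionalInt id = OptionalInt.empty();

    public boolean isIdAssigned()
    {
        return id.isPresent();
    }

    // Called only when a task is actually created for this partition,
    // so ids stay dense even if some output partitions turn out to be empty.
    public int assignId(int nextFreeId)
    {
        if (id.isEmpty()) {
            id = OptionalInt.of(nextFreeId);
        }
        return id.getAsInt();
    }

    public int getId()
    {
        return id.orElseThrow(() -> new IllegalStateException("id is not assigned yet"));
    }
}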

@arhimondr arhimondr force-pushed the event-driven-scheduler branch from d2c7e1e to 4a2e852 Compare October 17, 2022 16:17
@arhimondr
Contributor Author

Updated

Comment on lines +702 to +707
private SubPlan optimizePlan(SubPlan plan)
{
// Re-optimize plan here based on available runtime statistics.
// Fragments changed due to re-optimization as well as their downstream stages are expected to be assigned new fragment ids.
return plan;
}
Member

So this is a TODO item?

Contributor Author

Yeah, basically I just wanted to show where the plan can be mutated. The adaptive planner will come later.

if (e == Event.ABORT) {
return false;
}
if (e == Event.WAKE_UP) {
Member

What's the sense of having the WAKE_UP event?

Contributor Author

This is meant to be used as a generic event to wake up the scheduler (for example, when something happened but no state modification is needed).

Member

Then we don't need to schedule anything in this case? It doesn't feel like we need to.

Contributor Author

This is currently used to notify the scheduler that a node has been acquired. There's no need for any extra information to be passed to the scheduler through the event; we just let the scheduler know that further progress can be made.

Move TableInfo extraction to TableInfo to make it reusable
The new scheduler allows changing the query plan dynamically during
execution, supports speculative execution, and provides a single view
into a query's task queue, allowing a priority to be set for any given task
@arhimondr arhimondr force-pushed the event-driven-scheduler branch from 4a2e852 to 4e09b35 Compare October 20, 2022 17:19
@arhimondr
Contributor Author

Updated

@arhimondr arhimondr merged commit 333b728 into trinodb:master Oct 20, 2022
@arhimondr arhimondr deleted the event-driven-scheduler branch October 20, 2022 20:20
@github-actions github-actions bot added this to the 401 milestone Oct 20, 2022