
Prioritize non-speculative tasks in scheduler and node allocator #17465

Merged
losipiuk merged 3 commits into trinodb:master on May 16, 2023

Conversation

losipiuk
Member

Release notes

(x) This is not user-visible or docs only and no release notes are required.
( ) Release notes are required, please propose a release note for me.
( ) Release notes are required, with the following suggested text:

losipiuk added 3 commits May 11, 2023 18:29
The allocatedMemory map can easily be computed from
fulfilledAcquires in the BinPackingSimulation constructor. This does
not add a significant amount of computation, as we iterate over
fulfilledAcquires there anyway.

Removing allocatedMemory simplifies the state of
BinPackingNodeAllocatorService, making it less prone to
concurrency-related issues.
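
A minimal sketch of this refactoring, under assumed names (FulfilledAcquire, nodeIdentifier, and memoryLeaseBytes are illustrative stand-ins, not the actual Trino types): the allocated-memory map becomes simulation-local state, derived in the single pass the constructor already makes over the fulfilled acquires.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

class BinPackingSimulationSketch
{
    // illustrative stand-in for the fulfilled-acquire bookkeeping
    record FulfilledAcquire(String nodeIdentifier, long memoryLeaseBytes) {}

    private final Map<String, Long> allocatedMemory = new HashMap<>();

    BinPackingSimulationSketch(List<FulfilledAcquire> fulfilledAcquires)
    {
        // one pass over fulfilledAcquires, which the constructor walks anyway;
        // the map lives only for the duration of the simulation, so there is
        // no long-lived mutable state for concurrent callers to race on
        for (FulfilledAcquire acquire : fulfilledAcquires) {
            allocatedMemory.merge(acquire.nodeIdentifier(), acquire.memoryLeaseBytes(), Long::sum);
        }
    }
}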
@cla-bot cla-bot bot added the cla-signed label May 11, 2023
@losipiuk losipiuk requested review from arhimondr and linzebing May 11, 2023 16:29
@losipiuk
Member Author

CI: #17158

IndexedPriorityQueue.Prioritized<ScheduledTask> task = queue.peekPrioritized();
checkState(task != null, "queue is empty");
// negate priority to reverse the operation we do in addOrUpdate
return new PrioritizedScheduledTask(task.getValue(), toIntExact(-task.getPriority()));
Member

Can we change IndexedPriorityQueue to sort things in reversed order? The fact that we always need to negate the priority here feels really error-prone.

Member Author

Yeah, we can. Will add a commit.

Member Author

I will send a separate PR.
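
As an aside, a hedged sketch of what a reversed-ordering queue could look like (OrderedQueueSketch, PriorityOrdering, and Entry are hypothetical names, not the actual follow-up change): the sort direction is chosen once at construction, so callers never negate priorities on insert or on peek.

import java.util.Comparator;
import java.util.PriorityQueue;

class OrderedQueueSketch<T>
{
    enum PriorityOrdering { LOW_TO_HIGH, HIGH_TO_LOW }

    record Entry<V>(V value, long priority) {}

    private final PriorityQueue<Entry<T>> queue;

    OrderedQueueSketch(PriorityOrdering ordering)
    {
        Comparator<Entry<T>> byPriority = Comparator.comparingLong(Entry::priority);
        // the ordering is fixed here, once, instead of being re-derived
        // by every caller through manual negation
        this.queue = new PriorityQueue<>(ordering == PriorityOrdering.HIGH_TO_LOW ? byPriority.reversed() : byPriority);
    }

    void add(T value, long priority)
    {
        // priorities are stored exactly as given; no negation on the way in
        queue.add(new Entry<>(value, priority));
    }

    Entry<T> peek()
    {
        // and none on the way out
        return queue.peek();
    }
}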

Comment on lines +662 to +671
NodeAllocator.NodeLease acquireSpeculative1 = nodeAllocator.acquire(REQ_NONE, DataSize.of(64, GIGABYTE), true);
assertAcquired(acquireSpeculative1, NODE_1);
NodeAllocator.NodeLease acquireSpeculative2 = nodeAllocator.acquire(REQ_NONE, DataSize.of(32, GIGABYTE), true);
assertAcquired(acquireSpeculative2, NODE_2);

// non-speculative tasks should still get a node
NodeAllocator.NodeLease acquireNonSpeculative1 = nodeAllocator.acquire(REQ_NONE, DataSize.of(64, GIGABYTE), false);
assertAcquired(acquireNonSpeculative1, NODE_2);
NodeAllocator.NodeLease acquireNonSpeculative2 = nodeAllocator.acquire(REQ_NONE, DataSize.of(32, GIGABYTE), false);
assertAcquired(acquireNonSpeculative2, NODE_1);
Member

For my understanding: the two nodes have a total of 128GB, but now we have 192GB scheduled (96GB speculative, 96GB non-speculative), right? How does this work out later? Will the scheduler preempt speculative tasks in favor of non-speculative tasks?

Member Author

> How does this work out later? Will the scheduler preempt speculative tasks in favor of non-speculative tasks?

They will work side by side until we run out of memory on a node. In that case, the speculative task will be killed by the low-memory killer. If they fit in the available memory (a task does not always use all of its reserved memory), both should complete successfully.
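
To make that concrete, a hedged sketch of kill-candidate selection that prefers speculative tasks (TaskInfo and chooseTaskToKill are hypothetical names, not Trino's actual low-memory-killer API): when a node runs out of memory, the largest speculative reservation goes first, and non-speculative tasks are considered only if no speculative task is left.

import java.util.Comparator;
import java.util.List;
import java.util.Optional;

class LowMemoryKillerSketch
{
    // illustrative task descriptor
    record TaskInfo(String taskId, boolean speculative, long reservedBytes) {}

    Optional<TaskInfo> chooseTaskToKill(List<TaskInfo> tasksOnNode)
    {
        // prefer speculative tasks; among those, kill the largest reservation first
        return tasksOnNode.stream()
                .filter(TaskInfo::speculative)
                .max(Comparator.comparingLong(TaskInfo::reservedBytes))
                // fall back to non-speculative tasks only when no speculative task remains
                .or(() -> tasksOnNode.stream().max(Comparator.comparingLong(TaskInfo::reservedBytes)));
    }
}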

@losipiuk losipiuk merged commit 1eba1f3 into trinodb:master May 16, 2023
@github-actions github-actions bot added this to the 418 milestone May 16, 2023