
Avoid calling scheduler when the executor cannot accept new tasks #378

Merged (4 commits) on Oct 17, 2022

Conversation

Dandandan
Contributor

@Dandandan Dandandan commented Oct 17, 2022

Which issue does this PR close?

Closes #377

Rationale for this change

Improves performance, especially when running with a limited number of task slots, by avoiding a longer sleep between polling runs.

Running with 1 executor and task slots=1:

Timings (ms):

| Q | Old | New |
|---|----------|----------|
| 1 | 15818.86 | 11097.58 |
| 2 | 50434.39 | 27457.48 |
| 3 | 19201.45 | 12227.47 |
| 4 | 12864.86 | 13034.73 |
| 5 | 39042.86 | 24073.82 |
| 6 | 3823.15  | 2750.53  |
| 7 | 59200.60 | 36749.42 |
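The core idea behind these numbers: when the executor's task slots are all occupied, there is no point calling the scheduler, so the loop only backs off briefly (1 ms) and re-checks. The following is a minimal, self-contained sketch of that decision; `Executor`, `can_accept_task`, and `should_poll_scheduler` are illustrative names, not the actual Ballista types:

```rust
use std::time::Duration;

/// Illustrative executor state (not the actual Ballista struct).
struct Executor {
    task_slots: usize,
    running_tasks: usize,
}

impl Executor {
    /// An executor can accept a task only if it has a free slot.
    fn can_accept_task(&self) -> bool {
        self.running_tasks < self.task_slots
    }
}

/// One iteration of a simplified poll loop: returns true if we would
/// call the scheduler, false if we would just sleep briefly instead.
fn should_poll_scheduler(exec: &Executor) -> bool {
    exec.can_accept_task()
}

fn main() {
    let busy = Executor { task_slots: 1, running_tasks: 1 };
    let idle = Executor { task_slots: 1, running_tasks: 0 };

    if !should_poll_scheduler(&busy) {
        // In the real (async) loop this would be:
        // tokio::time::sleep(Duration::from_millis(1)).await;
        std::thread::sleep(Duration::from_millis(1));
    }

    assert!(!should_poll_scheduler(&busy));
    assert!(should_poll_scheduler(&idle));
    println!("busy polls: {}, idle polls: {}",
             should_poll_scheduler(&busy),
             should_poll_scheduler(&idle));
}
```

With `task_slots = 1` (the benchmark configuration above), the executor is busy most of the time, so skipping the scheduler call whenever the single slot is occupied is exactly the case this PR optimizes.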

What changes are included in this PR?

Are there any user-facing changes?

@Dandandan Dandandan marked this pull request as draft October 17, 2022 16:54
@Dandandan Dandandan marked this pull request as ready for review October 17, 2022 18:28
@Dandandan Dandandan requested a review from andygrove October 17, 2022 18:30

```rust
// Don't poll for work if we cannot accept any tasks
if !can_accept_task {
    tokio::time::sleep(Duration::from_millis(1)).await;
}
```
Member

Is there any reasoning behind picking 1 ms versus any other value? Would it be less overhead to sleep for longer here, maybe 10 ms? Should we make this configurable?
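If the backoff were made configurable, as suggested, it could be a simple knob on the executor's settings. A hypothetical sketch (the `ExecutorConfig` type and `poll_backoff` field are illustrative, not an actual Ballista option):

```rust
use std::time::Duration;

/// Hypothetical configuration knob: how long to back off when no
/// task slots are free before re-checking.
struct ExecutorConfig {
    poll_backoff: Duration,
}

impl Default for ExecutorConfig {
    fn default() -> Self {
        // 1 ms mirrors the value chosen in this PR.
        Self { poll_backoff: Duration::from_millis(1) }
    }
}

fn main() {
    let cfg = ExecutorConfig::default();
    assert_eq!(cfg.poll_backoff, Duration::from_millis(1));

    // An operator could trade latency for lower overhead, e.g. 10 ms.
    let relaxed = ExecutorConfig { poll_backoff: Duration::from_millis(10) };
    assert!(relaxed.poll_backoff > cfg.poll_backoff);
    println!("default backoff: {:?}", cfg.poll_backoff);
}
```

The trade-off is the one named in the question: a longer backoff means fewer wake-ups while busy, but higher latency before a freed slot starts polling again.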

Contributor Author

I think checking the value is really cheap (e.g. compared to gRPC requests), so I used the minimum to reduce latency as much as possible while a job is running.

In the future, I wonder if we could rewrite this loop without the sleep, using some lock or channel instead 🤔, and use resources more efficiently...
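The idea of replacing the sleep loop with a notification primitive could look like the sketch below, using a standard-library `Condvar` (an async equivalent would be something like `tokio::sync::Notify`). This is purely illustrative and not Ballista code: the loop blocks until a task slot is freed instead of waking every millisecond:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

fn main() {
    // Shared count of free task slots, paired with a condition variable.
    let slots = Arc::new((Mutex::new(0usize), Condvar::new()));
    let slots_for_task = Arc::clone(&slots);

    // Simulates a running task finishing and freeing its slot.
    let task = thread::spawn(move || {
        let (lock, cvar) = &*slots_for_task;
        *lock.lock().unwrap() += 1;
        cvar.notify_one();
    });

    // The "poll loop": instead of sleeping on a timer, block until
    // notified that a slot is free, then poll the scheduler.
    let (lock, cvar) = &*slots;
    let mut free = lock.lock().unwrap();
    while *free == 0 {
        free = cvar.wait(free).unwrap();
    }
    assert!(*free > 0);
    println!("slot available, would poll scheduler now");

    drop(free);
    task.join().unwrap();
}
```

Compared to the 1 ms sleep, this wakes the loop exactly when a slot frees up: no polling overhead while fully busy, and no added latency when capacity returns.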

Member

@andygrove andygrove left a comment

LGTM. Thanks @Dandandan
