[WIP] Async/Await: No futures executor #166

Closed · dbcfd wants to merge 5 commits from the no-futures-executor branch

Conversation

@dbcfd (Contributor) commented Oct 24, 2019

This builds on the work of #143

The futures executor is meant mainly for testing, not production usage. This PR makes things more async-friendly, getting rid of additional spawns and blocking. Where necessary, tokio is used for execution and blocking.
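
As a rough sketch of the pattern being replaced (the names below are invented for illustration, not code from this repository): a dedicated thread driving its own tiny executor just to push items into an async channel, versus awaiting the send on the application's existing runtime.

```rust
use futures::{channel::mpsc, executor, SinkExt};

// Before: a dedicated thread drives a one-off executor for each send.
fn poll_loop_blocking(mut tx: mpsc::Sender<String>) {
    loop {
        let msg = poll_broker(); // stand-in for a blocking librdkafka poll
        if executor::block_on(tx.send(msg)).is_err() {
            break; // receiver dropped
        }
    }
}

// After: the send is awaited on whatever executor the application already
// runs (e.g. tokio), so no second executor is involved. (In real code the
// blocking poll itself would also need handling; see the discussion below.)
async fn poll_loop_async(mut tx: mpsc::Sender<String>) {
    loop {
        let msg = poll_broker();
        if tx.send(msg).await.is_err() {
            break;
        }
    }
}

fn poll_broker() -> String {
    "message".to_string()
}
```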

@dbcfd changed the title from "No futures executor" to "[WIP] No futures executor" on Oct 24, 2019
@dbcfd force-pushed the no-futures-executor branch from 6923c71 to 86fe575 on October 24, 2019 at 21:11
@dbcfd changed the title from "[WIP] No futures executor" to "[WIP] Async/Await: No futures executor" on Oct 27, 2019
Commit: "Switch from futures executor (which is only for testing) to either pure implementations or usages of tokio."
@dbcfd force-pushed the no-futures-executor branch from a7744b9 to 96009ad on November 26, 2019 at 16:48
@benesch (Collaborator) commented Dec 5, 2019

Hi @dbcfd, thanks for the PR, and apologies for the long response time! #143 has landed in a slightly modified form as #187. I adopted some of the cleanup you made in this PR as well.

I'm afraid I'm not sure what to do with this PR, or if it's even still applicable. The `tokio-executor` crate is no more, and the one place we use `futures::executor::block_on` does not strike me as problematic, since it's a very simple future that just sends over a channel. I'd hate to force a Tokio dependency on users of this crate. I'm also not clear on why you had to refactor `MessageStream` so substantially?

Anyway, let me know if there are actual performance problems with the current version of master, and if so, we can discuss using the Tokio runtime instead!

@dbcfd (Contributor, Author) commented Dec 5, 2019

@benesch There's actually a runtime behind that `futures::executor`, albeit a very simple one. It will then be contending with the other threads that were spawned, rather than being interleaved with the actual executor in use.

This motivates the refactor of `MessageStream`, which lets it implement `Stream` directly instead of sending over a channel between a spawned thread running a different executor and whatever is processing the receiver side. You may also miss wakeups on the channel in this case.
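
A minimal sketch of what implementing `Stream` directly could look like; the consumer trait here is hypothetical, standing in for a non-blocking poll of the underlying queue:

```rust
use std::pin::Pin;
use std::task::{Context, Poll, Waker};
use futures::Stream;

// Hypothetical stand-in for a consumer with a non-blocking poll that can
// register a waker for "queue became non-empty" notifications.
trait PollableConsumer {
    type Message;
    fn try_next(&mut self, waker: &Waker) -> Option<Self::Message>;
}

// The stream polls the consumer itself, instead of reading from a channel
// fed by a thread that runs a different executor.
struct MessageStream<C> {
    consumer: C,
}

impl<C: PollableConsumer + Unpin> Stream for MessageStream<C> {
    type Item = C::Message;

    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        // If nothing is ready, the waker registered here is the one the
        // consumer wakes, so a wakeup cannot be lost between two executors.
        match self.get_mut().consumer.try_next(cx.waker()) {
            Some(msg) => Poll::Ready(Some(msg)),
            None => Poll::Pending,
        }
    }
}
```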

Since the poll timeout is only 100ms, we could probably avoid using tokio's blocking facilities.

@dbcfd (Contributor, Author) commented Dec 5, 2019

Now that your PR has landed, I can take another stab at this one, with fewer changes and without either the futures executor or tokio's `block_on`.

@benesch (Collaborator) commented Dec 5, 2019

> There's actually a runtime behind that `futures::executor`, albeit a very simple one. It will then be contending with the other threads that were spawned, rather than being interleaved with the actual executor in use.

I'm not sure what you mean here? It should spawn a single-threaded executor on the poll thread itself; essentially we're just performing a blocking send on the channel using the thread we already have. Replacing that by instead blocking the task doesn't seem obviously better? (Happy to be proven wrong by benchmarks here, though!)
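
For concreteness, "blocking the task" would look something like the following sketch; `blocking_poll` is a made-up stand-in for the librdkafka poll call, and `spawn_blocking` assumes a tokio runtime is present:

```rust
// Dedicated-thread approach (current master): a plain thread owns the
// blocking poll, and a blocking channel send hands results to async code.
//
// Task-blocking approach, as this PR leans toward: move the poll onto
// tokio's blocking thread pool and await it, with no dedicated thread.
async fn next_message() -> Option<String> {
    tokio::task::spawn_blocking(blocking_poll)
        .await
        .expect("poll task panicked")
}

fn blocking_poll() -> Option<String> {
    // Stand-in for librdkafka's poll with a timeout.
    None
}
```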

I agree that the extra thread is unfortunate, though. One thing that I've wanted to try is calling `poll` with a 0 timeout (requesting an immediate return). If `poll` returns a message, then great, we return that immediately. If it doesn't, that means the queue is empty, so we can return `Pending` and use the `message_queue_nonempty_callback` to signal the waker when data is ready. This last bit is fuzzy, though, as I haven't thought about it in great detail. But conceptually it seems like it should work!
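
A rough sketch of that idea; `poll_zero_timeout` and `set_nonempty_callback` are invented names approximating the rdkafka APIs involved, not the crate's real interface:

```rust
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};
use futures::Stream;

// Hypothetical stand-ins for the consumer and message types.
struct Consumer;
struct Message;

impl Consumer {
    fn poll_zero_timeout(&self) -> Option<Message> {
        None // immediate return; Some(msg) when the queue has data
    }
    fn set_nonempty_callback(&self, _f: impl Fn() + Send + 'static) {}
}

struct ZeroTimeoutStream {
    consumer: Consumer,
    waker: Arc<Mutex<Option<Waker>>>,
}

impl ZeroTimeoutStream {
    fn new(consumer: Consumer) -> Self {
        let waker: Arc<Mutex<Option<Waker>>> = Arc::new(Mutex::new(None));
        let cb_waker = waker.clone();
        // When librdkafka reports the queue went non-empty, wake the task
        // that last returned Pending.
        consumer.set_nonempty_callback(move || {
            if let Some(w) = cb_waker.lock().unwrap().take() {
                w.wake();
            }
        });
        ZeroTimeoutStream { consumer, waker }
    }
}

impl Stream for ZeroTimeoutStream {
    type Item = Message;

    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        let this = self.get_mut();
        // 1. Poll with a zero timeout, requesting an immediate return.
        if let Some(msg) = this.consumer.poll_zero_timeout() {
            return Poll::Ready(Some(msg));
        }
        // 2. Queue looked empty: register the waker for the callback.
        *this.waker.lock().unwrap() = Some(cx.waker().clone());
        // 3. Poll again to close the race where a message arrived between
        //    steps 1 and 2; otherwise that wakeup could be missed.
        match this.consumer.poll_zero_timeout() {
            Some(msg) => Poll::Ready(Some(msg)),
            None => Poll::Pending,
        }
    }
}
```

The re-poll in step 3 is the fuzzy part alluded to above: without it, a message arriving between the empty poll and the waker registration would never trigger a wakeup.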

Thanks for looking into this, by the way!

@benesch (Collaborator) commented Dec 5, 2019

And if you do wind up rebasing this, it'll be easier if you can wait a few minutes and rebase on top of #188 too!

@dbcfd (Contributor, Author) commented Dec 11, 2019

Closing in favor of #192.

@dbcfd closed this on Dec 11, 2019