
Add AsyncWriteExt::write_all_vectored utility #1741

Merged 1 commit into rust-lang:master on Apr 22, 2020

Conversation

@Thomasdezeeuw (Contributor) commented Jul 16, 2019

Since vectored I/O has been stabilised, I was missing this. Two things to note:

  • This takes &mut [IoSlice], where the corresponding call in io::Write takes &[IoSlice]. The mutable reference is needed to advance the buffers inside the IoSlices between partial writes; see the sketch below.
  • This uses some unsafe code; I couldn't find a way around it.
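
To illustrate what that advancing looks like, here is a minimal sketch using the shape the nightly API eventually took (IoSlice::advance_slices under the io_slice_advance feature; the method name shifted while the feature evolved, so treat the names as illustrative):

```rust
#![feature(io_slice_advance)] // nightly-only

use std::io::IoSlice;

fn main() {
    let data = [1u8, 2, 3, 4, 5];
    let mut storage = [IoSlice::new(&data[..2]), IoSlice::new(&data[2..])];
    let mut bufs = &mut storage[..];

    // Mark the first 3 bytes as written: this drops the first slice
    // entirely and trims one byte off the front of the second, which is
    // only possible through a mutable reference to the slice of IoSlices.
    IoSlice::advance_slices(&mut bufs, 3);

    assert_eq!(bufs.len(), 1);
    assert_eq!(&*bufs[0], &data[3..]);
}
```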

TODO:

  • Docs.

@Thomasdezeeuw (Contributor, Author):

@Nemo157 I've created {IoSlice, IoSliceMut}::advance, which does most of the heavy lifting for this change, in rust-lang/rust#62987. However, it's an unstable feature; now that Future is also available on stable, how are nightly features supposed to be handled?
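
One common way to handle this, and judging by the commit message later in this thread the approach this PR ended up taking, is to request the nightly feature only when a dedicated Cargo feature of the crate is enabled. A hedged sketch of a lib.rs; the feature name here is illustrative:

```rust
// Only request the nightly feature when the corresponding Cargo feature
// is enabled, so stable builds are unaffected.
#![cfg_attr(feature = "write-all-vectored", feature(io_slice_advance))]

// Gate the API that needs the nightly feature behind the same Cargo feature.
#[cfg(feature = "write-all-vectored")]
pub mod write_all_vectored {
    // ... the new utility lives here ...
}
```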

@Thomasdezeeuw (Contributor, Author):

@Nemo157 sorry about the delay. I've rebased this on master, updated the code to use the newly created io_slice_advance Rust feature and added documentation.

///
/// [vectored writes]: std::io::Write::write_vectored
///
/// # Notes
@Thomasdezeeuw (Contributor, Author):

Please review this section carefully; I don't know if it clearly states what I mean.

@Nemo157 (Member):

If the future successfully completes, bufs is an empty slice, right?

On failure bufs may have been modified, but it is not guaranteed to match what has actually been written out (I believe the underlying IO doesn't guarantee that it either writes some data or returns an error; it could have written some of the data and then returned an error). That's pretty much equivalent to what write_all guarantees: if it returns an error, some part of the data may have been written, with no way to know how much.

I think this should try to avoid the "undefined" term since that has such strong memory-safety connotations, and the failure cases are relatively well-defined.
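
A hedged sketch of what these semantics mean for a caller (the helper is illustrative, not part of the PR, and assumes futures-util is built with the feature gate discussed below):

```rust
use futures::io::{AsyncWrite, AsyncWriteExt};
use std::io::{self, IoSlice};

// On Ok(()) every byte of every buffer has been written; on Err(_) an
// unknown prefix may already have been written, and `bufs` does not
// reliably reflect how much reached the writer.
async fn send_parts<W: AsyncWrite + Unpin>(w: &mut W, parts: &[&[u8]]) -> io::Result<()> {
    let mut bufs: Vec<IoSlice<'_>> = parts.iter().map(|p| IoSlice::new(p)).collect();
    w.write_all_vectored(&mut bufs).await
}
```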

@Thomasdezeeuw (Contributor, Author):

> If the future successfully completes, bufs is an empty slice, right?

That is correct, but in the future we could optimise this to just return ready if the entire (remaining) buffer is written in a single call, without modifying bufs.

> On failure bufs may have been modified, but it is not guaranteed to match what has actually been written out (I believe the underlying IO doesn't guarantee that it either writes some data or returns an error; it could have written some of the data and then returned an error). That's pretty much equivalent to what write_all guarantees: if it returns an error, some part of the data may have been written, with no way to know how much.

Do you want this to be reflected in the documentation somehow, or leave it as is?

> I think this should try to avoid the "undefined" term since that has such strong memory-safety connotations, and the failure cases are relatively well-defined.

Hence I added the note that it was not about memory safety, but I do understand the concern. Should I replace "undefined" with "unknown" (or something else), or do you want to provide the guarantee that bufs will be empty after it successfully returns (also see my first remark in this comment)?

@Thomasdezeeuw (Contributor, Author):

Let me know if you want me to rebase this on the master branch.

@Thomasdezeeuw (Contributor, Author):

@Nemo157 this PR has fallen out of date, but can I get an OK/not-OK before I spend more time on it?

@Thomasdezeeuw requested a review from Nemo157 on January 6, 2020
@Thomasdezeeuw (Contributor, Author):

@Nemo157 @taiki-e ping :)

@Thomasdezeeuw (Contributor, Author):

@cramertj do you have time to review this?

@cramertj (Member):

Sure, sorry for the delay! Thanks for the ping.

futures-util/Cargo.toml (review thread resolved)
///
/// Different to `io::Write::write_vectored` this takes a *mutable*
/// reference to a slice of `IoSlice`s, not a non-mutable reference, because
/// we need to modify the slice to keep track of the bytes already written.
@cramertj (Member):

Who needs to? The future can store that state internally. Is this for the user to know? IMO we should try to match the API of write_all as closely as possible. Actually, for that matter, what we should be trying to match is the yet-to-exist std::io::Write::write_all_vectored. If there isn't yet an issue for creating that API, would you mind creating one and linking it here? I'd really like to keep these extension traits from diverging from the API in the standard library, but I'm fine landing this here if it's under a feature gate, so long as we make sure to tag that feature gate as unstable / have it require an unstable feature flag in order to compile. Does that make sense?

@Thomasdezeeuw (Contributor, Author):

The problem with writing all the buffers is that we need to deal with partially written IoSlices, for which I wrote IoSlice::advance; it updates the slices internally but needs mutable access. Basically, WriteAllVectored needs ownership of bufs to keep track of which bytes have already been written. Also see the next section in the documentation.

In practice I don't think this will be a problem, as IoSlices are often created just before writing, e.g. see the example. But if you know of a better solution, I'd love to hear it.
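
To make the bookkeeping concrete, here is a stable-Rust sketch over std::io::Write (not the PR's implementation, which uses IoSlice::advance and some unsafe): it tracks progress with an index and an offset, rebuilding the IoSlice list each iteration instead of mutating the slices in place:

```rust
use std::io::{self, IoSlice, Write};

// Sketch of write_all_vectored's bookkeeping; assumes the buffers in
// `bufs` are non-empty (the real implementation also handles empty ones).
fn write_all_vectored_sketch<W: Write>(w: &mut W, bufs: &[&[u8]]) -> io::Result<()> {
    let mut idx = 0; // first buffer that still has unwritten bytes
    let mut off = 0; // how much of bufs[idx] has already been written
    while idx < bufs.len() {
        // Rebuild the IoSlice list from the not-yet-written data.
        let mut slices = vec![IoSlice::new(&bufs[idx][off..])];
        slices.extend(bufs[idx + 1..].iter().map(|b| IoSlice::new(b)));

        let mut n = w.write_vectored(&slices)?;
        if n == 0 {
            return Err(io::ErrorKind::WriteZero.into());
        }
        // Advance (idx, off) past the n bytes that were just written.
        while idx < bufs.len() && n >= bufs[idx].len() - off {
            n -= bufs[idx].len() - off;
            idx += 1;
            off = 0;
        }
        off += n;
    }
    Ok(())
}
```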

@@ -0,0 +1,200 @@
use futures_core::future::Future;
@cramertj (Member):

Implementation and tests here look good to me, thanks!

/// # Examples
///
/// ```
/// # futures::executor::block_on(async {
@cramertj (Member):

Thanks for including this example!
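
For reference, a self-contained version of the example as it might run against the final API (assumes futures-util built with the write-all-vectored feature, on a nightly toolchain):

```rust
use futures::executor::block_on;
use futures::io::{AsyncWriteExt, Cursor};
use std::io::IoSlice;

fn main() {
    block_on(async {
        let mut writer = Cursor::new(Vec::new());
        let mut bufs = [IoSlice::new(b"hello"), IoSlice::new(b" world")];

        // Keeps writing until every buffer has been fully written.
        writer.write_all_vectored(&mut bufs).await.unwrap();

        assert_eq!(writer.into_inner(), b"hello world");
    });
}
```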

@Thomasdezeeuw (Contributor, Author):

@cramertj I've rebased this on master and changed the docs to match io::Write::write_all_vectored in the std lib (as merged in #70612).

@Thomasdezeeuw (Contributor, Author):

An unrelated test, select_on_non_unpin_size, failed on the CI. @cramertj should I just increase the expected size in the test? Or is this being tracked elsewhere?

@Nemo157 (Member) commented Apr 8, 2020

Fixed in #2119, though that's still failing because of a nightly bug in the rustdoc check.

This adds a new feature to futures-util, named write_all_vectored, to
enable the utility, since it requires the unstable io_slice_advance
Rust feature.

This matches the same API found in io::Write::write_all_vectored in the
std lib.
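
For downstream users, enabling the utility would look roughly like this; the commit message spells the feature write_all_vectored, while the released crate uses the hyphenated write-all-vectored, so check futures-util's Cargo.toml for the exact name:

```toml
# Requires a nightly toolchain, since the feature turns on io_slice_advance.
[dependencies]
futures-util = { version = "0.3", features = ["write-all-vectored"] }
```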
@Thomasdezeeuw (Contributor, Author):

Now that #2119 is merged I've rebased this.

@Thomasdezeeuw (Contributor, Author):

Friendly ping @cramertj, I don't want to have to rebase this again :)

@cramertj (Member):

Thanks for the ping, and for the changes! Sorry for the delay on my end.

@cramertj merged commit d7d8216 into rust-lang:master on Apr 22, 2020
@Thomasdezeeuw deleted the write_all_vectored branch on April 23, 2020