Proposal: Public APIs for C# 8 async streams #27547
Comments
would be nice if you include some sample code for the usage. |
e.g. (subject to decisions around language syntax):

static async IAsyncEnumerable<TResult> Zip<TFirst, TSecond, TResult>(
    IAsyncEnumerable<TFirst> first, IAsyncEnumerable<TSecond> second, Func<TFirst, TSecond, TResult> resultSelector)
{
using await (IAsyncEnumerator<TFirst> e1 = first.GetAsyncEnumerator())
using await (IAsyncEnumerator<TSecond> e2 = second.GetAsyncEnumerator())
{
while (await e1.MoveNextAsync() && await e2.MoveNextAsync())
yield return resultSelector(e1.Current, e2.Current);
}
} |
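For illustration, here is roughly how a consumer could use such a Zip operator; first and second are placeholder sources, and C# 8 ultimately spells the loop await foreach rather than the earlier foreach await strawman, so this is a sketch rather than final syntax:

static async Task PrintZippedAsync(IAsyncEnumerable<int> first, IAsyncEnumerable<string> second)
{
    // Consume the zipped sequence; the compiler handles MoveNextAsync/Current/DisposeAsync for us.
    await foreach (string line in Zip(first, second, (a, b) => $"{a}: {b}"))
    {
        Console.WriteLine(line);
    }
}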
Too late to include these APIs in .NET Standard 2.1? |
@stephentoub Did we determine what was happening with Ix.NET Async from dotnet/reactive? Especially with the Impl here: Are you taking that code and putting it here in corefx? What about the existing package, etc? We should likely have a discussion on this? |
When we've (you, me, @MadsTorgersen, etc.) talked about this previously, my understanding of the plan was that it would continue to live in reactive, and it would be updated based on the actual APIs shipping in the platform. I don't believe anything has changed there. Has it? |
Yes. Particularly because the associated language feature isn't complete yet, and isn't intended to be complete until the .NET Core 3.0 time frame. Need to ship the API + language feature at the same time so they can evolve together. |
@stephentoub That's fine with me, it just wasn't clear if that was still the case since it has been a while. Once the core interfaces/compiler work is somewhere we can pull from a private feed, we can update that branch to use that interface and match the specs. Also happy to give direct commit access to any member on the team that needs it. |
Thanks, @onovotny. @MadsTorgersen, this is all still correct, right? |
@stephentoub shared a code sample above using high-level primitives.

public class C
{
public static async System.Collections.Generic.IAsyncEnumerable<int> M()
{
await System.Threading.Tasks.Task.CompletedTask;
yield return 4;
}
} |
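For context on the questions that follow, a manual consumption of the iterator above would look roughly like the following sketch (foreach await / the eventual await foreach performs this expansion for you; this is illustrative, not the compiler's literal output):

static async Task ConsumeAsync()
{
    IAsyncEnumerator<int> e = C.M().GetAsyncEnumerator();
    try
    {
        // Never overlap calls: consume each MoveNextAsync ValueTask exactly once before the next call.
        while (await e.MoveNextAsync())
        {
            Console.WriteLine(e.Current);
        }
    }
    finally
    {
        await e.DisposeAsync();
    }
}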
I have a couple of questions:
|
|
Hmm. I had it in my head that you couldn't modify/augment APIs of any types that had shipped in such assemblies, but I hadn't internalized that you couldn't add additional unrelated types to them, either. What's the cause of that? @weshaggard, @terrajobst, is that actually the case such that if we wanted to go partial OOB here we'd need to introduce a new assembly? |
You need to expect that code will run synchronously as part of the completion of the returned ValueTask.
Yes. An implementation of the interfaces is not guaranteed to be thread-safe, and most will not be. So you should not make a second asynchronous call on the object while a first is still in progress.
You can't:
You can't. It'd be part of the enumerable, or else you can choose to cancel between awaits on your own. The alternative is to take a CancellationToken.
The cancellation would be built into the source. Any LINQ operations over it would implicitly inherit that as part of their awaiting on that source. If you wanted to further inject cancellation into individual query operators, then those query operators should accept a token as well, e.g. There's a whole discussion of cancellation at https://github.com/dotnet/csharplang/blob/master/proposals/async-streams.md#cancellation.
It's never mandatory to call
Yes, it's valid (though uncommon) to do so. With |
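To make the point above about cancellation flowing through operators concrete, here is a minimal sketch of a query operator that accepts its own token in addition to whatever token the source was built with; the operator name and shape are illustrative, not part of the proposal:

static async IAsyncEnumerable<TResult> SelectAwait<TSource, TResult>(
    IAsyncEnumerable<TSource> source,
    Func<TSource, CancellationToken, ValueTask<TResult>> selector,
    CancellationToken cancellationToken = default)
{
    await foreach (TSource item in source)
    {
        // Cancellation injected at this operator, independent of whatever the source itself embeds.
        cancellationToken.ThrowIfCancellationRequested();
        yield return await selector(item, cancellationToken);
    }
}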
The OOB would need to target a netstandard. If it does not target netstandard, it is platform-specific and we do not need unified type identity: it can be one type identity for netfx and a different one for netcoreapp, etc. You can ship an OOB like that, but it has limited value. If the OOB targets netstandard, it had better work on all platforms that support netstandard, including .NET Core 2.1 where this shipped inbox. The problem is which implementation you are going to pick when somebody references the OOB on .NET Core 2.1. It cannot be the inbox implementation because it is missing the new methods; and it cannot be the OOB implementation because it would introduce a second copy of the inbox types. |
Thanks, @jkotas. That makes sense. We've made a bit of a mess, haven't we ;) Ok, so if we decide we need to go OOB for platforms other than .NET Core 3, we'll need to introduce a new assembly. |
@stephentoub Shot in the dark, and subject to API/design reviews anyway, but @bartdesmet split up IxAsync into separate packages. Perhaps that's a good place for those interfaces? |
Can you clarify what you mean? The interfaces are going to ship in .NET Core, not as part of an external library. If you mean if we decide to ship something OOB, sure, we could consider using such a name for an assembly. Though OOB isn't the main plan right now; I only mentioned it as an off-hand thing... the delivery vehicle here is .NET Core 3. |
@stephentoub I guess I mean that the assembly could/should be part of .NET Core. Ix Async is in a grey area already and EF Core depends on it today. |
Thanks. I have a question regarding the |
|
Does that mean that the interfaces will be shipped in a separate package first and then in .NET Core 3.0, a bit like how ValueTuple was? |
We have found that trying to patch existing runtimes with new core features via a NuGet package, as we did with ValueTuple, always results in a sub-par experience. We have tried that many times. Based on the discussions about this, I believe we are going to prioritize the long-term sustainability of the platform over patching existing runtimes with new core features in creative ways. For this feature, that would mean: Ship async streams first in .NET Core 3.0 only, in their natural place, without introducing a special little assembly for them. You want to be on .NET Core 3.0 to get the first-class async stream experience. Once it ships in .NET Core, there is an option to ship a best-effort standalone NuGet package to provide async streams for earlier .NET versions. This package could be shipped from corefx or even by the community (like https://www.nuget.org/packages/AsyncBridge/). You may use this NuGet package if you desperately need async streams on .NET Framework or earlier .NET Core versions and can live with the sub-par experience it provides. |
I remember when async was first introduced and there was the Microsoft.Bcl.Async package which did a similar job to AsyncBridge. Perhaps the types could be shipped with Ix.Async for those lower "unsupported" TFMs? The community will want to fill the gap, so this seems like a sensible home for it. |
I have seen many hard-core optimizations around tasks and await on this issue tracker. It seems that over time performance has become a significant priority. Maybe it is prudent to skip directly to the fastest design possible. This should be seen as low-level machinery that is only touched by experts. Normal users use library functions and language support. Similarly, it is very rare that normal C# users must touch enumerators manually. I think it is right that performance was prioritized so heavily in all the other measures described. |
I understand the position; this was my original stance as well. I'm the one who suggested the alternative of WaitForNextAsync+TryGetNext, and then I'm also the one who recently re-raised the issue and suggested we might want to revert back to MoveNextAsync+Current, even after @jcouv had mostly implemented the WaitForNextAsync+TryGetNext approach in the compiler. There are several concerns with the alternative. First, from a theoretical perspective, I have a general problem with APIs that launch asynchronous work but don't give you back a handle to it, yet that's exactly what TryGetNext can do. This means you invoke TryGetNext, it may return false, and if it does, it's possible there's asynchronous work now happening in the background, and thus you must call WaitForNextAsync again in order to get the promise that represents the eventual completion of that work. It adds non-trivially to the complexity. Second, there's something very nice about maintaining direct correspondence with the existing synchronous interface, which is something that's been around for a long time and that many people are familiar with. Third, it's easy to get wrong. Yes, we expect the majority of consumption to be via foreach await, but there are absolutely times when you need to drop down to the interfaces. Just as an example, I found one of the simplest LINQ implementations using MoveNext+Current and looked at what it would take to implement with both approaches. For reference, here's the synchronous implementation:

private static IEnumerable<TResult> ZipIterator<TFirst, TSecond, TResult>(IEnumerable<TFirst> first, IEnumerable<TSecond> second, Func<TFirst, TSecond, TResult> resultSelector)
{
using (IEnumerator<TFirst> e1 = first.GetEnumerator())
using (IEnumerator<TSecond> e2 = second.GetEnumerator())
{
while (e1.MoveNext() && e2.MoveNext())
yield return resultSelector(e1.Current, e2.Current);
}
}

Converting that to use the MoveNextAsync+Current API is trivial (to the point where it could likely be easily automated):

private static async IAsyncEnumerable<TResult> ZipIterator<TFirst, TSecond, TResult>(IAsyncEnumerable<TFirst> first, IAsyncEnumerable<TSecond> second, Func<TFirst, TSecond, TResult> resultSelector)
{
using await (IAsyncEnumerator<TFirst> e1 = first.GetAsyncEnumerator())
using await (IAsyncEnumerator<TSecond> e2 = second.GetAsyncEnumerator())
{
while (await e1.MoveNextAsync() && await e2.MoveNextAsync())
yield return resultSelector(e1.Current, e2.Current);
}
}

Not so much with the alternative. First, we can take an approach where we replace MoveNextAsync with WaitForNextAsync+TryGetNext, e.g.

private static async IAsyncEnumerable<TResult> ZipIterator<TFirst, TSecond, TResult>(IAsyncEnumerable<TFirst> first, IAsyncEnumerable<TSecond> second, Func<TFirst, TSecond, TResult> resultSelector)
{
using await (IAsyncEnumerator<TFirst> e1 = first.GetAsyncEnumerator())
using await (IAsyncEnumerator<TSecond> e2 = second.GetAsyncEnumerator())
{
while (true)
{
if (!await e1.WaitForNextAsync()) break;
TFirst currentFirst = e1.TryGetNext(out bool success);
if (!success) break;
if (!await e2.WaitForNextAsync()) break;
TSecond currentSecond = e2.TryGetNext(out success);
if (!success) break;
yield return resultSelector(currentFirst, currentSecond);
}
}
}

This works, but it's more complicated, isn't as easily proven correct, and isn't any more efficient than the MoveNextAsync+Current approach, so the extra complexity isn't buying us anything. The benefit of WaitForNextAsync+TryGetNext is it allows us to have an inner loop using just TryGetNext so that we can have one interface call rather than two when items are yielded synchronously. So, what if we wanted to get those benefits? We could try what feels like a natural translation:

private static async IAsyncEnumerable<TResult> ZipIterator<TFirst, TSecond, TResult>(IAsyncEnumerable<TFirst> first, IAsyncEnumerable<TSecond> second, Func<TFirst, TSecond, TResult> resultSelector)
{
using await (IAsyncEnumerator<TFirst> e1 = first.GetAsyncEnumerator())
using await (IAsyncEnumerator<TSecond> e2 = second.GetAsyncEnumerator())
{
while (true)
{
if (!await e1.WaitForNextAsync()) break;
if (!await e2.WaitForNextAsync()) break;
while (true)
{
TFirst currentFirst = e1.TryGetNext(out bool success);
if (!success) break;
TSecond currentSecond = e2.TryGetNext(out success);
if (!success) break;
yield return resultSelector(currentFirst, currentSecond);
}
}
}
}

but this is actually buggy, which may or may not be immediately obvious: if e1.TryGetNext returns true but then e2.TryGetNext returns false, we will have consumed an element from e1 and not from e2, such that subsequent retrievals will be out of sync between the two enumerables. To fix that while still retaining the potential for one interface call per element, we have to get non-trivially more complex:

private static async IAsyncEnumerable<TResult> ZipIterator<TFirst, TSecond, TResult>(IAsyncEnumerable<TFirst> first, IAsyncEnumerable<TSecond> second, Func<TFirst, TSecond, TResult> resultSelector)
{
using await (IAsyncEnumerator<TFirst> e1 = first.GetAsyncEnumerator())
using await (IAsyncEnumerator<TSecond> e2 = second.GetAsyncEnumerator())
{
while (true)
{
if (!await e1.WaitForNextAsync()) break;
if (!await e2.WaitForNextAsync()) break;
while (true)
{
TFirst currentFirst = e1.TryGetNext(out bool success);
if (!success) break;
TSecond currentSecond;
while (true)
{
currentSecond = e2.TryGetNext(out success);
if (success) break;
if (!await e2.WaitForNextAsync()) yield break;
}
yield return resultSelector(currentFirst, currentSecond);
}
}
}
}

So, doable, but really easy to get wrong, and difficult to prove right. This relies on being able to call WaitForNextAsync multiple times without intervening TryGetNexts and having it advance only once: that would be a documented guarantee, but it's yet another thing you need to understand and really internalize. Fourth, similar complexity exists if you're manually implementing an IAsyncEnumerator. Finally, the perf benefits are there (at most saving one interface call per element), but they generally pale in comparison to any kind of I/O being done. You can demonstrate the perf benefits with microbenchmarks... it's much harder to demonstrate with anything real in an app. I agree it's painful to leave any perf potential on the table. Part of me doesn't like the other part of me that's pushing for the simple approach. But I also believe the advantages of the simple approach outweigh the cons. And if we find that there really are important workloads that would benefit from the advanced interfaces, we've convinced ourselves that they can be introduced in a light-up fashion that would enable those workloads to retain 90% of the benefits. |
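To ground the "manually implementing" point above, here is a minimal hand-written enumerator using the MoveNextAsync+Current shape; this is a sketch against the eventually-shipped interface shapes, not code from this thread, and the WaitForNextAsync+TryGetNext equivalent would additionally need the buffering rules discussed above:

internal sealed class RangeAsyncEnumerator : IAsyncEnumerator<int>
{
    private readonly int _end;
    private int _current;

    public RangeAsyncEnumerator(int start, int count) { _current = start - 1; _end = start + count - 1; }

    public int Current => _current;

    public ValueTask<bool> MoveNextAsync()
    {
        // A real source would await I/O here; this one completes synchronously and allocates nothing.
        if (_current >= _end) return new ValueTask<bool>(false);
        _current++;
        return new ValueTask<bool>(true);
    }

    public ValueTask DisposeAsync() => default; // nothing to clean up
}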
In my library I have been using async streams for a while and implement both. To the point of the performance discussion, another contract of
The idea is that
If there are no more values available instantly in memory, then some IO is required and all micro-optimizations become quite meaningless. The design proposed here reflects that. Composition of synchronous enumerators is also much simpler (LINQ vs. Ix style), and availability of new data could be propagated via a side channel, while complex composed operations could be micro-optimized without touching the async machinery. I believe async streams should not try to replace their sync counterparts when performance matters, because they will always be slower. Max performance in terms of method inlining and similar micro-optimizations should be outside the scope of async streams. They should not allocate, and that is a must that is already addressed. And they should have a quick path probing whether a value is already available, mostly to avoid jumping to the thread pool and yielding, especially on Windows with its 15 ms thread time slice. Pragmatically, there is almost no new data that could arrive faster than a couple of virtual/interface calls. |
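A sketch of the kind of quick-path probing described above, seen from the consumer side; it is illustrative only, since await itself already short-circuits on synchronously completed ValueTasks:

static async Task DrainAsync<T>(IAsyncEnumerator<T> e, Action<T> consume)
{
    try
    {
        while (true)
        {
            ValueTask<bool> vt = e.MoveNextAsync();
            // Probe for a synchronously available result before falling back to the await machinery.
            bool moved = vt.IsCompletedSuccessfully ? vt.Result : await vt;
            if (!moved) break;
            consume(e.Current);
        }
    }
    finally
    {
        await e.DisposeAsync();
    }
}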
I do like the general approach ultimately taken. We chose the simplest design, with the fallback of adding an advanced version that can improve performance, but we don't do it until it is clear that the extra perf can actually be realized and is valuable in real scenarios. It is a low risk strategy, which is good. |
First of all, I really appreciate that you found a way to keep a simple interface with acceptable performance characteristics. Out of curiosity or nitpicking: shouldn't a Zip implementation kick off both MoveNextAsync calls before awaiting either, along these lines?

private static async IAsyncEnumerable<TResult> ZipIterator<TFirst, TSecond, TResult>(IAsyncEnumerable<TFirst> first, IAsyncEnumerable<TSecond> second, Func<TFirst, TSecond, TResult> resultSelector)
{
using await (IAsyncEnumerator<TFirst> e1 = first.GetAsyncEnumerator())
using await (IAsyncEnumerator<TSecond> e2 = second.GetAsyncEnumerator())
{
while (true)
{
var t1 = e1.MoveNextAsync();
var t2 = e2.MoveNextAsync();
if (!await t1 | !await t2)
yield break;
yield return resultSelector(e1.Current, e2.Current);
}
}
} |
I'm talking about using iterator support, i.e. If I implement the whole thing myself, then sure, I can do exactly what the compiler does and return But if I implement my own

internal sealed class MyAsyncEnumerable<T> : IAsyncEnumerable<T>
{
...
}

that then uses iterator support to implement

public async IAsyncEnumerator<T> GetAsyncEnumerator(CancellationToken cancellationToken)
{
...
yield return await ...;
...
}

then I can't do that. That was the point I was trying to make. If you want to implement your |
Ah, ok, got it, yes, I wasn't thinking of the generated iterator, thanks. Is that something that future language or compiler support could optimize? Even with that cost, it still seems like a worthy trade-off to have a complete solution. The current POR (option 2) concerns me deeply as it prevents me from passing a CancellationToken.
How so? You have the exact same expressivity between them. There's no way to cancel a MoveNext call on IEnumerator, either. |
It's different because cancellation is a thing for async operations where it's not really for non-async. I don't expect to be able to cancel an IEnumerator, I would expect to control cancellation of an IAsyncEnumerator. |
The |
Yes. This is all email-compiled and email-tested. I hope we don't focus on that :) |
Just considering the I have no idea how the foreach/using could work with it, or if it even should. The closest analogous problem I can remember is from Ix.NET where the |
To play devil's advocate:
Thanks, but I'm not seeing how that helps, and at least from my perspective it hurts:
|
Indeed it is not very .NET-like. All in all, I think option 2 is the least problematic from my perspective, and I'll have to figure out an API to give the user a token in exchange for an async enumerable source:

zipMany(int count, Func<int, CancellationToken, IAsyncEnumerable<T>> sourceSupplier) |
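A hypothetical sketch of that "token in exchange for a source" shape (the name and semantics here are illustrative, not what the thread settled on): the combinator receives token-accepting suppliers and hands each one the token it controls when creating the underlying sources; simple sequential concatenation is used just to keep the sketch short:

static async IAsyncEnumerable<T> ConcatMany<T>(
    Func<int, CancellationToken, IAsyncEnumerable<T>> sourceSupplier,
    int count,
    CancellationToken cancellationToken = default)
{
    for (int i = 0; i < count; i++)
    {
        // Each underlying source is created with the token this combinator was given.
        await foreach (T item in sourceSupplier(i, cancellationToken))
        {
            yield return item;
        }
    }
}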
Since everything else here has been done and the only known open issue is around whether we want to change something for cancellation, I've opened a separate issue just for that. |
Closed by dotnet/corefx#33104. |
One last question about the lifecycle of the enumerator. Is the following legal?

var source = asyncSource.GetAsyncEnumerator();
if (!shouldWeContinue) {
await source.DisposeAsync();
return;
}

I.e., not calling MoveNextAsync at all before disposing? |
Yup, that's legal. Just as it is for IEnumerator<T>. |
@stephentoub I read your interesting blog post about ValueTask. It left me with the question: why |
Answering my own question: this is to be able to reduce allocations when the |
Correct. In the normal case, we just allocate an object that is an IAsyncEnumerable<T>, IAsyncEnumerator<T>, and IValueTaskSource all in one. |
I apologise if this is handled in another issue, but I'm unable to find anything! Is there a list of existing BCL types that will get APIs returning IAsyncEnumerables in .NET Core 3? I'm thinking of things such as File.ReadAllLines, etc.? |
Very few. Right now just a new method in the channels library was added, and I would not expect any others in 3.0. |
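For reference, the channels addition being referred to is presumably ChannelReader<T>.ReadAllAsync (my inference, not stated above), which can be consumed like this:

static async Task DrainAsync(ChannelReader<int> reader)
{
    // ReadAllAsync returns IAsyncEnumerable<int>, so it composes directly with await foreach.
    await foreach (int item in reader.ReadAllAsync())
    {
        Console.WriteLine(item);
    }
}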
Ok, are there going to be examples to show how to do things like read from streams using IAsyncEnumerable?
Questions that come to mind are things like: should I use a StreamReader as per .NET Core 2.x and yield return, or should I derive from IAsyncEnumerable?
Is there any public discussion around what will be added in future?
|
IAsyncEnumerable isn't any different in this regard from IEnumerable. If you would find the programming model of using an iterator to access lines from the stream valuable, then you'd expose it via I{Async}Enumerable.
We're not planning to proactively add a ton of IAsyncEnumerable implementations, and will instead be demand driven here. If there's a particular API you'd like to see, please feel free to propose something by opening an issue: https://github.com/dotnet/corefx/blob/master/Documentation/project-docs/api-review-process.md
Like this?

static async IAsyncEnumerable<string> ReadLinesAsync(StreamReader reader)
{
string line;
while ((line = await reader.ReadLineAsync().ConfigureAwait(false)) != null)
{
yield return line;
}
}
...
await foreach (string line in ReadLinesAsync(reader))
{
...
} |
Thanks, feel free to close this.
|
Will |
Yes. The most common consumption will be via await foreach. |
Converting my existing APIs to use async streams, quite a lot of places have cropped up where an This problem has knock-on effects on converters or processors that asynchronously enumerate items of one kind and convert/join/split them into items of a different kind. If they take an The issue of how to supply the cancellation token that is discussed in dotnet/corefx#33338 and the LDM discussion linked from it also appears to be connected to the distinction between an asynchronous enumeration that can be repeated as many times as necessary (in this case supplying the cancellation token to each particular instance of the enumeration via
|
Having played for a week with the extension-method approach that I'd called kludgy above, it actually feels quite natural, and is not especially burdensome compared to |
We're at a point where we should start exposing the necessary library support for C# 8 async streams, aka async enumerables, aka async iterators. While we may find ourselves wanting to make further changes to these APIs, we've locked on what we plan to deliver initially, and we can adapt as necessary based on feedback.
Assembly:
We need to decide where we put these. My expectation is that for .NET Core we'd have all of this in System.Private.CoreLib, but we could also put it in System.Threading.Tasks.Extensions or a new assembly. If we want to OOB this, it could be in both, with the .NET Core copy (S.T.T.Extensions or the new assembly) just type forwarding to S.P.CoreLib as happens today for ValueTask.

A few notes:
- WaitForNextAsync+TryGetNext vs. the current design. We spent a lot of time exploring this attractive alternative. It has perf advantages, but also non-trivial additional complexity when using the APIs directly rather than through compiler support (e.g. foreach await). We also have a design where we could light-up with the alternative API should we find that the perf benefits are desirable enough to introduce a second interface. As such, we're going with the simpler, more familiar, and easier to work with MoveNextAsync+Current design.
- Thread safety is not guaranteed: not only in terms of using MoveNextAsync+Current in an atomic fashion (without higher-level locking that provided the thread safety), but also in terms of accessing MoveNextAsync again before having consumed a previous call's ValueTask; doing so is erroneous and has undefined behavior. So, too, is accessing DisposeAsync after having called MoveNextAsync but without having consumed its ValueTask.
- ValueTask instead of Task. The original design called for MoveNextAsync to return Task<bool> and DisposeAsync to return Task. That works well when MoveNextAsync and DisposeAsync complete synchronously, but when they complete asynchronously, it requires allocation of another object to represent the eventual completion of the async operation. By instead returning ValueTask<bool> and ValueTask, an implementation can choose to implement IValueTaskSource and reuse the same object repeatedly for one call after another; in this fashion, for example, the compiler-generated type that's returned from an iterator can serve as the enumerable, as the enumerator, and as the promise for every asynchronously-completing MoveNextAsync and DisposeAsync call made on that enumerator, such that the whole async enumerable mechanism can incur overhead of a single allocation.
- No CancellationToken appears on the interfaces; the expectation is that you pass a CancellationToken to the thing creating the enumerable, such that the token can be embedded into the enumerable and used in its operation. You can of course choose to cancel awaits by awaiting something that itself represents both the MoveNextAsync ValueTask<bool> and a CancellationToken, but that would only cancel the await, not the underlying operation being awaited. As for IAsyncDisposable, while in theory it makes sense that anything async can be canceled, disposal is about cleanup, closing things out, freeing resources, etc., which is generally not something that should be canceled; cleanup is still important for work that's canceled. The same CancellationToken that caused the actual work to be canceled would typically be the same token passed to DisposeAsync, making DisposeAsync worthless because cancellation of the work would cause DisposeAsync to be a nop. If someone wants to avoid being blocked waiting for disposal, they can avoid waiting on the resulting ValueTask, or wait on it only for some period of time.
- Async is used in the various method names (e.g. DisposeAsync) even though the type name also includes Async, so that a type might implement both the synchronous and asynchronous counterparts and easily differentiate them.
- AsyncIteratorMethodBuilder. The compiler could get away with using the existing AsyncTaskMethodBuilder or AsyncVoidMethodBuilder types, but these both have negative impact that can be avoided by using a new, specially-designed type. AsyncTaskMethodBuilder allocates a Task to represent the async method, but that Task goes unused in an iterator. AsyncVoidMethodBuilder interacts with SynchronizationContext, because async void methods need to do so (e.g. calling OperationStarted and OperationCompleted on the current SynchronizationContext if there is one). And some of the methods are poorly named, e.g. Start makes sense when talking about starting an async method, but not when talking about iterating with an iterator. As such, we introduce a new type tailored to async iterators. Create just returns a builder that the compiler can use, which is likely just default(AsyncIteratorMethodBuilder), but the method might also do additional optional work, like tracing. MoveNext pushes the state machine forward, effectively just calling stateMachine.MoveNext(), but doing so with the appropriate handling of ExecutionContext. AwaitOnCompleted and AwaitUnsafeOnCompleted are exactly what they are on the existing builders. And Complete just serves to notify the builder that the iterator has finished iterating: technically this isn't necessary, and it may just be a nop, but it gives us a hook to be able to do things like tracing/logging should we choose to do so. (We could decide not to include this.)
- A type may implement both IEnumerable<T> and IAsyncEnumerable<T>. When consuming manually, the developer can easily distinguish which is being used based on naming (e.g. GetEnumerator vs GetAsyncEnumerator), and when consuming via the compiler, the compiler will provide syntax for differentiation (e.g. foreach vs foreach await).

What else will we want?
I've opened several additional issues to cover related support we'll want to consider:
- The compiler can bind foreach await to a pattern in addition to binding to the interface, and we can use that to enable the implicit awaits on MoveNextAsync and DisposeAsync to be done using ConfigureAwait(false), by having an extension method like public static ConfiguredAsyncEnumerable<T> ConfigureAwait<T>(this IAsyncEnumerable<T> enumerable, bool continueOnCapturedContext), where the returned ConfiguredAsyncEnumerable<T> will propagate that through to a ConfiguredAsyncEnumerator<T>, and its MoveNextAsync and DisposeAsync will return ConfiguredValueTaskAwaitables.
- If MoveNextAsync and DisposeAsync were defined to return Tasks, then the compiler could use TaskCompletionSource<T> to implement those async operations. But as it's returning ValueTask, and as we're doing so to enable object reuse, the compiler will be using its own implementation of IValueTaskSource. To greatly simplify that and to encapsulate all of the relevant logic, we should productize the ManualResetValueTaskSource/ManualResetValueTaskSourceLogic helper type from https://github.com/dotnet/corefx/blob/master/src/Common/tests/System/Threading/Tasks/Sources/ManualResetValueTaskSource.cs.
- With IAsyncDisposable exposed, we'll want to implement it on a variety of types in coreclr/corefx where the type could benefit from having an asynchronous disposable capability in addition to an existing synchronous ability.

cc: @jcouv, @MadsTorgersen, @jaredpar, @terrajobst, @tarekgh, @kouvel