Allocation-free awaitable async operations with ValueTask<T> and ValueTask #25182
Comments
public interface IValueTaskAwaitable
{
bool IsCompleted { get; }
bool IsCompletedSuccessfully { get; }
void OnCompleted(Action continuation, ValueTaskAwaitableOnCompletedFlags flags);
void GetResult();
}
public interface IValueTaskAwaitable<out TResult>
{
bool IsCompleted { get; }
bool IsCompletedSuccessfully { get; }
void OnCompleted(Action continuation, ValueTaskAwaitableOnCompletedFlags flags);
TResult GetResult();
}
[Flags]
public enum ValueTaskAwaitableOnCompletedFlags
{
None = 0x0,
UseSchedulingContext = 0x1,
FlowExecutionContext = 0x2,
}
Similar to the existing Task awaiters, how are
bool IsCanceled { get; }
bool IsFaulted { get; }
AggregateException Exception { get; }
communicated from the awaitable? Do they come via throwing (a cancellation exception vs. some other exception) from GetResult()? Ripple effects: that would go via the state machine, i.e. should failures surface by GetResult() throwing, e.g.
try { SetResult(GetResult()); } catch (Exception ex) { SetException(ex); } |
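For illustration, the routing the question describes is essentially what the compiler-generated state machine already does for Task awaiters. A conceptual sketch, not the actual generated code; the method and parameter names here are placeholders:

using System;
using System.Runtime.CompilerServices;

static class StateMachineSketch
{
    // Any exception thrown by the awaiter's GetResult(), whether from cancellation or any
    // other failure, is routed to SetException on the async method builder.
    static void CompleteStateMachine(ValueTaskAwaiter<int> awaiter, ref AsyncTaskMethodBuilder<int> builder)
    {
        try
        {
            int result = awaiter.GetResult(); // throws if the awaited operation failed or was canceled
            builder.SetResult(result);
        }
        catch (Exception ex)
        {
            builder.SetException(ex);
        }
    }
}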
@stephentoub does all of this work on .NET Standard 2.0? I'm assuming yes? |
Can you give some details on what the ValueTaskAwaitableOnCompletedFlags do? I assume this is at least partly related to ConfigureAwait? |
Will it make it impossible/complicated to use AsTask and then WhenAll/WhenAny?
So the following will be unsupported?
var read = pipe.Reader.ReadAsync();
var write = pipe.Writer.WriteAsync();
await read;
await write;
// or
var write = pipe.Writer.WriteAsync();
var flush = pipe.Writer.FlushAsync();
await write;
await flush; |
On one hand, I've been wondering if the new interfaces could be made more general, so that they could serve as a broader solution. On the other hand, all those limitations make using such APIs error-prone and unpleasant, so I've wondered whether it would make sense to have a separate type for this. Maybe there is a way to avoid or diminish the limitations? Some options I considered:
|
But it's not awaitable (by design). From a technical perspective, for it to be awaitable, it would need to expose slightly different surface area, and it would need to implement ICriticalNotifyCompletion, which then makes it difficult or impossible to have the same object implement both
I've implemented it that way, and to me the design I have seemed like the lesser of two evils. It does lead to an inconsistency, in that code like:
ValueTask<int> vt = SomeAsync();
bool faulted = vt.IsFaulted;
Task<int> t = vt.AsTask();
bool canceled = t.IsCanceled;
could result in faulted and canceled giving inconsistent answers. But for the 99% use case of:
int i = await SomeAsync();
whether the thing returned from SomeAsync is canceled or faulted is indistinguishable, and for the 0.9% use case of:
Task<int> t = SomeAsync().AsTask();
you'll never look at the ValueTask's own properties. And with:
ValueTask<int> vt = SomeAsync();
int i = vt.IsCompletedSuccessfully ? vt.Result : await vt;
you're not looking at IsCanceled or IsFaulted at all. (Honestly, I wish we hadn't added it.) Anyway, that's how I ended up here. Do you disagree with the approach or see a flaw in my reasoning?
Yes. Prior to .NET Core 2.1,
Awaiters can implement the ICriticalNotifyCompletion pattern, which is why the flags separate out whether ExecutionContext needs to be flowed (FlowExecutionContext). UseSchedulingContext is just the ConfigureAwait equivalent: it indicates whether the continuation should be marshaled back to the captured SynchronizationContext/TaskScheduler.
No. It'd be perfectly fine to do:
await Task.WhenAll(
SomeAsync().AsTask(),
SomeAsync().AsTask(),
SomeAsync().AsTask());
It really depends on the implementation of the thing backing the ValueTask. For example, today it's perfectly acceptable to have multiple operations outstanding on a Socket. With code like:
int bytesReceived = await socket.ReceiveAsync(memory, cancellationToken);
bytesReceived += await socket.ReceiveAsync(memory, cancellationToken);
bytesReceived += await socket.ReceiveAsync(memory, cancellationToken);
each of those operations will take the fast, non-allocating path, but with code like:
ValueTask<int> vt1 = socket.ReceiveAsync(memory1, cancellationToken);
ValueTask<int> vt2 = socket.ReceiveAsync(memory2, cancellationToken);
ValueTask<int> vt3 = socket.ReceiveAsync(memory3, cancellationToken);
int bytesReceived = await vt1;
bytesReceived += await vt2;
bytesReceived += await vt3;
whether each of those can still use the non-allocating path is up to the Socket implementation and how it manages its reusable backing object. And regardless of the implementation, it's an error to do:
ValueTask<int> vt1 = socket.ReceiveAsync(memory1, cancellationToken);
await vt1;
await vt1; // BUG BUG BUG
In contrast, for example, System.Threading.Channels has a single-consumer specialized unbounded channel, e.g.
Channel<int> c = Channel.CreateUnbounded<int>(new UnboundedChannelOptions { SingleReader = true });
That implementation explicitly only supports a single reader at a time (with any number of writers), and thus it caches a singleton object that it reuses from one read to the next. It's perfectly fine to do:
T item1 = await c.Reader.ReadAsync();
T item2 = await c.Reader.ReadAsync();
and fine to do:
ValueTask<T> vt = c.Reader.ReadAsync();
await c.Writer.WriteAsync(producedItem);
T consumedItem = await vt;
but it's very much an error on the developer's part to do:
ValueTask<T> vt1 = c.Reader.ReadAsync();
ValueTask<T> vt2 = c.Reader.ReadAsync(); // BUG BUG BUG
and an error to do:
ValueTask<T> vt = c.Reader.ReadAsync();
await vt;
await vt; // BUG BUG BUG
and an error to do:
ValueTask<T> vt = c.Reader.ReadAsync();
await c.Reader.ReadAsync(); // BUG BUG BUG
await vt; // BUG BUG BUG
In other words, it's really up to the API returning the ValueTask / ValueTask<T> to define what level of reuse and concurrency it supports.
I explicitly opted away from that, for the reasons outlined earlier in this response.
I considered the cookie approach. Basically, ValueTask<T> would carry a version cookie:
public struct ValueTask<T>
{
public ValueTask(IValueTaskObject<T> obj, long version);
...
}
and then all of the interface's members would take that version:
public interface IValueTaskObject<out T>
{
public bool IsCompleted(long version);
public bool IsCompletedSuccessfully(long version);
public T GetResult(long version);
public void OnCompleted(Action continuation, ValueTaskObjectOnCompletedFlags flags, long version);
}
and it would be up to the implementation to validate the version and fail if a stale one were used. I know @KrzysztofCwalina was a fan of this approach, though, at least in principle.
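A hypothetical sketch of what the implementation side of that validation could look like, written against the strawman IValueTaskObject<T>, ValueTaskObjectOnCompletedFlags, and versioned ValueTask<T> ctor shown above (none of these names are shipped APIs, and the continuation handling is deliberately simplified):

sealed class PooledOperation<T> : IValueTaskObject<T>
{
    private long _version;        // bumped each time this instance is recycled
    private bool _completed;
    private T _result;
    private Action _continuation;

    // Hand out a ValueTask tied to the current version cookie.
    public ValueTask<T> AsValueTask() => new ValueTask<T>(this, _version);

    // Recycle for the next operation; any outstanding ValueTask now carries a stale cookie.
    public void Reset() { _version++; _completed = false; _result = default; _continuation = null; }

    private void Validate(long version)
    {
        if (version != _version)
            throw new InvalidOperationException("ValueTask used after its operation was recycled.");
    }

    public bool IsCompleted(long version) { Validate(version); return _completed; }
    public bool IsCompletedSuccessfully(long version) { Validate(version); return _completed; }
    public T GetResult(long version) { Validate(version); return _result; }
    public void OnCompleted(Action continuation, ValueTaskObjectOnCompletedFlags flags, long version)
    {
        Validate(version);
        _continuation = continuation; // a real implementation must also handle the already-completed race
    }
}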
I'm not understanding this... wouldn't it be the other way around? Instead of just
await SomeAsync();
you'd end up writing
using (ValueTask<int> vt = SomeAsync())
{
    await vt;
}
and not only is that clunky and more expensive, I actually would expect it would lead to more errors, as it promotes storing the ValueTask into a local rather than just awaiting it.
That might be reasonable, though I expect it would likely have both false negatives and false positives. Happy to be proven wrong, though. |
What I meant is that the ValueTask itself could be the thing that gets disposed, e.g.:
struct ValueTask<T>
{
public void Dispose() => valueTaskObject?.Release();
}
The proposed design means that the 99% case is efficient, but it also makes it easy to write buggy code. I don't like to sacrifice safety for performance, because performance often doesn't matter much, while safety always matters. And with this proposal, any API that returns a ValueTask becomes easy to misuse. What if the reusable, non-allocating behavior wasn't the default and instead required an explicit opt-in? E.g.:
await SomeAsync(); // allocates
var vt1 = SomeAsync();
await vt1;
await vt1; // ok
await SomeAsync().IKnowICantReuseTheReturnedValueISwear(); // does not allocate
var vt2 = SomeAsync().IKnowICantReuseTheReturnedValueISwear();
await vt2;
await vt2; // bug
This makes it much easier to use safely than the proposed design. I guess the important question here is: will almost all code that uses these APIs just await the result directly? But if not, the safety concern remains. |
No; just checking :) For the case of
ValueTask<T> vt = c.Reader.ReadAsync();
await vt;
await vt; // BUG BUG BUG
could the ValueTask itself detect the second await and throw? It wouldn't cater for struct copies; but would that reduce the 0.1% to 0.001%? 😉 |
It's a readonly struct, and even if it weren't, by definition there's a copy when getting the awaiter from it, so each time you await it you're seeing a different copy of the struct. An implementation of the backing interface could choose to track that itself and throw, though.
It already does that: if
I disagree. Look at all of the code that uses tasks that's been written in the last few years; 99% of it just awaits the operation directly... it's fairly rare to get a handle to the task and do something other than await it. Sometimes you use operations with combinators, but notice combinators aren't exposed here for ValueTask; you go through AsTask() for that.
Many APIs in .NET (and any programming language for that matter) can be misused in a dangerous way. Access a
If performance doesn't matter for a method, it can simply return a Task.
I don't see how that's a feasible design. By the time SomeAsync returns to the caller, it's already scheduled the asynchronous operation, and already needs to know what object it's talking to upon completion. |
Doesn't this create the situation in which the consumer of your API now needs to understand how to properly await based on the kind of object backing the ValueTask? With custom awaitables like PipeAwaiter, the distinct type at least signals that the semantics differ. |
A consumer of an API needs to know the semantics of that API, including details about how its return type behaves. By default, you need to assume that if you get a ValueTask / ValueTask<T>, you can only consume it once.
Why not?
var result1 = pipeReader.ReadAsync();
await result1;
var result2 = pipeReader.ReadAsync();
await result1;
How does misuse difficulty change here based on the concrete type of the returned awaitable? |
This is really great work @stephentoub! With the increased focus on performance in the .NET ecosystem I can see this being used not only in core pieces of the stack or performance-critical business applications, but also by a wider set of libraries and applications. I believe the semantics differ enough from Task's that it's worth asking: have you also considered not making it compatible with Task (given also some other differences like the IsCanceled behaviour) and making this a first-class concept, eventually with its own keywords (e.g. asynconce/awaitonce)? |
Thanks.
Essentially that's what I'm saying. If you happen to know more about the implementation, you might be able to get away with more.
|
In this particular case, I'm not at all against the idea; it's just that my consumer has to understand how I'm using ValueTask under the covers. Now, that may be non-trivial for a myriad of other reasons. |
I'd argue this isn't about PipeAwaiter, it's about the API that returned it, and that's the case regardless of the return type. Further, I'd argue that the 99% case (and it's really probably more the 99.99% case) is no one knows or cares what the return type is because they simply await it. It's just compiler goo to let you write await. |
BTW Stephen, despite my hesitations (I write a lot of low level stuff that junior devs need to be able to easily consume for our system), this is a really great idea and awesome stuff which could solve a lot of the "data coming off the wire" async allocation scenarios. |
Sorry, I meant this new feature. |
Whatever this thing is, it's going to need to wrap Tasks, as the vast majority of these async APIs are going to still return tasks under the covers. And it's going to need to be as efficient as possible with Task, which means it shouldn't go through an interface to get to it, plus there'd be no way to make it work with netstandard2.0 if Task needed to implement another interface to make that work. At that point, you're just introducing another type with the exact same support as ValueTask already has. Further, even with something that was given a different name, there are still going to be differences between APIs that return one, as it's up to the implementation what level of reuse is possible. As I noted earlier, Socket for example lets you make any number of calls to receive/send data, and doing so won't invalidate a previously returned ValueTask. I do not see how shipping a different type addresses safety concerns here. It might address the small inconsistency around IsCanceled, but I don't believe that's an important issue necessitating introducing another type. If we introduced a differently named shared type for this, it would end up carrying all of the same constraints anyway. |
I'm happy with it; your initial warnings were scarier than how it actually maps to the use cases. i.e. normally you await an operation before initiating the same operation again, and Task.WhenAll/WhenAny already require the AsTask() conversion anyway.
Bikeshedding on names aside, thank you for taking time to explain 😄 |
Thanks @stephentoub for the comprehensive explanation. |
I like IValueTaskSource.
Yeah 😄 I just wanted to be upfront about potential concerns. If I was actually scared of it, we wouldn't be having this conversation as I wouldn't have opened the issue. 😉
You're very welcome. Thank you for participating!
What does that look like? How does an awaitonce keyword handle copies, e.g.
var t = SomeAsync();
var s = t;
awaitonce t;
awaitonce s; ? |
Does it also make sense to provide a way to create a ValueTask from an Exception instance, as an allocation-free equivalent of Task.FromException(...)? This might be useful for cases where an operation should fail synchronously (so no IValueTaskObject instance allocation or acquisition from a reusable pool is required) but for some reason it's inconvenient/impossible to just throw and force all calling parties to wrap the call with try/catch. Basically the same reason to use it as for Task.FromException. |
@stephentoub Nice! You the man!!! On the high performance side, this will open up all sorts of opportunities. |
The only way I know of to do that would be to make ValueTask bigger, which would incur cost for all uses. I don't think that's a desirable trade-off. Exceptions are already very expensive. |
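For context, a failed-synchronously ValueTask<T> can already be produced by wrapping Task.FromException<T>, at the cost of the Task that Task.FromException itself allocates; a minimal sketch (ReadOrFail is a made-up method for illustration):

using System;
using System.IO;
using System.Threading.Tasks;

static class FaultedValueTaskExample
{
    public static ValueTask<int> ReadOrFail(bool fail) =>
        fail
            ? new ValueTask<int>(Task.FromException<int>(new IOException("device not ready")))
            : new ValueTask<int>(0); // synchronous success: no allocation at all
}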
Will there be an equivalent of TaskCompletionSource? Will it be resettable/reusable? What would this look like? Is it too late to simply remove IsCanceled from ValueTask? Could we add an overload of OnCompleted with a state argument, to avoid delegate allocation? Seems like the compiler could be modified to use this in the future. |
Could the compiler simply disallow local vars of type ValueTask? At least in async methods? (Maybe it's a stack-only type, like Span?) This wouldn't disallow any advanced usage scenarios, it would just make them more explicit. That is:
await SomeAsync(); // 99% case
Task t = SomeAsync().AsTask(); // 0.9% case; allows deferred await, combinators
ValueTaskAwaiter awaiter = SomeAsync().GetAwaiter(); // 0.1% case; if you call GetAwaiter,
// you better know what you're doing
ValueTask vt = SomeAsync(); // compiler error (in async method)
I realize that's a big change, but if it helps prevent user error/confusion, perhaps it's worth it? |
It would look like an implementation of the IValueTaskSource / IValueTaskSource<T> interface, which an implementation could choose to make resettable and pool.
It would be a source- and runtime-breaking change for anyone currently using it. So, yes.
I've gone back and forth on that. Maybe. The reason I've wanted to do it is that right now AsTask for an asynchronously completing operation is two allocations, one for the task and one for an Action delegate; if we had a state argument, it could be done with just one allocation for the Task. However, if it's instead of the existing overload, it would mean that all asynchronously completing operations with await would incur two delegate invocations rather than just one, as the continuation Action would essentially have to be wrapped in (and passed as state to) another delegate.
It would certainly be a breaking change.
|
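Tying back to the first answer above: a minimal sketch of what such a resettable completion source can look like, using the ManualResetValueTaskSourceCore<T> helper that shipped later (in .NET Core 3.0) for exactly this purpose. The ResettableValueTaskSource name is illustrative, not an API from this proposal:

using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Sources;

sealed class ResettableValueTaskSource<T> : IValueTaskSource<T>
{
    // Mutable struct doing the bookkeeping (result/exception, continuation, version token).
    private ManualResetValueTaskSourceCore<T> _core;

    // The ValueTask handed to the consumer is tied to the current version token.
    public ValueTask<T> AsValueTask() => new ValueTask<T>(this, _core.Version);

    public void SetResult(T result) => _core.SetResult(result);
    public void SetException(Exception error) => _core.SetException(error);

    // Once the consumer has observed completion, Reset makes the instance reusable.
    public void Reset() => _core.Reset();

    T IValueTaskSource<T>.GetResult(short token) => _core.GetResult(token);
    ValueTaskSourceStatus IValueTaskSource<T>.GetStatus(short token) => _core.GetStatus(token);
    void IValueTaskSource<T>.OnCompleted(Action<object> continuation, object state, short token, ValueTaskSourceOnCompletedFlags flags)
        => _core.OnCompleted(continuation, state, token, flags);
}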
The C# compiler wouldn't let you use a stack-only type in an async method; so you couldn't use it there at all. |
You can use Span in an async method, you just can't store it in a local var. E.g. this works:
static async Task TestAsync()
{
byte[] b = new byte[1024];
byte[] b2 = new Span<byte>(b).Slice(0, 256).ToArray();
// go do async stuff
} (at least that's my understanding) |
@marksmeltzer with IValueTaskSource, ValueTask is not just for pseudo-async scenarios!
@juepiezhongren, You are definitely correct about IValueTaskSource, and I edited my comment above to clarify my point. I am just commenting that the WhenAll and WhenAny APIs offer less utility in general. I would still like to see them added for parity's sake, though.
What unique utility do you see ValueTask.Delay having, since you brought it up?
The IValueTaskSource allows utilizing an existing object instance that can implement the task APIs and wrap that instance in a ValueTask. In the case of the delay we just have a value and would still need to allocate a task object (unless you're thinking of some case I'm not seeing)...
One scenario I can think of for enabling a non-allocating implementation for ValueTask.Delay() would be that it returns, essentially, a ValueTask<int> using an additional enum value internally to declare it as representing a delay. That *might* allow for implementing special case logic within the task APIs to handle the delay case in a non-allocating way. That's a more substantial change and I'm not sure if it would offer any performance benefits at runtime. The idea on the IValueTaskSource being wrapped by a ValueTask was that async IO operations already allocate state transfer objects that can easily handle the demands of the task API and thus remove the additional per async call allocation for a Task. Through effective reuse of SocketEventArgs (for example), that can eliminate a large number of allocations and thus deliver tangible performance benefits. At this level, the async IO operations can be completing in mere microseconds and those allocations do otherwise cause noticeable aggregate performance drags.
In the case of a delay, however, there isn't likely to be any visible pressure from allocations, because delays happen at the whole-millisecond time scale. There isn't a mechanism in most architectures to reliably do sub-millisecond delays, which means they won't accumulate noticeable performance drag from the additional Task allocations. In general, I'm not too worried about having @stephentoub and team eliminate allocations in the Task API unless those allocations are impacting CPU-bound or IO-bound workloads. Those are the workloads that are actually affected by the extra allocations.
If you have some other idea in mind regarding delays that I'm not seeing, please share.
@juepiezhongren, what did you mean by "Object Pooling 'heap 'with manua-gc api is required"?
|
@marksmeltzer it would be best if IValueTaskSource instances were allocated from a pooling heap where initialization and collection are done manually. Task.WhenAll and WhenAny definitely need to be tuned, or new APIs added, considering that Task- and ValueTask-returning methods would be used together for synchronization. @stephentoub
btw, TaskCompletionSource needs to be reusable and pool-cached |
@stephentoub, Task.Delay is used so often that a ValueTask.Delay with an injected IValueTaskSource instance is a strongly-needed API. |
@juepiezhongren, I do not understand what that would solve here. What costs do you believe would be avoided in doing so? If you believe it's important, your best bet is to code it up and show measurements of exactly how it would improve things and by how much. |
@stephentoub just to make fewer allocations (won't Delay create a new Task instance? maybe I've got something wrong); Task.Delay is already great as it is. |
Task.Delay will not only create a Task, it'll also create a Timer, a TimerHolder, and a TimerQueueTimer as part of the underlying implementation. But a ValueTask.Delay would incur the latter three as well, and it would need some object to be the IValueTaskSource. Either that object would need to be allocated, in which case it might as well just be Task, or it would need to be pooled in some way, which brings with it its own costs... managing a process-wide object pool and doing it well is not easy (or necessarily cheap). |
ur answer is perfect, thx. |
Could use it for a recurring awaitable Task.Delay-type loop, reusing the same timer? |
i've heard @davidfowl has done some things for delay |
There are certainly things you can layer on top, e.g. you can coalesce if you're willing to trade off accuracy, e.g. https://blogs.msdn.microsoft.com/pfxteam/2011/12/03/coalescing-cancellationtokens-from-timeouts/.
Sure, but that brings with it other problems, e.g. when you're done with it, who disposes of the underlying Timer? At that point you're not going to want to use ValueTask, because you're going to want an API that lets you control such things. |
(One addendum... I forgot I previously made a change so that Task.Delay just goes straight to the TimerQueueTimer and skips allocating the Timer and the TimerHolder objects. Part of dotnet/coreclr#14527.) |
so, valueTask is possible!!!! Great Stephen |
Could wrap the source in a using, e.g.
using (var timer = new PeriodicAsyncTimer(period: 1000))
{
timer.Start();
while (!token.IsCancellationRequested)
{
await timer;
// Do stuff
}
}
Whereas going via a period on the timer directly, it's harder to dispose, as it's just a callback (you need access to the timer via whatever is injected as state, or an external disposal). |
@benaadams' suggestion would be that the timer itself is the reusable thing you await.
Sure. Then it's no longer a ValueTask. |
D'oh, I was thinking of it using the IValueTaskSource under the covers. |
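For what it's worth, one way to get close to the loop sketched above while still handing back a ValueTask is for the timer itself to implement IValueTaskSource<bool> and reuse a single core per instance. A hypothetical sketch built on ManualResetValueTaskSourceCore<bool> (which shipped later); this is roughly the shape PeriodicTimer.WaitForNextTickAsync later took in .NET 6, and nothing here is an API from this thread:

using System;
using System.Threading;
using System.Threading.Tasks;
using System.Threading.Tasks.Sources;

public sealed class PeriodicAsyncTimer : IValueTaskSource<bool>, IDisposable
{
    private readonly Timer _timer;
    private readonly int _period;
    private ManualResetValueTaskSourceCore<bool> _core; // reused for every tick

    public PeriodicAsyncTimer(int period)
    {
        _period = period;
        // Created idle; each wait schedules a single one-shot tick.
        _timer = new Timer(_ => _core.SetResult(true), null, Timeout.Infinite, Timeout.Infinite);
    }

    // Single consumer: await the returned ValueTask before asking for the next tick.
    public ValueTask<bool> WaitForNextTickAsync()
    {
        _core.Reset();
        _timer.Change(_period, Timeout.Infinite);
        return new ValueTask<bool>(this, _core.Version);
    }

    public void Dispose() => _timer.Dispose();

    bool IValueTaskSource<bool>.GetResult(short token) => _core.GetResult(token);
    ValueTaskSourceStatus IValueTaskSource<bool>.GetStatus(short token) => _core.GetStatus(token);
    void IValueTaskSource<bool>.OnCompleted(Action<object> continuation, object state, short token, ValueTaskSourceOnCompletedFlags flags)
        => _core.OnCompleted(continuation, state, token, flags);
}

The loop above then becomes: while (!token.IsCancellationRequested) { await timer.WaitForNextTickAsync(); /* do stuff */ } with a single timer and no per-iteration Task allocation.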
@stephentoub having me control the timer myself is a good idea; a lot of delays are used together with a while loop, so this is a perfect solution.
ValueTask + IValueTaskSource covers the regular-pattern await scenarios; things like while-with-delay really need a ValueTask solution, in my opinion.
Why? You have a scenario where the 20 small allocations / second that might save you is critical? |
Just following your philosophy from this thread; in other words, it's not about perf.
But this is not bad.
In my mind what Stephen brought to the table is about performance, but it is not simply performance for performance's sake. When doing things like sockets or other data pipelines it can be easy to be doing millions of async operations per second where there is already some state primitive that can implement the IValueTaskSource and thus prevent extra task allocation overhead. That is especially true when some of those scenarios can complete synchronously if data is immediately available.
For 20 times per second, there isn't any problem that needs to be optimized. Doing delays of 0 would still only get you a max of 100 operations per second. So, in my mind there is zero need for this.
Regards,
|
@marksmeltzer DotNetCore, in my mind, is something that will implement everything and every dream out of Midori & Singularity; 20 times per second is not important for a server app, but it is for a system.
What I am saying is that that is empirically false: Task.Delay() has a theoretical throughput of less than 20 per second no matter what, because the underlying system hardware timers operate at a maximum of 7-millisecond intervals. Most systems lack a high-resolution timer, though, so the actual throughput could be as low as 4 per second!
In other words, the overhead of calling Task.Delay() is so high that any optimization like reducing allocations will show zero benefit in empirical testing. Allocations are not implicitly bad.
Allocations on a very hot path are the only ones worth worrying about (e.g. millions per second). Once *all* of the VERY hot paths are optimized (Stephen seems to be working on that), then the next step is all of the VERY warm paths, etc.
Task.Delay() would *never* benefit however.
Regards,
|
I'm wondering: is it OK to use this for ASP.NET Core controller actions, e.g.
public async ValueTask<IActionResult> Index()
{
await DoSomethingAsync();
return View();
} |
Background
ValueTask<T> is currently a discriminated union of a T and a Task<T>. This lets APIs that are likely to complete synchronously and return a value do so without allocating a Task<T> object to carry the result value. However, operations that complete asynchronously still need to allocate a Task<T>. There is no non-generic ValueTask counterpart today because if you have an operation that completes synchronously and successfully, you can just return Task.CompletedTask, no allocation.

That addresses the 80% case where synchronously completing operations no longer allocate. But for cases where you want to strive to address the 20% case of operations completing asynchronously and still not allocating, you're forced to play tricks with custom awaitables, which are one-offs, don't compose well, and generally aren't appropriate for public surface area.

Task and Task<T>, by design, never go from a completed to an incomplete state, meaning you can't reuse the same object; this has many usability benefits, but for APIs that really care about that last pound of performance, in particular around allocations, it can get in the way.

We have a bunch of new APIs in .NET Core 2.1 that return ValueTask<T>s, e.g. Stream.ReadAsync, ChannelReader.ReadAsync, PipeReader.ReadAsync, etc. In many of these cases, we've simply accepted that they might allocate; in others, custom APIs have been introduced specific to that method. Neither of these is a good place to be.

Proposal
I have implemented a new feature in ValueTask<T> and a counterpart non-generic ValueTask that lets these not only wrap a T result or a Task<T>, but also another arbitrary object that implements the IValueTaskSource<T> interface (or IValueTaskSource for the non-generic ValueTask). An implementation of that interface can be reused, pooled, etc., allowing for an implementation that returns a ValueTask<T> or ValueTask to have amortized non-allocating operations, both synchronously completing and asynchronously completing.

The enabling APIs
First, we need to add these interfaces:
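(The interface definitions themselves aren't preserved in this capture; for reference, the shapes that eventually shipped in System.Threading.Tasks.Sources, after the renames noted in the edit history at the bottom, look like this:)

public enum ValueTaskSourceStatus
{
    Pending = 0,
    Succeeded = 1,
    Faulted = 2,
    Canceled = 3
}

[Flags]
public enum ValueTaskSourceOnCompletedFlags
{
    None = 0x0,
    UseSchedulingContext = 0x1,
    FlowExecutionContext = 0x2
}

public interface IValueTaskSource
{
    ValueTaskSourceStatus GetStatus(short token);
    void OnCompleted(Action<object> continuation, object state, short token, ValueTaskSourceOnCompletedFlags flags);
    void GetResult(short token);
}

public interface IValueTaskSource<out TResult>
{
    ValueTaskSourceStatus GetStatus(short token);
    void OnCompleted(Action<object> continuation, object state, short token, ValueTaskSourceOnCompletedFlags flags);
    TResult GetResult(short token);
}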
An object implements IValueTaskSource to be wrappable by ValueTask, and IValueTaskSource<TResult> to be wrappable by ValueTask<TResult>.

Then we add this ctor to ValueTask<TResult>:
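(The ctor code isn't preserved in this capture either; as shipped, it pairs the source with a version token, roughly:)

public ValueTask(IValueTaskSource<TResult> source, short token);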
Then we add a non-generic ValueTask counterpart to ValueTask<TResult>. This mirrors the ValueTask<TResult> surface area, except that it doesn't have a Result property, doesn't have a ctor that takes a TResult, uses Task in places where Task<TResult> was used, etc.

And finally we add the System.Runtime.CompilerServices goo that allows ValueTask to be awaited and used as the return type of an async method.

Changes to Previously Accepted APIs
At the very least, we would use the ValueTask and ValueTask<T> types in the following previously accepted/implemented APIs that are shipping in 2.1:

- System.IO.Pipelines: rather than the PipeAwaiter<T> type, it will return ValueTask<T> from the ReadAsync and FlushAsync methods that currently return PipeAwaiter. PipeAwaiter<T> will be deleted. Pipe uses this to reuse the same pipe object over and over so that reads and flushes are allocation-free.
- System.Threading.Channels: the WaitToReadAsync and WaitToWriteAsync methods will return ValueTask<bool> instead of Task<bool>. The WriteAsync method will return ValueTask instead of Task. At least some of the channel implementations, if not all, will pool and reuse objects backing these value tasks.
- Stream: the new WriteAsync(ReadOnlyMemory<byte>, CancellationToken) overload will return ValueTask instead of Task.
- Socket's new ReceiveAsync/SendAsync methods that are already defined to return ValueTask<int> will take advantage of this support, making sending and receiving on a socket allocation-free. NetworkStream will then expose that functionality via ReadAsync/WriteAsync. FileStream will potentially also pool so as to make synchronous and asynchronous reads/writes allocation-free.
- WebSocket: the new SendAsync(ReadOnlyMemory<byte>, …) overload will return ValueTask instead of Task. Many SendAsync calls just pass back the result from the underlying NetworkStream, so this will incur the benefits mentioned above.

There are likely to be other opportunities in the future as well. And we could re-review some of the other newly added APIs in .NET Core 2.1, e.g. TextWriter.WriteLineAsync(ReadOnlyMemory<char>, ...), to determine if we want to change those from returning Task to ValueTask. The tradeoff is one of Task's usability vs. the future potential for additional optimization.

Limitations
Task is powerful, in large part due to its “once completed, never go back” design. As a result, a ValueTask<T> that wraps either a T or a Task<T> has similar power. A ValueTask<T> that wraps an IValueTaskSource<T> can be used only in much more limited ways:

- You can await it (e.g. await SomethingAsync();), await it with configuration (e.g. await SomethingAsync().ConfigureAwait(false);), or get a Task out (e.g. Task t = SomethingAsync().AsTask();). Using AsTask() incurs an allocation if the ValueTask/ValueTask<T> wraps something other than a Task/Task<T>.
- Once you've awaited the ValueTask/ValueTask<T> or called AsTask, you must never touch it again.
- With a ValueTask<T> that wraps a Task<T>, today you can call GetAwaiter().GetResult(), and if it hasn't completed yet, it will block. That is unsupported for a ValueTask<T> wrapping an IValueTaskSource<T>, and thus should be generally discouraged unless you're sure of what it's wrapping. GetResult must only be used once the operation has completed, as is guaranteed by the await pattern.
- With a ValueTask<T> that wraps a Task<T>, you can await it an unlimited number of times, both serially and in parallel. That is unsupported for a ValueTask<T> wrapping an IValueTaskSource<T>; it can be awaited/AsTask'd once and only once.
- With a ValueTask<T> that wraps a Task<T>, you can call any other operations in the interim and then await the ValueTask<T>. That is unsupported for a ValueTask<T> wrapping an IValueTaskSource<T>; it should be awaited/AsTask'd immediately, as the underlying implementation may be reused for other operations, subject to whatever the library author chose to do.
- You can check IsCompletedSuccessfully and then use Result or GetAwaiter().GetResult(), but that is the only coding pattern outside of await/AsTask that's supported.

We will need to document that ValueTask/ValueTask<T> should only be used in these limited patterns unless you know for sure what it wraps and that the wrapped object supports what's being done. And APIs that return a ValueTask/ValueTask<T> will need to be clear on the limitations, in hopes of preserving our ability to change the backing store behind ValueTask<T> in the future, e.g. an API that we ship in 2.1 returning ValueTask<T> around a Task<T> could in the future instead wrap an IValueTaskSource<T>.

Finally, note that as with any solution that involves object reuse and pooling, usability/diagnostics/debuggability are impacted. If an object is used after it's already been effectively freed, strange/bad behaviors can result.
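To make the supported consumption pattern above concrete, here is a small self-contained sketch; SomethingAsync is a placeholder standing in for any ValueTask<int>-returning API, and the commented-out lines show usages that are unsupported once the ValueTask is backed by an IValueTaskSource<int>:

using System.Threading.Tasks;

static class ConsumptionExample
{
    // Placeholder for any API returning a ValueTask<int>; here it completes synchronously.
    static ValueTask<int> SomethingAsync() => new ValueTask<int>(42);

    static async Task<int> ConsumeOnceAsync()
    {
        // Supported: the fast-path check followed by (at most) a single await of the same instance.
        ValueTask<int> vt = SomethingAsync();
        int result = vt.IsCompletedSuccessfully ? vt.Result : await vt;

        // Unsupported when the ValueTask wraps an IValueTaskSource<int>:
        // await vt;                      // consuming it a second time
        // vt.GetAwaiter().GetResult();   // blocking before completion
        // Task<int> t = vt.AsTask();     // converting after it has already been consumed

        return result;
    }
}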
Why now?
If we don't ship this in 2.1, we will be unable to do so as effectively in the future:

- Some APIs (e.g. the new Stream.WriteAsync overload) are currently defined to return Task but should be changed to return ValueTask.
- Other APIs already return ValueTask<T>, but if we're not explicit about the limitations of how it should be used, it'll be a breaking change to modify what it backs in the future.
- Custom awaitable types (e.g. PipeAwaiter<T>) will be instant legacy.
- Previously, ValueTask<T> was just OOB. It's now also in System.Private.CoreLib, with core types like Stream depending on it.

Implementation Status
With the exception of pipelines, I have these changes implemented across coreclr and corefx. I can respond to any changes from API review, clean things up, and get it submitted as PRs across coreclr and corefx. Due to the breaking changes in existing APIs, it will require some coordination across the repos.
(EDIT stephentoub 2/25: Renamed IValueTaskObject to IValueTaskSource.)
(EDIT stephentoub 2/25: Changed OnCompleted to accept object state.)