Possibility to benchmark asynchronous methods #236

Closed
5 tasks done
adamsitnik opened this issue Jul 24, 2016 · 9 comments

@adamsitnik
Member

adamsitnik commented Jul 24, 2016

I would like to add support for benchmarking asynchronous methods.

Please post your requirements here, I want to know what people need.

What should be supported (see the sketch after this list):

  • Task<T> returning methods
  • Task returning methods
  • ValueTask<T> returning methods (since we target .NET 4.5 we can have it!)
  • async void returning methods (edit: we throw NotSupportedException in that case)
  • custom task schedulers
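
For illustration, a minimal sketch of the method shapes listed above (class, method names, and bodies are hypothetical placeholders, not the library's test code):

using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;

public class AsyncBenchmarks
{
    [Benchmark]
    public Task NonGenericTask() => Task.Delay(1);                      // Task-returning

    [Benchmark]
    public Task<int> GenericTask() => Task.FromResult(42);              // Task<T>-returning

    [Benchmark]
    public ValueTask<int> GenericValueTask() => new ValueTask<int>(42); // ValueTask<T>-returning

    // async void gives the runner nothing to wait on, hence the NotSupportedException.
}
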
@adamsitnik adamsitnik added this to the v0.9.9 milestone Jul 24, 2016
@adamsitnik adamsitnik self-assigned this Jul 24, 2016
@benaadams
Member

async void should return a "bad practice, won't benchmark" error ;-)

Been doing some poor async benchmarking here dotnet/roslyn#10449 (comment)

@i3arnon

i3arnon commented Jul 24, 2016

@benaadams "always a-void async void"

@adamsitnik
Member Author

@AndreyAkinshin ok, the code is ready for code review

In general I was not sure how to implement this one 100% correctly. Some of the ideas I had were to use ETW events to get the execution time, or to write a custom task scheduler that would do the measurement on its own, etc. But all of these were limited (no .NET Core support, no custom scheduler support).

So I decided to go with a simpler approach:

  • use the existing MethodInvoker that accepts Action or Func<T> delegates
  • I wrote some async invokers that simply call .Wait() or .Result on the tasks
  • instead of passing async delegates to MethodInvoker, I pass the wrapped synchronous delegates that come from AsyncMethodInvoker

Sample code:

[Benchmark]
public Task<int> ReturningGenericTask() => Task.FromResult(42); // placeholder: some async implementation goes here

Sample auto-generated program.cs file:

public Program()
{
    setupAction = () => { };
    cleanupAction = () => { };
    idleAction = Idle;
    targetAction = () => { return BenchmarkDotNet.Running.TaskMethodInvoker<System.Int32>.ExecuteBlocking(ReturningGenericTask); };
}

private System.Int32 value;
private Action setupAction;
private Action cleanupAction;
private Func<System.Int32> targetAction;
private Func<System.Int32> idleAction;

public void RunBenchmark()
{
    new MethodInvoker().Invoke(job, 1, setupAction, targetAction, cleanupAction, idleAction);
}

private System.Int32 Idle()
{
    return BenchmarkDotNet.Running.TaskMethodInvoker<System.Int32>.Idle();
}
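
For context, a hypothetical sketch of what a blocking wrapper like TaskMethodInvoker<T>.ExecuteBlocking could do internally at this stage (the actual implementation may differ):

using System;
using System.Threading.Tasks;

public static class TaskMethodInvoker<T>
{
    // Hypothetical: turn a Task<T>-returning delegate into a synchronous call
    // by blocking on the returned task, as described above (".Wait or Result").
    public static T ExecuteBlocking(Func<Task<T>> taskFactory) => taskFactory().Result;

    // Hypothetical idle baseline: the cheapest possible "async" call, an already-completed task.
    public static T Idle() => Task.FromResult(default(T)).Result;
}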

@i3arnon

i3arnon commented Jul 27, 2016

@adamsitnik GetAwaiter().GetResult() IMO is preferable to Wait and Result, both because it throws the actual exception instead of AggregateException and because it's the same API for Task and Task<T>.

On another note (and I'm not sure if it really fits in the project), testing how long a single execution of an async method takes is somewhat less useful, since async improves scalability by sacrificing the performance of single executions.
Maybe running multiple executions concurrently and dividing the elapsed time among the executions is a better metric. You can even use a single-threaded SynchronizationContext to match the current sequential nature and not overuse multi-core environments.
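
A small sketch of the exception-unwrapping difference mentioned above (hypothetical code, for illustration only):

using System;
using System.Threading.Tasks;

static class ExceptionDemo
{
    static async Task<int> Fail()
    {
        await Task.Yield();
        throw new InvalidOperationException("boom");
    }

    static void Run()
    {
        var task = Fail();

        try { var r = task.Result; }           // throws AggregateException wrapping the InvalidOperationException
        catch (AggregateException) { }

        try { task.GetAwaiter().GetResult(); } // throws the InvalidOperationException directly
        catch (InvalidOperationException) { }
    }
}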

@benaadams
Member

@adamsitnik @i3arnon I've been trying to benchmark the impact of thread pool changes with a depth vs. parallel matrix: https://github.com/benaadams/ThreadPoolTaskTesting

A similar approach might capture the scalability metric?

@adamsitnik
Member Author

GetAwaiter().GetResult() IMO is preferable to Wait and Result, both because it throws the actual exception instead of AggregateException and because it's the same API for Task and Task<T>.

@i3arnon Thanks for the hint! I have measured .Result vs .Wait vs GetAwaiter().GetResult() and it seems that for Tasks GetAwaiter().GetResult() is also the fastest way to go. On the other hand, for ValueTask it was much slower, so I stayed with .Result for ValueTask.

Host Process Environment Information:
BenchmarkDotNet-Dev.Core=v0.9.8.0
OS=Windows
Processor=?, ProcessorCount=8
Frequency=2338336 ticks, Resolution=427.6545 ns, Timer=TSC
CLR=CORE, Arch=64-bit ? [RyuJIT]
GC=Concurrent Workstation
JitModules=?
dotnet cli version: 1.0.0-preview2-003121

Type=Waiting  Mode=Throughput  GarbageCollection=Concurrent Workstation  
LaunchCount=1  
Method                           Median      StdDev
------------------------------  ----------  ---------
TaskWait                          3.0377 ns  0.0483 ns
TaskGetAwaiterGetResult           2.5771 ns  0.0403 ns
GenericTaskWait                   3.1328 ns  0.0247 ns
GenericTaskResult                 3.0494 ns  0.0173 ns
GenericTaskGetAwaiterGetResult    2.2051 ns  0.0131 ns
ValueTaskResult                   2.5167 ns  0.0232 ns
ValueTaskGetAwaiterGetResult     13.0092 ns  0.0839 ns
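
For reference, a hypothetical sketch of the kind of micro-benchmark behind these numbers (the real "Waiting" benchmark source is not shown in this thread; already-completed tasks are assumed so that only the cost of the blocking call is measured):

using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;

public class Waiting
{
    private readonly Task completed = Task.FromResult(0);               // completed non-generic Task
    private readonly Task<int> completedGeneric = Task.FromResult(42);  // completed Task<int>

    [Benchmark] public void TaskWait() => completed.Wait();
    [Benchmark] public void TaskGetAwaiterGetResult() => completed.GetAwaiter().GetResult();

    [Benchmark] public void GenericTaskWait() => completedGeneric.Wait();
    [Benchmark] public int GenericTaskResult() => completedGeneric.Result;
    [Benchmark] public int GenericTaskGetAwaiterGetResult() => completedGeneric.GetAwaiter().GetResult();

    [Benchmark] public int ValueTaskResult() => new ValueTask<int>(42).Result;
    [Benchmark] public int ValueTaskGetAwaiterGetResult() => new ValueTask<int>(42).GetAwaiter().GetResult();
}
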

@adamsitnik
Member Author

testing how long a single execution of an async method takes is somewhat less useful

@i3arnon I 100% agree with you. For me the biggest advantage will be the possibility to compare Task vs ValueTask scenarios.

@adamsitnik
Member Author

Maybe running multiple executions concurrently and dividing the elapsed time among the executions is a better metric. You can even use a single-threaded SynchronizationContext to match the current sequential nature and not overuse multi-core environments.

A similar approach might capture the scalability metric?

@benaadams @i3arnon good ideas! I propose that you create a new issue for that; we gather all the ideas, then I do some research and implement it. I know that @AndreyAkinshin also has some ideas for concurrent benchmarks for 1.0.
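
For reference, a rough sketch of the amortized metric described above (hypothetical helper, not part of BenchmarkDotNet): start N executions concurrently and divide the total elapsed time by N.

using System;
using System.Diagnostics;
using System.Threading.Tasks;

static class ConcurrentTiming
{
    // Hypothetical: average wall-clock time per execution when n executions run concurrently.
    static async Task<TimeSpan> MeasureConcurrentAsync(Func<Task> operation, int n)
    {
        var tasks = new Task[n];
        var sw = Stopwatch.StartNew();

        for (int i = 0; i < n; i++)
            tasks[i] = operation();

        await Task.WhenAll(tasks);
        sw.Stop();

        return TimeSpan.FromTicks(sw.Elapsed.Ticks / n);
    }
}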

@AndreyAkinshin
Member

@adamsitnik, the async branch LGTM. I suggest merging it into master.

I know that @AndreyAkinshin also has some ideas for concurrent benchmarks for 1.0.

Yep, concurrent benchmarks are on the roadmap. But it's not easy to add this feature because of some CPU "features" like false sharing. It also requires major changes to MethodInvoker, because we have to change the current approach with a fixed amount of iterations.
