CPU time vs code size tradeoffs #94334
While reviewing some of the stale PRs, I've noticed that we frequently discuss the tradeoff of improving CPU time at the cost of increased code size. Example: #90459 (comment)

We have the tools to measure the former, but I am not sure about the latter.

I believe that we should at least provide a clear definition of what we mean by code size (the size of the managed assembly? the size of precompiled native code? both?). If possible, BenchmarkDotNet should also report the metrics we care about by default when benchmarking local dotnet/runtime builds. Of course, this should be documented as well.

Ideally, we would also describe our decision process for making or rejecting such tradeoffs, with some examples.

@jkotas @stephentoub thoughts?
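[Editor's note: for context, BenchmarkDotNet already exposes one such metric today: its DisassemblyDiagnoser adds a "Code Size" column reporting the size of the JIT-generated native code per benchmark. A minimal sketch; whether this is the metric the issue means is exactly the open question, and the benchmark body is an arbitrary placeholder.]

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

// DisassemblyDiagnoser prints the JITted assembly and adds a "Code Size"
// column, one concrete way to quantify native code size per benchmark.
[DisassemblyDiagnoser]
public class ParseBenchmarks
{
    // Arbitrary placeholder workload; any [Benchmark] method works.
    [Benchmark]
    public int ParseInt32() => int.Parse("12345");
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<ParseBenchmarks>();
}
```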
Comments

Tagging subscribers to this area: @dotnet/area-meta
Native assembly code plus all the runtime data structures required to keep track of everything. IL is generally less interesting.
Could you provide some exact names? It would be easier for me to search for them in the ClrMD APIs.
We have macro benchmarks that measure the published binary sizes and startup time for different application types; they will detect this type of regression. #93072 is an example of a regression detected by these benchmarks.
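[Editor's note: the following is not the actual harness those benchmarks use, just a hypothetical illustration of the measurement itself: summing the file sizes of a published output directory. The publish path is a placeholder.]

```csharp
using System;
using System.IO;
using System.Linq;

class PublishedSize
{
    static void Main(string[] args)
    {
        // e.g. bin/Release/net9.0/linux-x64/publish (placeholder path)
        string publishDir = args[0];

        // Total on-disk size of everything the app ships with.
        long totalBytes = Directory
            .EnumerateFiles(publishDir, "*", SearchOption.AllDirectories)
            .Sum(f => new FileInfo(f).Length);

        Console.WriteLine($"{publishDir}: {totalBytes / 1024.0 / 1024.0:F2} MB");
    }
}
```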
ClrMD does not have APIs for this. It is hard to account for all contributing costs reliably. For micro-benchmarking purposes, I typically create a test that has a thousand instances of the construct over different types, and then measure the working set or startup time for one instance vs. a thousand instances. Here is an example of such a test from years ago: https://gist.github.com/jkotas/102dc708cca8d2c85002cb47bdd49870
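[Editor's note: a minimal sketch of that one-instance-vs-a-thousand-instances approach, not the gist's actual code. It assumes the construct under measurement is a generic type instantiated over many distinct types; Cell<> and Wrap<> are illustrative placeholders.]

```csharp
using System;

class Cell<T> { public static T Value; }
class Wrap<T> { }

class Program
{
    static void Main(string[] args)
    {
        int count = args.Length > 0 ? int.Parse(args[0]) : 1000;

        // Build `count` distinct closed types by nesting Wrap<>, and touch
        // Cell<> over each of them so the runtime creates the
        // per-instantiation data structures whose cost we want to measure.
        Type t = typeof(object);
        for (int i = 0; i < count; i++)
        {
            t = typeof(Wrap<>).MakeGenericType(t);
            Type cell = typeof(Cell<>).MakeGenericType(t);
            cell.GetField("Value").GetValue(null); // force the instantiation
        }

        // Compare this between a run with count=1 and a run with count=1000
        // to estimate the per-instantiation cost.
        Console.WriteLine($"Working set: {Environment.WorkingSet / 1024} KB");
    }
}
```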
There are also linker tests that track code size for common application types. @eerhardt |
We have a set of ASP.NET benchmark apps where we track the native AOT size. Links to the app code:
These are run a couple of times a day on the latest bits, and any size change larger than 2% gets an issue logged automatically. For example:

We also have a "dotnet new console" template benchmark that runs in the perf lab and measures the size of the default "Hello World" app. I'm not sure where that code is and can't find the link to the results right now.

cc @LoopedBard3 @MichalStrehovsky
I think it's this one: https://github.com/dotnet/performance/tree/main/src/scenarios/emptyconsolenativeaot