
Benchmark Functions #1010

Closed
BraedonWooding opened this issue May 14, 2018 · 11 comments
Labels
proposal This issue suggests modifications. If it also has the "accepted" label then it is planned.
Comments

@BraedonWooding
Contributor

Similar to how test "MyTest" { ... } works, maybe there should be a benchmark equivalent. It could run your code a set number of times determined by a build option, or perhaps run until the standard deviation drops below a certain threshold, similar to how a lot of benchmarking tools work. It would then print the time it took along with a few statistics, and perhaps you could even assert that the time has to be less than a certain value. Something like:

benchmark "MyTest" {
     // For setup
     @benchmarkPause();
     // ... Do setup
     @benchmarkContinue();
     runExpensiveFunction();
     @benchmarkStop();
     assert(@benchmarkCurrentTime().inSeconds() < 10);
}

We could also maybe support parallelisation through something like:

fn testParallel(i: i32) void {
    doExpensiveFunction();
}

benchmark "Parallel" {
    var i: i32 = undefined;
    @benchmarkParallel(&i, 10, 100, testParallel);
    assert(@benchmarkTotalTime().inSeconds() < 10);
}
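
For reference, something close to this is already expressible in userland with std.Thread. The sketch below (in more recent Zig syntax; runExpensiveFunction is a hypothetical placeholder) spawns a fixed number of worker threads, runs the body a fixed number of times per thread, and asserts on the total wall-clock time:

const std = @import("std");

fn runExpensiveFunction() void {
    // stand-in for the real work being measured
    var sum: u64 = 0;
    var i: u64 = 0;
    while (i < 1_000_000) : (i += 1) sum +%= i;
    std.mem.doNotOptimizeAway(sum);
}

fn worker(iterations: usize) void {
    var i: usize = 0;
    while (i < iterations) : (i += 1) runExpensiveFunction();
}

test "parallel benchmark sketch" {
    const thread_count = 10;
    const iters_per_thread: usize = 100;

    var timer = try std.time.Timer.start();

    var threads: [thread_count]std.Thread = undefined;
    for (&threads) |*t| t.* = try std.Thread.spawn(.{}, worker, .{iters_per_thread});
    for (threads) |t| t.join();

    const elapsed_ns = timer.read();
    std.debug.print("total: {d} ns across {d} threads\n", .{ elapsed_ns, thread_count });
    try std.testing.expect(elapsed_ns < 10 * std.time.ns_per_s);
}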

I'm not 100% sure on the syntax/use, but I think the idea is definitely strong.

Use case

For easy benchmarking of functions. As simple as that.

@BraedonWooding BraedonWooding changed the title benchmark "<Name>" Syntax Benchmark Functions May 14, 2018
@andrewrk andrewrk added this to the 1.1.0 milestone May 14, 2018
@tiehuis
Member

tiehuis commented May 15, 2018

Here is something in userland that may provide some ideas:

const std = @import("std");
const time = std.os.time;

fn printTiming(ns: f64) void {
    if (ns < 1000) {
        std.debug.warn("{.0} ns/op\n", ns);
        return;
    }

    const us = ns / 1000;
    if (us < 1000) {
        std.debug.warn("{.3} us/op\n", us);
        return;
    }

    const ms = us / 1000;
    if (ms < 1000) {
        std.debug.warn("{.3} ms/op\n", ms);
        return;
    }

    const s = ms / 1000;
    if (s < 1000) {
        std.debug.warn("{.3} s/op\n", s);
        return;
    }
}

const bench_cap = time.ns_per_s / 5;

// run the function for at most 100000 loops or ~0.2 seconds, whichever comes first, but at least once
pub fn bench(comptime name: []const u8, F: var, m: usize) !void {
    var timer = try time.Timer.start();

    var loops: usize = 0;
    while (timer.read() < bench_cap) : (loops += 1) {
        // this would either take a void function (easy with local functions)
        // or comptime varargs in the general args
        _ = F(m);

        if (loops > 100000) {
            break;
        }
    }

    const ns = f64(timer.lap() / loops);

    const mgn = std.math.log10(loops);
    var loop_mgn: usize = 10;
    var i: usize = 0;
    while (i < mgn) : (i += 1) {
        loop_mgn *= 10;
    }

    std.debug.warn("{}: {} loops\n   ", name, loop_mgn);
    printTiming(ns);
}

fn fib_rec(n: usize) usize {
    if (n == 1 or n == 0) {
        return 1;
    } else {
        return fib_rec(n - 1) + fib_rec(n - 2);
    }
}

fn fib_iter(n: usize) usize {
    var f0: usize = 1;
    var f1: usize = 1;

    var i: usize = 0;
    while (i < n) : (i += 1) {
        f0 = f0 + f1;
        f1 = f0 - f1;
    }

    return f1;
}

pub fn main() !void {
    try bench("fib_rec", fib_rec, usize(30));
    try bench("fib_iter", fib_iter, usize(30));
}

This outputs:

fib_rec: 100 loops
   8.820 ms/op
fib_iter: 1000000 loops
   180 ns/op

Going from here, the following may be useful:

// simply calls some implementation of bench as in above, but with the specified name
bench "benchmark name" {
    // only need a reset builtin or similar to handle the start/stop case
    @benchReset();
}

I would think that if you wanted to ensure you didn't regress on a benchmark, you would write this as a test and call some bench function as in the above, asserting against the time itself. This way you don't need an explicit total-time builtin, since that is already provided by time anyway (bar the repeated-run averaging etc.).
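
A minimal sketch of that pattern, reusing the std import and fib_iter from the example above but with the current std.time.Timer names rather than the 2018 std.os.time path:

test "fib_iter stays within a time budget" {
    var timer = try std.time.Timer.start();
    _ = fib_iter(30);
    const elapsed_ns = timer.read();

    // fail `zig test` if the call regressed badly; a real harness
    // would average over many runs before asserting
    try std.testing.expect(elapsed_ns < 10 * std.time.ns_per_ms);
}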

Also, right now you can write this in a test and get something that kind of works okay (bar formatting and probably some other things).

test "bench fib_rec" {
    std.debug.warn("\n");
    try bench("", fib_rec, usize(30));
}

test "bench fib_iter" {
    std.debug.warn("\n");
    try bench("", fib_iter, usize(30));
}
Test 1/2 bench fib_rec...
: 100 loops
   8.718 ms/op
OK
Test 2/2 bench fib_iter...
: 1000000 loops
   182 ns/op
OK

@PavelVozenilek

For this purpose, it may be more useful to extend tests:

test "why is this string needed?" (< 10 ms)
{
  ... // must fit into 10 millis
}

and a smart test runner, which would repeat slower-than-expected tests a couple of times to make sure the slowness is not just a fluke.
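
A rough sketch of that retry logic in plain Zig (runBody is a hypothetical stand-in for the test body; the budget and attempt count are arbitrary):

const std = @import("std");

fn runBody() void {
    // hypothetical: the code the "(< 10 ms)" test actually exercises
}

test "must fit into 10 ms, retried" {
    const budget_ns = 10 * std.time.ns_per_ms;
    const max_attempts = 3;

    var attempt: usize = 0;
    while (attempt < max_attempts) : (attempt += 1) {
        var timer = try std.time.Timer.start();
        runBody();
        if (timer.read() <= budget_ns) return; // within budget: pass
        // over budget: retry, in case it was just a scheduling fluke
    }
    return error.TooSlow;
}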

I have implemented such a thing for my test system in C. It also increases process priority and makes time measurement more precise (using Win32 tricks).

To be useful, practical benchmarking support would also need to store the data and provide tooling to show trends over time.

Also, any form of benchmarking would benefit from ability to mock non-interesting functions, as proposed e.g. in #500.

@PavelVozenilek

PavelVozenilek commented May 15, 2018

Btw, tests could be extended to ensure multiple dynamic properties of code:

test "this test verifies that only certain range of bytes is allocated inside" (100B < allocated < 100kB) { ... }

test "checks there's no I/O inside" (I/O = false) { ... }

test "checks there's no TCP/IP happening inside" (TCP/IP = false) { ... }

test "mix" (time < 10 ms, allocated <= 1kB, I/O=true, TCP/IP = false) { ... }

In a limited way these things could be checked statically, but dynamic checking covers more situations. Mocking could be used to intercept and verify I/O and TCP/IP calls.

@andrewrk
Member

There is one reason to potentially have benchmarks in the language rather than userland, and that is Profile-Guided Optimization (PGO).

The idea being that you could create benchmarks that are automatically run in release modes and used for PGO.

You would not want regular tests to be used for PGO, because regular tests exercise edge cases by design, which is directly in conflict with optimizing for the common case.

@PavelVozenilek

PavelVozenilek commented May 15, 2018

@andrewrk This can be implemented in the same way:

test "" (PGO = true) { ... }

A customizable test runner was once proposed in #567.


Edit: the potential expansion of the scope of tests (perhaps miniprogram would be a better name) reminds me of proposal #608.


Edit2: Benchmarks intended for automatic checking of performance regressions can also be implemented this way. A specialized test runner would recognize the tests to be checked, record their times somewhere, and then warn if a regression happens.
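
The core of such a runner could be as small as the sketch below; the 20% threshold is arbitrary, and loading/persisting the baseline per test is left out:

const std = @import("std");

// warn if the measured time is more than 20% over the recorded baseline;
// the baseline would come from wherever the runner stored earlier results
fn checkRegression(name: []const u8, measured_ns: u64, baseline_ns: u64) void {
    const limit = baseline_ns + baseline_ns / 5;
    if (measured_ns > limit) {
        std.debug.print(
            "possible regression in '{s}': {d} ns vs baseline {d} ns\n",
            .{ name, measured_ns, baseline_ns },
        );
    }
}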

@data-man
Contributor

data-man commented May 26, 2020

Some thoughts:

  • implement online statistics (e.g. math.stats) — see the sketch after this list
  • add new keyword benchmark "description"
  • add zig bench <file> command
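
One way the online-statistics bullet could look: a small accumulator using Welford's algorithm, so mean and standard deviation can be updated per sample without storing every timing (the name OnlineStats is just for this sketch):

const std = @import("std");

const OnlineStats = struct {
    count: u64 = 0,
    mean: f64 = 0,
    m2: f64 = 0,

    // add one sample (e.g. nanoseconds per op) and update running stats
    fn add(self: *OnlineStats, sample_ns: f64) void {
        self.count += 1;
        const delta = sample_ns - self.mean;
        self.mean += delta / @as(f64, @floatFromInt(self.count));
        self.m2 += delta * (sample_ns - self.mean);
    }

    // sample standard deviation
    fn stddev(self: OnlineStats) f64 {
        if (self.count < 2) return 0;
        return @sqrt(self.m2 / @as(f64, @floatFromInt(self.count - 1)));
    }
};

test "online stats sketch" {
    var stats = OnlineStats{};
    for ([_]f64{ 100, 110, 90, 105 }) |ns| stats.add(ns);
    std.debug.print("mean {d:.1} ns, stddev {d:.1} ns\n", .{ stats.mean, stats.stddev() });
}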

Some projects for inspiration:

C/C++
Google's benchmark (the most famous)
nanobench
sltbench
b63 (Windows isn't supported)

Haskell
criterion

Nim
criterion.nim

Rust
criterion.rs

@SpexGuy SpexGuy added the proposal This issue suggests modifications. If it also has the "accepted" label then it is planned. label Mar 20, 2021
@andrewrk andrewrk modified the milestones: 1.1.0, 0.8.0 May 13, 2021
@SpexGuy
Contributor

SpexGuy commented May 13, 2021

More info on this decision:
Benchmarks tend to be much more complex than tests, involving a significant amount of effort for data generation, stabilization criteria, and careful application of clobbers. There are tradeoffs in all of these decisions that make benchmarking more suited to be a library than a language feature. Regarding PGO, while it's true that microbenchmarks provide better PGO data than tests, they can still be very misleading. It's not a clear win to have this within the language, since you would still need the ability to run PGO on normal program execution. So we've decided to close this and leave benchmarking up to a library.

@likern

likern commented Jan 29, 2022

More info on this decision: Benchmarks tend to be much more complex than tests [...] So we've decided to close this and leave benchmarking up to a library.

Is there any decent benchmarking library that is the recommended way to go?

Maybe turn https://github.com/ziglang/gotta-go-fast into a general-purpose solution?

@coffeebe4code

This comment was marked as off-topic.

@andrewrk
Member

No you're not hearing that. Please refrain from asking nonsensical rhetorical questions on closed issues. All discussions on the issue tracker must be focused and effortful.

@codingjerk

codingjerk commented Aug 9, 2023

I agree that benchmarking is hard, but testing is hard too. We still have tests in the language because it makes it easier to work with the language using its toolchain (zig build, zig run, zig test), and it's nice to have unit tests near the code.

It would be nice to use zig to run benchmarks and to have microbenchmarks near the code too.

My point is: just as in-language testing encourages people to write tests, in-language benchmarking would encourage people to make their software faster.
