Feature Idea: Custom Benchmarking Metric #176

Open
JonasIsensee opened this issue Sep 24, 2020 · 3 comments

Comments

@JonasIsensee

JonasIsensee commented Sep 24, 2020

Hi,

In my research we are developing an adaptive solver for agent-based simulations, and we have built a
benchmark suite with PkgBenchmark.
This already helps a lot, but the results can sometimes be quite confusing because the runtime of the full solving process
depends very strongly on the adaptive solver and its heuristics.
One thing that would significantly improve our benchmarks would be the ability to include
an additional custom metric in our benchmark pipeline.
(In this case that would just be the number of solver iterations.)

We already have one working but inefficient way of doing this:

@benchmarkable sleep(iterations/1000) setup=(iterations=solve(...))

which adds an entry to the PkgBenchmark judge/result files hinting at the number of iterations, but this is of course wasteful, since the measured time is spent sleeping.
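
To make the workaround concrete, here is a minimal self-contained sketch of how it could look in a PkgBenchmark-style suite. The `problem` and `solve` definitions are hypothetical stand-ins for the real adaptive solver, and the one-millisecond-per-iteration scaling is arbitrary:

```julia
using BenchmarkTools

problem = rand(100)            # hypothetical problem instance
solve(p) = count(>(0.5), p)    # hypothetical solver returning an iteration count

const SUITE = BenchmarkGroup()
SUITE["solver"] = BenchmarkGroup()

# The actual runtime of the solver.
SUITE["solver"]["runtime"] = @benchmarkable solve($problem)

# Workaround: encode the iteration count as a fake runtime so it shows up
# as an extra entry in the PkgBenchmark result/judge files (1 iteration ≈ 1 ms).
SUITE["solver"]["iterations"] =
    @benchmarkable sleep(iters / 1000) setup=(iters = solve($problem))
```

The "iterations" entry then tracks the iteration count across commits, at the cost of actually sleeping for that long on every sample.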

I also tried implementing a macro for this myself by essentially duplicating the code for @benchmarkable, but
so far it lacks generality, and it is not clear how it should fit into the rest of the logic.
( https://github.com/JonasIsensee/BenchmarkTools.jl/tree/mytrial )

What are your thoughts?
Would this be useful to others as well?
Could this be done in a slightly more general way?

@gdalle
Collaborator

gdalle commented Jun 13, 2023

So, more generally, this could mean including the output of the function you're benchmarking in the results?

@JonasIsensee
Author

Yes, essentially that.
Either one would need to require simple floating-point output (so it can automatically be reduced to a mean / std),
or, the much more flexible option, accept a user-provided metric function that takes the time and return value
and spits out a numerical metric.
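
Just to make that idea concrete, here is a rough sketch of how such a metric function could be wired up by hand today with `@belapsed`. None of this is an existing BenchmarkTools or PkgBenchmark API; the `solve` and `benchmark_with_metric` names are made up for illustration:

```julia
using BenchmarkTools

# Hypothetical stand-ins for the real solver and problem.
problem = rand(100)
solve(p) = (solution = sum(p), iterations = count(>(0.5), p))

# Hypothetical helper: benchmark a zero-argument callable and reduce its
# runtime and return value to a single number with a user-provided metric.
function benchmark_with_metric(f, metric)
    retval = f()               # one extra call to capture the return value
    t = @belapsed $f()         # minimum measured runtime in seconds
    return (time = t, metric = metric(t, retval))
end

# The custom metric here is simply the number of solver iterations.
benchmark_with_metric(() -> solve(problem), (t, r) -> r.iterations)
```

Built-in support would presumably let the macro capture the return value itself instead of requiring the extra call.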

@gdalle
Collaborator

gdalle commented Sep 18, 2023

See #314

@gdalle gdalle added this to the v2.0 milestone Sep 18, 2023
@gdalle gdalle removed this from the v2.0 milestone Jan 5, 2024