
[BENCH] Benchmark Bron-Kerbosch Algorithm #147

Closed
bobluppes opened this issue Oct 10, 2023 · 4 comments
Labels: good first issue, hacktoberfest, help wanted, no-issue-activity, performance

Comments

bobluppes (Owner) commented Oct 10, 2023

Benchmark Bron-Kerbosch

The goal of this issue is to add a benchmark for the Bron-Kerbosch algorithm. The algorithm implementation is located under include/graaflib/algorithm/clique_detection/bron_kerbosch.h. Benchmarks are vital to our library, as they allow us to measure the impact of future performance improvements.

We use the Google Benchmark framework. For inspiration, please take a look at the existing benchmarks in /perf.

The benchmark should be added under /perf in a directory which mirrors the file structure of the original algorithm. For example, if the algorithm is implemented in include/graaflib/algorithm/coloring/greedy_graph_coloring.h, then the benchmark should be added at perf/graaflib/algorithm/coloring/greedy_graph_coloring_benchmark.cpp.
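Following that convention, the Bron-Kerbosch benchmark would live at a path like the one below. The exact filename is an assumption derived from the greedy_graph_coloring example, not something specified elsewhere:

```shell
# Mirror the algorithm's include path under /perf:
#   include/graaflib/algorithm/clique_detection/bron_kerbosch.h
# becomes
#   perf/graaflib/algorithm/clique_detection/bron_kerbosch_benchmark.cpp
mkdir -p perf/graaflib/algorithm/clique_detection
touch perf/graaflib/algorithm/clique_detection/bron_kerbosch_benchmark.cpp
```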

The benchmark should measure the runtime performance of the algorithm for increasing input sizes.

Running Benchmarks

If your IDE has the necessary integrations, all benchmarks can be run from within the IDE via the perf/graaflib/benchmark.cpp file.

Otherwise, we can run the benchmarks from the command line:

# run all benchmarks
cd build/perf && ./Graaf_perf

To run an individual benchmark:

./Graaf_perf --benchmark_filter=YOUR_BENCHMARK_NAME

For more options, pass the --help flag.

Definition of Done

  • A benchmark is added for the algorithm
  • Benchmark results (a copy-paste of the output is fine) are added to the PR
@bobluppes added the help wanted, good first issue, performance, and hacktoberfest labels on Oct 10, 2023
Hromz (Contributor) commented Oct 10, 2023

Hey @bobluppes, can I work on this, even though I have an active issue (example section)? :)

bobluppes (Owner, Author) commented

> Hey @bobluppes, can I work on this, even though I have an active issue (example section)? :)

Hi @Hromz, of course! Seeing the number of algorithms we currently have there should be no shortage of benchmarking tickets ;)

Assigning this to you!

Hromz (Contributor) commented Oct 12, 2023

Hey @bobluppes! Please check PR #157.

github-actions (bot) commented

Stale issue message

github-actions bot closed this as not planned on Nov 3, 2023