
🚇 Retarget PR benchmarks to main #831

Merged · 2 commits merged into glotaran:main on Sep 19, 2021

Conversation

@s-weigand (Member) commented on Sep 18, 2021

Since staging finally got merged into main 🎉, we can use main as the comparison standard for PRs instead of the v0.4.0 release.

For the updated comment, see the benchmark bot comment below.

Change summary

  • 🚇 Use main as the comparison target for PRs (see the sketch below)
  • 🚇 Compare main against the last release (tag)
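
For context, a minimal sketch of the two comparisons this change sets up, assuming the benchmarks are driven by airspeed velocity (asv) and that asv and the repository are available in the CI environment (illustrative revision names, not the actual workflow code):

```python
# Sketch only: how a CI job could realize the two comparisons above with asv.
import subprocess

# PR benchmarks: compare the PR head against main instead of the v0.4.0 release.
subprocess.run(["asv", "continuous", "origin/main", "HEAD"], check=True)

# main benchmarks: compare main against the last release tag (v0.4.1 at the time of this PR).
subprocess.run(["asv", "continuous", "v0.4.1", "origin/main"], check=True)
```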

Checklist

  • ✔️ Passing the tests (mandatory for all PRs)

@s-weigand requested a review from a team as a code owner on September 18, 2021 16:52
@s-weigand added the Type: Tooling label (Tools used for the project: CI, CD, docs etc.) on Sep 18, 2021
@github-actions (Contributor) commented:

Binder 👈 Launch a binder notebook on branch s-weigand/pyglotaran/fix-pr-benchmark-target


sonarcloud bot commented Sep 18, 2021

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 0 (rating A)

No Coverage information
No Duplication information

@github-actions (Contributor) commented:

Benchmark is done. Check out the benchmark result page.
Benchmark differences below 5% might be due to CI noise.
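
To make that 5% rule of thumb concrete, here is an illustrative check (not the bot's actual logic); the ratio column below is after divided by before, and only changes outside the noise band should be read as real:

```python
# Illustrative only: a change counts as significant if the after/before ratio
# deviates from 1.0 by at least the 5% CI-noise threshold mentioned above.
NOISE_THRESHOLD = 0.05


def is_significant(before: float, after: float) -> bool:
    return abs(after / before - 1.0) >= NOISE_THRESHOLD


# Example values taken from the tables in this comment:
print(is_significant(4.32, 1.47))  # True  (ratio 0.34, a genuine speed-up)
print(is_significant(160, 165))    # False (ratio 1.03, within CI noise)
```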

Benchmark diff v0.4.1 vs. main

Parametrized benchmark signatures:

BenchmarkOptimize.time_optimize(index_dependent, grouped, weight)

All benchmarks:

       before           after         ratio
     [21ba272a]       [5c5e7a29]
     <v0.4.1>                   
-      46.6±0.2ms       31.7±0.2ms     0.68  BenchmarkOptimize.time_optimize(False, False, False)
-         285±1ms       37.5±0.8ms     0.13  BenchmarkOptimize.time_optimize(False, False, True)
-      64.2±0.3ms       52.4±0.1ms     0.82  BenchmarkOptimize.time_optimize(False, True, False)
-      65.8±0.4ms       56.0±0.6ms     0.85  BenchmarkOptimize.time_optimize(False, True, True)
-      46.7±0.2ms      40.8±0.05ms     0.87  BenchmarkOptimize.time_optimize(True, False, False)
-       286±0.6ms        59.8±50ms     0.21  BenchmarkOptimize.time_optimize(True, False, True)
       64.5±0.5ms       63.4±0.2ms     0.98  BenchmarkOptimize.time_optimize(True, True, False)
       66.3±0.2ms        67.4±40ms     1.02  BenchmarkOptimize.time_optimize(True, True, True)
             179M             182M     1.02  IntegrationTwoDatasets.peakmem_create_result
             197M             201M     1.02  IntegrationTwoDatasets.peakmem_optimize
-         202±2ms          165±2ms     0.82  IntegrationTwoDatasets.time_create_result
-      4.32±0.03s       1.47±0.08s     0.34  IntegrationTwoDatasets.time_optimize

Benchmark diff main vs. PR

Parametrized benchmark signatures:

BenchmarkOptimize.time_optimize(index_dependent, grouped, weight)

All benchmarks:

       before           after         ratio
     [2f1afe78]       [5c5e7a29]
      31.6±0.09ms       31.7±0.2ms     1.00  BenchmarkOptimize.time_optimize(False, False, False)
       37.1±0.3ms       37.5±0.8ms     1.01  BenchmarkOptimize.time_optimize(False, False, True)
       52.2±0.3ms       52.4±0.1ms     1.00  BenchmarkOptimize.time_optimize(False, True, False)
       55.6±0.6ms       56.0±0.6ms     1.01  BenchmarkOptimize.time_optimize(False, True, True)
       40.6±0.2ms      40.8±0.05ms     1.01  BenchmarkOptimize.time_optimize(True, False, False)
        88.6±50ms        59.8±50ms    ~0.68  BenchmarkOptimize.time_optimize(True, False, True)
       63.1±0.3ms       63.4±0.2ms     1.00  BenchmarkOptimize.time_optimize(True, True, False)
        69.6±40ms        67.4±40ms     0.97  BenchmarkOptimize.time_optimize(True, True, True)
             179M             182M     1.02  IntegrationTwoDatasets.peakmem_create_result
             197M             201M     1.02  IntegrationTwoDatasets.peakmem_optimize
          160±1ms          165±2ms     1.03  IntegrationTwoDatasets.time_create_result
       1.42±0.04s       1.47±0.08s     1.03  IntegrationTwoDatasets.time_optimize
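
The BenchmarkOptimize.time_optimize(index_dependent, grouped, weight) signature listed above reflects asv's parameter expansion: each combination of the three boolean parameters becomes its own benchmark, which is why eight time_optimize rows appear per table. A hypothetical sketch of such a parametrized asv benchmark class (not pyglotaran's actual benchmark code):

```python
class BenchmarkOptimize:
    # asv runs setup() and time_optimize() once per parameter combination
    # (2 x 2 x 2 = 8 benchmark variants).
    params = ([True, False], [True, False], [True, False])
    param_names = ["index_dependent", "grouped", "weight"]

    def setup(self, index_dependent, grouped, weight):
        # Stand-in setup; the real benchmark would build an optimization
        # problem matching the given parameter combination.
        self.data = list(range(100_000))

    def time_optimize(self, index_dependent, grouped, weight):
        # asv measures the wall-clock time of this method body.
        sum(self.data)
```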


codecov bot commented Sep 18, 2021

Codecov Report

Merging #831 (5c5e7a2) into main (2f1afe7) will not change coverage.
The diff coverage is n/a.

Impacted file tree graph

@@          Coverage Diff          @@
##            main    #831   +/-   ##
=====================================
  Coverage   84.7%   84.7%           
=====================================
  Files         77      77           
  Lines       4343    4343           
  Branches     785     785           
=====================================
  Hits        3682    3682           
  Misses       521     521           
  Partials     140     140           

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 2f1afe7...5c5e7a2.

@jsnel (Member) left a comment:


Looks good.

@jsnel merged commit 7c5573d into glotaran:main on Sep 19, 2021
@jsnel deleted the fix-pr-benchmark-target branch on September 19, 2021 04:47