
Use continuous benchmark to track long-term performance #150

Open
ringsaturn opened this issue Aug 16, 2022 · 6 comments

Comments

@ringsaturn
Contributor

https://github.com/marketplace/actions/continuous-benchmark can save each commit’s benchmark results to a file and comes with a nice HTML page for viewing trends. I believe it would be useful for tracking performance improvements.
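
For reference, a minimal workflow wiring this action up for a Python project could look roughly like the sketch below. This is only a sketch: the pytest-benchmark invocation, the test path, the output file name, and the branch are assumptions, not existing configuration in this repository.

name: Benchmark
on:
  push:
    branches: [master]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      # Produce results in pytest-benchmark's JSON format
      # (assumed suite location and output file name)
      - name: Run benchmarks
        run: |
          pip install pytest pytest-benchmark
          pytest tests/ --benchmark-json=benchmark_result.json
      # Store the results and push the rendered trend page to gh-pages
      - name: Continuous Benchmark
        uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: 'pytest'
          output-file-path: benchmark_result.json
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: true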

@jannikmi
Owner

AFAIK the benchmark will run on GitHub Actions for every build. One question that comes to mind: is the performance really comparable when the runtime environment (computational setup, CPU, RAM, etc.) of the benchmark cannot be controlled?
@ringsaturn do you know about this?

@ringsaturn
Contributor Author

ringsaturn commented Aug 16, 2022

There are two main variables:

Hardware

Sometimes the performance can differ a lot (perhaps because of CPU load). Take tzf as an example (https://ringsaturn.github.io/tzf/): the performance regresses after many benchmark runs.

[screenshot: tzf benchmark trend]

That’s the part we can’t control. We just run the benchmark many times and watch whether the trend settles into a regression.

Project-based environment

Sometimes we need to test under multiple environments, such as different OSes or language versions, and that we can configure in GitHub Actions.

An internal project uploads benchmark data from each environment to a different folder, keyed for example by Go version:

      - name: Continuous Benchmark
        uses: benchmark-action/github-action-benchmark@v1
        with:
          name: Go Benchmark
          tool: 'go'
          output-file-path: ${{ matrix.go }}_benchmark_result.txt
          github-token: ${{ secrets.TOKEN }}
          gh-repository: 'github.com/xxxxx/yyyyy'
          auto-push: true
          alert-threshold: '200%'
          comment-on-alert: true
          fail-on-alert: false
          gh-pages-branch: "benchmark"
          benchmark-data-dir-path: "benchmark/${{ matrix.go }}/"
          alert-comment-cc-users: '@ringsaturn'
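
The ${{ matrix.go }} reference above comes from a job-level matrix strategy; a minimal sketch of that part (the version list is illustrative):

jobs:
  benchmark:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Each version runs the benchmark in its own job
        go: ['1.17', '1.18', '1.19']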

Each environment needs a unique save path, and then we can see the different benchmark results:

.
└── benchmark
    ├── 1.17
    │   ├── data.js
    │   └── index.html
    ├── 1.18
    │   ├── data.js
    │   └── index.html
    └── 1.19
        ├── data.js
        └── index.html

If we need a page for viewing, we can add an index.html to the gh-pages branch that shows a list of the benchmark sets.

@ringsaturn
Contributor Author

Another problem I know of: if a PR isn’t opened by the owner, the secret token can’t be accessed, as in ringsaturn/tz-benchmark#7.

@ringsaturn
Contributor Author

Here is a screenshot from our internal project. A few days ago we added a performance improvement, and the effect is obvious in the chart.

[Screenshot 2022-08-17 at 15 49 03]

@jannikmi
Owner

For future reference: the scripts folder already contains scripts for time benchmarks.
