
Minimal viable benchmarks #14

Open
sheplu opened this issue Nov 15, 2022 · 4 comments

@sheplu
Member

sheplu commented Nov 15, 2022

Following the discussion during the first meeting of the team, a few topics were raised regarding benchmarks:

  • how we want to benchmark the runtime
  • what we should benchmark

Some open questions about benchmarks were also discussed:

  • time to run all benchmarks
  • on which architectures
  • how frequently to run them

I will gather and post the discussion we had yesterday (at least the key points made) so we can have a constructive conversation around it.

@anonrig
Member

anonrig commented Nov 17, 2022

We should keep in mind that certain benchmarks take a lot of time to execute right now. For example, running all URL benchmarks takes 4 hours. (Referencing https://ci.nodejs.org/view/Node.js%20benchmark/job/benchmark-node-micro-benchmarks/1222/)

@BethGriggs
Member

nodejs/Release#479 is old but relevant. If we had a small set of benchmarks that provided a reasonable amount of coverage/value, then we could consider running them as part of the release process (for specific, or even all releases).

The caveat is that they would need to take a similar amount of time as our builds and be reasonably easy to interpret, to avoid adding too much time/effort to the already laborious release process. Maybe that could initially be achieved by setting a generous maximum regression percentage; if a release candidate regresses beyond it, we investigate before shipping the release.
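A minimal sketch of what such a threshold check could look like, assuming benchmark results have already been reduced to `{ name, opsPerSec }` pairs; the result shape, the 10% threshold, and the script itself are assumptions for illustration, not an existing tool:

```js
// Hypothetical sketch: compare a release candidate's benchmark results against
// a baseline and flag anything that regresses beyond a chosen threshold.
// The { name, opsPerSec } result shape and the 10% threshold are assumptions.
const MAX_REGRESSION = 0.10;

function findRegressions(baseline, candidate, maxRegression = MAX_REGRESSION) {
  const baseByName = new Map(baseline.map((r) => [r.name, r.opsPerSec]));
  return candidate
    .filter((r) => baseByName.has(r.name))
    .map((r) => ({
      name: r.name,
      change: (r.opsPerSec - baseByName.get(r.name)) / baseByName.get(r.name),
    }))
    .filter((r) => r.change < -maxRegression);
}

// Example: a candidate that is ~15% slower on 'url-parse' gets flagged.
const regressions = findRegressions(
  [{ name: 'url-parse', opsPerSec: 1_000_000 }],
  [{ name: 'url-parse', opsPerSec: 850_000 }],
);
if (regressions.length > 0) {
  console.log('Investigate before shipping:', regressions);
}
```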

@sxa
Member

sxa commented Nov 29, 2022

@sheplu I didn't manage to get on the call yesterday, but it sounded from the recording like you had information from quite a few sources - are there more reference links you could add to this issue so we can use it as an initial "hub" for collating information about what we've currently got?

@sheplu
Member Author

sheplu commented Nov 30, 2022

What I was talking about is partly linked to issue #13.

Here are the interesting links:

Now that I list them here, I realize I was imagining more resources :D

Also, a good discussion to have: even if we don't benchmark everything every time, we should have something that tests some of the core features on some "basic" use cases.
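For illustration, a minimal sketch of such a "basic use case" check for one core feature (URL parsing, since the URL benchmarks were mentioned above), using only `node:perf_hooks` rather than the full benchmark harness in nodejs/node; the iteration count, input URL, and ops/sec reporting are arbitrary choices:

```js
// Minimal micro-benchmark sketch for one core feature (URL parsing).
// A rough timing loop, not the nodejs/node benchmark harness.
const { performance } = require('node:perf_hooks');

function bench(name, fn, iterations = 1e6) {
  const start = performance.now();
  for (let i = 0; i < iterations; i++) fn();
  const seconds = (performance.now() - start) / 1000;
  console.log(`${name}: ${Math.round(iterations / seconds)} ops/sec`);
}

bench('new URL()', () => new URL('https://nodejs.org/en/download?x=1#y'));
```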
