Web Tooling Benchmark #138
This sounds awesome. I hope that the tools are not only tested in isolation, but also in tandem. Many performance problems only occur when you combine several tools with each other. E.g. at my current company we use TypeScript together with Babel (and maybe one or two other loaders) via webpack, not just TypeScript or just Babel. Our linting step currently also consists of three tools: Prettier, TSLint and ESLint. (Mainly because TypeScript support in ESLint is still new, so we use both linters.) All in all this is an awesome project. 👍 I wonder what this benchmark will look like. Will you use dedicated machines for test runs? Can tooling authors add their own tools via pull requests? Sorry if this is already too detailed 😆
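For illustration, here is a minimal webpack configuration sketch for the kind of "tools in tandem" setup described above, with TypeScript compiled by ts-loader and the output then run through babel-loader. The entry point, loader choice, and options are assumptions made for this example, not a prescription for the benchmark:

```js
// Hypothetical webpack.config.js: TypeScript first (ts-loader), then Babel
// (babel-loader); webpack applies the loaders in a rule right-to-left.
module.exports = {
  entry: './src/index.ts', // assumed entry point
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        use: ['babel-loader', 'ts-loader'],
        exclude: /node_modules/,
      },
    ],
  },
  resolve: {
    extensions: ['.ts', '.tsx', '.js'],
  },
};
```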
The idea is to start small and eventually include more tools. Pull requests are of course welcome. I'd prefer to have the benchmark suite runnable standalone (i.e. independent of a concrete machine setup), similar to, let's say, the Octane benchmark, which you could easily run on any machine.
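To make the "runnable standalone" idea concrete, here is a rough sketch of what a self-contained suite could look like, assuming the benchmark.js npm package as the harness (an illustrative choice, not necessarily what the suite will use); the same code can run under Node.js or, once bundled, in a browser:

```js
const Benchmark = require('benchmark');

const suite = new Benchmark.Suite('web-tooling');

suite
  .add('example-workload', () => {
    // A real test case would invoke a tool on a fixed in-memory payload here.
    JSON.parse('{"hello":"world"}');
  })
  .on('cycle', event => {
    // Prints e.g. "example-workload x 1,234,567 ops/sec ±0.5% (90 runs sampled)"
    console.log(String(event.target));
  })
  .on('complete', () => {
    console.log('Suite finished.');
  })
  .run({ async: true });
```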
I agree. Being able to run it anywhere, including the browser, would be a huge plus.
Please ensure this includes stress testing of the source-map package. (Transitive) source map generation is hugely expensive today and is heavy on memory allocation and usage, meaning it stresses the GC.
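As a rough sketch of the allocation-heavy generation side of that workload, here is an example using the source-map package's SourceMapGenerator; the file names and mapping counts are made up for illustration:

```js
const { SourceMapGenerator } = require('source-map');

function generateMap(mappingCount) {
  const generator = new SourceMapGenerator({ file: 'bundle.min.js' });
  for (let i = 1; i <= mappingCount; i++) {
    // Every addMapping call allocates; large bundles add millions of these,
    // which is what puts pressure on the GC.
    generator.addMapping({
      source: `module-${i % 100}.js`,
      original: { line: i, column: 0 },
      generated: { line: 1, column: i * 2 },
    });
  }
  return generator.toString(); // serialize to a JSON source map string
}

console.log(generateMap(100000).length);
```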
As discussed in the benchmarking meeting yesterday, it would be great to add coverage for that use case category. As you mention, we don't have good coverage there. One quick question: how long do you think the benchmark will run for? I'm assuming it will be relatively short (<1 hour) so that we can easily run it nightly as part of the bench runs.
Yes, the idea is that it runs only for a couple of minutes.
Ok, quick status update: I compiled an initial version of the WebToolingBenchmark, consisting of a test case for Chai and one for Espree. I'm currently working on generating a useful test case for TypeScript, and I'm in contact with the webpack folks about a proper benchmark.
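As a rough idea of what such a test case might exercise, here is a sketch of a minimal Espree workload; the payload and parser options are placeholders, not the benchmark's actual inputs:

```js
const espree = require('espree');

const payload = 'const add = (a, b) => a + b; export default add;';

function parsePayload() {
  return espree.parse(payload, {
    ecmaVersion: 8,
    sourceType: 'module',
    loc: true,
    range: true,
  });
}

console.log(parsePayload().body.length); // number of top-level statements
```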
Ok, I have a demoable version at github.com/v8/web-tooling-benchmark. Please give it a go and let me know what you think.
This is the initial version of the Web Tooling Benchmark as discussed in nodejs/benchmarking#138, and contains tests for Chai and Espree only. It's meant to be a starting point for further extension.
As requested earlier in the discussion on nodejs/benchmarking#138 (comment), this adds a benchmark for the popular source-map package, which covers both the serialization and the parsing of source maps. The payloads are source maps from the latest versions of popular libraries: `backbone-min.map`, `jquery.min.map`, `preact.min.js.map` and `source-map.min.js.map` (the map for the minified source-map.js itself).
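For the parsing side, here is a sketch of how such a payload could be consumed, assuming source-map 0.6.x (where the SourceMapConsumer constructor is synchronous) and a tiny hand-rolled map instead of the real `.map` files listed above:

```js
const { SourceMapConsumer } = require('source-map');

const rawMap = {
  version: 3,
  file: 'example.min.js',
  sources: ['example.js'],
  names: ['add'],
  mappings: 'AAAA,SAASA', // tiny hand-written VLQ data, for illustration only
};

const consumer = new SourceMapConsumer(rawMap);

let count = 0;
consumer.eachMapping(() => {
  count++; // walking every mapping forces the VLQ data to be decoded
});
console.log(`decoded ${count} mappings`);
```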
The benchmark is now running in nightly builds. I think this issue can now be closed. Closing; please let me know if it should still be open.
As discussed in the last benchmarking WG meeting yesterday, I'd like to propose a new benchmark suite that covers common tools used daily by developers to build web pages, a so-called Web Tooling Benchmark. This will include at least a few widely used tools to start with, with more to be added over time.
As far as I can tell, this kind of use case is not yet covered or well represented in the Main Use Case Categories.
The idea for this benchmark suite is to have it run both in Node.js and in the browser, so that it can be used as a general measure to advance JavaScript engine performance. It should ideally mock out all I/O and focus on the core execution logic inside those tools.
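One way to mock out the I/O is to bundle all input payloads as in-memory strings and never touch the file system; a minimal sketch, with hypothetical payload names and helper:

```js
// In-memory "file system": payload names mapped to bundled file contents.
const payloads = new Map([
  ['some-library.min.js', '/* bundled file contents */'],
  ['another-payload.json', '{"example": true}'],
]);

function readPayload(name) {
  const contents = payloads.get(name);
  if (contents === undefined) {
    throw new Error(`Unknown payload: ${name}`);
  }
  return contents; // no fs or network access, so results stay reproducible
}
```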
I expect work on this to start at the end of Q3 / beginning of Q4. Help from the community in both design and coding would be highly appreciated.