Dramatically improve watch mode performance. #8201
Conversation
So excited!
Nice one, thanks! ✨
```diff
@@ -32,7 +32,7 @@ describe('getMaxWorkers', () => {
   it('Returns based on the number of cpus', () => {
     expect(getMaxWorkers({})).toBe(3);
-    expect(getMaxWorkers({watch: true})).toBe(2);
+    expect(getMaxWorkers({watch: true})).toBe(3);
```
Was there any reason this was 1 less? @rogeliog, do you know?
Maybe the rationale for `cpus / 2` was that watch mode is typically used while doing other things (mostly editing files), and this was to make it less likely that the editor becomes slow or freezes?
If we want to make it 1 less that's fine, but before it was halved! Just happened to be 1 less in this case.
On my machine, it was running 6 workers instead of 11.
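The two heuristics being compared can be sketched as a small standalone function (hypothetical; this is not Jest's actual `getMaxWorkers` implementation, and the function name and parameters are my own). Before the change, watch mode halved the CPU count; after it, watch mode uses `cpus - 1` like a single run, which matches the 6-vs-11 worker numbers above on a 12-CPU machine:

```typescript
// Sketch of the worker-count heuristic under discussion (hypothetical,
// not Jest's actual code). `halveInWatch` models the old behavior.
function maxWorkers(cpuCount: number, watch: boolean, halveInWatch: boolean): number {
  if (watch && halveInWatch) {
    return Math.max(Math.floor(cpuCount / 2), 1); // old watch-mode behavior
  }
  return Math.max(cpuCount - 1, 1); // leave one CPU for the main thread
}

console.log(maxWorkers(12, true, true));  // 6 workers (old)
console.log(maxWorkers(12, true, false)); // 11 workers (new)
```

In real code the CPU count would come from `os.cpus().length`; it is parameterized here so the sketch is testable.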
I know, stumbled over the formula as well some time ago and wondered if there's any experimental basis for it ^^
I'm fine with using `cpus - 1` for watch mode as well. It's a good heuristic for what performs best when Jest is the only thing running; we can't really do much about sharing resources with other applications. If you want your editor to remain responsive while running an expensive task, `nice yarn jest --watch` is a good idea anyway.
This is awesome!
Could you update the changelog? 😀
```typescript
static DuplicateHasteCandidatesError: typeof DuplicateHasteCandidatesError;
private static nextUniqueID = 0;
```
`static` above instance variables? Also, as mentioned somewhere else, start this at `1` so it's not falsy?
`static` above instance should be a lint rule / auto-fixable? Done.
> `static` above instance should be a lint rule / auto-fixable?
yeah, probably 🙂
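The "start at 1" suggestion above can be illustrated with a minimal sketch (hypothetical class name; this is not the actual jest-haste-map code). Starting the static counter at 1 keeps every assigned ID truthy, so a check like `if (map.uniqueID)` can't silently skip the first instance:

```typescript
// Hypothetical sketch of the static ID-counter pattern discussed above.
class HasteMapLike {
  private static nextUniqueID = 1; // start at 1 so no ID is ever falsy
  readonly uniqueID: number;
  constructor() {
    this.uniqueID = HasteMapLike.nextUniqueID++;
  }
}

const first = new HasteMapLike();
const second = new HasteMapLike();
console.log(first.uniqueID, second.uniqueID); // 1 2
```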
packages/jest-runner/src/index.ts
Outdated
```diff
@@ -134,6 +134,9 @@ class TestRunner {
         Array.from(this._context.changedFiles),
       },
       globalConfig: this._globalConfig,
+      moduleMapUniqueID: watcher.isWatchMode()
+        ? test.context.moduleMap.uniqueID
+        : null,
       path: test.path,
       serializableModuleMap: watcher.isWatchMode()
         ? test.context.moduleMap.toJSON()
```
@SimenB I've been thinking about this and I'm pretty sure we're still paying a serialization cost to send this to the worker for each test, just much less of one. Is this part of the public interface? Can I change it without it being considered a breaking change?
Can always explore in further PRs since this one is obviously an improvement, but it would be nice if I was able to move the logic up into the main thread and not even send the haste map if it wasn't new. I'm not sure if it is a breaking change.
I'm not sure... That's more of a question for @rubennorte or @mjesun
The less we have to send across ipc the better, so if we can change it I'm all for it 🙂
It looks like external runners include both the worker and the orchestrator, so changing what we pass to the worker here should be safely in the realm of a non-breaking-change!
Working to further optimize this.
@rubennorte @mjesun Let me know if I'm incorrect.
@SimenB Feel free to re-review; I've refactored this to approach it from a different angle, and it's now even faster. On Jest's test suite, watch mode previously carried a ~6% performance cost (in my initial PR) and now it's about ~1%.
The refactor matters even more for larger haste maps, like Facebook's.
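The idea discussed in this thread can be sketched in isolation (hypothetical names and shapes; this is not the actual jest-runner code): the worker remembers the `uniqueID` of the last module map it built and only pays the expensive rebuild cost when a different ID arrives, which is what turns a per-test cost into a once-per-haste-map-change cost.

```typescript
// Hypothetical sketch of worker-side module-map caching keyed by uniqueID.
type SerializedModuleMap = Record<string, unknown>;

let cachedID: number | null = null;
let cachedMap: SerializedModuleMap | null = null;
let rebuilds = 0;

function getModuleMap(
  uniqueID: number,
  deserialize: () => SerializedModuleMap,
): SerializedModuleMap {
  if (cachedID !== uniqueID) {
    cachedMap = deserialize(); // expensive; runs once per haste map change
    cachedID = uniqueID;
    rebuilds++;
  }
  return cachedMap!;
}

getModuleMap(1, () => ({version: 1}));
getModuleMap(1, () => ({version: 1})); // cache hit, no rebuild
getModuleMap(2, () => ({version: 2})); // ID changed, rebuild
console.log(rebuilds); // 2
```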
Codecov Report

```
@@            Coverage Diff             @@
##           master    #8201      +/-   ##
==========================================
- Coverage    62.3%    62.3%   -0.01%
==========================================
  Files         265      265
  Lines       10473    10483      +10
  Branches     2542     2543       +1
==========================================
+ Hits         6525     6531       +6
- Misses       3366     3370       +4
  Partials      582      582
```

Continue to review the full report at Codecov.
Oh, hell yeah! I like this even more. Happy sunday! 😀
As @jeysal pointed out, the reason only half of the available CPUs are used in watch mode is that otherwise Jest will grind your system to a halt. Ideally, watch mode should be unobtrusive. With the change from this PR, Jest will run faster at the cost of developer experience, because all of the developer's other tooling will be slower (at Facebook, consider Buck, Jest, Flow, Nuclide, etc. all fighting for resources). We initially started out with Jest using all available CPUs, and people complained, which is why we changed it to half of the available CPUs. I feel strongly that this part of the PR should be reverted, even if it comes at the cost of watch performance. (Apologies for not adding a code comment in the past; I thought this could be blamed back to the original PR, which most likely has an explanation.)
Alternatively, I'm definitely happy if you wanna go for an adaptive approach that checks recent CPU utilization and adjusts Jest's CPU usage accordingly.
Shouldn't that work the same regardless of watch mode? Why should watch mode use fewer cores than a normal run?
Mainly because watch mode is something you have open as a service, often in the background, and it runs automatically while writing/saving code. When you invoke
Right, makes sense
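The adaptive approach floated above could look something like this minimal sketch (entirely hypothetical; the function name, parameters, and clamping policy are my own, not anything Jest implements): derive the worker count from a recent load average instead of a fixed fraction of CPUs.

```typescript
// Hypothetical sketch of load-adaptive worker sizing: subtract the
// (rounded-up) 1-minute load average from the CPU count, and clamp the
// result between 1 and cpus - 1.
function adaptiveWorkers(cpuCount: number, loadAvg1m: number): number {
  const idle = Math.max(cpuCount - Math.ceil(loadAvg1m), 1);
  return Math.max(Math.min(idle, cpuCount - 1), 1);
}

console.log(adaptiveWorkers(12, 2.5)); // 9: plenty of idle CPUs
console.log(adaptiveWorkers(12, 11));  // 1: machine is busy, stay out of the way
```

In Node this would be fed from `os.cpus().length` and `os.loadavg()[0]` (note that `os.loadavg()` returns zeros on Windows, so a real implementation would need a fallback there).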
@cpojer I actually checked the blame, but it was lost to an earlier refactor. :) It was only a small part of this PR anyway; if anything, I thought it might have been a mistake. I'll submit a PR to revert that piece.
Can that PR include doc changes explaining what these values are and why? That was part of the confusion in #7341.
Sure, will do. I just validated (as is pretty obvious) that even without the CPU change this is a massive performance boost. At 300k files in the map, I'm seeing things run almost twice as fast. I'll work on reverting that piece with documentation today.
This pull request has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Summary
Resolves #7341
This PR dramatically improves watch mode performance, bringing it in line with single run mode performance. It accomplishes that by:
- No longer re-initializing the `ModuleMap` and `Resolver` for every test in watch mode. Now, those objects are only initialized once when the worker is set up.
- No longer repeatedly serializing the `ModuleMap` to a JSON-friendly object.
Benchmarks
I benchmarked against Jest's own test suite, excluding e2e tests, which don't provide good signal because they individually take a long time (so startup time for the test is marginalized). The numbers show that running in watch mode previously added an extra ~35% of runtime to the tests, but that has now been reduced to almost nothing.
Watch mode should now just be paying a one-time initial cost for each worker when the haste map changes instead of paying that same cost for every test run.
branch: `master`
- `yarn jest ./packages` run time: 15.091s
- `yarn jest ./packages --watch` run time: 23.234s

branch: `watch-performance`
- `yarn jest ./packages` run time: 14.973s
- `yarn jest ./packages --watch` run time: 15.196s
Test plan