Add mechanism to report on modules that take a long time to resolve #9588
Comments
I like it! Either making sure dependency trees are more shallow, or just a well-placed mock, could help. Another benefit is that this might help people understand why the time for the test file (5s in your example) is way higher than what the individual test times indicate - the test itself is fast, but it takes time to get to the actual test and run it. I've seen questions pop up about this before. I'm not sure what the best API is.
It's also not clear to me how best to print this. I think it makes sense to add it to a test result somehow, and then a reporter can choose whether or not to print it. /cc @cpojer @thymikee @jeysal would love to hear your thoughts on this one 🙂
It seems that potentially there is a very large number of total modules that are imported, especially when dealing with third party libraries. Most have negligible performance impact. I would think the reporter could be responsible for displaying this in a useful way.
In our current state we have a large number of test suites that perform poorly, so part of the desire is to have the tooling highlight why certain tests are slow so we can start improving them. Once we're at a better baseline this is probably less of a problem; maybe that is too specific to our particular codebase, though. I'll try to come up with a basic mockup of how the reporter might display the results - maybe that will make the needed options a bit clearer.
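As a starting point for such a mockup, a custom Jest reporter could surface the data however it likes. The sketch below uses Jest's real reporter interface (`onTestResult`), but the `slowImports` field on the test result is hypothetical - it is exactly the data this proposal would add, and its shape here (`modulePath`, `durationMs`) is made up for illustration:

```javascript
// Hedged sketch of a custom Jest reporter. The reporter interface
// (onTestResult) is real; the `slowImports` array on testResult is
// hypothetical data this proposal would add.
class SlowImportReporter {
  onTestResult(test, testResult) {
    // Assumed shape: [{ modulePath, durationMs }, ...], slowest entries only.
    for (const { modulePath, durationMs } of testResult.slowImports || []) {
      console.log(`SLOW IMPORT ${durationMs}ms  ${modulePath}  (in ${test.path})`);
    }
  }
}

module.exports = SlowImportReporter;
```

It would be wired up through the existing `reporters` entry in the Jest config, so projects that don't care about import timings simply wouldn't enable it.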
This seems useful for sure. In case you have a lot of files in your repo and are looking for a way to improve the resolution globally, I recently wrote up some work I did here that you might find useful, depending on your constraints: #11034
This issue is stale because it has been open for 1 year with no activity. Remove stale label or comment or this will be closed in 14 days.
This issue was closed because it has been stalled for 7 days with no activity. Please open a new issue if the issue is still relevant, linking to this one.
This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs. |
🚀 Feature Proposal
I'd like to have an option to set a threshold when running tests (e.g. `--slowImportThreshold 1000` for 1s) that would log the specific imports that take significant time to resolve prior to executing tests. This would greatly simplify the effort for our developers to find the specific modules that are slowing down overall test performance. Ideally this would report on nested dependencies, so I can follow the import path and prune the items responsible for the bulk of the time being spent.
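To sketch the intended semantics of such a threshold (the module names and timings below are invented example data, and `slowImports` is a hypothetical helper, not an existing Jest API), the flag would simply filter and rank per-module resolution timings:

```javascript
// Illustrative only: these entries and the threshold are made-up examples.
const timings = [
  { module: '@material-ui/icons', ms: 3200 },
  { module: 'lodash', ms: 180 },
  { module: './utils', ms: 4 },
];

// Keep only modules at or above the threshold, slowest first.
function slowImports(entries, thresholdMs) {
  return entries.filter((e) => e.ms >= thresholdMs).sort((a, b) => b.ms - a.ms);
}

console.log(slowImports(timings, 1000));
// → [{ module: '@material-ui/icons', ms: 3200 }]
```

With `--slowImportThreshold 1000`, only the Material UI icons import would be reported; lowering the threshold would surface progressively smaller offenders.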
Motivation
In our current codebase it frequently takes 5-10 seconds to resolve the dependencies for many test suites, and that time is separate from actual execution time (in most cases the individual tests execute quickly). Multiply this by tens or hundreds of test suites and it takes several minutes to run all of our tests.
This can generally be improved by narrowing the imports or mocking various modules, but the larger problem is identifying the offending modules in the first place. Currently this requires manually commenting out import statements to observe the effect on run time, using trial and error to narrow down to the statement that imports slowly. In many cases the offender is nested several levels deep, so finding it requires a deep understanding of the dependency graph.
This is asking a lot of the developers working on our projects. If the tooling provided specific, actionable information about why the tests are performing poorly, it's much more likely that the problems would be addressed.
Example
I've created a sample repo with references to an expensive import (the Material UI Icons are causing problems for us here, though they aren't the only poorly performing import): https://github.com/MattKunze/jest-import-perf
If I manually add diagnostic code in jest-runtime to time how long it takes to execute modules around:
https://github.com/facebook/jest/blob/1ed46e71a058c93b529e5e3f9388a800352de21a/packages/jest-runtime/src/index.ts#L850-L871
then I can see where time is being spent during the import process.
Then I can address the problem by either narrowly importing the code needed
https://github.com/MattKunze/jest-import-perf/blob/fa6881d03dfbebf62f9bacfb53bedbea3bc486b8/src/App.tsx#L5-L10
or mocking the module that is importing slowly
https://github.com/MattKunze/jest-import-perf/blob/fa6881d03dfbebf62f9bacfb53bedbea3bc486b8/src/App.test.tsx#L5-L8
Either of these changes significantly speeds up the time it takes for Jest to actually start executing tests. The problem in a large codebase is identifying where these improvements can be made.
Pitch
Test tooling should run as fast as possible to incentivize developers to write quality tests that scale with the codebase. Overall test performance is important to quickly provide feedback through CI and related mechanisms.
I'm confident that we can cut our test runtime by more than half just by optimizing how much code Jest has to inspect when analyzing dependencies. The challenge is identifying the source of the problem and, as importantly, providing feedback to developers to keep these issues from creeping back in over time.
This is something I'm willing to work on as a contributor; I just need some indication that the functionality would be accepted upstream and basic guidance on an approach that fits in well with the rest of the ecosystem.