
How can I help to get this faster? #20

Open
Andarist opened this issue Feb 20, 2020 · 8 comments

Comments

@Andarist

I'm really interested in seeing the speed of this runner improve - would you be willing to share with me what you have tried in the past? Is there an interest in improving this?

From what I understand, a new compiler is created per test file right now, so the best improvement would probably be trying to reuse a single compiler instance and using TS's incremental API - I think this is the strategy used by Fork TS Checker Webpack Plugin, which I found to be pretty fast. Not 100% sure how this would play out with the Jest worker farm and how that is even utilized here, but this is something that could be explored.
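For reference, a rough sketch of what reusing a single compiler via TS's incremental API could look like (the root file and build-info path below are placeholders, not anything this runner actually does):

```ts
import ts from "typescript";

// Sketch: create one incremental program and reuse its build info between runs,
// instead of spinning up a fresh compiler per test file.
const options: ts.CompilerOptions = {
  incremental: true,
  tsBuildInfoFile: "./.cache/tsbuildinfo", // placeholder path
};

const program = ts.createIncrementalProgram({
  rootNames: ["src/index.ts"], // placeholder entry point
  options,
  host: ts.createIncrementalCompilerHost(options),
});

// Diagnostics for the whole project; later runs can reuse the saved build info
// so unchanged files don't get re-checked from scratch.
const diagnostics = [
  ...program.getSyntacticDiagnostics(),
  ...program.getSemanticDiagnostics(),
];
console.log(`found ${diagnostics.length} diagnostics`);
```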

@azz
Owner

azz commented Feb 26, 2020

Hey @Andarist,

To be honest I just threw this together to see if it could be done, and didn't spend a lot of time trying to optimize it. I've since moved back to running tsc separately from Jest, so I don't really dogfood this package.

I'd be interested to see if the tactics used by that webpack plugin will work in the Jest ecosystem.

@Andarist
Author

I think it's either this or creating a language service as ts-jest does. I have limited time to work on this, but I would really like to have this working - so gonna slowly try to work towards that. At the moment I'm researching TS APIs to understand their tradeoffs.

If you have any insights about jest runners that would be very helpful as well. From my quick tests I've noticed that, for example, ts-jest (a transformer) was loaded/executed with each run which was rather surprising to me - I would have expected such a thing to be cached.

@azz
Owner

azz commented Feb 26, 2020

cc. @SimenB, @rogeliog who know a lot more about Jest runners than me 😉

@SimenB
Contributor

SimenB commented Feb 27, 2020

Would be awesome to make it a viable option, yeah! 👏

Runners are generally a better fit for files that are processed in isolation (like tests, linting, and babel's transpilation) rather than something that needs the whole thing in memory and operates on it in one go. But ts's transpileModule is supposed to be that, so it should be possible to do something? So yeah, either a language service or something like it would be needed. I'm not sure how incremental builds triggered from different processes (or worker_threads) work for tsc - I've never investigated although it's been on my todo-list for 18 months or so. Looking at what the webpack plugin and ts-jest do seems like a great start 👍
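A minimal sketch of that per-file API (the source string and file name here are just illustrative):

```ts
import ts from "typescript";

// Sketch: transpile a single file in isolation - the kind of work a runner can
// parallelize easily. Note this only lowers syntax; it does not type-check
// against the rest of the project.
const result = ts.transpileModule(`const answer: number = 42;`, {
  fileName: "example.ts", // illustrative name
  compilerOptions: { module: ts.ModuleKind.CommonJS },
  reportDiagnostics: true,
});

console.log(result.outputText);        // emitted JavaScript
console.log(result.diagnostics ?? []); // syntactic diagnostics only
```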


transformers are instantiated for each run (config might change, which invalidates caches), but as long as getCacheKey is implemented correctly the actual transpilation (process call) shouldn't happen
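As an illustration of the getCacheKey point, a transformer can derive its key from everything that affects its output, so Jest can serve the cached result on hits (a sketch only - the exact signatures differ between Jest versions; this roughly follows the older positional form):

```ts
import { createHash } from "crypto";

// Sketch of a transformer whose cache key covers the inputs that affect output;
// when the key is unchanged, Jest skips the process call and reuses the cache.
const transformer = {
  getCacheKey(fileData: string, filename: string, configString: string): string {
    return createHash("md5")
      .update(fileData)     // source text
      .update(filename)     // file path
      .update(configString) // serialized Jest config
      .digest("hex");
  },
  process(src: string, filename: string): string {
    // the expensive transpilation would happen here, only on cache misses
    return src;
  },
};

export default transformer;
```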

@Andarist
Author

Runners are generally a better fit for files that are processed in isolation (like tests, linting, and babel's transpilation) rather than something that needs the whole thing in memory and operates on it in one go.

Right, I've noticed that 😅 Is there any other API that would be better suited for this? Or could introducing one be considered? Fork TS Checker Webpack Plugin works by spawning a separate process for type checking and using RPC to communicate between the main process (the webpack-aware one) and the type checking one. So to reuse this technique it would be great if jest could provide a "runId" or something, so we could only call the type checking service once for a particular run.
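To sketch that separate-process idea in Jest terms (not fork-ts-checker-webpack-plugin's actual code - the worker file name and message shape are made up):

```ts
// runner side - sketch: keep one long-lived type-check process and talk to it over IPC.
import { fork } from "child_process";

const checker = fork(require.resolve("./check-worker.js")); // hypothetical worker script

export function requestDiagnostics(runId: string): Promise<string[]> {
  return new Promise((resolve) => {
    const onMessage = (msg: any) => {
      if (msg.runId === runId) {
        checker.off("message", onMessage);
        resolve(msg.diagnostics);
      }
    };
    checker.on("message", onMessage);
    checker.send({ type: "check", runId }); // ask for diagnostics once per run
  });
}
```

```ts
// check-worker.ts - sketch: the forked process runs the (slow) check and replies.
process.on("message", (msg: any) => {
  if (msg.type === "check") {
    const diagnostics: string[] = []; // a real worker would call the TS API here
    process.send?.({ runId: msg.runId, diagnostics });
  }
});
```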

But ts's transpileModule is supposed to be that, so it should be possible to do something?

From what I know (but I might be wrong) this just transpiles stuff (hence the name); it doesn't actually do any type checking.

I'm not sure how incremental builds triggered from different processes (or worker_threads) work for tsc - I've never investigated although it's been on my todo-list for 18 months or so.

It's probably not needed to use worker_threads at all in this scenario - because, as mentioned above, ideally we should just be able to request diagnostics once for the whole project, so splitting this into threads doesn't make much sense.

transformers are instantiated for each run (config might change, which invalidates caches), but as long as getCacheKey is implemented correctly the actual transpilation (process call) shouldn't happen

Thanks for this info! Wouldn't it be possible to skip that when you know that the config hasn't changed? Or is it just considered so low-cost that it's not worth optimizing?

@SimenB
Contributor

SimenB commented Feb 28, 2020

So to reuse this technique it would be great if jest could provide a "runId" or something, so we could only call the type checking service once for a particular run.

We have a JEST_WORKER_ID env variable. You could also make some custom watch plugin or reporter to hook into some lifecycle telling you "new run". Does that cover it?
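For example (a sketch - startTypeCheckService is just a stand-in for whatever spawns the shared checker):

```ts
// Sketch: gate one-off work on JEST_WORKER_ID so only a single worker does it.
const startTypeCheckService = () => {
  /* hypothetical stub: e.g. fork the type-check process, as sketched above */
};

if (process.env.JEST_WORKER_ID === "1") {
  startTypeCheckService();
}
```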

Wouldn't it be possible to skip that when you know that the config hasn't changed?

Possibly! But the whole runtime is instantiated for every single test file for every run (which further instantiates the transformers), so I think any change would be rather large. Feel free to explore it though! There's bound to be some low-hanging fruit performance-wise here anyway.

@Andarist
Author

We have a JEST_WORKER_ID env variable. You could also make some custom watch plugin or reporter to hook into some lifecycle telling you "new run". Does that cover it?

Possibly yes - although, thinking about it more, the ideal solution would be to bail out of the worker pool for this use case. Is that possible on a per-runner basis? Using an ID to emulate a single run would certainly work, but in the long run it sounds like a workaround.

Are those hooks etc documented somewhere or do I have to do some code-spelunking? 😉 Or are you simply referring to those mentioned here?

Feel free to explore it though! There's bound to be some low-hanging fruit performance-wise here anyway.

I would love to, but unfortunately the chances are low that I'll get to it. Too much other OSS work ahead of me right now.

@ahnpnl

ahnpnl commented Mar 5, 2020

I think it's either this or creating a language service as ts-jest does. I have limited time to work on this, but I would really like to have this working - so gonna slowly try to work towards that. At the moment I'm researching TS APIs to understand their tradeoffs.

If you have any insights about jest runners that would be very helpful as well. From my quick tests I've noticed that, for example, ts-jest (a transformer) was loaded/executed with each run which was rather surprising to me - I would have expected such a thing to be cached.

To be precise, in ts-jest the first TS file creates the language service once and caches it. Subsequent files use that cached language service instance.
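A rough sketch of that caching pattern (not ts-jest's actual implementation - the host is heavily simplified and the root file list is a placeholder):

```ts
import ts from "typescript";
import fs from "fs";

// Sketch: build the language service lazily on the first file and reuse it afterwards.
const rootFiles = ["src/index.ts"]; // placeholder
const fileVersions = new Map<string, number>();

let cachedService: ts.LanguageService | undefined;

function getLanguageService(options: ts.CompilerOptions): ts.LanguageService {
  if (cachedService) return cachedService; // every file after the first hits this path

  const host: ts.LanguageServiceHost = {
    getScriptFileNames: () => rootFiles,
    getScriptVersion: (fileName) => String(fileVersions.get(fileName) ?? 0),
    getScriptSnapshot: (fileName) =>
      fs.existsSync(fileName)
        ? ts.ScriptSnapshot.fromString(fs.readFileSync(fileName, "utf8"))
        : undefined,
    getCurrentDirectory: () => process.cwd(),
    getCompilationSettings: () => options,
    getDefaultLibFileName: (opts) => ts.getDefaultLibFilePath(opts),
    fileExists: ts.sys.fileExists,
    readFile: ts.sys.readFile,
  };

  cachedService = ts.createLanguageService(host, ts.createDocumentRegistry());
  return cachedService;
}
```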

What I can think of now is using a Node worker to move emitting diagnostics into another process, which might speed up the emit process - see https://github.com/microsoft/TypeScript/wiki/Performance#concurrent-type-checking

In https://github.com/TypeStrong/fork-ts-checker-webpack-plugin they use worker-rpc to offload emitting diagnostics into another process.
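For illustration, the same offloading idea with Node's built-in worker_threads (a sketch only - fork-ts-checker-webpack-plugin itself uses worker-rpc over a child process, and the worker file name here is made up):

```ts
// main side - sketch: run the expensive diagnostics pass off the main thread.
import { Worker } from "worker_threads";

const worker = new Worker("./diagnostics-worker.js"); // hypothetical compiled worker

worker.on("message", (diagnostics: string[]) => {
  console.log(`received ${diagnostics.length} diagnostics from the worker`);
});

worker.postMessage({ type: "check" });
```

```ts
// diagnostics-worker.ts - sketch: reply with diagnostics when asked.
import { parentPort } from "worker_threads";

parentPort?.on("message", (msg: { type: string }) => {
  if (msg.type === "check") {
    const diagnostics: string[] = []; // a real worker would call the TS API here
    parentPort?.postMessage(diagnostics);
  }
});
```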

I’m not so experienced with nodejs so I came to this thread to share my thoughts and probably learn something from you guys.
