Component tests failing intermittently with uncaught Vite "failed to fetch dynamically imported module" error in CI #25913
We are having the exact same issue with component tests (I was just coming here to open a new issue). We just migrated from Webpack to Vite. The error we get is:
We are getting this despite the fact that the file does exist, and it occurs randomly. Vite: 4.1.1. One related issue: our cypress directory is on the same level as our src directory. When running locally, Cypress correctly expects component.ts to be where we have it in the cypress directory. But when we run the tests in Docker, Cypress expects component.ts to be in a cypress directory under the src directory (see above error). Even if we use the 'supportFolder' config setting, Cypress still looks for it in the src directory (src is the Vite root folder). So I just copied component.ts to the location Cypress expects it to be (but it still fails randomly despite this). So locally this is not reproducible for us (so far). This only occurs in a Docker container. We are also running the tests in parallel.
Update: we found an ugly workaround by using https://docs.cypress.io/guides/guides/module-api#cypressrun and writing our own parallelization mechanism and runner which wraps Cypress. We check for errors and retry failed tests. It works, but it requires some "balance" in the number of workers. We also made the first group run first, and only when it's successful do we run the other groups in parallel. This seems to help a lot. There are still random failures from time to time, but because we retry them a few times, it eventually settles down. It looks to be somehow related to Vite's "background" compilation & deps optimization, because we observed that it usually gets stable once some of Vite's log messages appear.
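A sketch of what such a wrapper can look like with the Module API (the file name, retry policy, and spec path below are illustrative, not the commenter's actual code):

```ts
// run-specs.ts — a sketch of the kind of wrapper described above
import cypress from 'cypress';

async function runWithRetries(spec: string, maxAttempts = 3): Promise<boolean> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const result = await cypress.run({ testingType: 'component', spec });
    // status 'failed' means the run itself crashed (e.g. the uncaught
    // "Failed to fetch dynamically imported module" error), so retry that too
    if (result.status !== 'failed' && result.totalFailed === 0) return true;
    console.warn(`Attempt ${attempt} for ${spec} failed, retrying...`);
  }
  return false;
}

(async () => {
  // a worker pool could map groups of specs over runWithRetries here
  const ok = await runWithRetries('src/components/Example.cy.tsx');
  process.exit(ok ? 0 : 1);
})();
```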
Is this only in Docker? We use Vite heavily internally - everything is really stable. I hope someone can reproduce it reliably. I wonder if adding entries to https://vitejs.dev/config/dep-optimization-options.html#optimizedeps-entries helps?
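For reference, a minimal sketch of that suggestion (the entry globs are illustrative and depend on the project layout):

```ts
// vite.config.ts — declare entry points up front so Vite's dependency
// scanner crawls them at startup instead of on first request
import { defineConfig } from 'vite';

export default defineConfig({
  optimizeDeps: {
    entries: ['cypress/support/component.ts', 'src/**/*.cy.{ts,tsx}'],
  },
});
```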
Yes, it seems to be only happening on CI, which means Docker. And we tried the `optimizeDeps.entries` suggestion. It is somehow related to CI being under load. We observed a few more different errors happening. It just seems that the loading of some resources is either failing or incomplete, causing various strange issues (sometimes with loading the import, sometimes errors on the Cypress level, sometimes in the app, usually something about missing context). But all these are usually resolved after 1-3 retries, depending on the CI load.
We are experiencing more or less exactly the same thing, in particular the missing context error from time to time and failing to load component.ts.
Maybe related: #18124 (comment). The OP says "Latest Cypress + Vite" - is this suggesting this is a regression? If so, knowing which version introduced this regression would be very useful. I haven't seen this in our internal suite, but it sounds like I may be able to reproduce it by adding lots of large dependencies, like MUI? Or by increasing the load, e.g. Docker?
We use MUI. We have split approx 700 component tests into 8 groups of varying sizes which run in parallel in Docker containers. The machine is sufficient for the load. Since writing the first post above we had 7 successful runs, and then today we experienced the same failure across several of the groups. This time it was component.ts in one group, but also random test files in other groups. I tried the suggestion that @synaptiko gave to run one small group first before running the other tests in parallel. I have since had two successful test runs in a row. Hopefully this will stabilize the build more, but I have seen in the past that making a change can improve things temporarily only to have a regression. I have also tried running all the tests sequentially and get the same error, btw.
We are transitioning from CRA and a much older Cypress, so I can't really say if it's a regression or not. We just started using versions from the last week.
This correlates with our observations. We have MUI but also Syncfusion and a few other bigger dependencies. Our CI uses quite powerful machines, but we are running a lot of things in parallel and the load changes over the day, which would explain why it happens randomly (but we observed it's like ~50% of cases).
Wow, what a hack - the fact you needed to do that really isn't ideal, I hope we can isolate and fix this soon. I wonder if we need to reach out to the Vite team - they'd probably have more insight than we would into the Vite internals. There seem to be a lot of similar issues in Vite: https://github.com/vitejs/vite/issues?q=is%3Aissue+Failed+to+fetch+dynamically+imported+module
FYI we do the import here:
I wonder if we can add some pre-optimization logic during CI mode to force Vite to pre-compile all dependencies, side-stepping this issue entirely (which is sort of what the workarounds here are doing).
Does anyone know any large OSS React + Vite + MUI projects I could try using to reproduce? I tried moving MUI core to Vite but it's not straightforward.
This seems like a duplicate of this thread, so writing here also: I have tried to run the tests locally, on our GitHub Actions CI, on Cypress Cloud, and on currents.dev (a Cypress Cloud competitor). Our project is using React + TypeScript. I haven't mentioned it until now, but on the cloud solutions it does seem to work, which leads me to the conclusion that it is something in my environment that I'm doing wrong. Here is an example of a test I notice fails more often: ApproveRun.spec.cy.tsx
@lmiller1990 regarding the complexity of the test - I'd say it is a pretty complex and heavy test because we render almost the top component in a SPA application. @Murali-Puvvada the bash script is a workaround: tests that should take 300ms, for example, take 1000-3000ms instead with the cypress run command, because it re-runs the Chrome instance every time. Also: we don't have Cypress E2E in our application (only component tests). I've uploaded the debug output from running all the tests to this public file (700MB log file): https://drive.google.com/file/d/1-2KOb6KV1SyOc_hBi2c2DK40gRtFeeop/view?usp=sharing Would love any help regarding the issue.
We're seeing this in Vuetify's CI too: https://github.com/vuetifyjs/vuetify/actions/runs/4354367755/jobs/7609586657
@KaelWD |
@lmiller1990 For us it's in Docker and component testing. For months now... But what's weird: it only happens on CI/CD (GH Actions), where we are using the exact same Docker container as on devs' computers. On a PC (Docker) it always works. I was using Vite from the beginning, and had this issue for months, trying multiple Cypress, browser and Vite versions and any workaround I could find. Everything is up to date. It's hard to debug because, like I said, it never happens locally, just on GH. I was suspecting a memory leak, but cypress/docker is running with stable RAM usage, low for GH limits. What I tried recently was configuring "attempts" for tests, so even if a test fails, it should work fine the second time. But this is a useless option in this scenario, because when "Failed to fetch dynamically imported module" happens, Cypress just stops running that test and the second attempt never happens... Recently I added 2 packages that were constantly being re-optimized to `optimizeDeps.entries`, and I think that helped a little, but random failed tests still occur. Now I'm experiencing "failed to fetch" not on the component file, but on the Cypress index file.
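For reference, a minimal sketch of the "attempts" setting being described (which, as noted, doesn't help here because the spec aborts before a second attempt can run):

```ts
// cypress.config.ts — per-test retries in run mode; useless for this bug
// because the uncaught import error stops the whole spec
import { defineConfig } from 'cypress';

export default defineConfig({
  retries: { runMode: 2, openMode: 0 },
});
```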
@chojnicki for me it's happening a lot, almost constantly. Currently the only workaround I've found is this bash script which runs the tests separately, one by one, and if one fails then it retries up to two times. It's significantly slower, but it does the job of only failing when a test actually fails. (We're running this bash script in GH Actions too.)
We have an internal repro 🔥 FINALLY! It's a Cypress org private project, but dropping the link here so someone at Cypress can look into it internally... https://cloud.cypress.io/projects/d9sdrd/runs/4193/test-results/2f924501-7fb2-4c80-b69f-819010c67c87 Now we've got a repro I can dig into it... for us it's CI only too, so it sounds like a resources/race condition.
Related: vitejs/vite#11804
FYI it seems I don't have access to the link @lmiller1990
@lmiller1990 hope that repro you got will help, but if not, I think I could get access for you/Cypress to our repo too.
Hey team! Please add your planning poker estimate with Zenhub @astone123 @marktnoonan @mike-plummer @warrensplayer @jordanpowell88 |
@chojnicki thanks a lot, I'll ping you if we need access, I think our reproduction should be enough. Is your reproduction consistent? CI only? @matanAlltra sorry, this reproduction repo is in our Cypress org but a private repo, so I can't make it public right now, but someone on our team will be able to see it and debug it. If anyone else can share a public repo with this issue, that would sure help, too!
I experienced this too - for me it was because I was switching between branches with different dependencies, and one of them somehow hadn't been reinstalled in between changing branches (in my case @pinia/testing). So while Cypress was reporting that it couldn't import components.js, it wasn't because that file was missing; it was because that file was erroring due to the missing import of @pinia/testing within it. Hope this helps people who end up here in the future (or me, when this happens again and I see my own comment).
I was also unable to see any impact from manipulating the optimizeDeps option in the Vite config, but I did want to document what worked for us in case it's useful to someone else, as it turned out to be tangential to the original issue documented. The issue for us turned out to be tied to CircleCI and the use of a static port in our Vite config. Our CI config used the same executor to run both E2E tests and component tests in parallel after a build/install job. The static port was required for the E2E tests to serve reliably for the Cypress run, but Cypress component tests were also picking up the static port from the default Vite config. Though I initially thought the executors would be isolated, this post made me reconsider. Explicitly overriding the port in our Cypress config file fixed this issue and prevented the error from occurring due to host conflicts on the same port.

```ts
import viteConfig from './vite.config.ts';

// override default config
const customViteConfig = {
  ...viteConfig({ mode: 'dev' }),
  server: {
    port: 3001,
  },
};

// ...

// add configuration for component dev server
component: {
  devServer: {
    framework: 'react',
    bundler: 'vite',
    viteConfig: customViteConfig,
  },
},

// ...
```
I am facing the same issue.
Fixed by deleting node_modules, then reinstalling.
In my case, I try |
Something interesting is happening here. After I claimed victory on this issue, we did indeed continue error-free for many months (thousands of runs), which I think is solid evidence of the fix. However. Yesterday, due to some other requirement, I upgraded Cypress from v12 to v13, and at the same time I upgraded Vite from v3 to v5. It is probably incidental, but it is also worth noting this also involved an upgrade of the Cypress Docker image, which would have also resulted in a Chrome upgrade. It was only once, and I hardcore logged off shortly after in total denial as Friday evening drew to a close -- but after doing this, the issue re-emerged for the first time in months on a single CI run. Deep down, I know the curse has returned. Perhaps an ever-present race condition has suddenly reactivated due to the subtle execution speed differences all these upgrades made, or perhaps there was a relevant behavioural change that affected this issue amongst those changes. I think it would be useful for everyone to post their exact Vite and Cypress versions and their config when reporting back. Monday will probably be interesting, but England just got to the semi-finals so I'm pretending not to be affected by the blaze of inevitable red all over CI. This is a total stretch, but I'm also someone who is loading service workers (MSW) as part of the test story. I doubt it's related, but I would be interested to know in the rare event there's some consistency between that cause and this effect.
This finally flared again enough for me to attend to it. The recurrence was somehow related to the Vite 5 upgrade. Vite 5 has a new option, `server.warmup`. We removed our configuration around `optimizeDeps` in favour of it.
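(The configuration itself did not survive here; below is a sketch reconstructed from the replies further down that quote the same `server.warmup` glob. The framework value is an assumption.)

```ts
// cypress.config.ts — warm up (pre-transform) every client file when the
// dev server starts, instead of transforming on first request
import { defineConfig } from 'cypress';

export default defineConfig({
  component: {
    devServer: {
      framework: 'react', // assumption: the comment doesn't state the framework
      bundler: 'vite',
      viteConfig: {
        server: {
          warmup: {
            clientFiles: ['**/*'],
          },
        },
      },
    },
  },
});
```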
The problem is now gone again. If you are on Vite 5, I recommend trying this. If you are not, I recommend upgrading Vite (you also need at least Cypress 13.10 to be compatible) and then trying it. Probably this glob pattern can be reduced to just my test file, but I'm just getting this information out there. We also haven't done enough testing to know if it was actually the removal of the `optimizeDeps` config or the addition of the warmup that did it. Either way, we definitely need to be clear what Vite versions we have when reporting back.
Got the same problem on a CI/CD pipeline. Vite: 5.2.2. Had no entries in `optimizeDeps`, as it gives a lot of errors. Another observation:
Adding

```ts
viteConfig: {
  // ...etc
  server: {
    warmup: {
      clientFiles: ['**/*'],
    },
  },
},
```

gives a lot of these errors:
We recently released the `experimentalJustInTimeCompile` flag.
Ah yes! I had seen this just today also. The warmup trick is still working for me, but I will give this a go as well. |
We're still having this problem, after adding the experimentalJustInTimeCompile flag and the optimizeDeps stuff. It's random and intermittent, which somehow makes it even worse. The warmup stuff requires us to upgrade to Vite 5, which I don't want to do just yet, so I'm stumped as to what to try next.
Error:
The other 2 runs did not fail, only the first one.
So, I decided to make a stress test where the same 3 spec files, with a total of 9 tests, are run in parallel on 18 different runners, and the results are ... interesting.
It's essentially a 50/50 occurrence, which is quite annoying. I don't really know what to try next; I want to try to tell Vite not to use dynamic imports for the Cypress modules, if I can configure it that way, and see if that helps. I was also able to see this error occur when running locally on my dev machine. Strange, I thought this was only happening in pipelines / CI environments.
A monumentally stupid way to fix this for us is to just retry the Cypress test run a couple of times if the failure is due to this dynamic import. Here is the code snippet we use in our pipeline, maybe it can help any of you. Again, this is stupid, and the bash script can probably be improved, but it at least works.
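(The snippet itself did not survive here; below is a sketch of the approach described. The commands, spec selection, and log path are illustrative.)

```bash
#!/bin/bash
# Sketch of the retry wrapper described above: rerun Cypress only when the
# failure is the flaky dynamic-import error, otherwise fail immediately.
set -o pipefail

for attempt in 1 2 3; do
  if npx cypress run --component 2>&1 | tee cypress-run.log; then
    exit 0  # run passed
  fi
  # Genuine test failures should still break the build.
  if ! grep -q "Failed to fetch dynamically imported module" cypress-run.log; then
    exit 1
  fi
  echo "Dynamic import failure detected, retrying (attempt ${attempt})..."
done
exit 1
```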
You can probably get ChatGPT to reformat and explain the script to you, so I won't do that here. I'm adding bash at the beginning because for some reason our image uses dash instead of bash, which doesn't have pipefail. |
@GAK-NOX, thanks! I was waiting for a solution for weeks. This prolongs the build, but solves the issue.
EDIT: In the end this doesn't work, it just made the problem rarer.

I'm still having this problem. I'm not really satisfied with the last solution (retrying the run), so I explored the "warmup files" approach suggested earlier by @adamscybot. The solution provided was:

```ts
viteConfig: {
  server: {
    warmup: {
      clientFiles: ['**/*'],
    },
  },
},
```

However, as @apdrsn mentioned, this leads to numerous errors on my end. The Vite documentation also advises against warming up too many files, so I was reluctant to use a wildcard to preload everything. Instead, I refined this solution by warming up only the specific file that was failing to load. This approach seems cleaner and has been working well so far:

```ts
viteConfig: {
  server: {
    warmup: {
      clientFiles: ['**/cypress/support/component.ts'],
    },
  },
},
```

Note that this still requires Vite 5, as the `server.warmup` option was introduced there.
Nice! Not many people have reported back on the warmup approach. Glad to see it working for someone.
I reported back positively back then as well.
After some time I can confirm that it works for me, but I'm afraid that this could change in the future with bigger components, etc.
Thanks @moritz-baecker-integra. Excellent. Due to my experience with previous attempts breaking down, I won't claim the smoking gun is definitely found this time 🙃. But I think it's clear that anyone experiencing this should probably look to go down this road and report back. Whether this is the "root cause" or not is up for debate. Though I do note that the support file is known up front, so Cypress could plausibly warm it up automatically.

This is possibly a good idea in isolation anyway, but could also have positive effects on this difficult issue as well. I'm not sure how practical this is, but you might argue that all tests in the queue (for a given suite) could be added as well, as we know they are about to be used. But with that one it's a debate about suite startup cost vs individual test startup cost. The support file seems more cut and dried. I therefore want to grab the attention of the Cypress team. Not sure who is relevant, but pinging @AtofStryker as it looks like he did the Vite 5 integration work. Please redirect if wrong. The upgrade to Vite 5 and Cypress 13.10 is obviously a bit of work for some people -- though I note in my case (with many many tests) the work needed was quite minimal. I'll also add that I developed a version of the crude "retry when the cursed log line is detected" solution a bit further up some time ago, out of desperation. Eventually, even that broke with the retry set to three times. It did buy us an additional few months though.
In our case, to fix the CI/CD instability with Cypress component tests, we applied the following workaround: adding specific project dependencies to `optimizeDeps` in the Vite config.
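(A sketch of that kind of workaround; the exact option and dependency names below are assumptions, since the original comment's config was not preserved.)

```ts
// vite.config.ts — force-prebundle heavyweight dependencies so they are
// ready before the first spec tries to import them
import { defineConfig } from 'vite';

export default defineConfig({
  optimizeDeps: {
    include: ['@mui/material', '@emotion/react', '@emotion/styled'],
  },
});
```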
The only thing for us that completely killed this issue was when we switched to a non-shared CI runner. Our infra team kindly gave us a dedicated machine, so we don't share CPU/memory with other CI jobs, and that seems to have completely stopped the issue for us. Don't know how much this would help, as to me it points to an issue in Vite rather than Cypress?
I've also seen improvements by lowering the level of parallelism. Unfortunately, though, this is just an evasion of the problem, and it can resurface: it is just changing the parameters so that the race condition does not surface, and there is nothing scientific or guaranteed about this approach. All that said, I did of course tweak those parameters several times because there was no other option haha. Interested to know if you've tried #25913 (comment)?
I think we did, but it didn't make any difference for us so I never pushed it to main. Yea I agree this is basically just avoiding the issue, but it's avoided it enough that I don't care anymore 🤷♂️ |
For me, adding the Cypress files to `optimizeDeps` worked! At first I updated Vite to 5.4.11 and added the warmup config, but it didn't work on its own. I'm using Cypress 13.15.2.
UPDATE: Not working consistently in CI, it's still failing :/ |
Current behavior
We started migrating our project from CRA to Vite and we have almost 300 component and 40 E2E Cypress tests in place. Unfortunately, after fixing all the other issues, we are still not able to stabilize our tests, since there are always 1 or 2 failing randomly with "Failed to fetch dynamically imported module" errors.
We noticed that it's somehow related to the load on CI. Under some conditions more tests fail like this; at other times it succeeds. But it's random. We checked our tests and we are pretty sure it's not caused by any logic we have.
We've checked some of the existing issues on Cypress & Vite and tried various workarounds, but no luck with any of them.
What we think is happening is that Cypress is not waiting for Vite to "boot up" properly, and retries don't help with it; it only works when a new spec is run.
Note: it only happens with component tests. For E2E tests we had similar stability issues, but we solved them by building a production version and then just serving it with `vite preview`. This made the integration tests faster and very stable; previously they were also timing out.

Note 2: we have a lot of components and a lot of dependencies in our project; we also use MUI as our base. But with CRA we were able to have stable tests, it was just around two times slower. That's why we want to use Vite now.
Note 3: we are running "Component tests" in parallel, currently in 4 groups.
Desired behavior
No random problems with Cypress + Vite combo.
Test code to reproduce
Unfortunately we can't provide this since our project is private. And I'm afraid that it's related to the project's complexity, and that's why we can't easily create another one for reproduction.
Cypress Version
v12.6.0
Node version
v16.18.1
Operating System
Docker Hub image: cypress/included, v12.6.0
Debug Logs
No response
Other
No response