Cypress timeout reached when asset discovery waits on external (not captured) assets #371
As mentioned, increasing the timeout is what did the trick for me.
Is it possible to see a video run of a failing test like this from Cypress? It's very strange that it takes up to 10 seconds to POST a DOM snapshot to the local Percy server (you can email it to me fwiw: robert at percy.io).
@Robdel12 I sent you the video. The list of failing tests is not exactly the same each time. I'm trying to increase the default timeout like @kaminskypavel suggested.
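For reference, the timeout increase being discussed is presumably a bump to Cypress's command timeout; a sketch of the `cypress.json` change (option name per the Cypress configuration docs; the default is 4000ms):

```json
{
  "defaultCommandTimeout": 10000
}
```

This gives commands like `cy.percySnapshot()` longer to resolve before Cypress gives up, but it only papers over slow asset discovery rather than fixing it.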
Timeout increase worked for me too. If I understand correctly, it fails on this call (line 59 in c6c3992), because Cypress reports a timeout there; i.e. the callback returned but the snapshot hadn't finished. This means the SDK is still waiting on asset discovery when the command times out.
That fixes it on our end as well. Hopefully an actual fix can be proposed, as this is only a temporary workaround.
We do wait for asset discovery to finish on that snapshot; that's always been true since the inception of the SDK though. I received the video, but sadly it doesn't clear anything up (leaves me with more questions, so not all bad!). Interesting that it appears to fail "fast". Has the network idle timeout been adjusted by any chance? I'll try to comb the logs to see if asset discovery is taking a long time for a specific asset or something like that.
Good news: we have a good idea what's happening and an idea for a fix. The problem: external requests are hanging asset discovery up long enough to time out the snapshot command. I saw a few requests taking 2-5 seconds to resolve on their own, which is more than enough to hit this command timeout. These requests were also never going to be captured by asset discovery. The fix: don't track those requests in asset discovery, so the CLI isn't waiting for them to resolve (incoming tomorrow -- we're at the end of our work day here now).
No, I don't think so in my case.
Some snapshots always fail, no matter how much I increase the timeout:

```js
describe('with session', () => {
  beforeEach(() => {
    cy.login();
  });

  it('Dashboard Snapshots', () => {
    cy.visit('/');
    cy.percySnapshot('Dashboard');
    cy.findByTestId('toggle-sidenav').click();
    cy.percySnapshot('Hidden Sidenav');
  });

  it('Space Snapshots', () => {
    cy.visit('/spaces');
    cy.findByTestId('table-container').should('be.visible');
    cy.percySnapshot('Spaces');
  });

  // More snapshot tests
});
```

In the above example, everything after "Dashboard Snapshots" (I removed all other tests so it was a shorter example) fails after the timeout. I increased the timeout, but it didn't help.
Yeah, one of my tests failed today with a 10000ms timeout too.
I have the same issue... Should we downgrade to an earlier version?
This should be taken care of once this PR ships today: percy/cli#400 |
|
@Robdel12 |
I got the error again :( |
5 tests failed with this error again in last night's run. We upgraded to `v1.0.0-beta.58` immediately when it was released.
Can you post logs from a failing run?
Hi @Robdel12 |
Increasing the timeout also worked for me. It seems like a hack though.
Sharing the full verbose logs from the test run would be the only way to debug this -- the issue is there are assets that take longer than the set timeout to resolve (so it could be the app's test server taking a while to respond with the assets). The verbose logs will show which assets are taking a while, and then we can figure out what's next from there.
I don't know if you have any client-side logging/visibility... in the recent 3 days it's become recurring. I believe it should be something all your users experience. How do you enable verbose in percy/cypress? I can help add one. @Robdel12
Anything with the SDKs will be client-side. You can pass the verbose flag when running the CLI to get debug logs.
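For example (assuming your Percy CLI version supports a `--verbose` flag; check `percy exec --help` for your version):

```shell
# Run Cypress through the Percy CLI with verbose/debug logging enabled
npx percy exec --verbose -- cypress run
```

The verbose output includes asset discovery activity, which is what's needed to see which requests are hanging.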
@Robdel12 is it possible to catch this error at the Percy level and re-throw it with a better error message explaining what caused the issue?
I'm seeing an issue where a page that has Stripe Elements on it will hang indefinitely. Currently running `@percy/cli` 1.0.0-beta.60. When running percy/cli in verbose mode, I don't even see the debug log output for this particular page. I then compared the output against a run on Percy 2.x. I see Cypress resolving an asset from Stripe against the wrong base URL: the Stripe stylesheet's relative path is resolved against my app's origin instead of `js.stripe.com`.

On Percy 2, it'll just ignore this and move on. That path does not exist on my local filesystem. The confusion seems to stem from it not taking into account that the asset is being loaded inside the Stripe iframe. In the Stripe iframe, you'll see the path to the CSS defined as

```html
<link href="fingerprinted/css/ui-shared-bbb176702b532fdcf3153c8a7f0d754f.css" rel="stylesheet">
```

and the file should rightly be loaded from `https://js.stripe.com/v3/fingerprinted/css/ui-shared-bbb176702b532fdcf3153c8a7f0d754f.css`. I believe Percy is stuck waiting to fetch this file, whose path Cypress wrongly reports as local.
Hey @kamal! Would you be able to share the full verbose SDK logs from a run where it hangs? We fixed the issue that agent had (which carried over to CLI) in this PR: percy/cli#405 I'd like to see the full logs to see what's going on in asset discovery and which asset it is exactly that's causing tests to hang. |
@Robdel12 pasting it below. The problematic test is the 2nd one to run. There is no output from Percy at all; it just hangs there until you cancel the workflow. Here are the `@percy/cypress@v3` logs; compare this run against the `@percy/cypress@v2` run.
🤨 Hm, I don't see any requests in Percy's logs for the Stripe asset. Are these the full logs (unedited)? Any ideas for what caused this error? Was it CI that canceled the run? I don't see any errors/exceptions in the logs to explain that.
It's unedited. The error is caused by me cancelling the GitHub Action after letting it sit there for 20 minutes doing nothing. If I comment out the `cy.percySnapshot` call for that page, the run completes.
I think this is related to percy/cli#453 -- can you give the latest beta a try?
@Robdel12 no luck, still same results |
Hm, it doesn't seem like this is related to the issue we're commenting on. Can you open a new issue with verbose logs from beta 61?
This issue is stale because it has been open for more than 14 days with no activity. Remove stale label or comment or this will be closed in 14 days. |
🤦🏻‍♂️ I knew I forgot to update an issue. Can you give beta 65 a try? I think percy/cli#490 fixed the issue you were seeing.
@Robdel12 I just tried this with beta 65 and it did not fix the issue, unfortunately.
@Robdel12 I no longer have this issue with beta 66!
Going to close this now that it seems to be resolved. 👍 |
Edit: this will be fixed by an upstream CLI PR: percy/cli#400
Picking up from #367 (unrelated parallel issues going on)
cc: @kaminskypavel / @aleksandrlat
Does this happen for a specific snapshot each time? Or is it a different snapshot? If it's the same snapshot, that's good since it's reproducible/repeatable and likely something to do with the DOM.