Frequent failures #77
This may not be the fault of this package, but I frequently lose a job to an HTTP 400 while running this action.
https://github.com/embroider-build/embroider/pull/1001/checks?check_run_id=3863504298
It's sporadic and not super common, but when you have enough jobs, the chance of hitting it in any given workflow run gets quite high.
Better logging of which HTTP request is failing might help, and maybe a retry is appropriate.

Comments
I figured out how to enable more logging in this action, so that may help me identify which URL is getting the 400.

```yaml
- uses: volta-cli/action@v1
  env:
    ACTIONS_STEP_DEBUG: true
```

Once I get some failures with the logging on, I can see about retries.
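In the meantime, a workflow-level workaround is to retry the step manually. This is only a sketch of that idea (the `volta` step id is made up here): let the first attempt fail without failing the job, then run the action again only if it failed.

```yaml
# Sketch of a manual retry for flaky setup: the first attempt is allowed
# to fail, and a second attempt runs only when the first one failed.
- uses: volta-cli/action@v1
  id: volta
  continue-on-error: true
- uses: volta-cli/action@v1
  if: steps.volta.outcome == 'failure'
```

It doubles the worst-case setup time, but it papers over one-off network hiccups until the action can retry internally.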
#97 is an initial stab at helping out here; we should now have logging to tell us where the failures are happening. I have a few more ideas to implement as well.
#97 is published now.
https://github.com/embroider-build/embroider is now updated to v3 of this action, so we should hopefully see improvement from #97.
#101 should also help a bit here (released in v3.0.2). I was able to get to the bottom of at least one of the major causes of request failure that we were hitting.
I have been receiving intermittent "could not unpack" errors on the lerna repo for some time now: https://github.com/lerna/lerna/actions/runs/4500110940/jobs/7918811928?pr=3603
I recommend volta to everyone I interact with; it's so great for local usage. If we can somehow resolve this weak link, it will basically be flawless in my eyes! I'm going to try switching to the setup-node action for now, since apparently it can read the Node version that volta pins in package.json.
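For anyone else trying that workaround, the snippet below is roughly what I mean. My understanding (from the setup-node docs, not verified here) is that recent versions can read the version out of package.json, including a volta pin, so treat this as a sketch:

```yaml
# Sketch of the setup-node alternative. Assumption: node-version-file
# support for package.json picks up the "volta" pin when it is present.
- uses: actions/setup-node@v3
  with:
    node-version-file: package.json
```

The trade-off is that setup-node only installs Node, so any yarn/npm pinning that volta does locally wouldn't be applied on CI.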
We've been getting the same issue for the past few days.
Same here. Any updates on this issue, @rwjblue?
I've been avoiding volta for CI.
The "could not unpack" issue that we continue to see is almost certainly a problem with |