skaffold verify fails immediately without any useful logs #9587
Comments
@nathanperkins hi! I added some details in PR #9589; it should show the fail reason and message now.
@idsulik that's awesome, thanks for following up with a quick improvement; it will definitely help. I'm not sure pod.status.message will be able to show why the pod is crashing in all cases, though. I can briefly see in the GKE console that the job exited with code 128 before it's deleted, and I'm pretty sure the error will be in the logs. I could be wrong. I might be able to catch it if I'm quick enough, but what would help the most is a way to prevent the Job and Pod from being deleted so I can inspect them freely.
@nathanperkins maybe you need this to keep the pod: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/#cleanup-for-finished-jobs
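For illustration, here is a minimal sketch of a Job manifest using that TTL field. It assumes the verify Job can be supplied as a custom manifest (recent skaffold schemas expose a jobManifestPath option under the kubernetesCluster execution mode, but treat that and all names here as placeholders):

```yaml
# job.yaml - hypothetical manifest for the verify Job (placeholder names/image)
apiVersion: batch/v1
kind: Job
metadata:
  name: verify-test              # placeholder name
spec:
  ttlSecondsAfterFinished: 600   # keep the finished Job and its Pod around for 10 minutes
  backoffLimit: 0                # fail immediately instead of retrying
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: verify-test
        image: my-test-image     # placeholder image
```

Note, though, that ttlSecondsAfterFinished only delays the cluster's own garbage collection; if skaffold itself deletes the Job after the verify run, this field may not help.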
Expected behavior

When using skaffold verify with the kubernetesCluster execution mode and the pod fails immediately, skaffold should give useful logs in the CLI, or the Job and Pod should persist on the cluster so that they can be inspected. I'd prefer to see the logs in the CLI, but if that is infeasible, it would be nice to have an option to keep skaffold from deleting the Job and Pod.
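For context, a minimal sketch of the kind of configuration under discussion, assuming skaffold v2's verify schema (the apiVersion, names, and image are placeholders):

```yaml
# skaffold.yaml - sketch of a verify test that runs on the cluster
apiVersion: skaffold/v4beta6
kind: Config
verify:
- name: integration-test          # placeholder test name
  container:
    name: integration-test
    image: my-test-image          # placeholder image
  executionMode:
    kubernetesCluster: {}         # run the verify container as a Job on the cluster
```

With this mode, skaffold creates a Job for each verify test and tears it down afterwards, which is why a fast crash leaves nothing behind to inspect.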
Actual behavior

An immediate skaffold verify failure results in no useful logs, and there is no Job or Pod left in the cluster. No logs are found in the GCP Cloud Logging console.

Information