The documentation of kubectl wait is a bit misleading #754
Actually the watch request is an unlimited connection, so every request has a timeout (the default is 30s in wait). So the only way to implement your second solution would be to split the total time across the per-request timeouts, and that does not seem like a very good approach.
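For illustration only, here is a minimal sketch of the "split the total budget across the matched objects" idea described above; the TOTAL_TIMEOUT value, the app=myapp selector, and the Ready condition are assumptions, not anything kubectl provides out of the box:

```bash
# Sketch: approximate a whole-command timeout by dividing a total budget
# evenly across the matched objects. kubectl wait applies --timeout to each
# object separately, so the worst case is COUNT * PER_OBJECT ~= TOTAL_TIMEOUT.
TOTAL_TIMEOUT=600   # seconds allowed for the entire wait (assumption)
PODS=$(kubectl get pods -l app=myapp -o name)
COUNT=$(echo "$PODS" | wc -l)
PER_OBJECT=$(( TOTAL_TIMEOUT / COUNT ))
# Word-splitting of $PODS is intentional: each pod name becomes one argument.
kubectl wait --for=condition=Ready --timeout="${PER_OBJECT}s" $PODS
```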
I implemented sort of a wrapper for internal use with the following logic:
IMHO, this logic fits better with the use cases Kubernetes users encounter in their day-to-day work, but again, that is my opinion only. I'm not a Go developer so I can't submit a PR with this logic, but I would definitely like to hear more opinions about it.
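The wrapper's actual logic isn't shown in the thread; below is a minimal sketch of one way such a wrapper could enforce a single wall-clock deadline for the whole invocation. The 5-minute deadline, the app=myapp selector, and the Ready condition are assumptions for illustration:

```bash
#!/usr/bin/env bash
# Sketch of a wrapper that enforces one overall deadline instead of a
# per-object timeout. Not necessarily the commenter's implementation.
set -euo pipefail

DEADLINE=$(( $(date +%s) + 300 ))   # 5-minute budget for the whole run

for pod in $(kubectl get pods -l app=myapp -o name); do
  remaining=$(( DEADLINE - $(date +%s) ))
  if (( remaining <= 0 )); then
    echo "overall deadline exceeded" >&2
    exit 1
  fi
  # Each per-object wait only gets whatever time is left in the budget.
  kubectl wait --for=condition=Ready --timeout="${remaining}s" "$pod"
done
```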
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle rotten
I just experienced exactly the issue described in the OP, and found it very confusing. I noticed these log messages from my e2e test:
The age of the pods is 90 minutes rather than the 30 minutes I expected, because
We discovered that kubectl wait will actually time out only after timeout × the number of objects being waited on. So if you wait on 3 pods specifying a timeout of 10m, it will actually wait 30m. `kubectl wait --timeout -1s` will wait a week, so plausibly "forever". kubernetes-sigs#158 kubernetes/kubectl#754
/kind bug
/area kubectl
/priority backlog
/assign
This is easy to see with a small timeout if you have more than one resource. For example, with a 2s timeout and 17 pods it takes approximately 40 seconds in total (see the timings below). Also note that there is no output until all of them time out, which also adds to the confusion.
$ kubectl get pods -l release=myrelease -o name | wc -l
17
$ time bash -c "kubectl wait --for=delete pods -l release=myrelease --timeout=2s 2>&1 | ts '[%Y-%m-%d %H:%M:%S]'"
[2020-09-28 20:40:15] timed out waiting for the condition on pods/myrelease-c66pr
[2020-09-28 20:40:15] timed out waiting for the condition on pods/myrelease-8tgq6
[2020-09-28 20:40:15] timed out waiting for the condition on pods/myrelease-gx5mn
[2020-09-28 20:40:15] timed out waiting for the condition on pods/myrelease-6xb9b
[2020-09-28 20:40:15] timed out waiting for the condition on pods/myrelease-s9cbv
[2020-09-28 20:40:15] timed out waiting for the condition on pods/myrelease-qtwx7
[2020-09-28 20:40:15] timed out waiting for the condition on pods/myrelease-qk842
[2020-09-28 20:40:15] timed out waiting for the condition on pods/myrelease-l6wq2
[2020-09-28 20:40:15] timed out waiting for the condition on pods/myrelease-zv25k
[2020-09-28 20:40:15] timed out waiting for the condition on pods/myrelease-diss0
[2020-09-28 20:40:15] timed out waiting for the condition on pods/myrelease-mh8h2
[2020-09-28 20:40:15] timed out waiting for the condition on pods/myrelease-cb2zn
[2020-09-28 20:40:15] timed out waiting for the condition on pods/myrelease-q879p
[2020-09-28 20:40:15] timed out waiting for the condition on pods/myrelease-skggg
[2020-09-28 20:40:15] timed out waiting for the condition on pods/myrelease-4mklb
[2020-09-28 20:40:15] timed out waiting for the condition on pods/myrelease-pn94r
[2020-09-28 20:40:15] timed out waiting for the condition on pods/myrelease-r2226
kubectl wait --for=delete pods -l release=myrelease --timeout=2s 0.23s user 0.09s system 0% cpu 40.267 total
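As a practical workaround (not something kubectl itself offers), the total wall-clock time can be capped today by wrapping the call in GNU coreutils' timeout(1); the 60-second budget below is just illustrative:

```bash
# Cap the whole command at 60s regardless of how many pods are matched.
# Assumes GNU coreutils' timeout(1) is available where kubectl runs.
timeout 60s kubectl wait --for=delete pods -l release=myrelease --timeout=60s
```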
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
Rotten issues close after 30d of inactivity. Send feedback to sig-contributor-experience at kubernetes/community.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
This seemed to be a confirmed bug, and is definitely still present, quite confusing, and makes it basically impossible to use kubectl wait with a predictable timeout.
@mgabeler-lee-6rs: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
ok, fine, #1219
The --timeout option for kubectl wait says: "The length of time to wait before giving up". I personally understood this as the timeout for the entire command, but after experimenting with it a bit I realised that the value of this option is per resource. For example, if I am waiting for a set of 10 pods to be Ready and used --timeout=60s, I might end up waiting for 10 minutes before the command exits, and not just 1 minute as I assumed. So as I see it, there are 2 possible solutions here:
1. Document clearly that the --timeout value applies to each resource individually.
2. Make the --timeout value apply to the entire command duration (preferable solution in my opinion).
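To make the arithmetic above concrete (the app=myapp selector is an assumption; the numbers come from the example itself):

```bash
# With 10 pods matching the selector and --timeout=60s, the worst case is
# roughly 10 x 60s = 600s (10 minutes) for the whole command, not 60s,
# because the timeout is applied to each pod separately.
kubectl wait --for=condition=Ready pods -l app=myapp --timeout=60s
```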