The first time I run this against 3 GKE clusters that are all alive, I get 1 cluster. The second time I run it, I get 3 clusters.
On the first run (with missing clusters) I saw this:
$ kubectl config-cleanup
E0813 10:11:17.775025 6318 round_trippers.go:174] CancelRequest not implemented by *gcp.conditionalTransport
I think this plugin has a wait timeout and cancels after a while. GKE clusters use the gcloud command to get a token (and cache it for an hour), so it can take several seconds to do anything if you haven't used the cluster from that machine in a while.
Should the wait be increased?
The default wait timeout is set to 3s. I opted to keep it low because users would have to wait the full duration if one of the clusters is going to time out. That said, this doesn't seem to happen as often as I expected, so I'm open to increasing the default. How about 7s instead?
In the short term, there's a --timeout CLI option to override the default.
As for the error log, it looks like kubernetes/kubernetes#73791 is still being investigated, so I don't think I can fix that here.