Port forward URL doesn't work after re-deploy #1345
Huh. I somehow believed the underlying port-forward connection would close when the connected pod is killed. Guess we need more elaborate connection management to account for that then.
Ran into this issue today and did a little research, here's how Google's Skaffold worked around it.
If nobody is working on it, I'd like to help (and make my first contribution to Garden). While I work, is the community Slack the best place to ask questions? My current investigation led me to this: I'm thinking of trying to call killPortForward within garden/garden-service/src/plugins/kubernetes/port-forward.ts (lines 58 to 64 in 4fa85dc).

My first thought, though, is that this is probably bad UX: if we force a new port forward every time, the localhost port will be different every time the user saves and deploys again, and instead of just refreshing they have to type in the new port (if this is, say, a frontend application they are working on).

Also, how does this part work? In garden/garden-service/src/plugins/kubernetes/port-forward.ts (lines 85 to 94 in 4fa85dc) we check whether the registered portFwd is running and log that we'll reuse it, but then on line 93 we don't seem to actually use it (unless the get-port package is somehow aware of the registered port).
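For illustration, here's a rough sketch of the approach I have in mind, combining the two points above: kill the stale forward, then hand the previously used local port to get-port as a preference so the printed URL stays stable across re-deploys. The registry shape and recreatePortForward are hypothetical names for this sketch, not the actual Garden internals:

```typescript
import getPort from "get-port"
import { ChildProcess } from "child_process"

// Hypothetical registry of active forwards, keyed by e.g. "<service>:<container port>".
const registeredPortForwards: { [key: string]: { localPort: number; proc: ChildProcess } } = {}

async function recreatePortForward(key: string): Promise<number> {
  const existing = registeredPortForwards[key]

  if (existing) {
    // Kill the stale kubectl port-forward so a fresh one can be started
    // against the newly deployed Pod.
    existing.proc.kill()
    delete registeredPortForwards[key]
  }

  // Prefer the local port we used before, so the URL the user already has
  // keeps working; get-port only falls back to a random free port when the
  // preferred one is unavailable.
  const localPort = await getPort(existing ? { port: existing.localPort } : undefined)

  // The caller would then spawn a new kubectl port-forward on localPort
  // and re-register it under the same key.
  return localPort
}
```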
Hey @olivercodes! Sorry about the late response, been taking it easy over the holidays :P

Probably best to start by explaining the overall architecture here. The Garden CLI itself creates its own basic TCP proxy (see https://github.com/garden-io/garden/blob/master/garden-service/src/proxy.ts), and binds the user-facing local ports (the ones we print in the CLI) as soon as we get an indication of which ports are exposed on the deployed services. Then, when the user connects to one of those ports, we call out to the provider to set up the actual port forward (in the Kubernetes case, a kubectl port-forward process).

What's broken here is the reconnection when the remote service is replaced. It's actually broken in two places: in the k8s provider, where the port-forward process stays up even though we can no longer connect through it, and in the Garden proxy, where the failure handling is not really implemented (garden/garden-service/src/proxy.ts, line 55 in b244c8e).
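To make that flow concrete, here is a minimal sketch of the lazy-proxy pattern described above: bind the local port immediately, only dial the target when a client actually connects, and tear the client down when the upstream fails so the next request can trigger a fresh connection. The connectToService callback is a placeholder for this sketch, not the actual proxy.ts API:

```typescript
import * as net from "net"

// Bind the user-facing local port right away; the upstream connection
// (e.g. through a kubectl port-forward) is only made when a client connects.
function createLazyProxy(localPort: number, connectToService: () => Promise<net.Socket>) {
  const server = net.createServer(async (client) => {
    let upstream: net.Socket
    try {
      upstream = await connectToService()
    } catch (err) {
      // If the service was just re-deployed and the old forward is dead,
      // drop this client; the next connection attempt can set up a new
      // upstream instead of hanging forever.
      client.destroy()
      return
    }

    // Pipe bytes both ways between the local client and the forwarded port.
    client.pipe(upstream).pipe(client)

    // If either side dies (Pod replaced, forward killed, browser closed),
    // close the other so connections don't linger.
    upstream.on("error", () => client.destroy())
    client.on("error", () => upstream.destroy())
  })

  server.listen(localPort)
  return server
}
```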
I just posted #1467 which deals with a separate thing (and tbh has more to do with the introduction of the SQLite datastore) but is in the general realm of port forwarding. I'll take a quick look at this now to troubleshoot, and ping back with results :)
kubectl port-forward errors are now caught and the process killed, and the Garden proxy now handles such errors. This handles cases where Pods are terminated for whatever reason, including after re-deployment. It would be possible to further improve this by moving to use the API directly instead of kubectl (which I spiked and looks like quite a bit of work), and potentially to somehow signal to the proxy that the underlying port forward has been terminated. It works fairly well as-is though, so I'll leave those improvements for later. Fixes #1345
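To illustrate what that kind of handling might look like (a hypothetical sketch, not the actual PR code): watch the kubectl port-forward process and deregister the forward as soon as it errors out or exits, so the proxy's next connection attempt spawns a fresh one against the new Pod.

```typescript
import { spawn } from "child_process"

// Hypothetical wrapper around `kubectl port-forward`. The onDead callback
// stands in for whatever deregisters the forward on the Garden side.
function startPortForward(
  namespace: string,
  target: string,
  localPort: number,
  remotePort: number,
  onDead: () => void,
) {
  const proc = spawn("kubectl", [
    "port-forward",
    "--namespace", namespace,
    target,
    `${localPort}:${remotePort}`,
  ])

  proc.stderr.on("data", (data: Buffer) => {
    // kubectl writes connection errors (e.g. losing the connection to the
    // Pod after a re-deploy) to stderr; treat them as fatal for this forward.
    if (data.toString().toLowerCase().includes("error")) {
      proc.kill()
    }
  })

  // When the process exits for any reason, let the caller deregister the
  // forward so a new one gets created on the next connection.
  proc.on("exit", () => onDead())

  return proc
}
```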
@edvald - no worries, I had some free time over the holidays and went poking around. It seems like your PR may fix this issue; happy to help test if that's more valuable.
Bug
Current Behavior
After a re-deploy in watch mode, the port-forward link that Garden prints (→ Forward: http://localhost:60749 → frontend:8080 (http)) stops working and the page returns an empty response.
Expected behavior
The link continues to work.
Reproducible example
1. Run garden deploy -w in demo-project.
2. Open the Forward: http://localhost:<port>... link for, say, the frontend service.