Port forward URL doesn't work after re-deploy #1345

Closed
eysi09 opened this issue Nov 13, 2019 · 5 comments · Fixed by #1467

@eysi09
Collaborator

eysi09 commented Nov 13, 2019

Bug

Current Behavior

After a re-deploy in watch mode, the port-forward link that Garden prints stops working (→ Forward: http://localhost:60749 → frontend:8080 (http)) and the page returns an empty response.

Expected Behavior

The link continues to work.

Reproducible example

  1. Run garden deploy -w in the demo-project.
  2. Open the Forward: http://localhost:<port>... link for, say, the frontend service.
  3. Change some code to trigger a re-deploy of the service.
  4. Try the link again and notice that it doesn't work.
@eysi09 eysi09 added the bug label Nov 13, 2019
@edvald
Collaborator

edvald commented Nov 13, 2019

Huh. I somehow believed the underlying port-forward connection would close when the connected pod is killed. Guess we need more elaborate connection management to account for that then.

@mjgallag
Contributor

Ran into this issue today and did a little research; here's how Google's Skaffold worked around it.

@edvald edvald added the priority:medium Medium priority issue or feature label Dec 19, 2019
@olivercodes

olivercodes commented Dec 24, 2019

If nobody is working on it, I'd like to help (and make my first contribution to Garden). While I work, is the community Slack the best place to ask questions?

Current investigation led me to this; I'm thinking of trying to call killPortForward within:

/**
* Creates or re-uses an existing tunnel to a Kubernetes resources.
*
* We maintain a simple in-process cache of randomly allocated local ports that have been port-forwarded to a
* given port on a given Kubernetes resource.
*/
export async function getPortForward({

My first thought, though, is that it's probably bad UX: if we force a new port-forward every time, the localhost port will be different every time the user saves and deploys again, and instead of just refreshing, they'd have to type in the new port (if this is, say, a frontend application they're working on).

--

And how does this work?

if (registered && !registered.proc.killed) {
log.debug(`Reusing local port ${registered.localPort} for ${targetResource}`)
return registered
}
const k8sCtx = <KubernetesPluginContext>ctx
// Forward random free local port to the remote rsync container.
localPort = await getPort()
const portMapping = `${localPort}:${port}`

...we check to see if the registered portFwd is running, and log that we'll reuse it. But then in line 93, we don't seem to use it (unless this get-port package is somehow aware of the registered port).
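
For reference, here's a minimal sketch of how that cache-hit / cache-miss split reads, using illustrative names (resolveLocalPort, startProc, registeredPortForwards) rather than Garden's actual identifiers: the early return on a live registered forward means getPort() is only reached when nothing usable is cached, so a random port is only allocated on a miss.

import { ChildProcess } from "child_process"
import getPort from "get-port"

interface PortForward {
  localPort: number
  proc: ChildProcess
}

// In-process cache keyed by target resource and remote port (illustrative).
const registeredPortForwards: { [key: string]: PortForward } = {}

async function resolveLocalPort(
  targetResource: string,
  port: number,
  startProc: (portMapping: string) => ChildProcess
): Promise<PortForward> {
  const key = `${targetResource}:${port}`
  const registered = registeredPortForwards[key]

  // Cache hit: the forward is still running, so we return early and never
  // reach the getPort() call below.
  if (registered && !registered.proc.killed) {
    return registered
  }

  // Cache miss: allocate a fresh random local port and start a new forward.
  const localPort = await getPort()
  const proc = startProc(`${localPort}:${port}`)
  const created = { localPort, proc }
  registeredPortForwards[key] = created
  return created
}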

@edvald
Collaborator

edvald commented Dec 28, 2019

Hey @olivercodes! Sorry about the late response, been taking it easy over the holidays :P

Probably best to start by explaining the overall architecture here.

The Garden CLI itself creates its own basic TCP proxy (see https://github.com/garden-io/garden/blob/master/garden-service/src/proxy.ts), and binds the user-facing local ports (the ones we print in the CLI) as soon as we get an indication of which ports are exposed on the deployed services. Then, when the user connects to one of those ports, we call out to the getPortForward handler on the provider (currently just implemented on the kubernetes provider) to start an actual tunnel to the service port. This means that A) we don't create unnecessary tunnels on startup, while at the same time B) we keep a persistent user-facing port even if the k8s port forward needs to be replaced.
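
A rough sketch of that "bind early, tunnel lazily" shape, using Node's net module; createLazyProxy and the getPortForward callback here are illustrative names, not the actual proxy.ts API:

import net from "net"

// Resolves to the local port of an actual tunnel (e.g. kubectl port-forward).
type GetPortForward = () => Promise<number>

function createLazyProxy(userFacingPort: number, getPortForward: GetPortForward): net.Server {
  const server = net.createServer(async (client) => {
    try {
      // The provider-level tunnel is only created when a client connects.
      const tunnelPort = await getPortForward()
      const upstream = net.connect(tunnelPort, "localhost")
      client.pipe(upstream)
      upstream.pipe(client)
      upstream.on("error", () => client.destroy())
      client.on("error", () => upstream.destroy())
    } catch {
      client.destroy()
    }
  })
  // The user-facing port is bound up front and stays stable across re-deploys.
  server.listen(userFacingPort)
  return server
}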

What's broken here is the reconnection when the remote service is replaced. It's actually broken in two places: in the k8s provider, the port-forward process stays up even though we can no longer connect through it, and the failure handling in the Garden proxy is not really implemented (there's a // TODO: handle dead port forwards marker there).
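
To illustrate that second gap: one way the TODO could be handled is for the proxy to evict the stale forward when the upstream connection errors out, so the next client connection triggers a fresh getPortForward() call. This is only a sketch; evictPortForward is a hypothetical hook, not an existing Garden function.

import net from "net"

function connectUpstream(
  client: net.Socket,
  tunnelPort: number,
  evictPortForward: () => void  // hypothetical hook into the provider's cache
) {
  const upstream = net.connect(tunnelPort, "localhost")
  upstream.on("error", () => {
    // The kubectl process may still be running, but the tunnel is dead:
    // drop the cached forward so a new one is created, then close the client.
    evictPortForward()
    client.destroy()
  })
  client.pipe(upstream)
  upstream.pipe(client)
}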

I just posted #1467 which deals with a separate thing (and tbh has more to do with the introduction of the sqlite datastore) but is in the general realm of port forwarding. I'll take a quick look at this now to troubleshoot, and ping back with results :)

@edvald edvald self-assigned this Dec 28, 2019
edvald added a commit that referenced this issue Dec 28, 2019
kubectl port-forward errors are now caught and the process killed, and
the Garden proxy now handles such errors. This handles cases where Pods
are terminated for whatever reason, including after re-deployment.

It would be possible to further improve this by moving to use
the API directly instead of kubectl (which I spiked and looks like quite
a bit of work), and potentially to somehow signal to the proxy that the
underlying port forward has been terminated. It works fairly well as-is
though, so I'll leave those improvements for later.

Fixes #1345
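
A sketch of the approach the commit message describes, with illustrative names (startPortForward, registeredPortForwards) rather than the actual implementation: the kubectl child process is watched for error output and for exiting, and either event kills the process and evicts the cache entry so later connections start a fresh forward.

import { spawn, ChildProcess } from "child_process"

const registeredPortForwards: { [key: string]: { proc: ChildProcess } } = {}

function startPortForward(key: string, target: string, localPort: number, remotePort: number): ChildProcess {
  const proc = spawn("kubectl", ["port-forward", target, `${localPort}:${remotePort}`])

  const cleanup = () => {
    if (!proc.killed) {
      proc.kill()
    }
    // Evict the cache entry so the next connection re-creates the tunnel.
    delete registeredPortForwards[key]
  }

  // kubectl reports broken forwards (e.g. the Pod was terminated) on stderr
  // and/or by exiting, so either path leads to cleanup here.
  proc.stderr.on("data", cleanup)
  proc.on("error", cleanup)
  proc.on("exit", cleanup)

  registeredPortForwards[key] = { proc }
  return proc
}
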
@olivercodes

@edvald - no worries, I had some free time on the holidays and went poking around.

It seems like your PR may fix this issue; happy to help test if that's more valuable.
