Only print port forward success message on actual success #4968

Merged
merged 8 commits into GoogleContainerTools:master on Nov 2, 2020

Conversation

IsaacPD
Contributor

@IsaacPD IsaacPD commented Oct 28, 2020

Fixes: #4897

Description
Added an error return value to EntryForwarder and used it to decide which log message to print to the user about whether a port has actually been forwarded.
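As a rough sketch of the shape of that change (the real interface lives in pkg/skaffold/kubernetes/portforward; the method set and the portForwardEntry stand-in here are illustrative, not the actual code):

package portforward

import "context"

// portForwardEntry stands in for skaffold's internal entry type, which
// carries the resource, namespace, and port pair.
type portForwardEntry struct {
	resource  string
	localPort int
}

// EntryForwarder sketch: Forward now returns an error, so the caller prints
// the "Port forwarding ..." confirmation only when forwarding actually
// succeeded.
type EntryForwarder interface {
	Forward(ctx context.Context, pfe *portForwardEntry) error
	Terminate(pfe *portForwardEntry)
}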

KubectlForwarder produces its error by watching the logs of kubectl port-forward and sending an indication of success or failure to an errChan that is being watched.
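A minimal sketch of that monitoring idea (illustrative only, not the actual implementation; watchForwardOutput and errChan are made-up names), keyed off the "Forwarding from" line kubectl prints on success and the "unable to listen" text it prints on failure:

package portforward

import (
	"bufio"
	"fmt"
	"io"
	"strings"
)

// watchForwardOutput scans the combined output of `kubectl port-forward` and
// reports the result on errChan: nil once a "Forwarding from" line appears,
// or an error carrying kubectl's output if it reports a failure or exits.
func watchForwardOutput(out io.Reader, errChan chan<- error) {
	scanner := bufio.NewScanner(out)
	var seen strings.Builder
	for scanner.Scan() {
		line := scanner.Text()
		seen.WriteString(line)
		seen.WriteString("\n")
		if strings.Contains(line, "Forwarding from") {
			errChan <- nil // forwarding established
			return
		}
		if strings.Contains(line, "unable to listen on any of the requested ports") {
			errChan <- fmt.Errorf("port forwarding failed: %s", seen.String())
			return
		}
	}
	// kubectl exited without ever reporting success.
	errChan <- fmt.Errorf("kubectl port-forward terminated: %s", seen.String())
}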

User facing changes

If kubectl port-forward fails on the first run for a reason other than conflicting ports, this is now presented to the user. The port forwarding confirmation message now appears after the container logs in most cases, which could result in the message being lost if the terminal is flooded with logs.

Example Difference
Running skaffold dev --port-forward in skaffold/examples/buildpacks with the following YAML, which references the invalid resource name "wb":

apiVersion: skaffold/v2beta8
kind: Config
build:
  artifacts:
  - image: skaffold-buildpacks
    buildpacks:
      builder: "gcr.io/buildpacks/builder:v1"
      env:
      - GOPROXY={{.GOPROXY}}
profiles:
- name: gcb
  build:
    googleCloudBuild: {}
portForward:
  - resourceType: deployment
    resourceName: wb
    port: 8080
    localPort: 8080

Before:
[screenshot of skaffold dev output on master]

Prints that the resource was successfully port forwarded despite it not actually being valid or forwarded.

After:
[screenshot of skaffold dev output on this branch]

Correctly prints that there is an error with the resource name and that port forwarding failed, allowing the user to correct the issue that arose and trigger another dev loop to try again.

Caveats
In the event that a port becomes unavailable between the time the port is checked and the time kubectl port-forward is run, the target behavior is as follows.

INFO[0003] Streaming logs from pod: web-78548fbc59-hgjdl container: web 
DEBU[0003] Running command: [kubectl --context minikube logs --since=3s -f web-78548fbc59-hgjdl -c web --namespace default] 
INFO[0003] Streaming logs from pod: web-777c857955-td9xq container: web 
DEBU[0003] Running command: [kubectl --context minikube logs --since=3s -f web-777c857955-td9xq -c web --namespace default] 
[web] 2020/10/29 19:10:44 Listening on port 8080
DEBU[0033] Running command: [kubectl --context minikube port-forward --pod-running-timeout 1s --namespace default deployment/web 8080:8080] 
DEBU[0033] port forwarding deployment-web-default-8080 got terminated: exit status 1, output: Unable to listen on port 8080: Listeners failed to create with the following errors: [unable to create listener: Error listen tcp4 127.0.0.1:8080: bind: address already in use unable to create listener: Error listen tcp6 [::1]:8080: bind: address already in use]
error: unable to listen on any of the requested ports: [{8080 8080}] 
failed to port forward deployment-web-default-8080, port 8080 is taken, retrying...
failed to port forward deployment-web-default-8080, port 8080 is taken, retrying...
failed to port forward deployment-web-default-8080, port 8080 is taken, retrying...
failed to port forward deployment-web-default-8080, port 8080 is taken, retrying...
failed to port forward deployment-web-default-8080, port 8080 is taken, retrying...
port forwarding deployment-web-default-8080 recovered on port 8080
DEBU[0089] Running command: [kubectl --context minikube port-forward --pod-running-timeout 1s --namespace default deployment/web 8080:8080] 
TRAC[0090] [port-forward] Forwarding from 127.0.0.1:8080 -> 8080 
Port forwarding deployment/web in namespace default, remote port 8080 -> address 127.0.0.1 port 8080

The only difference here vs. before is that the message "Port forwarding deployment/web in namespace default, remote port 8080 -> address 127.0.0.1 port 8080" is only printed after the port conflict has been resolved. The case where kubectl itself discovers that a port is already taken seems rare, though, since there are two prior checks for whether a port is taken. I will also make sure that only our own message surfaces, not the kubectl error.
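For illustration only (the real retry logic is in kubectl_forwarder.go and differs in detail; retryForward and its parameters are made-up names), the behavior above amounts to a loop that prints skaffold's own retry message and defers the confirmation until an attempt succeeds:

package portforward

import (
	"context"
	"fmt"
	"time"
)

// retryForward re-runs the forward attempt until it succeeds or the context
// is cancelled, surfacing skaffold's own message rather than the raw kubectl
// error, and printing the recovery/confirmation line only on success.
func retryForward(ctx context.Context, entryName string, attempt func() error) {
	for {
		if err := attempt(); err == nil {
			fmt.Printf("port forwarding %s recovered\n", entryName)
			return
		}
		fmt.Printf("failed to port forward %s, port is taken, retrying...\n", entryName)
		select {
		case <-ctx.Done():
			return
		case <-time.After(time.Second):
		}
	}
}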

@IsaacPD IsaacPD requested review from a team and tejal29 and removed request for a team October 28, 2020 23:32
@google-cla google-cla bot added the cla: yes label Oct 28, 2020
@codecov

codecov bot commented Oct 29, 2020

Codecov Report

Merging #4968 into master will decrease coverage by 0.11%.
The diff coverage is 63.33%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #4968      +/-   ##
==========================================
- Coverage   72.34%   72.23%   -0.12%     
==========================================
  Files         360      361       +1     
  Lines       12564    12657      +93     
==========================================
+ Hits         9090     9143      +53     
- Misses       2808     2835      +27     
- Partials      666      679      +13     
Impacted Files Coverage Δ
...affold/kubernetes/portforward/kubectl_forwarder.go 66.42% <50.00%> (-1.61%) ⬇️
...g/skaffold/kubernetes/portforward/entry_manager.go 92.00% <83.33%> (-3.84%) ⬇️
pkg/skaffold/deploy/kpt/kpt.go 73.92% <0.00%> (-5.59%) ⬇️
pkg/skaffold/deploy/helm/helm.go 74.11% <0.00%> (-0.51%) ⬇️
pkg/skaffold/util/http.go 50.00% <0.00%> (ø)
pkg/skaffold/util/util.go 83.33% <0.00%> (+0.93%) ⬆️
pkg/skaffold/update/update.go 45.45% <0.00%> (+6.99%) ⬆️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d667827...a6e8e43.

@IsaacPD IsaacPD requested a review from a team as a code owner October 29, 2020 17:32
@IsaacPD IsaacPD marked this pull request as draft October 30, 2020 12:34
@IsaacPD IsaacPD marked this pull request as ready for review October 30, 2020 17:19
@tejal29
Member

tejal29 commented Nov 2, 2020

Looks great! In the PR description, please add a section on what the behavior was before (on master) and the new behavior for the bug you are solving.
e.g.

  1. Make a change to the portForward section so that it points to an invalid resource.
diff --git a/examples/microservices/skaffold.yaml b/examples/microservices/skaffold.yaml
index 33f085d6e..254716451 100644
--- a/examples/microservices/skaffold.yaml
+++ b/examples/microservices/skaffold.yaml
@@ -21,6 +21,6 @@ deploy:
       - leeroy-app/kubernetes/*
 portForward:
   - resourceType: deployment
-    resourceName: leeroy-web
+    resourceName: leeroy-wb
     port: 8080
     localPort: 9000

On master, this prints that leeroy-wb was successfully port-forwarded.

On your branch, this should not happen.

Use the before and after sections.

@IsaacPD
Contributor Author

IsaacPD commented Nov 2, 2020

@tejal29 Thanks, I've updated the PR description accordingly.

@tejal29
Member

tejal29 commented Nov 2, 2020

Correctly prints and displays that there is an error with the name of the resource

Note: skaffold dev does not exit, and users get a chance to fix their skaffold.yaml. A change to skaffold.yaml will trigger a dev loop, giving users a chance to recover.

@IsaacPD
Contributor Author

IsaacPD commented Nov 2, 2020

Good point, I have just added that as well to the description.

@tejal29 tejal29 merged commit e1fc079 into GoogleContainerTools:master Nov 2, 2020
Development

Successfully merging this pull request may close these issues.

skaffold port-forwarding does not throw error when port forwarding is not successful.