
Named container port gets different name than in manifest #906

Closed
Duologic opened this issue Aug 7, 2020 · 12 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. triage/not-reproducible Indicates an issue can not be reproduced as described.

Comments

@Duologic

Duologic commented Aug 7, 2020

What happened:

I have a deployment with a single container and ports:

[I] ➜ kubectl -n kube-system get deployment coredns -o yaml | grep ' ports' -A6
        ports:
        - containerPort: 53
          name: tcp
          protocol: TCP
        - containerPort: 53
          protocol: UDP
        - containerPort: 9153

I have a manifest that should be the same:

[I] ➜ grep ' ports' -A6 a/apps-v1.Deployment-coredns.yaml
        ports:
        - containerPort: 53
          name: tcp
          protocol: TCP
        - containerPort: 53
          name: udp
          protocol: UDP

I compare both with kubectl diff:

[I] ➜ kubectl diff -f a/apps-v1.Deployment-coredns.yaml
diff -u -N /tmp/LIVE-920116509/apps.v1.Deployment.kube-system.coredns /tmp/MERGED-881599448/apps.v1.Deployment.kube-system.coredns
--- /tmp/LIVE-920116509/apps.v1.Deployment.kube-system.coredns  2020-08-07 13:43:55.134872796 +0200
+++ /tmp/MERGED-881599448/apps.v1.Deployment.kube-system.coredns        2020-08-07 13:43:55.288206694 +0200
@@ -6,7 +6,7 @@
     kubectl.kubernetes.io/last-applied-configuration: | <redacted>
   creationTimestamp: "2020-04-21T16:33:15Z"
-  generation: 6
+  generation: 7
   labels:
     name: coredns
     tanka.dev/environment: environments.dns.dev-us-central1.kube-system
@@ -53,7 +53,7 @@
         name: coredns
         ports:
         - containerPort: 53
-          name: tcp
+          name: udp
           protocol: TCP
         - containerPort: 53
           protocol: UDP

This is not what should happen; no difference is expected. But considering the comments on #735, Kubernetes or kubectl might consolidate the ports, so I apply:

[I] ✘1 ➜ kubectl apply -f a/apps-v1.Deployment-coredns.yaml
deployment.apps/coredns configured

Again, I check the diff, expecting idempotency:

[I] ➜ kubectl diff -f a/apps-v1.Deployment-coredns.yaml
diff -u -N /tmp/LIVE-700948331/apps.v1.Deployment.kube-system.coredns /tmp/MERGED-197268942/apps.v1.Deployment.kube-system.coredns
--- /tmp/LIVE-700948331/apps.v1.Deployment.kube-system.coredns  2020-08-07 13:51:17.023080611 +0200
+++ /tmp/MERGED-197268942/apps.v1.Deployment.kube-system.coredns        2020-08-07 13:51:17.149747700 +0200
@@ -6,7 +6,7 @@
     kubectl.kubernetes.io/last-applied-configuration: | <redacted>
   creationTimestamp: "2020-04-21T16:33:15Z"
-  generation: 7
+  generation: 8
   labels:
     name: coredns
     tanka.dev/environment: environments.dns.dev-us-central1.kube-system
@@ -53,7 +53,7 @@
         name: coredns
         ports:
         - containerPort: 53
-          name: udp
+          name: tcp
           protocol: TCP
         - containerPort: 53
           protocol: UDP

I get another diff, toggling the port name again.
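
A plausible explanation for the toggling (my assumption, not confirmed anywhere in this thread) is that the ports list is merged with a strategic merge patch whose merge key is containerPort alone, so the two port-53 entries (TCP and UDP) are treated as the same list element and the name field bounces between them. A rough, hypothetical illustration with kubectl patch (not a command from the report above):

# Hypothetical sketch: a strategic-merge entry keyed only on containerPort
# cannot tell the TCP and UDP port-53 entries apart, so the patched "name"
# may land on either of them.
kubectl -n kube-system patch deployment coredns --type=strategic -p '
spec:
  template:
    spec:
      containers:
      - name: coredns
        ports:
        - containerPort: 53
          name: udp
'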

What you expected to happen:
In the first case, I would expect that each port/protocol pair could be given a different name.
If for some reason consolidation happens on the port alone, then I would at least expect idempotency.
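
For reference, kubectl diff reports drift through its exit status (0 when no differences are found, 1 when differences are found, greater than 1 on error), so the idempotency I expect can be checked mechanically; a minimal sketch using the manifest path from above:

kubectl apply -f a/apps-v1.Deployment-coredns.yaml
if kubectl diff -f a/apps-v1.Deployment-coredns.yaml > /dev/null; then
  echo "idempotent: no differences reported after apply"
else
  echo "not idempotent: kubectl diff still reports changes"  # what this issue observes
fi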

How to reproduce it (as minimally and precisely as possible): see above

Anything else we need to know?:

Environment:

  • Kubernetes client and server versions (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"archive", BuildDate:"2020-05-22T20:04:08Z", GoVersion:"go1.14.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.10-gke.42", GitCommit:"42bef28c2031a74fc68840fce56834ff7ea08518", GitTreeState:"clean", BuildDate:"2020-06-02T16:07:00Z", GoVersion:"go1.12.12b4", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration: Google/GKE

  • OS (e.g: cat /etc/os-release): server: Google's COS, client: Arch Linux

@Duologic Duologic added the kind/bug Categorizes issue or PR as related to a bug. label Aug 7, 2020
@eddiezane
Member

eddiezane commented Aug 7, 2020

@Duologic I'm not able to reproduce this. Here's the deployment I'm using.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    test: test
spec:
  replicas: 1
  selector:
    matchLabels:
      test: test
  template:
    metadata:
      labels:
        test: test
    spec:
      containers:
        - name: test
          image: busybox
          command:
            - sleep
            - "10000000000000"
          ports:
            - name: tcp
              protocol: TCP
              containerPort: 53
            - name: udp
              protocol: UDP
              containerPort: 53

Can you test with this or provide a fully reproducible manifest?
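
For example (test.yaml is just a placeholder filename for the deployment above):

kubectl apply -f test.yaml
kubectl diff  -f test.yaml   # should print nothing if the port names round-trip
kubectl apply -f test.yaml
kubectl diff  -f test.yaml   # repeat to check for the toggling described above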

Are you able to reproduce on a supported version of Kubernetes (1.16, 1.17, 1.18)?

Also, I noticed you have metadata.generation showing up in your diff, which seems strange to me.

/triage not-reproducible

@k8s-ci-robot k8s-ci-robot added the triage/not-reproducible Indicates an issue can not be reproduced as described. label Aug 7, 2020
@Duologic
Author

A newer Kubernetes version might resolve it. We're in the process of upgrading our clusters; I'll reopen if this is still an issue after the upgrade to 1.16.

@Duologic
Author

I can report the same issue on:

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"archive", BuildDate:"2020-05-22T20:04:08Z", GoVersion:"go1.14.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.13-gke.401", GitCommit:"eb94c181eea5290e9da1238db02cfef263542f5f", GitTreeState:"clean", BuildDate:"2020-09-09T00:57:35Z", GoVersion:"go1.13.9b4", Compiler:"gc", Platform:"linux/amd64"}

@Duologic Duologic reopened this Sep 29, 2020
@eddiezane
Member

/assign @dougsland

@k8s-ci-robot
Contributor

@eddiezane: GitHub didn't allow me to assign the following users: dougsland.

Note that only kubernetes members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.
For more information please see the contributor guide

In response to this:

/assign @dougsland

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@dougsland
Member

/assign

@dougsland
Member

Unfortunately, I cannot reproduce the issue.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.5", GitCommit:"e338cf2c6d297aa603b50ad3a301f761b4173aa6", GitTreeState:"clean", BuildDate:"2020-12-09T11:18:51Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:04:18Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

@Duologic do you mind sharing a manifest to deploy and a manifest to diff? That would help us debug and catch this.

@Duologic
Author

Duologic commented Jan 3, 2021

Hello, sorry for the delay. I certainly had trouble reproducing this in an isolated way too, but I think I might have something.

The original Deployment manifest didn't have the ports named; we added the names later. By the looks of it, that is what is causing the problem.

Version info:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"archive", BuildDate:"2020-11-25T13:19:56Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.15-gke.4300", GitCommit:"7ed5ddc0e67cb68296994f0b754cec45450d6a64", GitTreeState:"clean", BuildDate:"2020-10-28T09:23:22Z", GoVersion:"go1.13.15b4", Compiler:"gc", Platform:"linux/amd64"}

The used manifest:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dnsutils-named-ports
  namespace: dns-example
spec:
  replicas: 1
  selector:
    matchLabels:
      name: dnsutils-named-ports
  template:
    metadata:
      labels:
        name: dnsutils-named-ports
    spec:
      containers:
      - command:
        - sleep
        - "3600"
        image: gcr.io/kubernetes-e2e-test-images/dnsutils:1.3
        imagePullPolicy: IfNotPresent
        name: dnsutils-named-ports
        ports:
        - containerPort: 53
          name: tcp
          protocol: TCP
        - containerPort: 53
          name: udp
          protocol: UDP

Screencast: (animated GIF attachment: Peek 2021-01-03 20-49)
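
A sketch of the sequence I mean (filenames are placeholders: dnsutils-unnamed.yaml is the manifest above with the two name: lines removed, dnsutils-named.yaml is the manifest exactly as shown):

kubectl create namespace dns-example
kubectl apply -f dnsutils-unnamed.yaml   # original state: ports without names
kubectl apply -f dnsutils-named.yaml     # add the port names afterwards
kubectl diff  -f dnsutils-named.yaml     # expected: no output; observed: one port name flips
kubectl apply -f dnsutils-named.yaml
kubectl diff  -f dnsutils-named.yaml     # observed: it flips back again, and so on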

@njuptlzf

Hey,
I think the kubectl apply problem is here.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 14, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 14, 2021
@Duologic
Author

I guess this is a duplicate of kubernetes/kubernetes#39188
