Container name inside pod is different than what it is in yaml file #16544

Closed
queeup opened this issue Nov 17, 2022 · 18 comments · Fixed by #17412
Labels: kind/bug, kube, locked - please file new issue/PR

Comments


queeup commented Nov 17, 2022

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description
Containers are not named correctly inside pods. I am using this YAML file:

---
apiVersion: v1
kind: Pod
metadata:
  annotations:
    io.containers.autoupdate/portainer: registry
    io.podman.annotations.autoremove/portainer: "TRUE"
  labels:
    app: portainer-pod
  name: portainer-pod
spec:
  restartPolicy: OnFailure
  containers:
  - name: portainer
    image: docker.io/portainer/portainer-ce:latest

Steps to reproduce the issue:

  1. Run yaml file with podman kube play

  2. Check container name inside pod with podman pod ps --ctr-names

Describe the results you received:
Container name inside pod is portainer-pod-portainer

Describe the results you expected:
I expect the container name inside the pod to be portainer

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

❯ podman version
Client:       Podman Engine
Version:      4.3.0
API Version:  4.3.0
Go Version:   go1.19.2
Built:        Fri Oct 21 11:09:51 2022
OS/Arch:      linux/amd64

Output of podman info:

❯ podman info
host:
  arch: amd64
  buildahVersion: 1.28.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.4-3.fc37.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.4, commit: '
  cpuUtilization:
    idlePercent: 92.98
    systemPercent: 1.95
    userPercent: 5.08
  cpus: 8
  distribution:
    distribution: fedora
    variant: silverblue
    version: "37"
  eventLogger: journald
  hostname: fedora-t480
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.0.8-300.fc37.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 2716590080
  memTotal: 33409441792
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.7-1.fc37.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.7
      commit: 40d996ea8a827981895ce22886a9bac367f87264
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-8.fc37.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 29h 42m 14.00s (Approximately 1.21 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /var/home/queeup/.config/containers/storage.conf
  containerStore:
    number: 2
    paused: 0
    running: 1
    stopped: 1
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/queeup/.local/share/containers/storage
  graphRootAllocated: 998500204544
  graphRootUsed: 448343105536
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 18
  runRoot: /run/user/1000/containers
  volumePath: /var/home/queeup/.local/share/containers/storage/volumes
version:
  APIVersion: 4.3.0
  Built: 1666339791
  BuiltTime: Fri Oct 21 11:09:51 2022
  GitCommit: ""
  GoVersion: go1.19.2
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.0

Package info (e.g. output of rpm -q podman or apt list podman or brew info podman):

❯ rpm -q podman
podman-4.3.0-2.fc37.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Nov 17, 2022
@rhatdan rhatdan added the kube label Nov 18, 2022

rhatdan commented Nov 18, 2022

podman play kube generates the container names by prefixing the container name with the pod name.
This allows us to create a container with a name, then use podman kube generate and podman kube play without removing the original container. Otherwise there is a name conflict. (At least, that is why I think we did it this way.)

However, we do not do this with pod names.

$ podman create --pod new:dan alpine echo hi
$ podman kube generate dan > /tmp/dan.yml 
WARN[0000] Truncation Annotation: "225e1166994c7a8bd799d7a6e0aaad5f3b42813dcb87ae6596511c830a7fca51" to "225e1166994c7a8bd799d7a6e0aaad5f3b42813dcb87ae6596511c830a7fca5": Kubernetes only allows 63 characters 
$ ./bin/podman kube play /tmp/dan.yml 
Error: adding pod to state: name "dan" is in use: pod already exists

So it can be argued that we should just use the container name from the YAML file and stop appending the two names.

@umohnani8 @baude @vrothberg WDYT?


queeup commented Nov 18, 2022

So it can be argued that we should just use the container name within the yaml file and stop appending the two.

Or: if pod_name == container_name, append the pod name to the container name?
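For illustration, the rule suggested here could be sketched like this (hypothetical Python, not Podman's actual Go implementation; the function name is made up):

```python
def kube_ctr_name(pod_name: str, ctr_name: str) -> str:
    """Hypothetical sketch of the proposed naming rule.

    Today, podman kube play always prefixes the pod name, so the
    container from this issue's YAML ends up as 'portainer-pod-portainer'.
    The proposal: keep the name from the YAML, and only prefix the pod
    name when the two names would otherwise collide.
    """
    if ctr_name == pod_name:
        return f"{pod_name}-{ctr_name}"
    return ctr_name

print(kube_ctr_name("portainer-pod", "portainer"))  # portainer
print(kube_ctr_name("dan", "dan"))                  # dan-dan
```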


rhatdan commented Nov 18, 2022

Yes that seems reasonable.


baude commented Nov 18, 2022

I am on board with that. Would this need to be a Podman 5 thing?


mheon commented Nov 18, 2022

I don't really see why. If we're really concerned, we can add a network alias for the previously-used name to ensure DNS does not break.


vrothberg commented Nov 18, 2022 via email


rhatdan commented Nov 18, 2022

Let's examine what could break.

If we name the container NAME and add an alias POD-NAME, that should allow a container that assumed POD-NAME to keep working over DNS.

You already need to use --replace when playing if the NAME container exists, the POD-NAME container exists, or the POD exists.


vrothberg commented Nov 18, 2022 via email

@umohnani8

@rhatdan @vrothberg @mheon I am planning on working on this during bug week, so wanted to see if we can come to a consensus on whether this change should happen or not.

We could also add the container NAME as an alias to the existing POD-NAME and add a deprecation warning and switch it out in podman 5.


vrothberg commented Dec 6, 2022

I am against making a breaking change at this point - even aliasing can break existing deployments. We did not look up why the naming was done the way it is. I assume it was for namespacing, but I didn't look up the commits/PRs etc. So I think we should do some research before deciding. Maybe this information is lacking in the docs and we can address the issue with a doc change?

Switching it entirely for Podman 5 is not an option to me, as it would break existing deployments (our CI, Ansible roles, an unknown number of users and customers) and we do not have an urgent reason to break them (e.g., a severe security fix). It may be less intrusive to add an alias, but there is a fair chance of breaking existing deployments as well. The main benefit is that aliasing would break more often on kube play due to name conflicts; but these conflicts would be "hidden" when running with --replace and then surface when containers of some other workload just suddenly break. It would be nasty to debug and analyze.


mheon commented Dec 6, 2022

I'm a little confused as to how adding an alias would break existing deployments... This is something that can happen in regular usage; apps should be ready to deal with it.

@vrothberg

An alias is an infinite resource. So running one YAML would occupy twice as many names with the second half having been "free" before.

@vrothberg

Or am I misreading/misinterpreting alias?


mheon commented Dec 6, 2022

Aliases are not exclusive. I can add an alias "db-ctr" to as many containers in the same network as I wish. If 2+ containers have the same alias, we load-balance between them. There are probably some odd configurations we would break, where a user aliased a different container with a name that is also used by another container, but that seems like too rare/contrived a case to justify not adding an alias (which otherwise seems like an easy usability win).
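The non-exclusive alias semantics described here can be pictured with a toy resolver (pure Python, purely illustrative; real resolution happens inside Podman's network stack, and the names used are made up):

```python
import random

# alias -> containers carrying that alias on the same network
aliases: dict[str, list[str]] = {}

def add_alias(alias: str, ctr: str) -> None:
    """Aliases are not exclusive: any number of containers may share one."""
    aliases.setdefault(alias, []).append(ctr)

def resolve(alias: str) -> str:
    """With 2+ containers behind an alias, pick one (load-balancing)."""
    return random.choice(aliases[alias])

add_alias("db-ctr", "postgres-a")
add_alias("db-ctr", "postgres-b")
print(resolve("db-ctr"))  # either postgres-a or postgres-b
```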

@vrothberg

Oh that is nice. So +1 from my side on adding an alias to the kube-network 👍 Thanks for correcting, Matt!


github-actions bot commented Jan 6, 2023

A friendly reminder that this issue had no activity for 30 days.


rhatdan commented Jan 6, 2023

@umohnani8 Please follow up on this one?

github-actions bot commented Feb 6, 2023

A friendly reminder that this issue had no activity for 30 days.

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 1, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 1, 2023