
CNI cache error pops up when a container is removed #9602

Closed
linggao opened this issue Mar 3, 2021 · 1 comment · Fixed by #9614
Assignees: Luap99
Labels: In Progress, kind/bug, locked - please file new issue/PR

Comments


linggao commented Mar 3, 2021

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description
I have seen different errors after the original network is removed from a container, including a DNS error, a resizing-session error, and this CNI cache error.

Steps to reproduce the issue (a consolidated script is sketched after these steps):

  1. podman network create foo-a
    podman network create foo-b

  2. podman run --name test --network foo-a -d alpine sleep 10000

  3. podman network connect foo-b test
    podman network disconnect foo-a test

  4. wait as long as you want

  5. podman rm -f test
    ERRO[0010] error loading cached network config: network "foo-b" not found in CNI cache
    WARN[0010] falling back to loading from existing plugins on disk
    a5092f2db3f45dde2e785fae5ef2ba4c70843a5e0c220744fe1f64a4e4234503
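
For convenience, the whole reproduction can be run as one script (a sketch based on the steps above; the 30-second wait is arbitrary, any delay works):

    #!/bin/sh
    set -x
    podman network create foo-a
    podman network create foo-b
    podman run --name test --network foo-a -d alpine sleep 10000
    podman network connect foo-b test
    podman network disconnect foo-a test
    sleep 30                 # step 4: wait as long as you want
    podman rm -f test        # step 5: prints the CNI cache error before the fix
    # cleanup
    podman network rm foo-a foo-b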

Describe the results you received:
Please see output from step 5.

Describe the results you expected:
No errors.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Version:      3.1.0-dev
API Version:  3.0.0
Go Version:   go1.14.12
Git Commit:   426178a49991106ffe222f12cc42409ae78dd257-dirty
Built:        Tue Mar  2 16:08:11 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.19.6
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: Unknown
    path: /usr/local/libexec/podman/conmon
    version: 'conmon version 2.0.27-dev, commit: 7310bf13319ee8ed50799b202509bedc27b36cf8'
  cpus: 2
  distribution:
    distribution: '"rhel"'
    version: "8.3"
  eventLogger: file
  hostname: lingvs4.dev.edge-fabric.com
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-240.15.1.el8_3.x86_64
  linkmode: dynamic
  memFree: 2636328960
  memTotal: 8342470656
  ociRuntime:
    name: runc
    package: Unknown
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc93+dev
      commit: f245a1d1edbf545549e5a16106cf1aec356a3c7d
      spec: 1.0.2-dev
      go: go1.14.12
      libseccomp: 2.4.3
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_MKNOD,CAP_NET_BIND_SERVICE,CAP_NET_RAW,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    selinuxEnabled: true
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 2146758656
  swapTotal: 2146758656
  uptime: 316h 24m 18.01s (Approximately 13.17 days)
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 3
    paused: 0
    running: 3
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 10
  runRoot: /var/run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.0.0
  Built: 1614722891
  BuiltTime: Tue Mar  2 16:08:11 2021
  GitCommit: 426178a49991106ffe222f12cc42409ae78dd257-dirty
  GoVersion: go1.14.12
  OsArch: linux/amd64
  Version: 3.1.0-dev

Package info (e.g. output of rpm -q podman or apt list podman):

Podman is built from the latest master.

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes/No

Additional environment details (AWS, VirtualBox, physical, etc.):

openshift-ci-robot added the kind/bug label Mar 3, 2021
Luap99 self-assigned this Mar 4, 2021
Luap99 added the In Progress label Mar 4, 2021
Luap99 pushed a commit to Luap99/libpod that referenced this issue Mar 4, 2021
Make sure to pass the cni interface descriptions to cni teardowns.
Otherwise cni cannot find the correct cache files because the
interface name might not match the networks. This can only happen
when network disconnect was used.

Fixes containers#9602

Signed-off-by: Paul Holzinger <[email protected]>
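
The mismatch the commit describes can be seen in the libcni result cache (a sketch; /var/lib/cni/results is the libcni default cache location and may differ on other configurations, and the interface names below are only an example):

    # List the cached CNI results for the container; cache entries are
    # named <network>-<containerID>-<ifname>.
    CID=$(podman inspect --format '{{.Id}}' test)
    ls /var/lib/cni/results/ | grep "$CID"
    # e.g. foo-b may be cached with eth1 while teardown (before the fix)
    # looked it up with a different interface name, so the cached config
    # was not found and the error from step 5 was printed.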

Luap99 commented Mar 4, 2021

#9614 should fix this
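
A quick sanity check before removing the container, to confirm that only foo-b is still attached (a sketch; the template output may vary slightly between Podman versions):

    podman inspect test --format '{{range $name, $net := .NetworkSettings.Networks}}{{$name}} {{end}}'
    # expected output: foo-b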

github-actions bot added the locked - please file new issue/PR label Sep 22, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023