[FEATURE] Podman support #84

Closed · kkimdev opened this issue Jul 1, 2019 · 63 comments · Fixed by #987
Labels: enhancement (New feature or request), help wanted (Extra attention is needed)

@kkimdev commented Jul 1, 2019

Podman (https://podman.io/) is a drop-in alternative to Docker that fixes some of Docker's architectural issues, e.g. it runs without a daemon and supports rootless operation.

More info: https://developers.redhat.com/articles/podman-next-generation-linux-container-tools/

@kkimdev added the 'enhancement' label Jul 1, 2019
@iwilltry42 (Member)

Thanks for opening this issue!
But I think this would require a completely new project, since we rely heavily on the Docker API/SDK.

@kkimdev closed this as completed Jul 1, 2019
@minioin commented Jul 18, 2020

Since Podman 2.0 supports a Docker-compatible REST API, should this be reconsidered?

@iwilltry42 (Member)

@minioin, without having looked at the Podman side of things: could we just continue using the Docker SDK with the Podman endpoint?

@minioin commented Jul 23, 2020

That is the intended outcome (I'm not associated with Podman). However, there could be inconsistencies on both sides, in the SDK and in the Podman API, and they won't be found unless we start using them together. I could lend a hand if you need one.
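
(For illustration only, not from the original comment: since Docker SDK clients honor DOCKER_HOST, one quick smoke test of the compat API is to point a plain docker CLI at the Podman socket.)

podman system service --time=0 &
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker info    # answered by Podman's Docker-compatible endpoint, if all is well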

@iwilltry42 reopened this Aug 5, 2020
@iwilltry42 added the 'help wanted' label Aug 5, 2020
@garrett commented Aug 15, 2020

Copy/pasted here from inercia/k3x#16 (comment) (and also discussed a little in inercia/k3x#15):

Podman provides a Docker-like API as of Podman 2.0: https://podman.io/blogs/2020/07/01/rest-versioning.html

The API docs list the Docker-compatible API under "compat" at https://docs.podman.io/en/latest/_static/api.html (Podman also has its own API to do additional things, like handling pods).

I saw in a comment elsewhere on GitHub that getting a Podman service up and running is as simple as running:

podman system service --time=0 &
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock

That's for running podman without requiring root (in a user session), as it references $XDG_RUNTIME_DIR.

For system containers, it's:

sudo podman system service --time=0 &
export DOCKER_HOST=unix:///run/podman/podman.sock

To start up the service and specify a special URI, such as the Docker URI, for compatibility:

sudo podman system service --time=0 unix:/var/run/docker.sock

I found out some of this in the docs for podman system service. It's the same as running man podman-system-service (with Podman installed). There's help at the command line too: podman system service --help
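
(A quick check, assuming curl with Unix-socket support, that the compat endpoint is answering; Docker's ping route should return OK:)

curl --unix-socket $XDG_RUNTIME_DIR/podman/podman.sock http://localhost/_ping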

@minioin commented Aug 24, 2020

I tried to run k3d using sudo podman system service --time=0 unix:/var/run/docker.sock. The following output was observed:

ERRO[0000] Failed to list docker networks               
ERRO[0000] Failed to create cluster network             
ERRO[0000] Error response from daemon: filters for listing networks is not implemented 
ERRO[0000] Failed to create cluster >>> Rolling Back    
INFO[0000] Deleting cluster 'k3s-default'               
ERRO[0000] Failed to delete container ''                
WARN[0000] Failed to delete node '': Try to delete it manually 
INFO[0000] Deleting cluster network 'k3d-k3s-default'   
WARN[0000] Failed to delete cluster network 'k3d-k3s-default': 'Error: No such network: k3d-k3s-default' 
ERRO[0000] Failed to delete 1 nodes: Try to delete them manually 
FATA[0000] Cluster creation FAILED, also FAILED to rollback changes! 

@iwilltry42 (Member)

I guess there will be some little things missing in the API (like the filter for network lists), but I also think that we'll get to it eventually 👍
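
(As a hedged illustration of that gap: the failing call corresponds to Docker's network-list endpoint with a filters parameter, which can be reproduced directly against the compat socket; the filters value is URL-encoded JSON, here {"name":["k3d-k3s-default"]}.)

curl --unix-socket /var/run/docker.sock \
  'http://localhost/v1.40/networks?filters=%7B%22name%22%3A%5B%22k3d-k3s-default%22%5D%7D'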

@iwilltry42 added this to the 3.2.0 milestone Sep 2, 2020
@iwilltry42 self-assigned this Sep 2, 2020
@iwilltry42 modified the milestones: 3.2.0, v3.4.0 Nov 24, 2020
@iwilltry42 modified the milestones: v3.4.0, v4.1.0 Dec 4, 2020
@masterthefly

Hi - is podman support now available for k3d?

@06kellyjac

I'd imagine not, since 4.0.0 only recently came out and this is in the 4.1.0 milestone.

@iwilltry42 (Member)

Hi @masterthefly, no, there's no progress on this so far. I'll happily accept any PR though, as we have some higher priorities at the moment 🤔
Thanks for chiming in @06kellyjac 👍

@iwilltry42 modified the milestones: v4.1.0, v4.2.0 Feb 3, 2021
@masterthefly commented Feb 3, 2021 via email

@06kellyjac

https://www.github.com/rancher/k3d/tree/main/CONTRIBUTING.md

@iwilltry42 modified the milestones: v4.3.0, v4.4.0 Mar 10, 2021
@iwilltry42 removed this from the v4.4.5 milestone Jun 11, 2021
@geraldwuhoo

With the PRs above, it works, but I just realised k3d mounts /var/run/docker.sock into the tools container, which fails when the socket does not exist.

Also, the output kubeconfig is broken (it incorrectly parses DOCKER_HOST into https://unix:PORT).

I noticed this as well, and running in verbose mode, it appears that k3d reads an additional env var, DOCKER_SOCK. I've never seen it mentioned anywhere (it wasn't set on my system, so it defaulted to /var/run/docker.sock). Setting it equal to DOCKER_HOST (minus the unix:// prefix) "resolved" this. Not sure whether this is intentional behavior, but it does seem strange that k3d doesn't derive it from DOCKER_HOST.
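
(A minimal sketch of that workaround, assuming a rootless Podman socket:)

export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
export DOCKER_SOCK=${DOCKER_HOST#unix://}   # strip the scheme, as described above
k3d cluster create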

@serverwentdown (Contributor) commented Feb 23, 2022

It even works okay if /var/run/docker.sock is just an empty file (the image imports will fail, but the cluster will still start and work).

@johnhamelink commented Feb 24, 2022

@johnhamelink It looks like you were able to get bridge networking working on rootless; do you mind posting your configuration? Do you mind creating the cluster again in verbose mode?

Certainly! See below from my notes - hope this is helpful!

Table of Contents

  1. Install Podman
  2. Rootless Podman
  3. Make Podman handle registries like Docker
  4. Set docker host
  5. Test that the image can be pulled from docker hub by default
  6. Test network creation using bridge mode:
  7. Run k3d

Install Podman

yay -Rs docker docker-compose
yay -S podman podman-docker

Rootless Podman

Follow the guide for setting up rootless Podman in the Arch Wiki

Make Podman handle registries like Docker

Set unqualified-search-registries = ["docker.io"] in /etc/containers/registries.conf

Set docker host

Add export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/podman/podman.sock" to ~/.zshenv and source it

Test that the image can be pulled from docker hub by default

Run podman pull alpine to test everything so far

Test network creation using bridge mode:

podman network create foo
podman run --rm -it --network=foo docker.io/library/alpine:latest ip addr

This should return valid IPs, like so:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 6a:b6:d2:f5:61:00 brd ff:ff:ff:ff:ff:ff
    inet 10.88.2.2/24 brd 10.88.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::68b6:d2ff:fef5:6100/64 scope link 
       valid_lft forever preferred_lft forever

Run k3d

Run systemctl --user start podman. Then, with the following config saved as k3d.yaml:

---
apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: MB
servers: 1
agents: 2
registries:
  create:
    name: MB
    hostPort: "5000"
  config: |
    mirrors:
      "k3d-registry":
        endpoint:
          - "http://k3d-registry.localhost:5000"

Run k3d cluster create --verbose -c k3d.yaml

This produces the following:

❯ k3d cluster create --verbose -c k3d.yaml
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0001] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:3.4.4 OSType:linux OS:arch Arch:amd64 CgroupVersion:2 CgroupDriver:systemd Filesystem:extfs} 
DEBU[0001] Additional CLI Configuration:
cli:
  api-port: ""
  env: []
  k3s-node-labels: []
  k3sargs: []
  ports: []
  registries:
    create: ""
  runtime-labels: []
  volumes: []
hostaliases: [] 
DEBU[0001] Validating file /tmp/k3d-config-tmp-k3d.yaml2874904885 against default JSONSchema... 
DEBU[0001] JSON Schema Validation Result: &{errors:[] score:46} 
INFO[0001] Using config file k3d.yaml (k3d.io/v1alpha3#simple) 
DEBU[0001] Configuration:
agents: 2
apiversion: k3d.io/v1alpha3
image: docker.io/rancher/k3s:v1.22.6-k3s1
kind: Simple
name: MB
network: ""
options:
  k3d:
    disableimagevolume: false
    disableloadbalancer: false
    disablerollback: false
    loadbalancer:
      configoverrides: []
    timeout: 0s
    wait: true
  kubeconfig:
    switchcurrentcontext: true
    updatedefaultkubeconfig: true
  runtime:
    agentsmemory: ""
    gpurequest: ""
    hostpidmode: false
    serversmemory: ""
registries:
  config: |
    mirrors:
      "k3d-registry":
	endpoint:
	  - "http://k3d-registry.localhost:5000"
  create:
    hostport: "5000"
    name: MB
  use: []
servers: 1
subnet: ""
token: "" 
WARN[0001] Default config apiVersion is 'k3d.io/v1alpha4', but you're using 'k3d.io/v1alpha3': consider migrating. 
DEBU[0001] Migrating v1alpha3 to v1alpha4               
DEBU[0001] Migrated config: {TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:MB} Servers:1 Agents:2 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:0xc0004a0210 Config:mirrors:
  "k3d-registry":
    endpoint:
      - "http://k3d-registry.localhost:5000"
} HostAliases:[]} 
DEBU[0001] JSON Schema Validation Result: &{errors:[] score:100} 
DEBU[0001] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:MB} Servers:1 Agents:2 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:0xc0004a0210 Config:mirrors:
  "k3d-registry":
    endpoint:
      - "http://k3d-registry.localhost:5000"
} HostAliases:[]}
========================== 
DEBU[0001] ========== Merged Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:MB} Servers:1 Agents:2 ExposeAPI:{Host: HostIP: HostPort:39337} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:0xc0004a0210 Config:mirrors:
  "k3d-registry":
    endpoint:
      - "http://k3d-registry.localhost:5000"
} HostAliases:[]}
========================== 
DEBU[0001] generated loadbalancer config:
ports:
  6443.tcp:
  - k3d-MB-server-0
settings:
  workerConnections: 1024 
DEBU[0001] Found multiline registries config embedded in SimpleConfig:
mirrors:
  "k3d-registry":
    endpoint:
      - "http://k3d-registry.localhost:5000" 
DEBU[0001] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:MB Network:{Name:k3d-MB ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc0003d64e0 0xc0003d69c0 0xc0003d6b60 0xc0003d6d00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000296c40 ServerLoadBalancer:0xc0001e0df0 ImageVolume: Volumes:[]} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] HostAliases:[] Registries:{Create:0xc000459e10 Use:[] Config:0xc00048c8d0}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== ===== 
DEBU[0001] '--kubeconfig-update-default set: enabling wait-for-server 
INFO[0001] Prep: Network                                
INFO[0001] Created network 'k3d-MB'                     
INFO[0001] Created image volume k3d-MB-images           
INFO[0001] Creating node 'MB'                           
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0001] Detected CgroupV2, enabling custom entrypoint (disable by setting K3D_FIX_CGROUPV2=false) 
WARN[0001] Failed to get network information: Error: No such network: bridge 
ERRO[0001] Failed Cluster Preparation: Failed to create registry: failed to create registry node 'MB': runtime failed to create node 'MB': failed to create container for node 'MB': docker failed to create container 'MB': Error response from daemon: container create: unable to find network configuration for bridge: network not found 
ERRO[0001] Failed to create cluster >>> Rolling Back    
INFO[0001] Deleting cluster 'MB'                        
ERRO[0001] failed to get cluster: No nodes found for given cluster 
FATA[0001] Cluster creation FAILED, also FAILED to rollback changes! 

Running podman network create bridge then allows us to progress further:

❯ k3d cluster create --verbose -c k3d.yaml
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:3.4.4 OSType:linux OS:arch Arch:amd64 CgroupVersion:2 CgroupDriver:systemd Filesystem:extfs} 
DEBU[0000] Additional CLI Configuration:
cli:
  api-port: ""
  env: []
  k3s-node-labels: []
  k3sargs: []
  ports: []
  registries:
    create: ""
  runtime-labels: []
  volumes: []
hostaliases: [] 
DEBU[0000] Validating file /tmp/k3d-config-tmp-k3d.yaml3044338603 against default JSONSchema... 
DEBU[0000] JSON Schema Validation Result: &{errors:[] score:46} 
INFO[0000] Using config file k3d.yaml (k3d.io/v1alpha3#simple) 
DEBU[0000] Configuration:
agents: 2
apiversion: k3d.io/v1alpha3
image: docker.io/rancher/k3s:v1.22.6-k3s1
kind: Simple
name: MB
network: ""
options:
  k3d:
    disableimagevolume: false
    disableloadbalancer: false
    disablerollback: false
    loadbalancer:
      configoverrides: []
    timeout: 0s
    wait: true
  kubeconfig:
    switchcurrentcontext: true
    updatedefaultkubeconfig: true
  runtime:
    agentsmemory: ""
    gpurequest: ""
    hostpidmode: false
    serversmemory: ""
registries:
  config: |
    mirrors:
      "k3d-registry":
	endpoint:
	  - "http://k3d-registry.localhost:5000"
  create:
    hostport: "5000"
    name: MB
  use: []
servers: 1
subnet: ""
token: "" 
WARN[0000] Default config apiVersion is 'k3d.io/v1alpha4', but you're using 'k3d.io/v1alpha3': consider migrating. 
DEBU[0000] Migrating v1alpha3 to v1alpha4               
DEBU[0000] Migrated config: {TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:MB} Servers:1 Agents:2 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:0xc0002942a0 Config:mirrors:
  "k3d-registry":
    endpoint:
      - "http://k3d-registry.localhost:5000"
} HostAliases:[]} 
DEBU[0000] JSON Schema Validation Result: &{errors:[] score:100} 
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:MB} Servers:1 Agents:2 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:0xc0002942a0 Config:mirrors:
  "k3d-registry":
    endpoint:
      - "http://k3d-registry.localhost:5000"
} HostAliases:[]}
========================== 
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:MB} Servers:1 Agents:2 ExposeAPI:{Host: HostIP: HostPort:41963} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:0xc0002942a0 Config:mirrors:
  "k3d-registry":
    endpoint:
      - "http://k3d-registry.localhost:5000"
} HostAliases:[]}
========================== 
DEBU[0000] generated loadbalancer config:
ports:
  6443.tcp:
  - k3d-MB-server-0
settings:
  workerConnections: 1024 
DEBU[0000] Found multiline registries config embedded in SimpleConfig:
mirrors:
  "k3d-registry":
    endpoint:
      - "http://k3d-registry.localhost:5000" 
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:MB Network:{Name:k3d-MB ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc0005824e0 0xc0005829c0 0xc000582b60 0xc000582d00] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc00071c7c0 ServerLoadBalancer:0xc0002e74e0 ImageVolume: Volumes:[]} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] HostAliases:[] Registries:{Create:0xc000281e10 Use:[] Config:0xc00028e690}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== ===== 
DEBU[0000] '--kubeconfig-update-default set: enabling wait-for-server 
INFO[0000] Prep: Network                                
DEBU[0000] Found network {Name:k3d-MB ID:8633a6bcaf70a010f6ad739f9e32cfa9cd751630215e818f2101f97f30914412 Created:2022-02-24 14:10:47.224368561 +0000 UTC Scope:local Driver:bridge EnableIPv6:false IPAM:{Driver:default Options:map[] Config:[{Subnet:10.88.2.0/24 IPRange: Gateway:10.88.2.1 AuxAddress:map[]}]} Internal:false Attachable:false Ingress:false ConfigFrom:{Network:} ConfigOnly:false Containers:map[] Options:map[] Labels:map[app:k3d] Peers:[] Services:map[]} 
INFO[0000] Re-using existing network 'k3d-MB' (8633a6bcaf70a010f6ad739f9e32cfa9cd751630215e818f2101f97f30914412) 
INFO[0000] Created image volume k3d-MB-images           
INFO[0000] Creating node 'MB'                           
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0001] Detected CgroupV2, enabling custom entrypoint (disable by setting K3D_FIX_CGROUPV2=false) 
DEBU[0001] Created container MB (ID: 875cbc9340e268ffb682867eb97bbb874316b048e7202fc83123292b5de12249) 
INFO[0001] Successfully created registry 'MB'           
DEBU[0001] no netlabel present on container /MB         
DEBU[0001] failed to get IP for container /MB as we couldn't find the cluster network 
DEBU[0001] no netlabel present on container /MB         
DEBU[0001] failed to get IP for container /MB as we couldn't find the cluster network 
DEBU[0001] [Docker] DockerHost: 'unix:///run/user/1000/podman/podman.sock' (unix:///run/user/1000/podman/podman.sock) 
INFO[0001] Starting new tools node...                   
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0001] Created container k3d-MB-tools (ID: 26b261cc963636e5e8d3563ea37f844b73c19a91f4c88f5d540e4d9c91b1aadd) 
DEBU[0001] Node k3d-MB-tools Start Time: 2022-02-24 14:11:16.048582396 +0000 GMT m=+1.247392738 
INFO[0001] Starting Node 'k3d-MB-tools'                 
DEBU[0001] Truncated 2022-02-24 14:11:16.244614705 +0000 UTC to 2022-02-24 14:11:16 +0000 UTC 
INFO[0002] Creating node 'k3d-MB-server-0'              
DEBU[0002] Created container k3d-MB-server-0 (ID: 956e13dac76be6fe6c77f1d880c897a5ba79c3944f09799c6b6059c6d0bbcc99) 
DEBU[0002] Created node 'k3d-MB-server-0'               
INFO[0002] Creating node 'k3d-MB-agent-0'               
DEBU[0002] Created container k3d-MB-agent-0 (ID: 2cec6c0ca8bdb1683fe693b1b564fa2db74c7513adedecf9d6d71681090bb611) 
DEBU[0002] Created node 'k3d-MB-agent-0'                
INFO[0002] Creating node 'k3d-MB-agent-1'               
DEBU[0002] Created container k3d-MB-agent-1 (ID: 9d7e5c3c1ce2f94e61c9f3c9b1a335f20b4b78b01e3115ee1fdd32e7d78d9af3) 
DEBU[0002] Created node 'k3d-MB-agent-1'                
INFO[0002] Creating LoadBalancer 'k3d-MB-serverlb'      
DEBU[0002] Created container k3d-MB-serverlb (ID: 050333ef064bffd2aa51b42830dad49925572362f54400e1a3c06562e8b1f2e1) 
DEBU[0002] Created loadbalancer 'k3d-MB-serverlb'       
DEBU[0002] DOCKER_SOCK=/var/run/docker.sock             
INFO[0002] Using the k3d-tools node to gather environment information 
DEBU[0002] no netlabel present on container /k3d-MB-tools 
DEBU[0002] failed to get IP for container /k3d-MB-tools as we couldn't find the cluster network 
DEBU[0003] DOCKER_SOCK=/var/run/docker.sock             
INFO[0003] HostIP: using network gateway 10.88.2.1 address 
INFO[0003] Starting cluster 'MB'                        
INFO[0003] Starting servers...                          
DEBU[0003] >>> enabling cgroupsv2 magic                 
DEBU[0003] Node k3d-MB-server-0 Start Time: 2022-02-24 14:11:17.856348623 +0000 GMT m=+3.055158994 
DEBU[0003] Deleting node k3d-MB-tools ...               
INFO[0003] Starting Node 'k3d-MB-server-0'              
DEBU[0004] Truncated 2022-02-24 14:11:18.856413342 +0000 UTC to 2022-02-24 14:11:18 +0000 UTC 
DEBU[0004] Waiting for node k3d-MB-server-0 to get ready (Log: 'k3s is up and running') 
WARN[0018] warning: encountered fatal log from node k3d-MB-server-0 (retrying 0/10): Mtime="2022-02-24T14:11:32Z" level=fatal msg="failed to find cpu cgroup (v2)" 
ERRO[0018] Failed Cluster Start: Failed to start server k3d-MB-server-0: Node k3d-MB-server-0 failed to get ready: Failed waiting for log message 'k3s is up and running' from node 'k3d-MB-server-0': node 'k3d-MB-server-0' (container '956e13dac76be6fe6c77f1d880c897a5ba79c3944f09799c6b6059c6d0bbcc99') not running 
ERRO[0018] Failed to create cluster >>> Rolling Back    
INFO[0018] Deleting cluster 'MB'                        
DEBU[0018] no netlabel present on container /MB         
DEBU[0018] failed to get IP for container /MB as we couldn't find the cluster network 
DEBU[0018] Cluster Details: &{Name:MB Network:{Name:k3d-MB ID:8633a6bcaf70a010f6ad739f9e32cfa9cd751630215e818f2101f97f30914412 External:true IPAM:{IPPrefix:10.88.2.0/24 IPsUsed:[10.88.2.1] Managed:false} Members:[]} Token:ABmfBwcdGuaRlXkoYaTv Nodes:[0xc0005824e0 0xc0005829c0 0xc000582b60 0xc000582d00 0xc000583860] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc00071c7c0 ServerLoadBalancer:0xc0002e74e0 ImageVolume:k3d-MB-images Volumes:[k3d-MB-images k3d-MB-images]} 
DEBU[0018] Deleting node k3d-MB-serverlb ...            
DEBU[0018] Deleting node k3d-MB-server-0 ...            
DEBU[0019] Deleting node k3d-MB-agent-0 ...             
DEBU[0019] Deleting node k3d-MB-agent-1 ...             
DEBU[0019] Deleting node MB ...                         
DEBU[0019] Skip deletion of cluster network 'k3d-MB' because it's managed externally 
INFO[0019] Deleting 2 attached volumes...               
DEBU[0019] Deleting volume k3d-MB-images...             
DEBU[0019] Deleting volume k3d-MB-images...             
WARN[0019] Failed to delete volume 'k3d-MB-images' of cluster 'failed to find volume 'k3d-MB-images': Error: No such volume: k3d-MB-images': MB -> Try to delete it manually 
FATA[0019] Cluster creation FAILED, all changes have been rolled back! 

@geraldwuhoo commented Feb 24, 2022

@johnhamelink Wow, thank you for the detailed write-up! Unfortunately, I have already done all of these, and in fact I can even assign a static IP to rootless containers directly:

λ › podman network create foo
foo
λ › podman network inspect foo
[
     {
          "name": "foo",
          "id": "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
          "driver": "bridge",
          "network_interface": "cni-podman1",
          "created": "2022-02-24T11:44:08.064526708-08:00",
          "subnets": [
               {
                    "subnet": "10.89.0.0/24",
                    "gateway": "10.89.0.1"
               }
          ],
          "ipv6_enabled": false,
          "internal": false,
          "dns_enabled": true,
          "ipam_options": {
               "driver": "host-local"
          }
     }
]
λ › podman run --rm -it --network=foo --ip=10.89.0.5 docker.io/library/alpine:latest ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether ba:40:14:29:be:63 brd ff:ff:ff:ff:ff:ff
    inet 10.89.0.5/24 brd 10.89.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::b840:14ff:fe29:be63/64 scope link tentative
       valid_lft forever preferred_lft forever

However, cluster creation still fails immediately at the beginning:

λ › k3d cluster create --config ~/.kube/k3d.yaml
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:4.0.0-dev OSType:linux OS:arch Arch:amd64 CgroupVersion:2 CgroupDriver:systemd Filesystem:zfs}
DEBU[0000] Additional CLI Configuration:
cli:
  api-port: ""
  env: []
  k3s-node-labels: []
  k3sargs: []
  ports: []
  registries:
    create: ""
  runtime-labels: []
  volumes: []
hostaliases: []
DEBU[0000] Validating file /tmp/k3d-config-tmp-k3d.yaml2080189530 against default JSONSchema...
DEBU[0000] JSON Schema Validation Result: &{errors:[] score:73}
INFO[0000] Using config file /home/jerry/.kube/k3d.yaml (k3d.io/v1alpha4#simple)
[truncated]
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:k3s-default Network:{Name:k3d-k3s-default ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc000405a00 0xc000405ba0] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc000119880 ServerLoadBalancer:0xc00029aa20 ImageVolume: Volumes:[]} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] HostAliases:[] Registries:{Create:<nil> Use:[] Config:0xc0002a80c0}} KubeconfigOpts:{UpdateDefaultKubeconfig:false SwitchCurrentContext:true}}
===== ===== =====
INFO[0000] Prep: Network
DEBU[0000] Found network {Name:k3d-k3s-default ID:89a5dde53e7c97671dfc4c2ede2d906feeac60b2bad51490f5683f379b649776 Created:0001-01-01 00:00:00 +0000 UTC Scope:local Driver:bridge EnableIPv6:false IPAM:{Driver:default Options:map[driver:host-local] Config:[{Subnet:10.89.0.0/24 IPRange: Gateway:10.89.0.1 AuxAddress:map[]}]} Internal:false Attachable:false Ingress:false ConfigFrom:{Network:} ConfigOnly:false Containers:map[] Options:map[] Labels:map[app:k3d] Peers:[] Services:map[]}
INFO[0000] Re-using existing network 'k3d-k3s-default' (89a5dde53e7c97671dfc4c2ede2d906feeac60b2bad51490f5683f379b649776)
INFO[0000] Created image volume k3d-k3s-default-images
INFO[0000] Starting new tools node...
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock
DEBU[0000] Detected CgroupV2, enabling custom entrypoint (disable by setting K3D_FIX_CGROUPV2=false)
ERRO[0000] Failed to run tools container for cluster 'k3s-default'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
ERRO[0001] Failed Cluster Creation: failed setup of server/agent node k3d-k3s-default-server-0: failed to create node: runtime failed to create node 'k3d-k3s-default-server-0': failed to create container for node 'k3d-k3s-default-server-0': docker failed to create container 'k3d-k3s-default-server-0': Error response from daemon: container create: invalid config provided: Networks and static ip/mac address can only be used with Bridge mode networking
ERRO[0001] Failed to create cluster >>> Rolling Back
INFO[0001] Deleting cluster 'k3s-default'
ERRO[0001] failed to get cluster: No nodes found for given cluster

Even though the network k3d created is in bridge mode and I can create a static IP container on it manually:

λ › podman network inspect k3d-k3s-default
[
     {
          "name": "k3d-k3s-default",
          "id": "89a5dde53e7c97671dfc4c2ede2d906feeac60b2bad51490f5683f379b649776",
          "driver": "bridge",
          "network_interface": "cni-podman1",
          "created": "2022-02-24T11:55:39.831735268-08:00",
          "subnets": [
               {
                    "subnet": "10.89.0.0/24",
                    "gateway": "10.89.0.1"
               }
          ],
          "ipv6_enabled": false,
          "internal": false,
          "dns_enabled": true,
          "labels": {
               "app": "k3d"
          },
          "ipam_options": {
               "driver": "host-local"
          }
     }
]
λ › podman run --rm -it --network=k3d-k3s-default --ip=10.89.1.5 docker.io/library/alpine:latest ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if4: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether ca:a4:24:a7:c8:f9 brd ff:ff:ff:ff:ff:ff
    inet 10.89.0.5/24 brd 10.89.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::c8a4:24ff:fea7:c8f9/64 scope link tentative
       valid_lft forever preferred_lft forever

This looks specific to my machine since rootless podman appears to get past this point for everyone else, so I'll work on my end to figure it out — don't want to turn the issue thread into a troubleshooting session.

@johnhamelink commented Feb 25, 2022

So after enabling cgroup v1 by setting the systemd.unified_cgroup_hierarchy=0 kernel parameter, k3d fails like so:

ERRO[0002] failed to gather environment information used for cluster creation: failed to run k3d-tools node for cluster 'MB': failed to create node 'k3d-MB-tools': runtime failed to create node 'k3d-MB-tools': failed to create container for node 'k3d-MB-tools': docker failed to create container 'k3d-MB-tools': Error response from daemon: container create: statfs /var/run/docker.sock: permission denied
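
(For reference, a hedged sketch of setting that kernel parameter on a GRUB-based Arch system; an illustration, not part of the original comment:)

# append systemd.unified_cgroup_hierarchy=0 to GRUB_CMDLINE_LINUX in /etc/default/grub, then:
sudo grub-mkconfig -o /boot/grub/grub.cfg
sudo reboot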

After running podman system service --time=0 unix:///var/run/docker.sock and trying again, k3d successfully registers a server, but then hangs while waiting for an agent to come up:

❯ k3d cluster create --verbose -c k3d.yaml
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:3.4.4 OSType:linux OS:arch Arch:amd64 CgroupVersion:1 CgroupDriver:cgroupfs Filesystem:extfs} 
DEBU[0000] Additional CLI Configuration:
cli:
  api-port: ""
  env: []
  k3s-node-labels: []
  k3sargs: []
  ports: []
  registries:
    create: ""
  runtime-labels: []
  volumes: []
hostaliases: [] 
DEBU[0000] Validating file /tmp/k3d-config-tmp-k3d.yaml841035332 against default JSONSchema... 
DEBU[0000] JSON Schema Validation Result: &{errors:[] score:54} 
INFO[0000] Using config file k3d.yaml (k3d.io/v1alpha4#simple) 
DEBU[0000] Configuration:
agents: 2
apiversion: k3d.io/v1alpha4
image: docker.io/rancher/k3s:v1.22.6-k3s1
kind: Simple
metadata:
  name: MB
network: bridge
options:
  k3d:
    disableimagevolume: false
    disableloadbalancer: false
    disablerollback: false
    loadbalancer:
      configoverrides: []
    timeout: 0s
    wait: true
  kubeconfig:
    switchcurrentcontext: true
    updatedefaultkubeconfig: true
  runtime:
    agentsmemory: ""
    gpurequest: ""
    hostpidmode: false
    serversmemory: ""
registries:
  config: |
    mirrors:
      "k3d-registry":
        endpoint:
          - "http://k3d-registry.localhost:5000"
  create:
    hostport: "5000"
    name: MB
  use: []
servers: 1
subnet: ""
token: "" 
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:MB} Servers:1 Agents:2 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network:bridge Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:0xc00029dda0 Config:mirrors:
  "k3d-registry":
    endpoint:
      - "http://k3d-registry.localhost:5000"
} HostAliases:[]}
========================== 
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:MB} Servers:1 Agents:2 ExposeAPI:{Host: HostIP: HostPort:46195} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network:bridge Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:0xc00029dda0 Config:mirrors:
  "k3d-registry":
    endpoint:
      - "http://k3d-registry.localhost:5000"
} HostAliases:[]}
========================== 
DEBU[0000] generated loadbalancer config:
ports:
  6443.tcp:
  - k3d-MB-server-0
settings:
  workerConnections: 1024 
DEBU[0000] Found multiline registries config embedded in SimpleConfig:
mirrors:
  "k3d-registry":
    endpoint:
      - "http://k3d-registry.localhost:5000" 
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:MB Network:{Name:bridge ID: External:true IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc0000cd6c0 0xc0000cd860 0xc0000cda00 0xc0000cdba0] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc00013ce80 ServerLoadBalancer:0xc0002fa3b0 ImageVolume: Volumes:[]} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] HostAliases:[] Registries:{Create:0xc0003a15f0 Use:[] Config:0xc00025a8a0}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== ===== 
DEBU[0000] '--kubeconfig-update-default set: enabling wait-for-server 
INFO[0000] Prep: Network                                
DEBU[0000] Found network {Name:bridge ID:17f29b073143d8cd97b5bbe492bdeffec1c5fee55cc1fe2112c8b9335f8b6121 Created:2022-02-24 14:11:13.113752904 +0000 UTC Scope:local Driver:bridge EnableIPv6:false IPAM:{Driver:default Options:map[] Config:[{Subnet:10.88.3.0/24 IPRange: Gateway:10.88.3.1 AuxAddress:map[]}]} Internal:false Attachable:false Ingress:false ConfigFrom:{Network:} ConfigOnly:false Containers:map[] Options:map[] Labels:map[] Peers:[] Services:map[]} 
INFO[0000] Re-using existing network 'bridge' (17f29b073143d8cd97b5bbe492bdeffec1c5fee55cc1fe2112c8b9335f8b6121) 
INFO[0000] Created image volume k3d-MB-images           
INFO[0000] Creating node 'MB'                           
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0000] Created container MB (ID: 9ff08854c055b508207c902c631a4b38e459ee77f2365d2de518997a1f315987) 
INFO[0000] Successfully created registry 'MB'           
DEBU[0000] no netlabel present on container /MB         
DEBU[0000] failed to get IP for container /MB as we couldn't find the cluster network 
DEBU[0000] no netlabel present on container /MB         
DEBU[0000] failed to get IP for container /MB as we couldn't find the cluster network 
INFO[0000] Container 'MB' is already connected to 'bridge' 
DEBU[0000] [Docker] DockerHost: 'unix:///run/user/1000/podman/podman.sock' (unix:///run/user/1000/podman/podman.sock) 
INFO[0000] Starting new tools node...                   
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0000] Created container k3d-MB-tools (ID: e9d3b91904ed263dabc6eff2fbfda6661d9011ca9f5810093bf4e5e5754a38e9) 
DEBU[0000] Node k3d-MB-tools Start Time: 2022-02-25 10:25:41.091430337 +0000 GMT m=+0.917906869 
INFO[0000] Starting Node 'k3d-MB-tools'                 
DEBU[0001] Truncated 2022-02-25 10:25:41.312800917 +0000 UTC to 2022-02-25 10:25:41 +0000 UTC 
INFO[0001] Creating node 'k3d-MB-server-0'              
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0001] Created container k3d-MB-server-0 (ID: 8de84df0fe2acb98bd404920a4b06898eea85504e975dcd29b041839f1aca81a) 
DEBU[0001] Created node 'k3d-MB-server-0'               
INFO[0001] Creating node 'k3d-MB-agent-0'               
DEBU[0002] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0002] Created container k3d-MB-agent-0 (ID: bfb83a0f63dacd9d190cf2f20751d3b7d68ec713bfab2ee7b990b5b6073171a2) 
DEBU[0002] Created node 'k3d-MB-agent-0'                
INFO[0002] Creating node 'k3d-MB-agent-1'               
DEBU[0002] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0002] Created container k3d-MB-agent-1 (ID: e995d2ae2272f56d6168c10c564b4d252732c110f38d54fba0ef9396ce8230f6) 
DEBU[0002] Created node 'k3d-MB-agent-1'                
INFO[0002] Creating LoadBalancer 'k3d-MB-serverlb'      
DEBU[0002] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0002] Created container k3d-MB-serverlb (ID: ac4b9080126704e029cf38398623b3c445bec3b83404edf89bd9f55a1009f604) 
DEBU[0002] Created loadbalancer 'k3d-MB-serverlb'       
DEBU[0002] DOCKER_SOCK=/var/run/docker.sock             
INFO[0002] Using the k3d-tools node to gather environment information 
DEBU[0002] no netlabel present on container /k3d-MB-tools 
DEBU[0002] failed to get IP for container /k3d-MB-tools as we couldn't find the cluster network 
DEBU[0003] DOCKER_SOCK=/var/run/docker.sock             
INFO[0003] HostIP: using network gateway 10.88.3.1 address 
INFO[0003] Starting cluster 'MB'                        
INFO[0003] Starting servers...                          
DEBU[0003] Deleting node k3d-MB-tools ...               
DEBU[0003] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0003] No fix enabled.                              
DEBU[0003] Node k3d-MB-server-0 Start Time: 2022-02-25 10:25:43.629199131 +0000 GMT m=+3.455675648 
INFO[0003] Starting Node 'k3d-MB-server-0'              
DEBU[0003] Truncated 2022-02-25 10:25:44.068160949 +0000 UTC to 2022-02-25 10:25:44 +0000 UTC 
DEBU[0003] Waiting for node k3d-MB-server-0 to get ready (Log: 'k3s is up and running') 
DEBU[0008] Finished waiting for log message 'k3s is up and running' from node 'k3d-MB-server-0' 
INFO[0008] Starting agents...                           
DEBU[0008] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0008] No fix enabled.                              
DEBU[0008] Node k3d-MB-agent-1 Start Time: 2022-02-25 10:25:49.003795179 +0000 GMT m=+8.830271747 
DEBU[0008] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0008] No fix enabled.                              
DEBU[0008] Node k3d-MB-agent-0 Start Time: 2022-02-25 10:25:49.016064825 +0000 GMT m=+8.842541386 
INFO[0009] Starting Node 'k3d-MB-agent-1'               
INFO[0009] Starting Node 'k3d-MB-agent-0'               
DEBU[0009] Truncated 2022-02-25 10:25:49.304455169 +0000 UTC to 2022-02-25 10:25:49 +0000 UTC 
DEBU[0009] Waiting for node k3d-MB-agent-1 to get ready (Log: 'Successfully registered node') 
DEBU[0009] Truncated 2022-02-25 10:25:49.401069603 +0000 UTC to 2022-02-25 10:25:49 +0000 UTC 
DEBU[0009] Waiting for node k3d-MB-agent-0 to get ready (Log: 'Successfully registered node')

Running podman logs on an agent shows a stream of the following error:

time="2022-02-25T10:29:19Z" level=error msg="failed to get CA certs: Get \"https://127.0.0.1:6444/cacerts\": read tcp 127.0.0.1:44918->127.0.0.1:6444: read: connection reset by peer"

@serverwentdown (Contributor)

@geraldwuhoo You're hitting the error I attempted to fix in #986; try applying that patch.
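
(A hedged sketch of trying that patch, assuming a Go toolchain and the repo's standard Makefile:)

git clone https://github.com/k3d-io/k3d.git && cd k3d
git fetch origin pull/986/head:pr-986 && git checkout pr-986
make build    # the binary should land in ./bin/k3d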

@johnhamelink Try using Podman v4

@serverwentdown (Contributor)

Running k3d on Podman

Requirements

Using Podman

Ensure the Podman system socket is available:

sudo systemctl enable --now podman.socket
# or sudo podman system service --time=0

To point k3d at the right Docker socket, create a symbolic link:

sudo ln -s /run/podman/podman.sock /var/run/docker.sock
# or install your distribution's podman-docker package if available
sudo k3d cluster create
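
(Optionally, a hedged sanity check that the socket is in place and listening:)

systemctl status podman.socket    # should be active (listening)
ls -l /var/run/docker.sock        # should point at /run/podman/podman.sock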

Using rootless Podman

Make a fake system-wide Docker socket (for now):

sudo touch /var/run/docker.sock
sudo chmod a+rw /var/run/docker.sock

Ensure the Podman user socket is available:

systemctl --user enable --now podman.socket
# or podman system service --time=0

Set DOCKER_HOST when running k3d:

XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR:-/run/user/$(id -u)}
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
k3d cluster create

@johnhamelink commented Mar 1, 2022

@serverwentdown I had a go at your instructions above, but I'm still having issues with rootless Podman and bridge networking after installing podman-git and podman-docker-git and building k3d from #986:

❯ systemctl --user start podman.socket
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
bin/k3d cluster create --verbose
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0000] Runtime Info:
&{Name:docker Endpoint:/var/run/docker.sock Version:4.0.0-dev OSType:linux OS:arch Arch:amd64 CgroupVersion:1 CgroupDriver:cgroupfs Filesystem:extfs} 
DEBU[0000] Additional CLI Configuration:
cli:
  api-port: ""
  env: []
  k3s-node-labels: []
  k3sargs: []
  ports: []
  registries:
    create: ""
  runtime-labels: []
  volumes: []
hostaliases: [] 
DEBU[0000] Configuration:
agents: 0
image: docker.io/rancher/k3s:v1.22.6-k3s1
network: ""
options:
  k3d:
    disableimagevolume: false
    disableloadbalancer: false
    disablerollback: false
    loadbalancer:
      configoverrides: []
    timeout: 0s
    wait: true
  kubeconfig:
    switchcurrentcontext: true
    updatedefaultkubeconfig: true
  runtime:
    agentsmemory: ""
    gpurequest: ""
    hostpidmode: false
    serversmemory: ""
registries:
  config: ""
  use: []
servers: 1
subnet: ""
token: "" 
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:} Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:} HostAliases:[]}
========================== 
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind:Simple APIVersion:k3d.io/v1alpha4} ObjectMeta:{Name:} Servers:1 Agents:0 ExposeAPI:{Host: HostIP: HostPort:39181} Image:docker.io/rancher/k3s:v1.22.6-k3s1 Network: Subnet: ClusterToken: Volumes:[] Ports:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:false DisableImageVolume:false NoRollback:false NodeHookActions:[] Loadbalancer:{ConfigOverrides:[]}} K3sOptions:{ExtraArgs:[] NodeLabels:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest: ServersMemory: AgentsMemory: HostPidMode:false Labels:[]}} Env:[] Registries:{Use:[] Create:<nil> Config:} HostAliases:[]}
========================== 
DEBU[0000] generated loadbalancer config:
ports:
  6443.tcp:
  - k3d-k3s-default-server-0
settings:
  workerConnections: 1024 
DEBU[0000] ===== Merged Cluster Config =====
&{TypeMeta:{Kind: APIVersion:} Cluster:{Name:k3s-default Network:{Name:k3d-k3s-default ID: External:false IPAM:{IPPrefix:zero IPPrefix IPsUsed:[] Managed:false} Members:[]} Token: Nodes:[0xc0005036c0 0xc000503860] InitNode:<nil> ExternalDatastore:<nil> KubeAPI:0xc00041cd40 ServerLoadBalancer:0xc000426890 ImageVolume: Volumes:[]} ClusterCreateOpts:{DisableImageVolume:false WaitForServer:true Timeout:0s DisableLoadBalancer:false GPURequest: ServersMemory: AgentsMemory: NodeHooks:[] GlobalLabels:map[app:k3d] GlobalEnv:[] HostAliases:[] Registries:{Create:<nil> Use:[] Config:<nil>}} KubeconfigOpts:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true}}
===== ===== ===== 
DEBU[0000] '--kubeconfig-update-default set: enabling wait-for-server 
INFO[0000] Prep: Network                                
INFO[0000] Created network 'k3d-k3s-default'            
INFO[0000] Created image volume k3d-k3s-default-images  
DEBU[0000] [Docker] DockerHost: 'unix:///run/user/1000/podman/podman.sock' (unix:///run/user/1000/podman/podman.sock) 
INFO[0000] Starting new tools node...                   
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock             
DEBU[0000] DOCKER_SOCK=/var/run/docker.sock             
ERRO[0000] Failed to run tools container for cluster 'k3s-default' 
INFO[0001] Creating node 'k3d-k3s-default-server-0'     
DEBU[0001] DOCKER_SOCK=/var/run/docker.sock             
ERRO[0001] Failed Cluster Creation: failed setup of server/agent node k3d-k3s-default-server-0: failed to create node: runtime failed to create node 'k3d-k3s-default-server-0': failed to create container for node 'k3d-k3s-default-server-0': docker failed to create container 'k3d-k3s-default-server-0': Error response from daemon: container create: invalid config provided: Networks and static ip/mac address can only be used with Bridge mode networking 
ERRO[0001] Failed to create cluster >>> Rolling Back    
INFO[0001] Deleting cluster 'k3s-default'               
ERRO[0001] failed to get cluster: No nodes found for given cluster 
FATA[0001] Cluster creation FAILED, also FAILED to rollback changes!
❯ podman --version
podman version 4.0.0-dev
❯ bin/k3d --version
k3d version v5.1.0-74-gdd07011f
k3s version v1.22.6-k3s1 (default)
❯ podman network ls
NETWORK ID    NAME             DRIVER
89a5dde53e7c  k3d-k3s-default  bridge
2f259bab93aa  podman           bridge
❯ podman network inspect k3d-k3s-default
[
     {
          "name": "k3d-k3s-default",
          "id": "89a5dde53e7c97671dfc4c2ede2d906feeac60b2bad51490f5683f379b649776",
          "driver": "bridge",
          "network_interface": "cni-podman1",
          "created": "2022-03-01T17:29:49.104065781Z",
          "subnets": [
               {
                    "subnet": "10.89.0.0/24",
                    "gateway": "10.89.0.1"
               }
          ],
          "ipv6_enabled": false,
          "internal": false,
          "dns_enabled": false,
          "labels": {
               "app": "k3d"
          },
          "ipam_options": {
               "driver": "host-local"
          }
     }
]

@serverwentdown (Contributor) commented Mar 15, 2022

There's still one more thing I need to check out:

  • Find out how reliably the k3d toolbox actually works with Podman

@jiridanek

$ sudo Downloads/k3d-linux-amd64 cluster create
INFO[0000] Prep: Network                                
INFO[0000] Created network 'k3d-k3s-default'            
INFO[0000] Created image volume k3d-k3s-default-images  
INFO[0000] Starting new tools node...                   
ERRO[0000] Failed to run tools container for cluster 'k3s-default' 
INFO[0001] Creating node 'k3d-k3s-default-server-0'     
INFO[0001] Creating LoadBalancer 'k3d-k3s-default-serverlb' 
INFO[0001] Using the k3d-tools node to gather environment information 
INFO[0001] Starting new tools node...                   
ERRO[0001] Failed to run tools container for cluster 'k3s-default' 
ERRO[0001] failed to gather environment information used for cluster creation: failed to run k3d-tools node for cluster 'k3s-default': failed to create node 'k3d-k3s-default-tools': runtime failed to create node 'k3d-k3s-default-tools': failed to create container for node 'k3d-k3s-default-tools': docker failed to pull image 'docker.io/rancher/k3d-tools:5.3.0': docker failed to pull the image 'docker.io/rancher/k3d-tools:5.3.0': Error response from daemon: failed to resolve image name: short-name resolution enforced but cannot prompt without a TTY 
ERRO[0001] Failed to create cluster >>> Rolling Back    
INFO[0001] Deleting cluster 'k3s-default'               
INFO[0001] Deleting cluster network 'k3d-k3s-default'   
INFO[0001] Deleting 2 attached volumes...               
WARN[0001] Failed to delete volume 'k3d-k3s-default-images' of cluster 'failed to find volume 'k3d-k3s-default-images': Error: No such volume: k3d-k3s-default-images': k3s-default -> Try to delete it manually 
FATA[0001] Cluster creation FAILED, all changes have been rolled back! 

That Error response from daemon: failed to resolve image name: short-name resolution enforced but cannot prompt without a TTY sure was unexpected.

@serverwentdown (Contributor) commented Apr 9, 2022

@jiridanek Which version of k3d and Podman are you using? It'd help me narrow down the cause. Anyway, you can find a solution in this blog post: https://www.redhat.com/sysadmin/container-image-short-names
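
(The short version, as a hedged sketch: the TTY error comes from Podman prompting to resolve a short image name; per the linked article, pinning a default search registry avoids the prompt.)

# /etc/containers/registries.conf
unqualified-search-registries = ["docker.io"]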

@jiridanek

@serverwentdown

[jdanek@fedora ~]$ Downloads/k3d-linux-amd64 --version
k3d version v5.3.0
k3s version v1.22.6-k3s1 (default)
[jdanek@fedora ~]$ podman --version
podman version 3.4.4

@jiridanek

@serverwentdown After upgrading to the latest k3d, which reports k3d version v5.4.1; k3s version v1.22.7-k3s1 (default), the problem went away, and I got a different failure instead:

[jdanek@fedora ~]$ sudo Downloads/k3d-linux-amd64 cluster create
INFO[0000] Prep: Network                                
INFO[0000] Created network 'k3d-k3s-default'            
INFO[0000] Created image volume k3d-k3s-default-images  
INFO[0000] Starting new tools node...                   
INFO[0000] Pulling image 'ghcr.io/k3d-io/k3d-tools:5.4.1' 
INFO[0001] Creating node 'k3d-k3s-default-server-0'     
INFO[0001] Pulling image 'docker.io/rancher/k3s:v1.22.7-k3s1' 
INFO[0012] Starting Node 'k3d-k3s-default-tools'        
INFO[0026] Creating LoadBalancer 'k3d-k3s-default-serverlb' 
INFO[0026] Pulling image 'ghcr.io/k3d-io/k3d-proxy:5.4.1' 
INFO[0034] Using the k3d-tools node to gather environment information 
INFO[0035] HostIP: using network gateway 10.89.1.1 address 
INFO[0035] Starting cluster 'k3s-default'               
INFO[0035] Starting servers...                          
INFO[0035] Starting Node 'k3d-k3s-default-server-0'     
INFO[0039] All agents already running.                  
INFO[0039] Starting helpers...                          
INFO[0039] Starting Node 'k3d-k3s-default-serverlb'     
ERRO[0047] Failed Cluster Start: error during post-start cluster preparation: failed to get cluster network k3d-k3s-default to inject host records into CoreDNS: failed to parse IP of container k3d-k3s-default: netaddr.ParseIPPrefix("10.89.1.4"): no '/' 
ERRO[0047] Failed to create cluster >>> Rolling Back    
INFO[0047] Deleting cluster 'k3s-default'               
INFO[0047] Deleting cluster network 'k3d-k3s-default'   
INFO[0047] Deleting 2 attached volumes...               
WARN[0047] Failed to delete volume 'k3d-k3s-default-images' of cluster 'k3s-default': failed to find volume 'k3d-k3s-default-images': Error: No such volume: k3d-k3s-default-images -> Try to delete it manually 
FATA[0047] Cluster creation FAILED, all changes have been rolled back! 

@jegger commented Apr 12, 2022

I am facing the same issue (with the same versions of k3d/k3s). Let me know if I can provide anything else that might be helpful.

@serverwentdown (Contributor) commented Apr 16, 2022

[jdanek@fedora ~]$ podman --version
podman version 3.4.4

You'll need to upgrade to Podman v4. You can use the COPR if you're on Fedora: https://podman.io/blogs/2022/03/06/why_no_podman4_f35.html
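
(A hedged sketch of the upgrade on Fedora; the COPR name is an assumption taken from the linked post:)

sudo dnf copr enable rhcontainerbot/podman4    # repo name per the linked post
sudo dnf upgrade podman
podman --version    # expect 4.x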

@jiridanek

@jegger Can you try with Podman 4? When I try, it still does not work:

[root@fedora jdanek]# ~jdanek/Downloads/k3d-linux-amd64 cluster create
INFO[0000] Prep: Network                                
INFO[0000] Created network 'k3d-k3s-default'            
INFO[0000] Created image volume k3d-k3s-default-images  
INFO[0000] Starting new tools node...                   
INFO[0000] Starting Node 'k3d-k3s-default-tools'        
INFO[0001] Creating node 'k3d-k3s-default-server-0'     
INFO[0001] Creating LoadBalancer 'k3d-k3s-default-serverlb' 
INFO[0001] Using the k3d-tools node to gather environment information 
INFO[0001] HostIP: using network gateway 10.89.0.1 address 
INFO[0001] Starting cluster 'k3s-default'               
INFO[0001] Starting servers...                          
INFO[0001] Starting Node 'k3d-k3s-default-server-0'     
INFO[0005] All agents already running.                  
INFO[0005] Starting helpers...                          
INFO[0005] Starting Node 'k3d-k3s-default-serverlb'     
INFO[0012] Injecting records for hostAliases (incl. host.k3d.internal) and for 2 network members into CoreDNS configmap... 
INFO[0014] Cluster 'k3s-default' created successfully!  
INFO[0014] You can now use it like this:                
kubectl cluster-info
[root@fedora jdanek]# kubectl cluster-info
Kubernetes master is running at https://0.0.0.0:37659

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: net/http: TLS handshake timeout

@jiridanek

$ sudo podman logs k3d-k3s-default-server-0 |& grep luks
W0417 08:04:02.831384       2 fs.go:214] stat failed on /dev/mapper/luks-63cca6c4-98e1-467a-b8ee-acfac51b19ca with error: no such file or directory

I have btrfs LVM on LUKS, so I suspect openshift/microshift#629, kubernetes-sigs/kind#2411 could be a problem in k3s as well.

@jiridanek

Also,

$ sudo podman exec -it k3d-k3s-default-server-0 kubectl logs pod/svclb-traefik-dj9dk lb-port-80 -n kube-system
+ trap exit TERM INT
+ echo 10.43.161.171
+ grep -Eq :
+ cat /proc/sys/net/ipv4/ip_forward
+ '[' 1 '!=' 1 ]
+ iptables -t nat -I PREROUTING '!' -s 10.43.161.171/32 -p TCP --dport 80 -j DNAT --to 10.43.161.171:80
iptables v1.8.4 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
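
That nat table error usually means the iptables NAT module isn't loaded in the host kernel, which the next comment confirms. A sketch for loading it once and, assuming a systemd host, making it persist across reboots:

sudo modprobe iptable_nat
echo iptable_nat | sudo tee /etc/modules-load.d/iptable_nat.conf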

@jiridanek

jiridanek commented Apr 17, 2022

Following the instructions from the kind issue (and loading the iptables modules), I now get:

# modprobe iptable-nat
# ~jdanek/Downloads/k3d-linux-amd64 cluster create --volume '/dev/mapper/luks-63cca6c4-98e1-467a-b8ee-acfac51b19ca:/dev/mapper/luks-63cca6c4-98e1-467a-b8ee-acfac51b19ca@server:0' --volume '/dev/dm-0:/dev/dm-0@server:0'

This allows k3s to start inside the containers, and I can use it with podman exec:

$ sudo podman exec -it k3d-k3s-default-server-0 kubectl get nodes

but I cannot use it from my host machine with the config from k3d kubeconfig get:

[jdanek@fedora ~]$ sudo netstat -nlpt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      2149/cupsd          
tcp       10      0 0.0.0.0:42451           0.0.0.0:*               LISTEN      133312/conmon       
tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      1631/systemd-resolv 
tcp        0      0 127.0.0.54:53           0.0.0.0:*               LISTEN      1631/systemd-resolv 
tcp        0      0 0.0.0.0:5355            0.0.0.0:*               LISTEN      1631/systemd-resolv 
tcp6       0      0 ::1:631                 :::*                    LISTEN      2149/cupsd          
tcp6       0      0 :::5355                 :::*                    LISTEN      1631/systemd-resolv 
[jdanek@fedora ~]$ sudo podman exec -it k3d-k3s-default-server-0 kubectl get nodes
NAME                       STATUS   ROLES                  AGE   VERSION
k3d-k3s-default-server-0   Ready    control-plane,master   11m   v1.22.7+k3s1
[jdanek@fedora ~]$ sudo ~/Downloads/k3d-linux-amd64 kubeconfig get --all > k3s.conf
[jdanek@fedora ~]$ KUBECONFIG=k3s.conf kubectl get nodes
^C
[jdanek@fedora ~]$ KUBECONFIG=k3s.conf kubectl cluster-info
Kubernetes master is running at https://0.0.0.0:42451

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: net/http: TLS handshake timeout

@serverwentdown
Contributor

serverwentdown commented Apr 18, 2022

@jiridanek Thanks for the debugging work! It seems your cluster has already started. Can you also confirm that the generated kubeconfig is correct (#1045)? You can paste it here (with credentials redacted).

@jiridanek

@serverwentdown

---
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: 
    server: https://0.0.0.0:42451
  name: k3d-k3s-default
contexts:
- context:
    cluster: k3d-k3s-default
    user: admin@k3d-k3s-default
  name: k3d-k3s-default
current-context: k3d-k3s-default
kind: Config
preferences: {}
users:
- name: admin@k3d-k3s-default
  user:
    client-certificate-data: 
    client-key-data: 

@serverwentdown
Contributor

@jiridanek I'll have to attempt to create a fresh VM to reproduce this, but I can only do that on Saturday. In the meantime, I'd suggest trying some things that might fix the connection problem (sketched below):

  • Upgrade system packages
  • Reset podman (caution!!!) using podman system reset and then reboot
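
Roughly, for a rootful setup like the one above (note the reset wipes all existing containers, images, and networks):

sudo dnf upgrade --refresh
sudo podman system reset
sudo reboot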

@jiridanek

jiridanek commented Apr 19, 2022

I pretty much did the steps above as part of the Fedora 35 -> 36 upgrade, so I guess I'll have to wait for you to investigate. One thing I suspected was the https://0.0.0.0 address, which I know some tools don't accept as meaning "localhost" when connecting (they are happy to listen on 0.0.0.0, but refuse to connect to it). I am not positive this is actually what's causing problems here, though.
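
A quick way to test that suspicion (a sketch; it assumes the kubeconfig was saved as k3s.conf as above, and that the k3s server certificate also covers 127.0.0.1, which k3s includes by default):

sed -i 's|server: https://0.0.0.0:|server: https://127.0.0.1:|' k3s.conf
KUBECONFIG=k3s.conf kubectl cluster-info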

@manics

manics commented Apr 19, 2022

I can reproduce the problem on a clean Fedora 35 system using Vagrant:
https://github.com/manics/k3s-rootless/tree/main/k3d-podman-root

[root@fedora ~]# podman ps
CONTAINER ID  IMAGE                               COMMAND               CREATED        STATUS            PORTS                    NAMES
721101e65215  docker.io/rancher/k3s:v1.22.7-k3s1  server --tls-san ...  7 minutes ago  Up 7 minutes ago                           k3d-k3s-default-server-0
b8d49b05a75e  ghcr.io/k3d-io/k3d-proxy:5.4.1                            7 minutes ago  Up 7 minutes ago  0.0.0.0:46543->6443/tcp  k3d-k3s-default-serverlb

[root@fedora ~]# curl https://localhost:46543 -m 5 
curl: (28) Operation timed out after 5001 milliseconds with 0 out of 0 bytes received

[root@fedora ~]# podman exec k3d-k3s-default-serverlb curl -sk https://localhost:6443
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}

Note port-forwarding works fine outside k3d:

[root@fedora ~]# podman run -d --name nginx -P docker.io/library/nginx
0c3e864e37cf0bf92ebe5634915f82d227b3a666f9b6038f07ba3bb4813ad240

[root@fedora ~]# podman ps
CONTAINER ID  IMAGE                               COMMAND               CREATED         STATUS             PORTS                    NAMES
721101e65215  docker.io/rancher/k3s:v1.22.7-k3s1  server --tls-san ...  10 minutes ago  Up 10 minutes ago                           k3d-k3s-default-server-0
b8d49b05a75e  ghcr.io/k3d-io/k3d-proxy:5.4.1                            10 minutes ago  Up 10 minutes ago  0.0.0.0:46543->6443/tcp  k3d-k3s-default-serverlb
0c3e864e37cf  docker.io/library/nginx:latest      nginx -g daemon o...  1 second ago    Up 2 seconds ago   0.0.0.0:34653->80/tcp    nginx

[root@fedora ~]# curl http://localhost:34653
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

@manics

manics commented Apr 21, 2022

It looks like the k3d-k3s-default-serverlb container doesn't have an IP address in the output of podman inspect ("IPAddress": ""), and there's also a warning about a missing /var/run:

[root@fedora ~]# podman inspect k3d-k3s-default-serverlb -f '{{json .NetworkSettings}}'
WARN[0000] Could not find mount at destination "/var/run" when parsing user volumes for container 0c8ae183c8e3fca92a1a07930a0aa5cb5bd9dba0228eb645393a0c418f95a4a1
{"EndpointID":"","Gateway":"","IPAddress":"","IPPrefixLen":0,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"","Bridge":"","SandboxID":"","HairpinMode":false,"LinkLocalIPv6Address":"","LinkLocalIPv6PrefixLen":0,"Ports":{"6443/tcp":[{"HostIp":"0.0.0.0","HostPort":"42791"}],"80/tcp":null},"SandboxKey":"/run/netns/netns-42e8574e-ffba-7388-cdcd-6c771a24ad79","Networks":{"k3d-k3s-default":{"EndpointID":"","Gateway":"10.89.0.1","IPAddress":"10.89.0.7","IPPrefixLen":24,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"da:c9:28:93:5a:c7","NetworkID":"k3d-k3s-default","DriverOpts":null,"IPAMConfig":null,"Links":null,"Aliases":["0c8ae183c8e3"]}}}

An IP address is seen inside the container:

[root@fedora ~]# podman exec k3d-k3s-default-serverlb ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP qlen 1000
    link/ether da:c9:28:93:5a:c7 brd ff:ff:ff:ff:ff:ff
    inet 10.89.0.7/24 brd 10.89.0.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::d8c9:28ff:fe93:5ac7/64 scope link
       valid_lft forever preferred_lft forever

For comparison, my nginx container has an IP visible to podman inspect and no warning:

[root@fedora ~]# podman inspect nginx -f '{{json .NetworkSettings}}' 
{"EndpointID":"","Gateway":"10.88.0.1","IPAddress":"10.88.0.4","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"e6:55:1f:76:5b:bf","Bridge":""
,"SandboxID":"","HairpinMode":false,"LinkLocalIPv6Address":"","LinkLocalIPv6PrefixLen":0,"Ports":{"80/tcp":[{"HostIp":"","HostPort":"34653"}]},"SandboxKey":"/run/netns/netns-0bcf8132-131f-
85d0-a046-3dcd64aa492e","Networks":{"podman":{"EndpointID":"","Gateway":"10.88.0.1","IPAddress":"10.88.0.4","IPPrefixLen":16,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0
,"MacAddress":"e6:55:1f:76:5b:bf","NetworkID":"podman","DriverOpts":null,"IPAMConfig":null,"Links":null,"Aliases":["0c3e864e37cf"]}}}
``
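
The contrast lines up with the observation above: for the k3d container, the top-level compat IPAddress field is empty while the per-network entry still carries the address. The per-network value can be pulled out directly (field path taken from the JSON above):

podman inspect k3d-k3s-default-serverlb \
  --format '{{(index .NetworkSettings.Networks "k3d-k3s-default").IPAddress}}'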

@archseer

This seems to be a netavark issue. I ran into the same problem as @manics after NixOS switched its networking stack over to netavark, and everything started working again once I switched back to cni. Fedora already switched to netavark much earlier last year.
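
To check which backend a Podman 4 installation is using, podman info exposes it directly:

podman info --format '{{.Host.NetworkBackend}}'

Switching back is done by setting network_backend = "cni" in the [network] section of containers.conf; note that changing the backend effectively requires a podman system reset, which deletes existing containers and networks.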

@archseer

Since this is a closed issue, we should probably open a separate ticket.
