
[k3s] no additional interface in pod #564

Closed

tina-junold opened this issue Oct 5, 2020 · 9 comments
Hi,

I've installed multus-cni, but it seems I've missed something...

I'm on a two-master Kubernetes cluster built with k3sup:

$ uname -a:

Linux master-100 5.4.0-48-generic #52-Ubuntu SMP Thu Sep 10 10:58:49 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Linux master-101 5.4.0-48-generic #52-Ubuntu SMP Thu Sep 10 10:58:49 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

$ kubectl get nodes -o wide:

NAME         STATUS   ROLES         AGE   VERSION        INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
master-100   Ready    etcd,master   18m   v1.19.2+k3s1   10.10.10.100   <none>        Ubuntu 20.04.1 LTS   5.4.0-48-generic   containerd://1.4.0-k3s1
master-101   Ready    etcd,master   17m   v1.19.2+k3s1   10.10.10.101   <none>        Ubuntu 20.04.1 LTS   5.4.0-48-generic   containerd://1.4.0-k3s1

I've cloned the repo with git clone https://github.com/intel/multus-cni.git and modified multus-daemonset.yml, adding - "--multus-log-level=debug" to its args.
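
The args block of the kube-multus container then reads like this (the first two flags are from the stock daemonset):

        args:
        - "--multus-conf-file=auto"
        - "--cni-version=0.3.1"
        - "--multus-log-level=debug"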

Then I ran:

$ kubectl apply -f multus-cni/images/multus-daemonset.yml
customresourcedefinition.apiextensions.k8s.io/network-attachment-definitions.k8s.cni.cncf.io created
clusterrole.rbac.authorization.k8s.io/multus created
clusterrolebinding.rbac.authorization.k8s.io/multus created
serviceaccount/multus created
configmap/multus-cni-config created
daemonset.apps/kube-multus-ds-amd64 created
daemonset.apps/kube-multus-ds-ppc64le created
daemonset.apps/kube-multus-ds-arm64v8 created

Taking a look at the logs of the multus pod...

kubectl logs -n kube-system kube-multus-ds-amd64-qxlq8
2020-10-04T23:34:33+0000 Generating Multus configuration file using files in /host/etc/cni/net.d...
2020-10-04T23:34:33+0000 Attemping to find master plugin configuration, attempt 0
2020-10-04T23:34:39+0000 Attemping to find master plugin configuration, attempt 5
2020-10-04T23:34:44+0000 Attemping to find master plugin configuration, attempt 10
2020-10-04T23:34:49+0000 Attemping to find master plugin configuration, attempt 15
2020-10-04T23:34:54+0000 Attemping to find master plugin configuration, attempt 20
2020-10-04T23:34:59+0000 Attemping to find master plugin configuration, attempt 25

which kept trying to find the config... so I created one:

$ cat /etc/cni/net.d/10-multus.conf
{
   "name":"multus-cni-network",
   "type":"multus",
   "logFile":"/var/log/multus.log",
   "logLevel":"debug",
   "capabilities":{
      "portMappings":true
   },
   "delegates":[
      {
	 "cniVersion":"0.3.1",
	 "name":"default-cni-network",
	 "plugins":[
	    {
	       "type":"flannel",
	       "name":"flannel.1",
	       "delegate":{
	          "isDefaultGateway":true,
	          "hairpinMode":true
	       }
	    },
	    {
	       "type":"portmap",
	       "capabilities":{
	          "portMappings":true
	       }
	    }
	 ]
      }
   ],
   "kubeconfig":"/etc/cni/net.d/multus.d/multus.kubeconfig"
}
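
(To rule out a plain syntax problem, the file can be sanity-checked with jq, assuming it's installed on the node:)

$ jq . /etc/cni/net.d/10-multus.conf > /dev/null && echo "valid JSON"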

So the pod created a new config:

2020-10-04T23:35:58+0000 Using /host/etc/cni/net.d/10-multus.conf as a source to generate the Multus configuration
2020-10-04T23:35:59+0000 Config file created @ /host/etc/cni/net.d/00-multus.conf
{ "cniVersion": "0.3.1", "name": "multus-cni-network", "type": "multus", "logLevel": "debug", "kubeconfig": "/etc/cni/net.d/multus.d/multus.kubeconfig", "delegates": [ { "name":"multus-cni-network", "type":"multus", "logFile":"/var/log/multus.log", "logLevel":"debug", "capabilities":{ "portMappings":true }, "delegates":[ { "cniVersion":"0.3.1", "name":"default-cni-network", "plugins":[ { "type":"flannel", "name":"flannel.1", "delegate":{ "isDefaultGateway":true, "hairpinMode":true } }, { "type":"portmap", "capabilities":{ "portMappings":true } } ] } ], "kubeconfig":"/etc/cni/net.d/multus.d/multus.kubeconfig" } ] }
2020-10-04T23:35:59+0000 Entering sleep (success)...

Now I've created a new definition:

$ cat <<EOF | kubectl create -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf-1
spec:
  config: '{
	    "cniVersion": "0.3.1",
	    "type": "macvlan",
	    "master": "eno1",
	    "mode": "bridge",
	    "ipam": {
	        "type": "host-local",
	        "ranges": [
	            [ {
	                 "subnet": "10.10.0.0/16",
	                 "rangeStart": "10.10.1.20",
	                 "rangeEnd": "10.10.3.50",
	                 "gateway": "10.10.0.254"
	            } ]
	        ]
	    }
	}'
EOF
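
(The definition can be verified afterwards; net-attach-def is the shortName the CRD registers:)

$ kubectl get net-attach-def macvlan-conf-1 -o yaml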

And created a pod that uses it:

$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-case-01
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf-1
spec:
  containers:
  - name: pod-case-01
    image: docker.io/centos/tools:latest
    command:
    - /sbin/init
EOF

But querying the pod for its interfaces, only the loopback and the default eth0 interface are listed:

$ kubectl exec pod-case-01 -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether 26:fc:32:bf:a1:2b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.42.1.4/24 brd 10.42.1.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::24fc:32ff:febf:a12b/64 scope link 
       valid_lft forever preferred_lft forever

Nothing is shown in the Kubernetes events, there is no further log output in the multus pods, and no /var/log/multus.log file exists.

@dougbtv (Member) commented Oct 15, 2020

Ahhh ha! I think you found it, right where it logs: Attemping to find master plugin configuration

One thing that you need to do is install a "default network" CNI first -- you'll see it in the prerequisites of the quickstart: https://github.com/intel/multus-cni/blob/master/doc/quickstart.md#prerequisites

Typically, what I'd do, at least for testing, is to install Flannel.

I'm unfamiliar with k3sup, but, looking at this link: https://github.com/alexellis/k3sup#-setup-a-kubernetes-server-with-k3sup

I notice there's a --flannel-backend flag, and also, you could install it by just applying the flannel yaml specs @ https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml

Once you have any file (that's not a Multus config) in /etc/cni/net.d, Multus can be installed. Multus uses the default network's configuration to configure itself, and uses that configuration to create your default interface (the eth0).
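
A minimal check-then-install sequence would look roughly like this (a sketch for standard paths, not k3s's; the raw URL corresponds to the blob link above):

$ ls /etc/cni/net.d/   # should already contain e.g. 10-flannel.conflist
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ kubectl apply -f multus-cni/images/multus-daemonset.yml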

@tina-junold (Author) commented:
k3s comes with flannel by default, so the --flannel-backend flag is only required if the backend should be changed (e.g. to host-gw).
The /etc/cni/net.d directory is empty, so k3s doesn't write a config there...
Since I now know from the flannel CNI repo that the config is a .conflist file, it was easy to find the file on the host system. k3s stores it under /var/lib/rancher/k3s/agent/etc/cni/net.d/10-flannel.conflist.
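
(Finding it boils down to something like:)

$ sudo find /var/lib/rancher -name '*.conflist'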
So my idea was to modify the multus-daemonset.yml accordingly:

https://gist.github.com/tburschka/155c0eba756672505cd31582c2875fca
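
The gist isn't reproduced here, but the essential change is repointing the daemonset's host mounts at the k3s paths, roughly like this (sketch; <hash> stands for the per-install data directory, compare the full manifests later in this thread):

      volumes:
        - name: cni
          hostPath:
            path: /var/lib/rancher/k3s/agent/etc/cni/net.d/
        - name: cnibin
          hostPath:
            path: /var/lib/rancher/k3s/data/<hash>/bin/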

While the config is now detected, testing the new configuration leads to another error:

default           0s          Normal    Scheduled                pod/web-dhcp                                                          Successfully assigned default/web-dhcp to master-101
default           0s          Warning   FailedCreatePodSandBox   pod/web-static                                                        Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "13ce07fb1325d9f92eb28308173604fa4c8f358998edf4b76635e19ca2b84433": failed to find plugin "multus" in path [/var/lib/rancher/k3s/data/07bf5246a6ab2127234428fbf3023ce85b0376a7b946bbdaee2726b7d9a6fad8/bin]

In this directory, a lot of files are symlinked to a single cni binary (filtered here for CNI-related entries):

lrwxrwxrwx 1 root root    3 Oct 15 21:53 bridge -> cni
-rwxr-xr-x 1 root root 3.2M Sep 21 17:00 cni
lrwxrwxrwx 1 root root    3 Oct 15 21:53 flannel -> cni
lrwxrwxrwx 1 root root    3 Oct 15 21:53 host-local -> cni
lrwxrwxrwx 1 root root    3 Oct 15 21:53 loopback -> cni
lrwxrwxrwx 1 root root    3 Oct 15 21:53 portmap -> cni
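
Recreating such a symlink for multus would look roughly like this (sketch; <hash> is the per-install data directory shown in the error above):

$ sudo ln -s /opt/cni/bin/multus /var/lib/rancher/k3s/data/<hash>/bin/multus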

So my last idea was to symlink multus as well, as sketched above (I tried both cni and /opt/cni/bin/multus as the target), which causes different errors:

default       0s          Warning   FailedCreatePodSandBox   pod/web-static                   Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "1a8c051849cb3034d067feca5fd44f5062c0ebd94cf735fc1e06aafaf89da8e9": unexpected end of JSON input

default       0s          Warning   FailedCreatePodSandBox   pod/web-static                   Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "5fb36c8f9ca50131f915f203a1e38c3cee179be00f7b77a38a3458bdb856cb59": Multus: [default/web-static]: error getting pod: Unauthorized

(This seems to be more of a k3s issue, in that multus isn't symlinked.)

A little bit off-topic: I've got an alternative working, but the flannel variant would be nice since it has less overhead: https://gist.github.com/tburschka/4bc938c6ca733cbd366a24e301753685

@dougbtv (Member) commented Nov 12, 2020

Sorry for the slow response!

I see that it uses /var/lib/rancher/k3s/agent/etc/cni/net.d/10-flannel.conflist -- I think you need to configure the Multus daemonset to say "hey, you need to look in a specific directory for the configuration".

You can find the configuration parameters here @ https://github.com/intel/multus-cni/blob/master/doc/how-to-use.md#entrypoint-parameters and then modify the Multus daemonset as provided by the quickstart.
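
For instance (a sketch, untested on k3s), the --multus-kubeconfig-file-host parameter from those entrypoint docs lets the generated config reference the kubeconfig path as the host's container runtime sees it:

        args:
        - "--multus-conf-file=auto"
        - "--cni-version=0.3.1"
        - "--multus-kubeconfig-file-host=/var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig"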

...If we can get the right parameters, we might consider adding this to the reference deployments @ https://github.com/k8snetworkplumbingwg/reference-deployment -- to have a reference deployment against k3s.

@dougbtv changed the title from "no additional interface in pod" to "[k3s] no additional interface in pod" on Nov 12, 2020
@MaddSauer commented Dec 12, 2020

Hi,
I also struggle with multus on k3s.
I have changed 4 settings:

  • configmap: multus-cni-config => kubeconfig to k3s-cni-directory
  • volumes:
    • cni: => /var/lib/rancher/k3s/agent/etc/cni/net.d/
    • cnibin => /var/lib/rancher/k3s/data/3a24132c2ddedfad7f599daf636840e8a14efd70d4992a1b3900d2617ed89893/bin/
  • changed numbering of cni-config => 00-multus.conf

but after all that, I got the same errors that @tburschka described above.

  Warning  FailedCreatePodSandBox  2m7s  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "62b50ccabe413f782b4b7b8fd1f45db54dbcb8e5862dd466b9ed62543a167355": Multus: [default/nettools-multus-86b7f46f97-v2mg6]: error getting pod: Unauthorized
  Warning  FailedCreatePodSandBox  113s  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "af953a4ce2c1c27bc8ae220ab19b77a114b455f27e85a1242b999e7b6256dc8c": Multus: [default/nettools-multus-86b7f46f97-v2mg6]: error getting pod: Unauthorized
[...]

My deployment looks like this:

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: network-attachment-definitions.k8s.cni.cncf.io
spec:
  group: k8s.cni.cncf.io
  scope: Namespaced
  names:
    plural: network-attachment-definitions
    singular: network-attachment-definition
    kind: NetworkAttachmentDefinition
    shortNames:
    - net-attach-def
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          description: 'NetworkAttachmentDefinition is a CRD schema specified by the Network Plumbing
            Working Group to express the intent for attaching pods to one or more logical or physical
            networks. More information available at: https://github.com/k8snetworkplumbingwg/multi-net-spec'
          type: object
          properties:
            apiVersion:
              description: 'APIVersion defines the versioned schema of this representation
                of an object. Servers should convert recognized schemas to the
                latest internal value, and may reject unrecognized values. More info:
                https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
              type: string
            kind:
              description: 'Kind is a string value representing the REST resource this
                object represents. Servers may infer this from the endpoint the client
                submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
              type: string
            metadata:
              type: object
            spec:
              description: 'NetworkAttachmentDefinition spec defines the desired state of a network attachment'
              type: object
              properties:
                config:
                  description: 'NetworkAttachmentDefinition config is a JSON-formatted CNI configuration'
                  type: string
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: multus
rules:
  - apiGroups: ["k8s.cni.cncf.io"]
    resources:
      - '*'
    verbs:
      - '*'
  - apiGroups:
      - ""
    resources:
      - pods
      - pods/status
    verbs:
      - get
      - update
  - apiGroups:
      - ""
      - events.k8s.io
    resources:
      - events
    verbs:
      - create
      - patch
      - update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: multus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: multus
subjects:
- kind: ServiceAccount
  name: multus
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: multus
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: multus-cni-config
  namespace: kube-system
  labels:
    tier: node
    app: multus
data:
  # NOTE: If you'd prefer to manually apply a configuration file, you may create one here.
  # In the case you'd like to customize the Multus installation, you should change the arguments to the Multus pod
  # change the "args" line below from
  # - "--multus-conf-file=auto"
  # to:
  # "--multus-conf-file=/tmp/multus-conf/70-multus.conf"
  # Additionally -- you should ensure that the name "70-multus.conf" is the alphabetically first name in the
  # /etc/cni/net.d/ directory on each node, otherwise, it will not be used by the Kubelet.
  cni-conf.json: |
    {
      "name": "multus-cni-network",
      "type": "multus",
      "capabilities": {
        "portMappings": true
      },
      "delegates": [
        {
          "cniVersion": "0.3.1",
          "name": "default-cni-network",
          "plugins": [
            {
              "type": "flannel",
              "name": "flannel.1",
              "delegate": {
                "isDefaultGateway": true,
                "hairpinMode": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      ],
      "kubeconfig": "/var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig"
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-multus-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: multus
    name: multus
spec:
  selector:
    matchLabels:
      name: multus
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        tier: node
        app: multus
        name: multus
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: multus
      containers:
      - name: kube-multus
        image: docker.io/nfvpe/multus:stable
        command: ["/entrypoint.sh"]
        args:
        - "--multus-conf-file=auto"
        - "--cni-version=0.3.1"
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        volumeMounts:
        - name: cni
          mountPath: /host/etc/cni/net.d
        - name: cnibin
          mountPath: /host/opt/cni/bin
        - name: multus-cfg
          mountPath: /tmp/multus-conf
      volumes:
        - name: cni
          hostPath:
            path: /var/lib/rancher/k3s/agent/etc/cni/net.d/
        - name: cnibin
          hostPath:
            path: /var/lib/rancher/k3s/data/3a24132c2ddedfad7f599daf636840e8a14efd70d4992a1b3900d2617ed89893/bin/
        - name: multus-cfg
          configMap:
            name: multus-cni-config
            items:
            - key: cni-conf.json
              path: 00-multus.conf
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-multus-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: multus
    name: multus
spec:
  selector:
    matchLabels:
      name: multus
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        tier: node
        app: multus
        name: multus
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: ppc64le
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: multus
      containers:
      - name: kube-multus
        # ppc64le support requires Multus 3.3 or later; use multus:latest for now.
        image: docker.io/nfvpe/multus:stable-ppc64le
        command: ["/entrypoint.sh"]
        args:
        - "--multus-conf-file=auto"
        - "--cni-version=0.3.1"
        resources:
          requests:
            cpu: "100m"
            memory: "90Mi"
          limits:
            cpu: "100m"
            memory: "90Mi"
        securityContext:
          privileged: true
        volumeMounts:
        - name: cni
          mountPath: /host/etc/cni/net.d
        - name: cnibin
          mountPath: /host/opt/cni/bin
        - name: multus-cfg
          mountPath: /tmp/multus-conf
      volumes:
        - name: cni
          hostPath:
            path: /var/lib/rancher/k3s/agent/etc/cni/net.d/
        - name: cnibin
          hostPath:
            path: /var/lib/rancher/k3s/data/3a24132c2ddedfad7f599daf636840e8a14efd70d4992a1b3900d2617ed89893/bin/
        - name: multus-cfg
          configMap:
            name: multus-cni-config
            items:
            - key: cni-conf.json
              path: 00-multus.conf
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-multus-ds-arm64v8
  namespace: kube-system
  labels:
    tier: node
    app: multus
    name: multus
spec:
  selector:
    matchLabels:
      name: multus
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        tier: node
        app: multus
        name: multus
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: arm64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: multus
      containers:
      - name: kube-multus
        image: docker.io/nfvpe/multus:stable-arm64v8
        command: ["/entrypoint.sh"]
        args:
        - "--multus-conf-file=auto"
        - "--cni-version=0.3.1"
        resources:
          requests:
            cpu: "100m"
            memory: "90Mi"
          limits:
            cpu: "100m"
            memory: "90Mi"
        securityContext:
          privileged: true
        volumeMounts:
        - name: cni
          mountPath: /host/etc/cni/net.d/
        - name: cnibin
          mountPath: /host/opt/cni/bin
        - name: multus-cfg
          mountPath: /tmp/multus-conf
      volumes:
        - name: cni
          hostPath:
            path: /var/lib/rancher/k3s/agent/etc/cni/net.d/
        - name: cnibin
          hostPath:
            path: /var/lib/rancher/k3s/data/3a24132c2ddedfad7f599daf636840e8a14efd70d4992a1b3900d2617ed89893/bin/
        - name: multus-cfg
          configMap:
            name: multus-cni-config
            items:
            - key: cni-conf.json
              path: 00-multus.conf

@ivan4th commented Dec 26, 2020

I followed these instructions (mostly) and they worked for me: https://www.reddit.com/r/rancher/comments/ilzdp7/howto_set_up_k8s_or_k3s_so_pods_get_ip_from_lan/

With this addition beforehand (you need CNI plugins):

wget https://github.com/containernetworking/plugins/releases/download/v0.9.0/cni-plugins-linux-amd64-v0.9.0.tgz
mkdir -p /opt/cni/bin
tar -C /opt/cni/bin -xvzf cni-plugins-linux-amd64-v0.9.0.tgz

Basically, don't let k3s install flannel for you if you want to use Multus; deploy flannel by hand instead.
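
(On a fresh install, that looks roughly like this sketch, using k3s's documented --flannel-backend flag via the installer's INSTALL_K3S_EXEC variable; then install flannel and Multus yourself from the upstream manifests:)

$ curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--flannel-backend=none" sh -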

@lu1as commented Dec 27, 2020

I had the same issue: the generated 00-multus.conf points to the wrong kubeconfig file.
I fixed it by setting "kubeconfig": "/var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig" in the multus-cni-config configmap. I also renamed the mounted configmap entry to 00-multus.conf, so that it's the first file the kubelet loads.
I also discovered that you can set cnibin to /var/lib/rancher/k3s/data/current/bin, which always points to the correct data directory.
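
(On the host, current resolves to the hashed data directory, e.g.:)

$ readlink -f /var/lib/rancher/k3s/data/current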
Here are my configmap and daemonset:

kind: ConfigMap
apiVersion: v1
metadata:
  name: multus-cni-config
  namespace: kube-system
  labels:
    tier: node
    app: multus
data:
  # NOTE: If you'd prefer to manually apply a configuration file, you may create one here.
  # In the case you'd like to customize the Multus installation, you should change the arguments to the Multus pod
  # change the "args" line below from
  # - "--multus-conf-file=auto"
  # to:
  # "--multus-conf-file=/tmp/multus-conf/70-multus.conf"
  # Additionally -- you should ensure that the name "70-multus.conf" is the alphabetically first name in the
  # /etc/cni/net.d/ directory on each node, otherwise, it will not be used by the Kubelet.
  cni-conf.json: |
    {
        "cniVersion": "0.3.1",
        "name": "multus-cni-network",
        "type": "multus",
        "kubeconfig": "/var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig",
        "delegates": [
            {
                "name": "cbr0",
                "cniVersion": "0.3.1",
                "plugins": [
                    {
                        "type": "flannel",
                        "delegate": {
                            "hairpinMode": true,
                            "forceAddress": true,
                            "isDefaultGateway": true
                        }
                    },
                    {
                        "type": "portmap",
                        "capabilities": {
                            "portMappings": true
                        }
                    }
                ]
            }
        ]
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-multus-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: multus
    name: multus
spec:
  selector:
    matchLabels:
      name: multus
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        tier: node
        app: multus
        name: multus
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: amd64
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: multus
      containers:
      - name: kube-multus
        image: docker.io/nfvpe/multus:stable
        command: ["/entrypoint.sh"]
        args:
        - "--multus-conf-file=/tmp/multus-conf/00-multus.conf"
        - "--cni-version=0.3.1"
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: true
        volumeMounts:
        - name: cni
          mountPath: /host/etc/cni/net.d
        - name: cnibin
          mountPath: /host/opt/cni/bin
        - name: multus-cfg
          mountPath: /tmp/multus-conf
      volumes:
        - name: cni
          hostPath:
            path: /var/lib/rancher/k3s/agent/etc/cni/net.d
        - name: cnibin
          hostPath:
            path: /var/lib/rancher/k3s/data/current/bin
        - name: multus-cfg
          configMap:
            name: multus-cni-config
            items:
            - key: cni-conf.json
              path: 00-multus.conf

@tina-junold (Author) commented:

While I'm using Calico (for the moment), I read the comments and updated my flannel config based on them:

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: multus-cni-config
  namespace: kube-system
  labels:
    tier: node
    app: multus
data:
  # NOTE: If you'd prefer to manually apply a configuration file, you may create one here.
  # In the case you'd like to customize the Multus installation, you should change the arguments to the Multus pod
  # change the "args" line below from
  # - "--multus-conf-file=auto"
  # to:
  # "--multus-conf-file=/tmp/multus-conf/70-multus.conf"
  # Additionally -- you should ensure that the name "70-multus.conf" is the alphabetically first name in the
  # /etc/cni/net.d/ directory on each node, otherwise, it will not be used by the Kubelet.
  cni-conf.json: |
    {
      "name": "multus-cni-network",
      "type": "multus",
      "capabilities": {
        "portMappings": true
      },
      "delegates": [
        {
          "cniVersion": "0.3.1",
          "name": "default-cni-network",
          "plugins": [
            {
              "type": "flannel",
              "name": "flannel.1",
              "delegate": {
                "isDefaultGateway": true,
                "hairpinMode": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      ],
      "kubeconfig": "/var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig"
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-multus-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: multus
    name: multus
spec:
  selector:
    matchLabels:
      name: multus
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        tier: node
        app: multus
        name: multus
    spec:
      hostNetwork: true
      nodeSelector:
        kubernetes.io/arch: amd64
      tolerations:
        - operator: Exists
          effect: NoSchedule
      serviceAccountName: multus
      containers:
        - name: kube-multus
          image: docker.io/nfvpe/multus:stable
          command: ["/entrypoint.sh"]
          args:
          - "--multus-conf-file=auto"
          - "--cni-version=0.3.1"
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: true
          volumeMounts:
            - name: cni
              mountPath: /host/etc/cni/net.d
            - name: cnibin
              mountPath: /host/opt/cni/bin
            - name: multus-cfg
              mountPath: /tmp/multus-conf
      volumes:
        - name: cni
          hostPath:
            path: /var/lib/rancher/k3s/agent/etc/cni/net.d
        - name: cnibin
          hostPath:
            path: /usr/lib/cni
        - name: multus-cfg
          configMap:
            name: multus-cni-config
            items:
              - key: cni-conf.json
                path: 70-multus.conf

Now I have the issue that k3s can't find multus:

default       0s          Warning   FailedCreatePodSandBox   pod/web-dhcp                     Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "b8f95d23e47b2ea1cec335d42dbc2a32b0f69d9f3934a7466890a0a24ef07578": failed to find plugin "multus" in path [/var/lib/rancher/k3s/data/986d5e8cf570f904598f9a5d531da2430e5a6171d22b7addb1e4a7c5b87a47d0/bin]
default       0s          Warning   FailedCreatePodSandBox   pod/web-static                   Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "e146b306829ac251670a027846aae78341d3e6b570d28f2b35205c0f3f1188d3": failed to find plugin "multus" in path [/var/lib/rancher/k3s/data/986d5e8cf570f904598f9a5d531da2430e5a6171d22b7addb1e4a7c5b87a47d0/bin]

Since the host system is Ubuntu and the CNI plugins are located in /usr/lib/cni, the error message is technically correct, but I haven't found a way to fix this yet (symlinking the files is a possible workaround, but it would be nice if I didn't need to do this).
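
(For reference, that symlink workaround would be roughly this sketch, reusing the current symlink mentioned above:)

$ for p in /usr/lib/cni/*; do sudo ln -sf "$p" /var/lib/rancher/k3s/data/current/bin/; done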

@github-actions bot commented Apr 5, 2021

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 7 days.

@EmanueleGallone commented, quoting @ivan4th's suggestion above to not let k3s install flannel and instead deploy flannel (plus the CNI plugins) by hand:

Is there any way to uninstall the default flannel installed by k3s and deploy it by hand, without uninstalling/re-installing the entire k3s cluster?
