[k3s] no additional interface in pod #564
Ahhh ha! I think you found it, where it's reading: … One thing that you need to do is install a default network CNI first -- you'll see it in the requirements in the quickstart: https://github.com/intel/multus-cni/blob/master/doc/quickstart.md#prerequisites Typically, what I'd do, at least for testing, is to install Flannel. I'm unfamiliar with k3sup, but, looking at this link: https://github.com/alexellis/k3sup#-setup-a-kubernetes-server-with-k3sup I notice there's a … Once you have any file (that's not a multus config) in /etc/cni/net.d, Multus can be installed. Multus uses the default network's configuration to configure itself, so it can use that configuration to create your default interfaces.
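For anyone following along on k3s, a quick way to check that prerequisite (the path is the k3s-relocated CNI config dir used later in this thread, not the stock /etc/cni/net.d):

```sh
# Multus needs an existing default-network config before it is installed.
# k3s relocates the CNI config dir, so look there:
ls -l /var/lib/rancher/k3s/agent/etc/cni/net.d/
# Any file that is not a multus config (e.g. a flannel .conflist) means
# Multus has a default network whose settings it can adopt.
```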
k3s comes with flannel by default; details are in this gist: https://gist.github.com/tburschka/155c0eba756672505cd31582c2875fca While detection of the config now works, testing the new configuration leads to another error:
In this directory a lot of files are symlinked (filtered here for files containing "cni"):
So my last idea was to symlink multus as well (tried …).
(This seems to be more of a k3s issue, in that multus isn't symlinked.) A little bit off-topic: I've got an alternative working (https://gist.github.com/tburschka/4bc938c6ca733cbd366a24e301753685), but the flannel variant would be nice since it has less overhead.
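The exact command is cut off above; a plausible form of the symlink idea, assuming the multus binary already sits in /opt/cni/bin on the host, would be:

```sh
# k3s resolves its bundled CNI plugins via symlinks in its data dir; the
# "current" symlink points at the active (hash-named) release directory.
ln -s /opt/cni/bin/multus /var/lib/rancher/k3s/data/current/bin/multus
```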
Sorry for the slow response! I see that it uses … You can find the configuration parameters here @ https://github.com/intel/multus-cni/blob/master/doc/how-to-use.md#entrypoint-parameters and then modify the Multus daemonset as provided by the quickstart. ...If they can get the right parameters, we might consider adding this to the reference deployments @ https://github.com/k8snetworkplumbingwg/reference-deployment -- to have a reference deployment against k3s.
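As an illustration of what that could look like (the flag name is taken from the entrypoint-parameters doc linked above; the daemonset name and the kubeconfig path are the ones used elsewhere in this thread):

```sh
# Append a k3s-specific entrypoint parameter to the Multus daemonset:
kubectl -n kube-system patch daemonset kube-multus-ds-amd64 --type=json -p '[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/args/-",
   "value": "--multus-kubeconfig-file-host=/var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig"}
]'
```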
Hi,
in the end I got the same errors that @tburschka described in this thread.
My deployment looks like this:
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
name: network-attachment-definitions.k8s.cni.cncf.io
spec:
group: k8s.cni.cncf.io
scope: Namespaced
names:
plural: network-attachment-definitions
singular: network-attachment-definition
kind: NetworkAttachmentDefinition
shortNames:
- net-attach-def
versions:
- name: v1
served: true
storage: true
schema:
openAPIV3Schema:
description: 'NetworkAttachmentDefinition is a CRD schema specified by the Network Plumbing
Working Group to express the intent for attaching pods to one or more logical or physical
networks. More information available at: https://github.com/k8snetworkplumbingwg/multi-net-spec'
type: object
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this
representation of an object. Servers should convert recognized schemas to the
latest internal value, and may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: 'NetworkAttachmentDefinition spec defines the desired state of a network attachment'
type: object
properties:
config:
description: 'NetworkAttachmentDefinition config is a JSON-formatted CNI configuration'
type: string
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: multus
rules:
- apiGroups: ["k8s.cni.cncf.io"]
resources:
- '*'
verbs:
- '*'
- apiGroups:
- ""
resources:
- pods
- pods/status
verbs:
- get
- update
- apiGroups:
- ""
- events.k8s.io
resources:
- events
verbs:
- create
- patch
- update
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: multus
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: multus
subjects:
- kind: ServiceAccount
name: multus
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: multus
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: multus-cni-config
namespace: kube-system
labels:
tier: node
app: multus
data:
# NOTE: If you'd prefer to manually apply a configuration file, you may create one here.
# In the case you'd like to customize the Multus installation, you should change the arguments to the Multus pod
# change the "args" line below from
# - "--multus-conf-file=auto"
# to:
# "--multus-conf-file=/tmp/multus-conf/70-multus.conf"
# Additionally -- you should ensure that the name "70-multus.conf" is the alphabetically first name in the
# /etc/cni/net.d/ directory on each node, otherwise, it will not be used by the Kubelet.
cni-conf.json: |
{
"name": "multus-cni-network",
"type": "multus",
"capabilities": {
"portMappings": true
},
"delegates": [
{
"cniVersion": "0.3.1",
"name": "default-cni-network",
"plugins": [
{
"type": "flannel",
"name": "flannel.1",
"delegate": {
"isDefaultGateway": true,
"hairpinMode": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
],
"kubeconfig": "/var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig"
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-multus-ds-amd64
namespace: kube-system
labels:
tier: node
app: multus
name: multus
spec:
selector:
matchLabels:
name: multus
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
tier: node
app: multus
name: multus
spec:
hostNetwork: true
nodeSelector:
kubernetes.io/arch: amd64
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: multus
containers:
- name: kube-multus
image: docker.io/nfvpe/multus:stable
command: ["/entrypoint.sh"]
args:
- "--multus-conf-file=auto"
- "--cni-version=0.3.1"
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: true
volumeMounts:
- name: cni
mountPath: /host/etc/cni/net.d
- name: cnibin
mountPath: /host/opt/cni/bin
- name: multus-cfg
mountPath: /tmp/multus-conf
volumes:
- name: cni
hostPath:
path: /var/lib/rancher/k3s/agent/etc/cni/net.d/
- name: cnibin
hostPath:
path: /var/lib/rancher/k3s/data/3a24132c2ddedfad7f599daf636840e8a14efd70d4992a1b3900d2617ed89893/bin/
- name: multus-cfg
configMap:
name: multus-cni-config
items:
- key: cni-conf.json
path: 00-multus.conf
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-multus-ds-ppc64le
namespace: kube-system
labels:
tier: node
app: multus
name: multus
spec:
selector:
matchLabels:
name: multus
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
tier: node
app: multus
name: multus
spec:
hostNetwork: true
nodeSelector:
kubernetes.io/arch: ppc64le
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: multus
containers:
- name: kube-multus
# ppc64le support requires multus:latest for now (supported in 3.3 or later).
image: docker.io/nfvpe/multus:stable-ppc64le
command: ["/entrypoint.sh"]
args:
- "--multus-conf-file=auto"
- "--cni-version=0.3.1"
resources:
requests:
cpu: "100m"
memory: "90Mi"
limits:
cpu: "100m"
memory: "90Mi"
securityContext:
privileged: true
volumeMounts:
- name: cni
mountPath: /host/etc/cni/net.d
- name: cnibin
mountPath: /host/opt/cni/bin
- name: multus-cfg
mountPath: /tmp/multus-conf
volumes:
- name: cni
hostPath:
path: /var/lib/rancher/k3s/agent/etc/cni/net.d/
- name: cnibin
hostPath:
path: /var/lib/rancher/k3s/data/3a24132c2ddedfad7f599daf636840e8a14efd70d4992a1b3900d2617ed89893/bin/
- name: multus-cfg
configMap:
name: multus-cni-config
items:
- key: cni-conf.json
path: 00-multus.conf
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-multus-ds-arm64v8
namespace: kube-system
labels:
tier: node
app: multus
name: multus
spec:
selector:
matchLabels:
name: multus
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
tier: node
app: multus
name: multus
spec:
hostNetwork: true
nodeSelector:
kubernetes.io/arch: arm64
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: multus
containers:
- name: kube-multus
image: docker.io/nfvpe/multus:stable-arm64v8
command: ["/entrypoint.sh"]
args:
- "--multus-conf-file=auto"
- "--cni-version=0.3.1"
resources:
requests:
cpu: "100m"
memory: "90Mi"
limits:
cpu: "100m"
memory: "90Mi"
securityContext:
privileged: true
volumeMounts:
- name: cni
mountPath: /host/etc/cni/net.d/
- name: cnibin
mountPath: /host/opt/cni/bin
- name: multus-cfg
mountPath: /tmp/multus-conf
volumes:
- name: cni
hostPath:
path: /var/lib/rancher/k3s/agent/etc/cni/net.d/
- name: cnibin
hostPath:
path: /var/lib/rancher/k3s/data/3a24132c2ddedfad7f599daf636840e8a14efd70d4992a1b3900d2617ed89893/bin/
- name: multus-cfg
configMap:
name: multus-cni-config
items:
- key: cni-conf.json
path: 00-multus.conf
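A couple of sanity checks after applying a manifest like the one above (paths as in the manifest, not authoritative):

```sh
# The daemonset pods should come up on every node:
kubectl -n kube-system get pods -l app=multus -o wide
# With --multus-conf-file=auto, the entrypoint should have generated a
# 00-multus.conf in the host's (k3s-relocated) CNI config dir:
ls /var/lib/rancher/k3s/agent/etc/cni/net.d/
```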
I followed these instructions (mostly) and they worked for me: https://www.reddit.com/r/rancher/comments/ilzdp7/howto_set_up_k8s_or_k3s_so_pods_get_ip_from_lan/ with this addition beforehand (you need the CNI plugins):
Basically, don't let k3s install flannel for you if you want to use Multus; deploy flannel by hand instead.
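A sketch of that approach on a fresh node (the k3s flags are real, but the plugin version and URLs are placeholders to adjust):

```sh
# 1) Install k3s without its bundled flannel:
curl -sfL https://get.k3s.io | sh -s - --flannel-backend=none --disable-network-policy
# 2) Install the reference CNI plugins that flannel and multus delegate to:
sudo mkdir -p /opt/cni/bin
curl -L https://github.com/containernetworking/plugins/releases/download/v0.8.7/cni-plugins-linux-amd64-v0.8.7.tgz \
  | sudo tar -xz -C /opt/cni/bin
# 3) Deploy flannel by hand:
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```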
I had the same issue. Here is the configuration that worked for me:
kind: ConfigMap
apiVersion: v1
metadata:
name: multus-cni-config
namespace: kube-system
labels:
tier: node
app: multus
data:
# NOTE: If you'd prefer to manually apply a configuration file, you may create one here.
# In the case you'd like to customize the Multus installation, you should change the arguments to the Multus pod
# change the "args" line below from
# - "--multus-conf-file=auto"
# to:
# "--multus-conf-file=/tmp/multus-conf/70-multus.conf"
# Additionally -- you should ensure that the name "70-multus.conf" is the alphabetically first name in the
# /etc/cni/net.d/ directory on each node, otherwise, it will not be used by the Kubelet.
cni-conf.json: |
{
"cniVersion": "0.3.1",
"name": "multus-cni-network",
"type": "multus",
"kubeconfig": "/var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig",
"delegates": [
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"forceAddress": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
]
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-multus-ds-amd64
namespace: kube-system
labels:
tier: node
app: multus
name: multus
spec:
selector:
matchLabels:
name: multus
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
tier: node
app: multus
name: multus
spec:
hostNetwork: true
nodeSelector:
kubernetes.io/arch: amd64
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: multus
containers:
- name: kube-multus
image: docker.io/nfvpe/multus:stable
command: ["/entrypoint.sh"]
args:
- "--multus-conf-file=/tmp/multus-conf/00-multus.conf"
- "--cni-version=0.3.1"
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: true
volumeMounts:
- name: cni
mountPath: /host/etc/cni/net.d
- name: cnibin
mountPath: /host/opt/cni/bin
- name: multus-cfg
mountPath: /tmp/multus-conf
volumes:
- name: cni
hostPath:
path: /var/lib/rancher/k3s/agent/etc/cni/net.d
- name: cnibin
hostPath:
path: /var/lib/rancher/k3s/data/current/bin
- name: multus-cfg
configMap:
name: multus-cni-config
items:
- key: cni-conf.json
path: 00-multus.conf
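One detail worth noting in this variant (my observation, not from the original comment): the cnibin hostPath uses k3s's "current" symlink rather than a hard-coded hash, so it keeps working across k3s upgrades:

```sh
# "current" resolves to the active, hash-named data directory:
readlink -f /var/lib/rancher/k3s/data/current
ls /var/lib/rancher/k3s/data/current/bin/ | head
```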
While I'm using Calico (for the moment), I read the comments and updated my flannel config based on them:
---
kind: ConfigMap
apiVersion: v1
metadata:
name: multus-cni-config
namespace: kube-system
labels:
tier: node
app: multus
data:
# NOTE: If you'd prefer to manually apply a configuration file, you may create one here.
# In the case you'd like to customize the Multus installation, you should change the arguments to the Multus pod
# change the "args" line below from
# - "--multus-conf-file=auto"
# to:
# "--multus-conf-file=/tmp/multus-conf/70-multus.conf"
# Additionally -- you should ensure that the name "70-multus.conf" is the alphabetically first name in the
# /etc/cni/net.d/ directory on each node, otherwise, it will not be used by the Kubelet.
cni-conf.json: |
{
"name": "multus-cni-network",
"type": "multus",
"capabilities": {
"portMappings": true
},
"delegates": [
{
"cniVersion": "0.3.1",
"name": "default-cni-network",
"plugins": [
{
"type": "flannel",
"name": "flannel.1",
"delegate": {
"isDefaultGateway": true,
"hairpinMode": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
],
"kubeconfig": "/var/lib/rancher/k3s/agent/etc/cni/net.d/multus.d/multus.kubeconfig"
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-multus-ds-amd64
namespace: kube-system
labels:
tier: node
app: multus
name: multus
spec:
selector:
matchLabels:
name: multus
updateStrategy:
type: RollingUpdate
template:
metadata:
labels:
tier: node
app: multus
name: multus
spec:
hostNetwork: true
nodeSelector:
kubernetes.io/arch: amd64
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: multus
containers:
- name: kube-multus
image: docker.io/nfvpe/multus:stable
command: ["/entrypoint.sh"]
args:
- "--multus-conf-file=auto"
- "--cni-version=0.3.1"
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: true
volumeMounts:
- name: cni
mountPath: /host/etc/cni/net.d
- name: cnibin
mountPath: /host/opt/cni/bin
- name: multus-cfg
mountPath: /tmp/multus-conf
volumes:
- name: cni
hostPath:
path: /var/lib/rancher/k3s/agent/etc/cni/net.d
- name: cnibin
hostPath:
path: /usr/lib/cni
- name: multus-cfg
configMap:
name: multus-cni-config
items:
- key: cni-conf.json
path: 70-multus.conf
Now I have the issue that k3s can't find multus:
Since the host system is Ubuntu, the CNI plugins are located in /usr/lib/cni (hence the cnibin hostPath above).
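One way to confirm and work around that mismatch (a sketch; paths assume a stock k3s install plus the manifest above): k3s looks for CNI binaries in its own data dir, not in /usr/lib/cni:

```sh
# Is the multus binary where k3s actually searches?
ls /var/lib/rancher/k3s/data/current/bin/ | grep -i multus || echo "multus missing"
# If missing, copy (or symlink) it over from the Ubuntu plugin dir:
sudo cp /usr/lib/cni/multus /var/lib/rancher/k3s/data/current/bin/
```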
This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 7 days.
Is there any way to uninstall the default flannel installed by k3s and deploy it by hand, without uninstalling and re-installing the entire k3s cluster?
Hi,
I've installed multus-cni, but it seems I've missed something...
I'm on a 2-master Kubernetes cluster built with k3sup:
$ uname -a
$ kubectl get nodes -o wide
I've cloned the repo with
git clone https://github.com/intel/multus-cni.git
and modified the multus-daemonset.yml args by adding "--multus-log-level=debug".
Run
Take a look into the multus pod...
which tried to find the config... so I've created one:
So the pod created a new config:
Now I've created a new definition:
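The definition itself didn't make it into the issue text; a typical macvlan example (the master interface eth0 and the subnet are placeholders) would look like:

```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth0",
    "mode": "bridge",
    "ipam": { "type": "host-local", "subnet": "192.168.1.0/24" }
  }'
EOF
```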
And created a pod which uses it:
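Also elided; a pod would reference the attachment by name through the k8s.v1.cni.cncf.io/networks annotation, for example (names are hypothetical):

```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: samplepod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: samplepod
    image: alpine
    command: ["sleep", "3600"]
EOF
```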
But querying the pod for its interfaces, still only 1 (2) interfaces are listed:
Nothing is shown in the Kubernetes events, there is no further log output in the multus pods, and there is no /var/log/multus.log file.
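For anyone debugging the same symptom: when Multus actually handles a pod, it records what it attached in the network-status annotation, which is a quick way to tell whether the kubelet is even calling Multus (pod name as in the hypothetical example above):

```sh
kubectl exec samplepod -- ip addr            # expect an extra net1 interface
kubectl get pod samplepod -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'
# If the annotation is missing entirely, the kubelet is still using the old
# default-network config, i.e. the Multus config never became the active one.
```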