This repository has been archived by the owner on Oct 21, 2020. It is now read-only.

The container local-volume-provisioner run well, but it doesn't autocreate pv. #713

Closed
SFHfeihong opened this issue Apr 4, 2018 · 11 comments

@SFHfeihong

Kubernetes version: 1.9.3
Docker version: 17.03.2-ce
Local-volume-provisioner version: v2.0.0

The config files are as follow:
configmap file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: default
data:
  storageClassMap: |
    local-storage:
      hostDir: /mnt/disks/vol1
      mountDir: /mnt/disks/vol1

local-volume-provisioner-create.yaml

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: local-volume-provisioner
  namespace: default
  labels:
    app: local-volume-provisioner
spec:
  selector:
    matchLabels:
      app: local-volume-provisioner
  template:
    metadata:
      labels:
        app: local-volume-provisioner
    spec:
      serviceAccountName: local-storage-admin
      containers:
      - image: "quay.io/external_storage/local-volume-provisioner:v2.0.0"
        imagePullPolicy: "IfNotPresent"
        name: provisioner
        securityContext:
          privileged: true
        env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - mountPath: /etc/provisioner/config
          name: provisioner-config
          readOnly: true
        - mountPath: /mnt/disks/vol1
          name: local-fast
      volumes:
      - name: provisioner-config
        configMap:
          name: local-provisioner-config
      - name: local-fast
        hostPath:
          path: /mnt/disks/vol1

The serviceaccount, storageclass, clusterroles and clusterRoleBinding have been created.
The status of containers is running.
NAME                             READY   STATUS    RESTARTS   AGE
local-volume-provisioner-6r95m   1/1     Running   0          23m
local-volume-provisioner-rq5gs   1/1     Running   0          23m

The mount dir on each node:
Filesystem   Size  Used  Avail  Use%  Mounted on
vol1         924M     0   924M    0%  /mnt/disks/vol1

But when I run kubectl get pv, no resources are listed. I don't know what's wrong. Is there an error in the config file?

@msau42
Contributor

msau42 commented Apr 4, 2018

You should use /mnt/disks as the discovery directory, not the full volume path. The provisioner will detect all the mount points under the discovery directory.
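For reference, with the advice above applied, the ConfigMap would look roughly like this (a sketch reusing the names from the original config; the provisioner then discovers each mount point under /mnt/disks, such as /mnt/disks/vol1, and creates one PV per mount):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: default
data:
  storageClassMap: |
    local-storage:
      hostDir: /mnt/disks    # discovery directory, not an individual volume
      mountDir: /mnt/disks   # path where the DaemonSet mounts the same directory
```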

@SFHfeihong
Author

SFHfeihong commented Apr 8, 2018

@msau42 I have used /mnt/disks as the discovery directory, but it still doesn't autocreate PVs.

The container error log is as below:

Error creating PV "local-pv-59de7908" for volume at "/mnt/disks/vol3": PersistentVolume "local-pv-59de7908" is invalid: [metadata.annotations: Forbidden: Storage node affinity is disabled by feature-gate, spec.local: Forbidden: Local volumes are disabled by feature-gate]

@wenlxie
Contributor

wenlxie commented Apr 8, 2018

@SFHfeihong You need to enable the local volume feature gate (PersistentLocalVolumes=true) on your cluster.
link: https://kubernetes.io/docs/reference/feature-gates/
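On a self-managed 1.9 cluster this typically means adding a flag like the one below to the relevant control-plane components and the kubelet (illustrative only; the exact components that need it and how flags are passed depend on your install method, see the feature-gates page linked above):

```
--feature-gates=PersistentLocalVolumes=true
```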

@SFHfeihong
Author

@wenlxie Thanks. The problem has been solved.

@ianchakeres
Contributor

/close

@AmreeshTyagi

I am still facing a similar problem. Everything seems fine in my case: the pod is up and running, and the mount directories exist on the nodes, but PVs are not created automatically.

As per provisioner:v2.1.0 pod logs:
I1116 11:46:37.235094 1 common.go:259] StorageClass "local-scsi" configured with MountDir "/mnt/disks", HostDir "/mnt/disks", BlockCleanerCommand ["/scripts/quick_reset.sh"]
I1116 11:46:37.235681 1 main.go:42] Configuration parsing has been completed, ready to run...
I1116 11:46:37.236266 1 common.go:315] Creating client using in-cluster config
I1116 11:46:37.266784 1 main.go:52] Starting controller
I1116 11:46:37.266895 1 controller.go:42] Initializing volume cache
I1116 11:46:37.271524 1 populator.go:85] Starting Informer controller
I1116 11:46:37.271573 1 populator.go:89] Waiting for Informer initial sync
I1116 11:46:38.272051 1 controller.go:72] Controller started

This issue is closed, but I want to reopen it.

@msau42
Contributor

msau42 commented Nov 16, 2018

Can you list the paths of your mount points and also show your daemonset spec?

@AmreeshTyagi

One more piece of information: I am running Kubernetes v1.11.3 on Rancher 2.0.
I have also enabled "feature-gates": "PersistentLocalVolumes=true, VolumeScheduling=true, MountPropagation=true"

local-storage-provisioner.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: mysql
data:
  storageClassMap: |
    local-scsi:
      hostDir: /mnt/disks
      mountDir: /mnt/disks
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: local-volume-provisioner
  namespace: mysql
  labels:
    app: local-volume-provisioner
spec:
  selector:
    matchLabels:
      app: local-volume-provisioner
  template:
    metadata:
      labels:
        app: local-volume-provisioner
    spec:
      serviceAccountName: mysql-sa
      containers:
      - image: "quay.io/external_storage/local-volume-provisioner:v2.1.0"
        imagePullPolicy: "Always"
        name: provisioner
        securityContext:
          privileged: true
        env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        volumeMounts:
        - mountPath: /etc/provisioner/config
          name: provisioner-config
          readOnly: true
        - mountPath: /mnt/disks
          name: local-scsi
      volumes:
      - name: provisioner-config
        configMap:
          name: local-provisioner-config
      - name: local-scsi
        hostPath:
          path: /mnt/disks

local-storage-rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mysql-sa
  namespace: mysql
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: mysql-cr
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "watch", "list"]
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get", "watch", "list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: mysql-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: mysql-cr
subjects:
- kind: ServiceAccount
  name: mysql-sa
  namespace: mysql

@msau42
Contributor

msau42 commented Nov 19, 2018

@AmreeshTyagi the configuration looks correct to me. Where are you creating the mount points? Can you show the mount points on your host system?

@AmreeshTyagi

Not sure if you are asking for showmount or fstab on the worker nodes.
Here is my fstab from a worker node. I tried showmount, but it gave the error clnt_create: RPC: Program not registered, probably because the nfs service is not running. I don't want to use NFS, so that service is down.

amreesh@test:~$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/dom--ubuntu--template16--vg-root / ext4 errors=remount-ro 0 1
# /boot was on /dev/sda1 during installation
UUID=6f1f42b8-829b-4383-a98b-529459877ccf /boot ext2 defaults 0 2
/dev/mapper/dom--ubuntu--template16--vg-swap_1 none swap sw 0 0
/dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0

I also executed the following commands to check the mount binds created by the provisioner on the worker host machine.
sudo docker ps |grep local
68fc0305a04f quay.io/external_storage/local-volume-provisioner "/local-provisioner" 2 days ago Up 2 days

sudo docker inspect 68fc0305a04f

"HostConfig": {
            "Binds": [
                "/var/lib/kubelet/pods/09d46dc9-eaed-11e8-b0ed-005056b146f6/volumes/kubernetes.io~configmap/provisioner-config:/etc/provisioner/config:ro",
                "/mnt/disks:/mnt/disks",
                "/var/lib/kubelet/pods/09d46dc9-eaed-11e8-b0ed-005056b146f6/volumes/kubernetes.io~secret/mysql-sa-token-tz7ns:/var/run/secrets/kubernetes.io/serviceaccount:ro",
                "/var/lib/kubelet/pods/09d46dc9-eaed-11e8-b0ed-005056b146f6/etc-hosts:/etc/hosts",
                "/var/lib/kubelet/pods/09d46dc9-eaed-11e8-b0ed-005056b146f6/containers/provisioner/2f472bb6:/dev/termination-log"
            ],

 "Mounts": [
        {
            "Type": "bind",
            "Source": "/var/lib/kubelet/pods/09d46dc9-eaed-11e8-b0ed-005056b146f6/volumes/kubernetes.io~configmap/provisioner-config",
            "Destination": "/etc/provisioner/config",
            "Mode": "ro",
            "RW": false,
            "Propagation": "rprivate"
        },
        {
            "Type": "bind",
            "Source": "/mnt/disks",
            "Destination": "/mnt/disks",
            "Mode": "",
            "RW": true,
            "Propagation": "rprivate"
        },
        {
            "Type": "bind",
            "Source": "/var/lib/kubelet/pods/09d46dc9-eaed-11e8-b0ed-005056b146f6/volumes/kubernetes.io~secret/mysql-sa-token-tz7ns",
            "Destination": "/var/run/secrets/kubernetes.io/serviceaccount",
            "Mode": "ro",
            "RW": false,
            "Propagation": "rprivate"
        },
        {
            "Type": "bind",
            "Source": "/var/lib/kubelet/pods/09d46dc9-eaed-11e8-b0ed-005056b146f6/etc-hosts",
            "Destination": "/etc/hosts",
            "Mode": "",
            "RW": true,
            "Propagation": "rprivate"
        },
        {
            "Type": "bind",
            "Source": "/var/lib/kubelet/pods/09d46dc9-eaed-11e8-b0ed-005056b146f6/containers/provisioner/2f472bb6",
            "Destination": "/dev/termination-log",
            "Mode": "",
            "RW": true,
            "Propagation": "rprivate"
        }
    ],

@msau42
Contributor

msau42 commented Nov 20, 2018

The purpose of the local static provisioner is to detect local precreated mount points, expose them as PVs, and manage their lifecycle. However, it does not dynamically create volumes. If you're looking for a simple dynamic provisioner that can create directories out of a shared volume, then you may want to take a look at this: https://github.com/rancher/local-path-provisioner
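In other words, the mount points must exist before the provisioner can pick them up. A sketch of what that looks like on a node (the mount commands are illustrative and require root; /tmp/disks below is just a local stand-in used to demonstrate the discovery idea):

```shell
# On a real node, each volume is pre-mounted under the discovery
# directory before the provisioner runs, e.g.:
#   sudo mkdir -p /mnt/disks/vol1
#   sudo mount -t ext4 /dev/sdb1 /mnt/disks/vol1
# (or `mount --bind` when carving directories out of a shared disk).
#
# Local demo of the discovery step: each entry directly under the
# discovery directory is a PV candidate.
mkdir -p /tmp/disks/vol1 /tmp/disks/vol2
ls -1 /tmp/disks
```

Plain subdirectories without a filesystem mounted on them are not discovered by default; that is exactly the gap the rancher/local-path-provisioner linked above fills.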
