VolumeDriver.Mount: exit status 1 #34
Comments
I haven't tested running it in parallel with the original plugin, but it should work. Can you test a basic command? Something like:
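A minimal sketch of such a basic test, based on the plugin README (server and volume names are placeholders, not values from this thread):

```
# Create a docker volume backed by the plugin, pointing at an existing Gluster volume.
docker volume create --driver sapk/plugin-gluster \
  --opt voluri="gluster-node1,gluster-node2:myglustervolume" \
  test

# Try to mount it from a plain container on the same node.
docker run -v test:/mnt --rm -ti alpine ls /mnt
```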
For debugging, there is a plugin-wide config: https://github.com/sapk/docker-volume-gluster#additionnal-docker-plugin-config
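In practice that means something like the following (a sketch assuming the plugin exposes a DEBUG setting as that README section describes; a managed plugin has to be disabled before its settings can be changed):

```
docker plugin disable sapk/plugin-gluster
docker plugin set sapk/plugin-gluster DEBUG=1
docker plugin enable sapk/plugin-gluster
```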
From the host that created the docker volume:
At this point I can see the volume. On a worker node:
Are those plugin instructions per Docker engine? I.e. would I have to run them on each worker?
Unfortunately, yes.
Having launched the container on the manager node where I created the volume, and having failed to launch the same on a worker node, I have set debug (I had to force-disable your plugin; Docker thinks it is in use on all workers). I have re-established the plugin.
Launched the stack and the error now shows:
In more detail on the worker:
OK, it seems to panic at https://github.com/sapk/docker-volume-gluster/blob/master/gluster/driver/tools.go#L74. At first glance, I don't know why, since the volume URI seems good.
Hi - have you had any further thoughts on this? Keen to get back to testing things internally ASAP.
I have the same issue: a 3-node (all managers) swarm cluster. It doesn't seem to work with

Any way I could help you debug this issue?
For the plugin with docker-machine and the virtualbox driver, it is important to allocate more than 1 GB of RAM, for example: docker-machine create -d virtualbox --virtualbox-memory 2048. GlusterFS specifies in its docs that for VMs the RAM must be higher than 1 GB. Then you can use the plugin with compose or with docker run, and it works.
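For clarity, the flag is --virtualbox-memory; a minimal sketch (the machine name is illustrative):

```
# Create a VirtualBox docker-machine VM with 2 GB of RAM,
# above the 1 GB minimum GlusterFS recommends for VMs.
docker-machine create -d virtualbox --virtualbox-memory 2048 gluster-node1
```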
I can confirm the error. Inspecting the newly created volume shows:
[
{
"CreatedAt": "0001-01-01T00:00:00Z",
"Driver": "glusterfs:latest",
"Labels": {
"com.docker.stack.namespace": "test"
},
"Mountpoint": "/var/lib/docker-volumes/gluster/test_gfs",
"Name": "test_gfs",
"Options": {
"voluri": "dh1.lei01,dh2.lei01:lei01"
},
"Scope": "global",
"Status": {
"TODO": "List"
}
}
]

Then using the newly created volume:

$ docker run -v test_gfs:/data --rm -ti alpine sh
docker: Error response from daemon: error while mounting volume '/var/lib/docker/plugins/c546ed4c7d978dfc03dbda98a451bbd17d5713e467817999feb7956b6585f296/rootfs': VolumeDriver.Mount: exit status 107.

Logs
Mounting the volume with

UPDATE 1

does not exist. There is no

Any idea what could help, or is more information needed?
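For reference, one way to check outside Docker that the Gluster volume is actually mountable from the node is a plain FUSE mount (a sketch assuming glusterfs-client is installed on the host, using the server/volume names from the inspect output above):

```
sudo mkdir -p /mnt/gfs-test
sudo mount -t glusterfs dh1.lei01:/lei01 /mnt/gfs-test
ls /mnt/gfs-test
sudo umount /mnt/gfs-test
```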
Me too. Any workaround for this?
Description
Six node swarm. Three are Swarm managers / Gluster servers, the other three are workers. They can all resolve each other and ping each other, all behind the same network switch.
We currently have the original glusterfs volume plugin installed on the workers. We want to migrate to your plugin, so I have executed sudo docker plugin install sapk/plugin-gluster on each worker. I created a new Gluster volume, then a new docker volume using it and your plugin, without incident.

I tried launching a stack using a docker-compose.yaml file:
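As a rough sketch of that kind of stack definition (service, image, volume name and voluri are placeholders, not the reporter's actual values):

```
# Illustrative only: a compose file with a top-level volume backed by the plugin,
# deployed as a swarm stack.
cat > docker-compose.yaml <<'EOF'
version: "3.4"
services:
  app:
    image: alpine
    command: sleep 3600
    volumes:
      - test_gfs:/data
volumes:
  test_gfs:
    driver: sapk/plugin-gluster
    driver_opts:
      voluri: "gluster-node1,gluster-node2:myglustervolume"
EOF

docker stack deploy -c docker-compose.yaml test
```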
This "landed" on virt-b, logs below. An earlier attempt, based on a top-level volumes definition and before using
docker volume create ...
also failed with the exact same error.In short, neither by creating the docker volume in advance nor allowing docker swarm to auto-create it worked for me. The worker can ping the gluster declared node(s) just fine.
It is unclear how to debug this. I tried adding a
debug: 1
to the volume definition without change observed.Could the fact that these workers already use Gluster volumes via the original
glusterfs
plugin be preventing your plugin's use? At a loss to explain otherwise since we have it working with the origin plugin fine.Logs
Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52.069898516Z" level=error msg="fatal task error" error="starting container failed: error while mounting volume '/var/lib/docker/plugins/ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974/rootfs': VolumeDriver.Mount: exit status 1" module="node/agent/taskmanager" node.id=t72pzcfvekpi7zayyi1su185y service.id=cshb1o1btdblbjwko0xup5fno task.id=svlw41f8r26bck88233qxbdnb
Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52Z" level=info msg="2018/01/11 16:03:52 Entering go-plugins-helpers capabilitiesPath" plugin=ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974
Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52Z" level=info msg="2018/01/11 16:03:52 Entering go-plugins-helpers capabilitiesPath" plugin=ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974
Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52Z" level=info msg="2018/01/11 16:03:52 Entering go-plugins-helpers getPath" plugin=ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974
Jan 11 16:03:52 virt-b kernel: [1354011.732285] aufs au_opts_verify:1597:dockerd[4919]: dirperm1 breaks the protection by the permission bits on the lower branch
Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52Z" level=info msg="2018/01/11 16:03:52 Entering go-plugins-helpers capabilitiesPath" plugin=ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974
Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52Z" level=info msg="2018/01/11 16:03:52 Entering go-plugins-helpers capabilitiesPath" plugin=ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974
Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52Z" level=info msg="2018/01/11 16:03:52 Entering go-plugins-helpers getPath" plugin=ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974
Jan 11 16:03:52 virt-b kernel: [1354011.759236] aufs au_opts_verify:1597:dockerd[4919]: dirperm1 breaks the protection by the permission bits on the lower branch
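For reference, on a systemd host the dockerd and managed-plugin messages above can be followed live with something like (a generic sketch, not specific to this plugin):

```
# Tail the Docker engine journal; managed-plugin stdout/stderr lines show up here too.
journalctl -u docker.service -f
```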