Share is mounted multiple times which deletes all data from the share #23
Thanks for reporting, seems like a valid issue. I was not aware that …
For (1), the os.RemoveAll is only executed when …
For (2), yes, this certainly sounds like a bug, I was not aware that …
For (3), this goes beyond my knowledge, can …
For (1), I've updated the pull request and added a …
For (2), I think we should give …
For (3), it was more about an error that happens in the driver implementation after the mount. But it looks like we don't need anything here... so forget about it.
@sascha-egerer while implementing mount I actually looked at the implementation in docker/pkg/mount, and it still appears like … Your pull request with …

Docker Engine doesn't keep track of mounted volumes, so it goes on and issues another Mount call to the driver when a container is created. We can be smart in the driver: we can see if a path has mounts and prevent further duplication. BUT when it comes to unmounting, we can't tell for sure whether any other containers are still running and using this mount.

The best solution I can think of is: we let these duplicate mounts happen, and we delete the mountpoint only once there is finally nothing mounted anymore on that path. This assumes …
Isn't that what the driver should do? I don't think that mounting multiple times is a good idea.
It does solve the problem of data being accidentally deleted. If everything works correctly, the folder should always be empty when it is removed. If not, something went wrong, and that should be visible and not silently ignored.
Does the Docker engine not use its own API? I have really no experience with Docker and Go development, and that's why I can't say anything about that. But if I look at the Docker Volume Plugin API, that looks good to me. There is the …
This holds true only when you don't mount multiple times. It's easy to prevent duplicate mounts; however, it's hard (read: impossible?) to tell when to actually unmount. If two containers use the same volume and you stop one, it will issue an …
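For illustration, a naive in-driver reference counter might look like the sketch below (hypothetical code, not the actual driver implementation). It shows why pairing Mount/Unmount calls seems attractive, but this in-memory state is lost whenever the driver restarts, which is one reason the discussion moves toward inspecting the real mount table instead:

```go
package main

import (
	"fmt"
	"sync"
)

// mountCounter is a hypothetical in-driver reference counter: Mount
// increments the count for a volume path, and Unmount decrements it
// and reports whether the last user is gone (only then would it be
// safe to actually unmount and remove the mountpoint).
type mountCounter struct {
	mu     sync.Mutex
	counts map[string]int
}

func newMountCounter() *mountCounter {
	return &mountCounter{counts: make(map[string]int)}
}

// Mount records one more user of path and returns the new count.
func (m *mountCounter) Mount(path string) int {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.counts[path]++
	return m.counts[path]
}

// Unmount drops one user of path; it returns true when no users remain.
func (m *mountCounter) Unmount(path string) bool {
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.counts[path] > 0 {
		m.counts[path]--
	}
	return m.counts[path] == 0
}

func main() {
	c := newMountCounter()
	c.Mount("/mnt/vol")                // container c1
	c.Mount("/mnt/vol")                // container c2
	fmt.Println(c.Unmount("/mnt/vol")) // prints false: c2 still uses it
	fmt.Println(c.Unmount("/mnt/vol")) // prints true: safe to unmount
}
```

The fatal flaw is that `counts` lives only in the plugin process, so a plugin restart between a Mount and its matching Unmount silently breaks the bookkeeping.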
It appears like it is not... 😦 I will check this closely to find out if Docker can do better here, and open an issue at docker/docker if that's the case.
Indeed, docker is being really dumb here. So I have:
docker run c1: …
docker run c2: …
so at this point, the mount list has: …
docker stop c1, plugin logs: …
but it stops the container regardless.
and the …
So at this /Unmount operation, apparently what I need to do is: …
In this solution, I will let duplicate mounts happen, because otherwise there is no way of telling when it is okay to Unmount.
I just submitted a patch implementing the logic I described above: I don't prevent duplicate mount entries, as they're helpful in determining whether the path still has more active mounts, and while unmounting I remove the mountpoint (with …).

I think it solves your problem here based on my testing. I will go ahead and merge that; please try downloading v0.2 in a minute.
@sascha-egerer did you have a chance to try out v0.2? It should address this issue and other problems caused by it.
@ahmetalpbalkan I've now updated to v0.2 and could not find any problems so far. But I'm still very unhappy with the multiple mounts. If I start 100 containers, I have 100 active mounts, which really does not make sense. I would still try to prevent that...
@sascha-egerer as I said, apparently there's no way to tell if it's okay to Unmount that one single entry, because other containers might still be using it. It's a docker-engine issue. I'll open an issue over there soon.
As described in #19, shares are sometimes mounted multiple times. There is a big problem with that, because the unmount function removes the mount path. That means the volume is only unmounted once, and then the folder is removed recursively, which will remove everything from the share if it is still mounted. I think there are these tasks: …
azurefile-dockervolumedriver/driver.go, line 158 in f5cf9c9