Fail fast on errors during mount #22
Comments
How do you start the container? Are you sure that the gluster volume itself is started? Now that I use the glusterfs CLI directly, I can detect when the background process fails, but once the volume is mounted, Docker volume drivers have no way to inform the Docker host that the mount point has become unavailable.
It's a Docker stack (with Compose) and the plugin is also a container. When I inspect the container, it looks like it mounted the gluster volume. However, when I inspect the volume, I see that the mountpoint doesn't exist.
Docker Container:
As for #18, it is mostly about resolving the hostname of the server (and bricks) before creation, plus various small pre-checks to limit configuration-related errors later at mount time.
Otherwise, it does seem to be using the plugin (I suppose you use a custom alias, glusterfs). I will take a closer look at it.
Do you need more info to track down the issue? Currently we can't use the plugin because of it. Config.json from
ls -la /var/lib/docker-volumes/gluster/portainer_portainer-data/
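For anyone debugging the same symptom, a minimal sketch of how to check from the host whether the mount actually happened; the volume name and state directory are taken from the command above, and the `glusterfs` driver alias is an assumption based on this thread:

```sh
# Ask Docker where it believes the volume is mounted.
docker volume inspect --format '{{ .Mountpoint }}' portainer_portainer-data

# Verify that the reported path is a real mount point, not just an empty directory.
mountpoint /var/lib/docker-volumes/gluster/portainer_portainer-data \
  || echo "directory exists but nothing is mounted on it"
```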
I am trying to set up tests in Travis to deliver a new image based on the glusterfs CLI. That could maybe fix the error, since the command can fail directly, but I will also try to improve CLI handling by keeping the process monitored by the driver and logging its output directly via the plugin.
A new version should be released soon. It uses the gluster CLI (the plugin hadn't been updated since there was no new tag), which may surface more error cases at start-up where the mount command would previously have detached itself. I will definitely keep improving process handling, but this may already fix your problem.
Alright, I will try that out later today and let you know about the outcome.
We have a hyper-converged system that runs Gluster and Swarm on the same node.
So we are mounting the volumes like this:
"voluri": "localhost:gv-portainer"
When there is an error in the Gluster cluster, e.g. the Gluster service isn't started, it is still possible to spin up a container that mounts a Gluster volume. It is impossible to know whether the volume was really mounted or not, because in both cases the application runs. One only notices in the application itself that the data is missing.
It would be a good idea to fail fast if a volume can't be mounted, so the container never comes up and the problem can be addressed accordingly.
I am not sure if that is related to #18.
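Until the plugin can fail fast on its own, one possible stop-gap (purely a sketch, not part of this plugin) is to make the container refuse to start when the expected mount is missing, e.g. with an entrypoint guard; `/data` here is a hypothetical mount path:

```sh
#!/bin/sh
# Hypothetical entrypoint wrapper: abort if /data is not a real mount point,
# so the service exits instead of silently running against an empty directory.
set -e
if ! mountpoint -q /data; then
  echo "expected /data to be a glusterfs mount, refusing to start" >&2
  exit 1
fi
exec "$@"
```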