Cannot exec into a privileged container - glusterfs #2200
Comments
FYI: I've managed to work around this for now by removing the DenyEscalatingExec flag from the apiserver yaml.
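For reference, a minimal sketch of that workaround, assuming the API server runs as a static pod whose manifest sits at /etc/kubernetes/manifests/kube-apiserver.yaml on each master (the path is an assumption; verify it on your cluster):

```sh
# On each master node: drop DenyEscalatingExec from the admission
# control list in the static pod manifest (path assumed above)
sudo sed -i 's/DenyEscalatingExec,//' /etc/kubernetes/manifests/kube-apiserver.yaml
# The kubelet watches the manifest directory and restarts the
# kube-apiserver static pod automatically once the file changes
```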
@zrahui you can customize/override the admission-controller flags passed to the API server using the `apiServerConfig` section under `kubernetesConfig` in the apimodel.
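A sketch of what that override could look like in the apimodel, assuming the acs-engine version in use supports `apiServerConfig` under `kubernetesConfig` (the admission-control list below is illustrative; match it to your Kubernetes version, minus DenyEscalatingExec):

```json
{
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "apiServerConfig": {
          "--admission-control": "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota"
        }
      }
    }
  }
}
```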
Thanks @pidah that's a more elegant solution 👍
@zrahui this requires a new deployment. How can I fix it without creating a new cluster?
@jalberto you can only change it in place on the master nodes.
Is this a request for help?: Yes
Is this an ISSUE or FEATURE REQUEST? (choose one): Issue
What version of acs-engine?: 0.12.5
Orchestrator and version (e.g. Kubernetes, DC/OS, Swarm): Kubernetes 1.9.1
What happened:
Being unable to exec into the glusterfs pods is preventing the installation of GlusterFS into an acs-engine-provisioned cluster. This used to work, but I believe #1961 broke it.
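The failure looks roughly like this (the pod name is illustrative; the Forbidden message is what the DenyEscalatingExec admission controller emits):

```sh
$ kubectl exec -it glusterfs-x7k2p -- gluster peer status
Error from server (Forbidden): pods "glusterfs-x7k2p" is forbidden:
cannot exec into or attach to a privileged container
```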
What you expected to happen:
Successfully install GlusterFS using the gluster-kubernetes repo.
How to reproduce it (as minimally and precisely as possible):
Run the ./gk-deploy script from the gluster-kubernetes repo to provision GlusterFS into the cluster, roughly as shown below.
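For concreteness, the reproduction is approximately the following, per the gluster-kubernetes README (the topology file and flag are the repo's documented defaults and may differ by version):

```sh
git clone https://github.com/gluster/gluster-kubernetes.git
cd gluster-kubernetes/deploy
# -g asks gk-deploy to deploy the GlusterFS pods itself (as a DaemonSet);
# topology.json describes the nodes and block devices to use
./gk-deploy -g topology.json
```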
Anything else we need to know:
Ideally, I'd like to know whether this check can be disabled for specific pods so that GlusterFS can be deployed. Any help is much appreciated.
We've also had issues with using hostNetwork in that same repo, which seems to break DNS in an Azure CNI-enabled cluster, but that's another issue altogether.