Summary
When Octant is deployed in a Kubernetes cluster as a pod, it leaks the cluster's admin.conf, and a malicious pod or an exposed worker node can gain cluster administrator privileges.
What steps did you take and what happened:
1. Deploy Octant as a pod:
Because Octant does not provide an official guide for deploying Octant as a pod in a Kubernetes cluster, I deployed Octant as a pod following the antrea.io guide (https://antrea.io/docs/v1.9.0/docs/octant-plugin-installation/):
kubectl create secret generic octant-kubeconfig --from-file=admin.conf=/home/younaman/.kube/conf -n kube-system
kubectl apply -f https://github.com/antrea-io/antrea/blob/main/build/yamls/antrea-octant.yml
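As a quick sanity check (a sketch assuming the secret was created with the admin.conf key exactly as above), anyone who can read secrets in kube-system can dump the kubeconfig directly:
kubectl get secret octant-kubeconfig -n kube-system \
  -o jsonpath='{.data.admin\.conf}' | base64 -d | head -n 5
# prints the opening of the cluster-admin kubeconfig (apiVersion, clusters, certificate-authority-data, ...)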
2. Access the Octant web service from a pod/worker node:
Octant automatically loads admin.conf when it starts up and serves the UI with the kubernetes-admin context.
A malicious pod can easily obtain the "data" of the secret named "octant-kubeconfig" under "Config and Storage/Secrets" in the dashboard UI, which is my cluster's admin.conf, and can then do whatever it likes to the whole cluster.
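For illustration, here is a minimal sketch of what an attacker can do with the copied value; STOLEN_B64 is a hypothetical shell variable holding the base64 "data" string taken from the dashboard:
# decode the stolen secret value into a working cluster-admin kubeconfig
echo "$STOLEN_B64" | base64 -d > /tmp/admin.conf
# confirm unrestricted access; this prints "yes" under the kubernetes-admin context
kubectl --kubeconfig /tmp/admin.conf auth can-i '*' '*' --all-namespaces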
What did you expect to happen:
It looks like Octant is intended to run as a desktop/client application, and it has no access control at all when deployed inside a cluster. In my opinion, Octant should enable an authentication feature (i.e., ask for a token or kubeconfig when a user accesses the dashboard UI) and should not automatically use the cluster's admin.conf when serving the dashboard UI.
Anything else you would like to add:
As mitigation, I tried to reduce the risk with these steps:
1. Do not use admin.conf as the key when creating the secret for Octant:
This prevents Octant from finding /kube/admin.conf via KUBECONFIG on startup, so Octant cannot create a context automatically. As a result, Octant forces the user to upload their own admin.conf on first access to the web service and creates the context from the uploaded file.
To be more specific, this triggers func (l *LoadingManager) UploadKubeConfig(), which creates a temporary file named kubeconfigxxxxxxxx under the /tmp/octant directory; Octant then creates the context from this temporary file.
For example:
kubectl create secret generic octant-kubeconfig --from-file=/home/younaman/.kube/conf -n kube-system
kubectl apply -f https://github.com/antrea-io/antrea/blob/main/build/yamls/antrea-octant.yml
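To confirm the difference (assuming the commands above), the secret's only data key is now conf rather than admin.conf, so there is no /kube/admin.conf for Octant to auto-load:
kubectl get secret octant-kubeconfig -n kube-system -o jsonpath='{.data}'
# shows a single "conf" key; with no admin.conf key, the mounted file name no longer matches the KUBECONFIG path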
2. Restart the Octant pod after each access to the dashboard UI. That way, Octant cannot find /kube/admin.conf on startup and forces the user to upload their own admin.conf, creating a new /tmp/octant/kubeconfigxxxxxxxx temporary file (see the command sketch below).
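A sketch of that restart step, assuming the deployment from the Antrea manifest above is named antrea-octant:
# recreate the pod so the previously uploaded /tmp/octant/kubeconfigxxxxxxxx is discarded
kubectl rollout restart deployment/antrea-octant -n kube-system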
Please note that this "mitigation" requires a lot of manual steps. To solve this problem fundamentally, Octant may need to add authentication/access-control restrictions on the dashboard UI.
Environment:
octant version:
Version: 0.24.0
Git commit: 5a86489
Built: 2021-09-09T01:54:00Z
kubectl version:
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-06-15T14:22:29Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-06-15T14:15:38Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
OS (macOS 10.15, Windows 10, Ubuntu 19.10 etc):
A three-node Kubernetes cluster; each node runs Ubuntu 20.04 LTS.
By the way, I reported this vulnerability by mailing [email protected] three weeks ago, but there has been no response, so I am reporting the issue through GitHub.
As far as I know, VMware Tanzu / Octant does not officially support deploying to Kubernetes. In that regard, I'm not sure your issue will be addressed, because it concerns an unsupported deployment scenario.
Nonetheless, I have my own repo that provides a Helm chart to deploy Octant on a Kubernetes cluster. The Helm chart is configurable, so you can customize or provide your own RBAC rules (excluding get on secrets, for example); see the sketch below.
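For example, a minimal read-only ClusterRole along these lines could back Octant's ServiceAccount instead of cluster-admin. This is an illustrative sketch (name hypothetical, resource list abbreviated), not the chart's actual defaults, and it deliberately grants nothing on secrets:
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: octant-readonly   # hypothetical name
rules:
# read-only access to common workload resources; "secrets" is intentionally absent
- apiGroups: ["", "apps", "networking.k8s.io"]
  resources: ["pods", "services", "configmaps", "namespaces", "deployments", "replicasets", "ingresses"]
  verbs: ["get", "list", "watch"]
EOF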
@aleveille Thanks for your reply! On your first point: Octant itself does not officially support deploying to Kubernetes; however, as I noted above (https://antrea.io/docs/v1.9.0/docs/octant-plugin-installation/), third-party apps such as Antrea officially deploy Octant as a dashboard UI in a cluster. So this is not merely an "unsupported deployment scenario", and in my opinion Octant and those third-party apps should account for it.
On your second point: that is a good chart! I will try it in my local environment and let you know whether it addresses my concerns :)
Besides, are there any official developers or maintainers who can give us (me and @aleveille) more comments?