Security

Securing your Kubernetes cluster is a large topic. Here we discuss the Kubernetes-native means of securing your cluster.

The primary K8S capabilities for securing your workloads are:

  • RBAC
  • Admission Controllers
  • Service Accounts
  • Pod Security Policies
  • Network Security Policies
  • pod.spec.securityContext

Of course, there are many other considerations when securing your cluster depending on how it is deployed; for example, key management, OS patching, operational processes, and many more. Here we only cover the main K8S considerations for securing your cluster.

RBAC

If you need to provide other users or roles restricted access to the K8S cluster (for example, "QA" roles with read/write access only to their environment, and "Dev" roles with read/write access only to theirs), then RBAC is the means to do so going forward.

Authentication and Authorization in K8S

K8S supports a very flexible authentication/authorization model that is extensible through plugins and modules. There are different authentication and authorization modules to support different requirements, from simple username/password to x509 certificates to OpenID Connect with an external identity provider, and more.

You can configure multiple authentication modules to support different authentication scenarios within your cluster. The enabled authentication modules are invoked in turn, and evaluation short-circuits as soon as one module authenticates the request. If none of the modules can authenticate the request, access is denied.

Here is an excellent diagram from Manning's Kubernetes in Action book that describes this concept.

(figure: access-control-overview)

See Controlling access to the K8S API to set up HTTP Basic Auth authentication, and see Azure Active Directory plugin for client authentication with OIDC to set up authentication with your Azure AD tenant.

Admission Controllers

After a request to the API server is authenticated and authorized, multiple admission controllers kick in. There are two types of admission controllers: validating and mutating. Validating admission controllers validate the resource to ensure it meets some policy before it is persisted in etcd. Mutating admission controllers mutate the resource to ensure some policy is met before it is persisted in etcd. Note that an admission controller can be both validating and mutating, and any admission controller can also reject a resource. For example, PodSecurityPolicy is enforced by the PodSecurityPolicy admission controller (see below). For the full list of built-in admission controllers (which need to be enabled/disabled by the cluster admin), see this reference: https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/

Note, there are also admission controller webhooks. These are essentially admission controllers that delegate the responsibility of validating and mutating to an external service. As of v1.9, there are also dynamic admission controllers, which enable you to implement your own admission controllers and deploy them separately from K8S. An example is Kelsey Hightower's Grafeas example, which leverages the Grafeas metadata API to verify that your images are properly signed before the pods are deployed.
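
To give a rough idea of how such a webhook is registered, here is a minimal sketch of a ValidatingWebhookConfiguration. The service name (image-policy-webhook), namespace (security), and path are hypothetical placeholders, and on older clusters (pre-1.16) the apiVersion would be admissionregistration.k8s.io/v1beta1.

#A minimal sketch of registering a validating admission webhook.
#Service name, namespace, and path are hypothetical placeholders.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: image-policy-webhook
webhooks:
- name: image-policy.example.com
  #Only intercept pod creation
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: security
      name: image-policy-webhook
      path: /validate
    #caBundle: <base64-encoded CA cert for the webhook's serving certificate>
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail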

Below we show how to set up authentication and RBAC authorization using x509 certificates.

Authenticating using x509 certs

MAKE SURE THE PROPER APIVERSION IS SET IN YOUR RESOURCES

The apiVersion for RBAC has changed since v1.7, when it was still beta. Make sure you have set the proper version in the following files:

  • production-role.yml
  • production-rolebinding.yml
  • qa-role.yml
  • qa-rolebinding.yml
#for v1.8.0 and later
apiVersion: rbac.authorization.k8s.io/v1

#for v1.7.x
apiVersion: rbac.authorization.k8s.io/v1beta1

Once the proper apiVersion has been set, run the steps below; scripts that automate this are in the certs directory.

#1. Create credentials
openssl genrsa -out user.key 2048
openssl req -new -key user.key -out user.csr -subj "/CN=user/O=organization"
#CLUSTER_CA_LOCATION for minikube would be $HOME/.minikube
openssl x509 -req -in user.csr -CA CLUSTER_CA_LOCATION/ca.crt -CAkey CLUSTER_CA_LOCATION/ca.key -CAcreateserial -out user.crt -days 500

#2.  Create a new cluster/user context for kubectl using the user.crt/key that was just created
kubectl config set-credentials user --client-certificate=./user.crt  --client-key=./user.key
kubectl config set-context user-context --cluster=cluster --user=user

#3.  Set the current context to the user-context
kubectl config use-context user-context

#4.  Now try to do something
kubectl get pods
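
At this point the user can authenticate but has no permissions, so most commands will be forbidden until you bind the user to a role. As a rough sketch of what qa-role.yml and qa-rolebinding.yml might contain (the namespace, resource list, and names below are assumptions; adjust them to match the CN/O set in the certificate above):

#A minimal sketch of a namespaced Role and RoleBinding for the cert user.
#Namespace, names, and rules are assumptions, not the repo's actual files.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: qa
  name: qa-user
rules:
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: qa
  name: qa-user-binding
subjects:
#The user name maps to the CN of the client certificate
- kind: User
  name: user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: qa-user
  apiGroup: rbac.authorization.k8s.io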

Finding out what you can do

To find what you can do:

kubectl auth can-i verb resource

#can i create pods?
kubectl auth can-i create pods

#can i create pods in the kube-system namespace?
kubectl auth can-i create pods --namespace kube-system

#can the default service account in kube-system get configmaps in that namespace?
kubectl auth can-i get configmaps --namespace kube-system --as system:serviceaccount:kube-system:default

Service Accounts

Your Pods can also have identity. This is necessary if your containers need to access the API server and you want to control what that container process can do via RBAC. This identity is represented by the ServiceAccount resource. ServiceAccounts are namespaced resources, hence the full name for a given service account is system:serviceaccount:NAMESPACE:NAME. When the Pod executes, the containers in the Pod execute as the identity represented by the ServiceAccount. You can bind RBAC permissions to this ServiceAccount - this is relevant when it comes to accessing the API Server; you can also leverage the Kubernetes SubjectAccessReview API to verify whether a given identity can access other Kubernetes components (e.g. the kubelet).

By default, each Pod gets the default service account of the namespace it is deployed to. You can also create your own ServiceAccounts for more granular control. When a ServiceAccount is created, whether it is the default account or a custom one, this triggers the creation of a secret. This secret contains a token (a JWT) and is used when accessing the API server.

See service-account.yaml for an example of creating a ServiceAccount and using it from a Pod.
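
If you don't have that file handy, here is a minimal sketch of the idea; the names myapp-sa and myapp are assumptions, not the actual contents of service-account.yaml.

#A minimal sketch of a custom ServiceAccount and a Pod that runs as it.
#Names are assumptions; see service-account.yaml in this repo for the real example.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: myapp-sa
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  #The containers in this Pod authenticate to the API server as myapp-sa
  serviceAccountName: myapp-sa
  containers:
  - name: main
    image: nginx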

Pod Security Policies

PodSecurityPolicy (PSP) resources enable the cluster admin to specify security constraints cluster-wide for all pods. In order for a Pod to be deployed onto the cluster, the pod template must meet the requirements specified in the PSP. Note the policy is enforced at deployment time, not at runtime.

See here for examples of what constraints can be specified.

Note that PSPs are cluster-wide resources. This means the policy applies to all Pods deployed to all namespaces by default. Clearly such a wide scope can be overly restrictive in certain cases. For example, you may want to deploy system-level services that require hostPath or hostNetwork access. To control which policies are enforced at deployment time, you can leverage RBAC: bind policies to ClusterRoles, and then through ClusterRoleBindings control which users can deploy pods with certain capabilities. For example, you can define policies such that only the admin role can deploy pods that run as a privileged user, and specify a more restrictive default policy that applies to all other users. See authorizing policies by RBAC for details.
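
Concretely, a ClusterRole granting the use verb on a specific PodSecurityPolicy might look roughly like this (the role and policy names are assumptions for illustration):

#A rough sketch: a ClusterRole that allows using a specific PSP.
#Bind it to users or service accounts with a (Cluster)RoleBinding.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp-restricted
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["restricted"]   #name of the PSP this role may use
  verbs: ["use"]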

In order to leverage PSPs, the feature needs to be enabled on the cluster. The quickest way to determine whether PSPs are enabled is to run the following command:

kubectl get psp

#This means PSP is enabled
No resources found.

#This means PSP is not enabled
the server doesn't have a resource type "psp"

If PSP is not enabled, you will need to enable the PodSecurityPolicy admission controller on your cluster. This is done by passing an option to the kube-apiserver. Unfortunately, this is not possible on AKS as you do not have access to the master nodes. On ACS, the approach is to ssh into the master nodes and update the kube-apiserver manifest to add the option. The manifests are located in the /etc/kubernetes/manifests directory. Once the manifest is updated, you need to restart the kubelet with sudo systemctl restart kubelet.
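
The relevant option looks roughly like this, shown as a fragment of the kube-apiserver static pod manifest. Note this is an illustrative sketch: your existing flag list will differ, and on clusters older than v1.10 the flag is --admission-control rather than --enable-admission-plugins.

#Fragment of /etc/kubernetes/manifests/kube-apiserver.yaml (illustrative only).
#Add PodSecurityPolicy to whatever admission plugins are already enabled.
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy
    #...other existing flags unchanged...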

The other alternative is to use acs-engine to create a custom cluster with it enabled. Note, you also need to enable AppArmor on all your nodes.

See restrict-hostport.yaml and restrict-root.yaml for examples.
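
To give a flavor of the kind of constraints those files express, here is a minimal sketch of a policy that blocks privileged containers and containers running as root; the policy name and allowed volume types are assumptions, not the repo's actual files.

#A minimal PSP sketch: no privileged containers, no running as root.
#Name and volume list are assumptions; see restrict-root.yaml for the real example.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restrict-root
spec:
  privileged: false
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim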

Network Security Policies

By default, any pod can communicate with any other pod in the cluster. But what if you want to control which pods can communicate with which other pods? What would be the equivalent of Azure network security groups in Kubernetes? That would be Network Policies.

Network policies enable you to specify how a group of pods may communicate with each other. For example, if you want to ensure pods can only communicate with other pods in the same namespace, you can achieve this with network policies.

Here is an example of a Network Policy that prevents ingress access to the database pods from anything except pods with the label access=true. Note, Network Policies are applied at the namespace level.

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: db-access
spec:
  #This policy applies to those pods with label 'service=database'
  podSelector:
    matchLabels:
      service: 'database'
  #This is the ingress rule.  It will only allow ingress from those
  #pods that have the label 'access=true'
  ingress:
  - from:
    - podSelector:
        matchLabels:
          access: 'true'

From above, you can see that NetworkPolicies are applied to pods within a given namespace (NetworkPolicy is a namespace-level resource). You then apply ingress and egress rules that define traffic to and from those pods. The ingress/egress rules can select the source and destination using podSelectors, namespaceSelectors (e.g. all pods from a given namespace), and ipBlock (e.g. pods within a given CIDR range). For details, run kubectl explain networkpolicy.spec.ingress.from or kubectl explain networkpolicy.spec.egress.to.
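
As a rough sketch of those other selector types, the following hypothetical policy allows ingress from any pod in namespaces labeled team=frontend and restricts egress from the database pods to a single CIDR block; the labels, name, and CIDR are assumptions for illustration only.

#A hypothetical policy combining namespaceSelector and ipBlock.
#Labels, names, and the CIDR are assumptions for illustration only.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: frontend-and-cidr
spec:
  podSelector:
    matchLabels:
      service: 'database'
  ingress:
  - from:
    #Any pod in a namespace labeled team=frontend
    - namespaceSelector:
        matchLabels:
          team: 'frontend'
  egress:
  - to:
    #Only this CIDR range is reachable from the selected pods
    - ipBlock:
        cidr: 10.0.0.0/16
  policyTypes:
  - Ingress
  - Egress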

See this blog post on k8s.io for more info on Network Policies. See also the excellent Network Policy Recipes for some in-depth examples of policies.

Network Policies on AKS

As AKS is a managed Kubernetes service, you do not have means of configuring the master nodes beyond the capabilities exposed by the az CLI, i.e. you do not have direct access to the master nodes. Furthermore, today AKS supports only the kubenet and Azure CNI network plugins, and neither supports network policies. Support for the Calico plugin is coming, but in the meantime you will need to use kube-router. Kube-router is an open source project that enables you to enforce network policy without a CNI plugin that supports it. It is deployed as a DaemonSet (hence on every node) and leverages native Linux kernel features. See this excellent blog for how to get kube-router deployed on AKS: https://www.techdiction.com/2018/06/02/enforcing-network-policies-using-kube-router-on-aks/

pod.spec.securityContext

The pod specification enables you to set security constraints at the pod level (applies to all containers) and at the container level.

#with an active kubectl config run
kubectl explain pods.spec.securityContext

#Or run
kubectl explain pods.spec.containers.securityContext
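
As a rough illustration of the kinds of constraints you can express at both levels, here is a sketch of a pod spec; the specific field values and names are just sensible examples, not requirements from this repo.

#A sketch of pod-level and container-level securityContext settings.
#Names and values are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: secured-app
spec:
  #Pod-level: applies to all containers in the pod
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
  - name: main
    image: nginx
    #Container-level: overrides/extends the pod-level settings
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]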

Reviewing your pod spec from the security perspective should be part of your code review! Here is a tool you can use, https://kubesec.io/, to scan your specification, and it will make recommendations. Very helpful!

Verifying Your Security Configuration

Aquasec, which provides a very comprehensive security monitoring platform for Kubernetes, has open-sourced its kube-bench tool. Kube-bench essentially audits your deployment from a security configuration perspective, following the guidelines defined in the CIS Kubernetes Benchmark. A related tool is kube-hunter, which helps identify potential security holes in your cluster configuration; think of it as pen-testing for Kubernetes configuration. These tools can be incorporated into your CI/CD pipeline during development. For production, I do recommend a tool like Aquasec or Twistlock as it provides many more capabilities.

Bleeding Edge

There are multiple open source projects currently gaining attention that take different approaches to providing additional security boundaries around containers. Both integrate with the CRI and are hence transparent to the Kubernetes layer. Definitely keep an eye on these projects.

Reference

Interesting Security Related Links