Kubernetes is a container orchestration framework commonly deployed and used by researchers and practitioners. It facilitates the deployment, (auto)scaling, and management of container-based applications through declarative configuration files.
- Containers and container images (a container image is a binary package that encapsulates all of the files necessary to run an application inside of an OS container)
- Docker image format
- Docker container runtime (Kubernetes provides an API for describing an application deployment, but relies on a container runtime to set up an application container using the container-specific APIs native to the target OS. On a Linux system that means configuring cgroups and namespaces. The default container runtime used by Kubernetes is Docker. Docker provides an API for creating application containers on Linux and Windows systems.)
- A Dockerfile can be used to automate the creation of a Docker container image.
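  For instance, a minimal sketch (the base image, file names, and start command are assumptions for illustration, not a definitive recipe):
  $ cat > Dockerfile <<'EOF'
  FROM node:16-slim            # hypothetical base image for a Node.js app
  COPY . /app                  # copy the application files into the image
  WORKDIR /app
  CMD ["node", "server.js"]    # command run when a container starts from this image
  EOF
  $ docker build -t my-image . # build the Dockerfile into an image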
- Creating and Running Containers
- kubectl (use it to create objects and interact with the Kubernetes API)
- Viewing Kubernetes API Objects (everything contained in Kubernetes is represented by a RESTful resource)
EXAMPLE: every Kubernetes object exists at a unique HTTP path; for example, https://your-k8s.com/api/v1/namespaces/default/pods/my-pod leads to the representation of a Pod named my-pod in the default namespace. The kubectl command makes HTTP requests to these URLs to access the Kubernetes objects that reside at these paths.
The get command retrieves a resource from its URL over HTTP using kubectl:
kubectl get <resource-name>
- Creating, Updating, and Destroying Kubernetes Objects
Objects in the Kubernetes API are represented as JSON or YAML files. These
files are either returned by the server in response to a query or posted to the
server as part of an API request. You can use these YAML or JSON files to
create, update, or delete objects on the Kubernetes server.
$ kubectl apply -f object.yaml
$ kubectl delete <resource-name> <obj-name>
- Pods in Kubernetes
- A pod is a group of containers that share storage and network resources. A Pod represents a collection of application containers and volumes running in the same execution environment.
- Running Containers vs Pods
Run my-image by first fetching it from GCR using the following Docker command:
$ docker run -d --name my-image \
--publish 8080:8080 \
gcr.io/my-image-demo/my-image-amd64:1
A Pod can be created and run via the imperative kubectl run command. To run the my-image server, use:
$ kubectl run my-image --image=gcr.io/my-image-demo/my-image-amd64:1
- Pod manifest (a similar result can be achieved by instead writing the manifest below to a file named my-image.yaml and then using kubectl commands to load that manifest into Kubernetes)
apiVersion: v1
kind: Pod
metadata:
  name: my-image
spec:
  containers:
    - image: gcr.io/my-image-demo/my-image-amd64:1
      name: my-image
      ports:
        - containerPort: 8080
          name: http
          protocol: TCP
Use this to launch a single instance of my-image:
$ kubectl apply -f my-image.yaml
- Liveness Probe
Liveness health checks run application-specific logic (e.g., loading a web page) to verify that the application is not just still running, but is functioning properly. Once a Pod is up and running, the way to confirm that it is actually healthy and shouldn’t be restarted is a liveness probe.
- Readiness Probe
Readiness describes when a container is ready to serve user requests.
- tcpSocket health checks (the probe opens a TCP connection to the specified port; if the connection succeeds, the probe succeeds — useful for non-HTTP applications such as databases)
- exec probes (the probe executes a command in the container; an exit code of zero means success — useful for custom validation logic that doesn’t fit an HTTP call)
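Minimal sketches of these two probe types as they would appear inside a container spec (the port and command here are assumptions for illustration):
livenessProbe:
  tcpSocket:
    port: 8080          # hypothetical port; probe succeeds if the TCP connect succeeds
readinessProbe:
  exec:
    command:
      - cat
      - /tmp/ready      # hypothetical file; exit code 0 means the container is ready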
- Add a liveness probe to our existing pod manifest:
- Liveness probes are container/application-specific, so we have to define them in the Pod manifest individually.
- After applying this Pod manifest (and after deleting the old Pod), you can view these liveness probes by running the port-forward command from above and navigating to the Liveness Probe tab. You can force failures there, too!
apiVersion: v1
kind: Pod
metadata:
  name: kuard
spec:
  containers:
    - image: gcr.io/kuar-demo/kuard-amd64:blue
      name: kuard
      livenessProbe:
        httpGet:
          path: /healthy
          port: 8080
        initialDelaySeconds: 5
        timeoutSeconds: 1
        periodSeconds: 10
        failureThreshold: 3
      ports:
        - containerPort: 8080
          name: http
          protocol: TCP
- Resource requests specify the minimum amount of a resource required to run the application.
- Resource limits specify the maximum amount of a resource that an application can consume.
NOTE:
- Resources are requested per container, not per Pod.
- The total resources requested by the Pod is the sum of all resources requested by all containers in the Pod.
- When you establish limits on a container, the kernel is configured to ensure that consumption cannot exceed these limits.
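A minimal sketch of a resources stanza inside a container definition (the values are illustrative assumptions, not recommendations):
resources:
  requests:
    cpu: "500m"       # half a CPU core guaranteed to this container
    memory: "128Mi"
  limits:
    cpu: "1000m"      # consumption is capped at one core
    memory: "256Mi"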
To add a volume to a Pod manifest, there are two new stanzas to add to our configuration.
- The first is a new spec.volumes section (this array defines all of the volumes that may be accessed by containers in the Pod manifest).
- The second addition is the volumeMounts array in the container definition. This array defines the volumes that are mounted into a particular container, and the path where each volume should be mounted.
Note that two different containers in a Pod can mount the same volume at different mount paths.
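A minimal sketch of the two stanzas together (the volume name, paths, and hostPath volume type are hypothetical choices):
spec:
  volumes:
    - name: app-data              # declared once at the Pod level
      hostPath:
        path: /var/lib/app-data
  containers:
    - name: my-app
      image: my-image
      volumeMounts:
        - name: app-data          # references the volume declared above
          mountPath: /data        # where it appears inside this container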
- Label Selectors: label selectors are used to filter Kubernetes objects based on a set of labels. Selectors use a simple Boolean language. They are used both by end users (via tools like kubectl) and by different types of objects (such as how a ReplicaSet relates to its Pods). Each deployment (via a ReplicaSet) creates a set of Pods using the labels specified in the template embedded in the deployment. This is configured by the kubectl run command; see the sketch below.
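A hypothetical selector query from the command line, and the equivalent matchLabels block as it would appear inside a ReplicaSet or Deployment spec (labels are assumptions):
$ kubectl get pods --selector="app=my-app,env=prod"
selector:
  matchLabels:
    app: my-app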
- Annotations are used in various places in Kubernetes, with the primary use case being rolling deployments. During rolling deployments, annotations are used to track rollout status and provide the necessary information required to roll back a deployment to a previous state. Users should avoid using the Kubernetes API server as a general-purpose database. Annotations are good for small bits of data that are highly associated with a specific resource. If you want to store data in Kubernetes but you don’t have an obvious object to associate it with, consider storing that data in some other, more appropriate database
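Annotations live in the metadata section of an object; a hypothetical example (the key and URL are illustrative):
metadata:
  annotations:
    example.com/icon-url: "https://example.com/icon.png"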
- The Domain Name System (DNS) is the traditional system of service discovery on the internet. DNS is designed for relatively stable name resolution with wide and efficient caching.
- Many systems (for example, Java, by default) look up a name in DNS directly and never re-resolve. This can lead to clients caching stale mappings and talking to the wrong IP. Even with short TTLs and well-behaved clients, there is a natural delay between when a name resolution changes and the client notices. There are natural limits to the amount and type of information that can be returned in a typical DNS query, too. Things start to break past 20–30 A records for a single name. SRV records solve some problems but are often very hard to use. Finally, the way that clients handle multiple IPs in a DNS record is usually to take the first IP address and rely on the DNS server to randomize or round-robin the order of records. This is no substitute for more purpose-built load balancing.
The Linux kernel provides a feature for running processes in isolation, a sandbox environment: namespaces.
- A namespace isolates a process from the other processes running on the same host. For example, the PID namespace gives a set of processes its own isolated view of process IDs.
- A bunch of other namespaces together make a container look and feel like a virtual environment, much like running a VM, and they are what isolate one container (or one process) from another. The mount (filesystem) namespace lets each container have its own root filesystem, and the network namespace lets each container have its own network stack.
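A quick way to observe a PID namespace on a Linux host, using the util-linux unshare tool rather than any container runtime (a sketch; requires root):
$ sudo unshare --pid --fork --mount-proc /bin/bash
# ps aux        # inside the new namespace, only this shell and ps are visible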
- overlayFS is a union mount filesystem implementation for Linux.
- A union mount is a combination of multiple directories into one that appears to contain their combined contents.
- The dir1 lower layer contains file1, file2, file5.
- The dir2 upper layer contains file3, file4, file5.
- When the overlay is mounted, the merged view contains file1 through file5, and the contents of file5 are the same as file5 from dir2 (when both layers contain a file, the upper layer wins).
- Removal of a file from the overlay is recorded as a "whiteout" entry in the upper layer; the lower layer itself is never modified.
-
Containers are not only lightweight to run; transferring updates is also lightweight because of the adoption of overlayFS (only the changed layers need to be shipped).
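The union mount described above can be reproduced by hand (directory names follow the dir1/dir2 example; overlayfs additionally requires an empty work directory):
$ mkdir -p dir1 dir2 workdir merged
$ sudo mount -t overlay overlay \
    -o lowerdir=dir1,upperdir=dir2,workdir=workdir merged
$ ls merged     # file1 file2 file3 file4 file5 (file5 comes from dir2)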
docker0 is the virtual bridge interface and the default bridge in Docker. Listing the Docker networks (docker network ls) shows that the network named bridge is of type bridge.
Creating three containers (con1, con2, and con3) places them in the default bridge network. Docker automatically creates three veth interfaces and connects them to the virtual docker0 bridge, where the bridge acts as a switch; the links can be seen using the bridge link command.
Not only does Docker create the virtual Ethernet interfaces, it also hands out IP addresses, which means it is performing DHCP-like address assignment as well. For DNS, it makes a copy of the host's /etc/resolv.conf (the DNS resolver configuration present on the host machine) inside each container.
Also, since the bridge interface acts as a switch, the containers connected to it can talk to each other, and containers can talk to the outside world through the docker0 interface's route.
We can ping the other containers from the con4 container, and also reach the outside world via the docker0 route.
But you cannot connect to the nginx containers con1, con2, and con3 from the outside world through their IPs. Why? Because we didn't publish their ports, so they must be redeployed with published ports; a sketch follows.
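For example, a sketch of redeploying one of them with a published port (host port 8081 is an arbitrary choice):
$ docker rm -f con1
$ docker run -d --name con1 -p 8081:80 nginx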
A user-defined network gives you embedded DNS, so containers can ping each other by container name; see the sketch below.
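A sketch of creating a user-defined network and resolving a container by name (the network and container names are assumptions):
$ docker network create mynet
$ docker run -d --name web --network mynet nginx
$ docker run -it --rm --network mynet busybox ping -c 3 web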
While ConfigMaps are great for most configuration data, there is certain data that is extra-sensitive. This can include passwords, security tokens, or other types of private keys. Collectively, we call this type of data “secrets.” Kubernetes has native support for storing and handling this data with care. Secrets enable container images to be created without bundling sensitive data. This allows containers to remain portable across environments. Secrets are exposed to pods via explicit declaration in pod manifests and the Kubernetes API. In this way the Kubernetes secrets API provides an application-centric mechanism for exposing sensitive configuration information to applications in a way that’s easy to audit and leverages native OS isolation primitives.
A special use case for secrets is to store access credentials for private Docker registries. Kubernetes supports using images stored on private registries, but access to those images requires credentials. Private images can be stored across one or more private registries. This presents a challenge for managing credentials for each private registry on every possible node in the cluster. Image pull secrets leverage the secrets API to automate the distribution of private registry credentials. Image pull secrets are stored just like normal secrets but are consumed through the spec.imagePullSecrets Pod specification field. Use the kubectl create secret docker-registry command to create this special kind of secret:
$ kubectl create secret docker-registry my-image-pull-secret \
  --docker-username=<username> \
  --docker-password=<password> \
  --docker-email=<email-address>
Enable access to the private repository by referencing the image pull secret in the pod manifest file, as shown in kuard-secret-ips.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: kuard-tls
spec:
  containers:
    - name: kuard-tls
      image: gcr.io/kuar-demo/kuard-amd64:1
      imagePullPolicy: Always
      volumeMounts:
        - name: tls-certs
          mountPath: "/tls"
          readOnly: true
  imagePullSecrets:
    - name: my-image-pull-secret
  volumes:
    - name: tls-certs
      secret:
        secretName: kuard-tls
The easiest way to create a secret or a ConfigMap is via kubectl create secret generic or kubectl create configmap. There are a variety of ways to specify the data items that go into the secret or ConfigMap. These can be combined in a single command:
--from-file=<filename>
Load from the file with the secret data key the same as the filename.
--from-file=<key>=<filename>
Load from the file with the secret data key explicitly specified.
--from-file=<directory>
Load all the files in the specified directory where the filename is an
acceptable key name.
--from-literal=<key>=<value>
Use the specified key/value pair directly.
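For example, a hypothetical combined invocation (the file and literal values are illustrative):
$ kubectl create secret generic my-creds \
    --from-file=ca.crt \
    --from-literal=password=s3cret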
The Deployment object exists to manage the release of new versions. Deployments represent deployed applications in a way that transcends any particular software version of the application. Additionally, Deployments enable you to easily move from one version of your code to the next version of your code. This “rollout” process is configurable and careful. It waits for a user-configurable amount of time between upgrading individual Pods. It also uses health checks to ensure that the new version of the application is operating correctly, and stops the deployment if too many failures occur. Using Deployments you can simply and reliably roll out new software versions without downtime or errors. The actual mechanics of the software rollout performed by a Deployment are controlled by a Deployment controller that runs in the Kubernetes cluster itself. This means you can let a Deployment proceed unattended and it will still operate correctly and safely. This makes it easy to integrate Deployments with numerous continuous delivery tools and services.
Deployment Strategies. When it comes time to change the version of software implementing your service, a Kubernetes Deployment supports two different rollout strategies: Recreate and RollingUpdate.
Learn Deployments using commands:
$ kubectl run nginx --image=nginx:1.7.12
view this Deployment object by running:
$ kubectl get deployments nginx
You can see the label selector by looking at the Deployment object:
$ kubectl get deployments nginx \
-o jsonpath --template {.spec.selector.matchLabels}
We can use this in a label selector query across ReplicaSets to
find that specific ReplicaSet:
$ kubectl get replicasets --selector=app=nginx
see the relationship between a Deployment and a ReplicaSet in action.
We can resize the Deployment using the imperative scale command:
$ kubectl scale deployments nginx --replicas=2
Now if we list that ReplicaSet again, we should see:
$ kubectl get replicasets --selector=app=nginx
let’s try the opposite, scaling the ReplicaSet:
$ kubectl scale replicasets <your-replicaset-id> --replicas=1
Now get that ReplicaSet again:
$ kubectl get replicasets --selector=app=nginx
When we first introduced services, we talked at length about label queries and how they were used to identify the dynamic set of Pods that were the backends for a particular service. With external services, however, there is no such label query. Instead, you generally have a DNS name that points to the specific server running the database. For our example, let’s assume that this server is named database.company.com. To import this external database service into Kubernetes, we start by creating a service without a Pod selector that references the DNS name of the database server; see dns-service.yaml:
kind: Service
apiVersion: v1
metadata:
  name: external-database
spec:
  type: ExternalName
  externalName: database.company.com
When a typical Kubernetes service is created, an IP address is also created and the Kubernetes DNS service is populated with an A record that points to that IP address. When you create a service of type ExternalName, the Kubernetes DNS service is instead populated with a CNAME record that points to the external name you specified (database.company.com in this case). When an application in the cluster does a DNS lookup for the hostname external-database.svc.default.cluster, the DNS protocol aliases that name to “database.company.com.” This then resolves to the IP address of your external database server. In this way, all containers in Kubernetes believe that they are talking to a service that is backed with other containers, when in fact they are being redirected to the external database. Note that this is not restricted to databases you are running on your own infrastructure. Many cloud databases and other services provide you with a DNS name to use when accessing the database (e.g., my-database.databases.cloudprovider.com). You can use this DNS name as the externalName. This imports the cloud-provided database into the namespace of your Kubernetes cluster.
Sometimes, however, you don’t have a DNS address for an external database service, just an IP address. In such cases, it is still possible to import this server as a Kubernetes service, but the operation is a little different. First, you create a Service without a label selector, but also without the ExternalName type we used before; see external-ip-service.yaml:
kind: Service
apiVersion: v1
metadata:
  name: external-ip-database
At this point, Kubernetes will allocate a virtual IP address for this service and populate an A record for it. However, because there is no selector for the service, there will be no endpoints populated for the load balancer to redirect traffic to. Given that this is an external service, the user is responsible for populating the endpoints manually with an Endpoints resource; see external-ip-endpoints.yaml:
kind: Endpoints
apiVersion: v1
metadata:
  name: external-ip-database
subsets:
  - addresses:
      - ip: 192.168.0.1
    ports:
      - port: 3306
If you have more than one IP address for redundancy, you can repeat them in the addresses array. Once the endpoints are populated, the load balancer will start redirecting traffic from your Kubernetes service to the IP address endpoint(s).
1. Running a MySQL Singleton. To do this, we are going to create three basic objects:
- A persistent volume to manage the lifespan of the on-disk storage independently from the lifespan of the running MySQL application
- A MySQL Pod that will run the MySQL application
- A service that will expose this Pod to other containers in the cluster
If your needs are simple and you can survive limited downtime in the face of a machine failure or a need to upgrade the database software, a reliable singleton may be the right approach to storage for your application.
2. Dynamic Volume Provisioning
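With dynamic provisioning, the cluster operator defines StorageClass objects, and a PersistentVolumeClaim that references one causes a matching PersistentVolume to be created automatically. A minimal sketch (the class name "standard" is an assumption; the available classes depend on your cluster):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  storageClassName: standard   # assumed class; list yours with: kubectl get storageclass
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi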
Properties of StatefulSets
StatefulSets are replicated groups of Pods, similar to ReplicaSets, but unlike a ReplicaSet they have certain unique properties:
1. Each replica gets a persistent hostname with a unique index (e.g., database-0, database-1, etc.).
2. Each replica is created in order from lowest to highest index, and creation will block until the Pod at the previous index is healthy and available. This also applies to scaling up.
3. When deleted, each replica will be deleted in order from highest to lowest. This also applies to scaling down the number of replicas.
kubectl get pod: Get information about all running pods
kubectl describe pod <pod>: Describe one pod
kubectl expose pod <pod> --port=444 --name=frontend: Expose the port of a pod (creates a new service)
kubectl port-forward <pod> 8080: Port forward the exposed pod port to your local machine
kubectl attach <podname> -i: Attach to the pod
kubectl exec <pod> -- command: Execute a command on the pod
kubectl label pods <pod> mylabel=NotALabel: Add a new label to a pod
kubectl run -i --tty busybox --image=busybox --restart=Never -- sh: Run a shell in a pod - very useful for debugging, connect to malfunctioning pod.
kubectl get deployments: Get information on current deployments
kubectl get rs: Get information about the replica sets
kubectl get pods --show-labels: get pods, and also show labels attached to those pods
kubectl rollout status deployment/helloworld: Get deployment status
kubectl set image deployment/helloworld k8s-demo=k8s-demo:2: Run k8s-demo with the image label version 2
kubectl edit deployment/helloworld: Edit the deployment object
kubectl rollout status deployment/helloworld: Get the status of the rollout
kubectl rollout history deployment/helloworld: Get the rollout history
kubectl rollout undo deployment/helloworld: Rollback to previous version
kubectl rollout undo deployment/helloworld --to-revision=n: Roll back to revision n
Node processes:
1. Container runtime
2. Kube proxy
3. Kubelet: application Pods have containers running inside them, so a container runtime needs to be installed on every node. But the process that actually schedules those Pods and the containers underneath is the kubelet, a process of Kubernetes itself (unlike the container runtime) that interfaces with both the container runtime and the node machine. At the end of the day, the kubelet is responsible for taking a Pod configuration, actually starting the Pod with its container inside, and assigning resources from the node (CPU, RAM, and storage) to that container. A Kubernetes cluster is usually made up of multiple nodes, each of which must have the container runtime and kubelet services installed.
Master (control plane) processes:
1. API Server
2. Scheduler
3. Controller manager
4. etcd
If a Pod dies, its IP address goes away with it and the replacement Pod gets a new IP, so referring to a Pod by IP is unstable; we get a stable IP by using a Service. A Service also provides load balancing among Pods.
The types of Services offered are ClusterIP (the default), NodePort, LoadBalancer, and ExternalName.
How does a Service know which Pods to forward to, and on which port? This is done by label selectors and the targetPort field.
The targetPort field is what matches the Service to a specific port on the Pod; the overall concept is put together in the sketch below.
NOTE: an important point is that when a Service serves multiple ports, the ports must be named, as shown below.
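A sketch of a multi-port Service illustrating the selector, targetPort matching, and named ports (the names, labels, and ports are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app              # traffic is forwarded to Pods carrying this label
  ports:
    - name: http             # names are required when a Service has multiple ports
      port: 80               # the port the Service listens on
      targetPort: 8080       # must match a containerPort of the selected Pods
    - name: metrics
      port: 9090
      targetPort: 9090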
When an application wants to talk to a specific Pod (rather than a randomly selected replica), how does it find the IP address of that specific Pod?
A headless Service: set the clusterIP field to None, and a DNS lookup for the Service returns the individual Pod IPs instead of a single Service IP; a sketch follows.
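A minimal headless Service sketch (names and labels are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  clusterIP: None            # headless: DNS returns the individual Pod IPs
  selector:
    app: my-app
  ports:
    - port: 80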
However, while a ClusterIP Service is created automatically, exposing Pods externally through node ports is not very secure. In a real-life setup it is preferred to use a LoadBalancer Service or an Ingress.
HPA is a form of autoscaling that increases or decreases the number of pods in a replication controller, deployment, replica set, or stateful set based on CPU utilization—the scaling is horizontal because it affects the number of instances rather than the resources allocated to a single container.
HPA can make scaling decisions based on custom or externally provided metrics and works automatically after initial configuration. All you need to do is define the MIN and MAX number of replicas.
Once configured, the Horizontal Pod Autoscaler controller is in charge of checking the metrics and then scaling your replicas up or down accordingly. By default, HPA checks metrics every 15 seconds.
To check metrics, HPA depends on another Kubernetes resource known as the Metrics Server. The Metrics Server provides standard resource usage measurement data by capturing data from “kubernetes.summary_api” such as CPU and memory usage for nodes and pods. It can also provide access to custom metrics (that can be collected from an external source) like the number of active sessions on a load balancer indicating traffic volume.
In simple terms, HPA works in a “check, update, check again” style loop. Here’s how each step in that loop works:
1. HPA continuously monitors the metrics server for resource usage.
2. Based on the collected resource usage, HPA calculates the desired number of replicas required.
3. Then, HPA decides to scale the application up (or down) to the desired number of replicas.
4. Finally, HPA changes the desired number of replicas.
5. Since HPA is continuously monitoring, the process repeats from Step 1.
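A minimal sketch using the imperative command (the deployment name and thresholds are illustrative assumptions):
$ kubectl autoscale deployment my-app --cpu-percent=50 --min=2 --max=10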
A service account in Kubernetes is a type of non-human account that provides a distinct identity for processes running in a Pod. Here are some key points about service accounts:
- Identity for Pods: service accounts provide an identity for processes running in Pods, allowing them to authenticate to the Kubernetes API server and perform actions based on their assigned permissions (https://kubernetes.io/docs/concepts/security/service-accounts/).
- Namespaced: each service account is bound to a specific namespace. This means you can have service accounts with the same name in different namespaces (https://kubernetes.io/docs/concepts/security/service-accounts/).
- Default Service Account: every namespace in a Kubernetes cluster has a default service account created automatically. If you don't specify a service account for a Pod, it will use the default service account of its namespace (https://kubernetes.io/docs/concepts/security/service-accounts/).
- RBAC Integration: service accounts work with Kubernetes Role-Based Access Control (RBAC) to define what actions the processes running in Pods can perform. This allows for fine-grained control over permissions (https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/).
- Lightweight and Portable: service accounts are lightweight and can be quickly created and managed. They are also portable, making it easy to include them in configuration bundles for complex workloads.
- API Server Communication: Pods that need to interact with the Kubernetes API server can use service accounts to authenticate and perform actions.
- Automation: Service accounts are used for automation tasks, such as running CI/CD pipelines or managing cluster resources.
- Security: By assigning specific permissions to service accounts, you can implement identity-based security policies and follow the principle of least privilege.
Here's an example of creating and using a service account in a Kubernetes deployment:
- Create a Service Account:
kubectl create serviceaccount my-service-account
- Create a Role and RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: ServiceAccount
    name: my-service-account
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
- Use the Service Account in a Pod:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: my-service-account
  containers:
    - name: my-container
      image: my-image
This setup allows the Pod to use the my-service-account service account, which has permissions to read Pods in the default namespace.
Role-Based Access Control (RBAC) is a method for regulating access to computer or network resources based on the roles of individual users within an organization. In the context of Kubernetes, RBAC is used to control who can perform specific actions on resources within a Kubernetes cluster.
- Roles and ClusterRoles:
  - Role: defines a set of permissions within a specific namespace. It specifies what actions can be performed on which resources.
  - ClusterRole: similar to a Role, but it applies cluster-wide, across all namespaces.
- RoleBindings and ClusterRoleBindings:
  - RoleBinding: associates a Role with a user, group, or service account within a specific namespace. It grants the permissions defined in the Role to the specified subjects.
  - ClusterRoleBinding: associates a ClusterRole with a user, group, or service account at the cluster level. It grants the permissions defined in the ClusterRole to the specified subjects across all namespaces.
- Subjects: the entities (users, groups, or service accounts) that are granted permissions by RoleBindings or ClusterRoleBindings.
Here's an example of how to define and use RBAC in Kubernetes:
- Define a Role:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
- Create a RoleBinding:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
- Granular Access Control: Allows you to define fine-grained permissions for different users and applications.
- Security: Helps enforce the principle of least privilege, ensuring that users and applications only have the permissions they need.
- Flexibility: Can be used to manage access at both the namespace level and the cluster level.
- User Management: Control which users can perform actions like creating, updating, or deleting resources.
- Service Account Management: Define what actions service accounts can perform, which is useful for managing permissions for applications running in Pods.
- Compliance and Auditing: Ensure that access controls are in place to meet compliance requirements and facilitate auditing.
RBAC is a powerful tool for managing access and ensuring security in a Kubernetes cluster. Here are examples of how to use different subjects (users, groups, and service accounts) in RoleBindings and ClusterRoleBindings in Kubernetes:
RoleBinding Example:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
In this example, the user jane is granted the pod-reader role within the default namespace.
ClusterRoleBinding Example:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: read-pods-group
subjects:
  - kind: Group
    name: dev-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
In this example, the group dev-team is granted the pod-reader cluster role, which applies across all namespaces.
RoleBinding Example:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods-sa
  namespace: default
subjects:
  - kind: ServiceAccount
    name: my-service-account
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
In this example, the service account my-service-account in the default namespace is granted the pod-reader role within the same namespace.
- User: Represents an individual user. In the example, the user jane is given permissions to read Pods in the default namespace.
- Group: Represents a collection of users. In the example, the group dev-team is given cluster-wide permissions to read Pods.
- Service Account: Represents an account for processes running in Pods. In the example, the service account my-service-account is given permissions to read Pods in the default namespace.
A NetworkPolicy in Kubernetes is a resource that allows you to control the network traffic flow to and from Pods within your cluster. It provides a way to specify rules for how Pods are allowed to communicate with each other and with other network endpoints.
- Traffic Control:
  - Ingress rules: define rules for incoming traffic to Pods.
  - Egress rules: define rules for outgoing traffic from Pods.
- Isolation:
  - By default, Pods are non-isolated, meaning they accept traffic from any source.
  - Applying a NetworkPolicy can isolate Pods, allowing only specified traffic.
- Label-Based:
  - NetworkPolicies use labels to select Pods and define traffic rules based on these labels.
- Policy Types:
  - Ingress: controls incoming traffic to Pods.
  - Egress: controls outgoing traffic from Pods.
Here's an example of a NetworkPolicy that allows incoming traffic to Pods with the label app: myapp only from Pods with the label app: allowed-app on port 80:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-specific-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: allowed-app
      ports:
        - protocol: TCP
          port: 80
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: allowed-app
      ports:
        - protocol: TCP
          port: 80
- Security: Enhances security by restricting network traffic to and from Pods, reducing the attack surface.
- Compliance: Helps meet compliance requirements by enforcing network isolation and traffic control.
- Flexibility: Allows fine-grained control over network communication paths within the cluster.
- Microservices: Isolate microservices to ensure they only communicate with authorized services.
- Multi-Tenancy: Enforce network isolation between different tenants in a shared cluster.
- Security Policies: Implement security policies to control traffic flow and prevent unauthorized access.
- Network Plugin: Your cluster must use a network plugin that supports NetworkPolicy enforcement (https://kubernetes.io/docs/concepts/services-networking/network-policies/) (https://spacelift.io/blog/kubernetes-network-policy).
NetworkPolicies are a powerful tool for managing network security in Kubernetes clusters.
Kubernetes Network Policies and Role-Based Access Control (RBAC) are both essential for securing a Kubernetes cluster, but they serve different purposes and operate at different levels. Here's a detailed comparison:
Purpose: RBAC is used to control access to Kubernetes resources based on the roles of individual users or service accounts within an organization.
Key Features:
- Roles and ClusterRoles: Define permissions at the namespace level (Role) or cluster-wide (ClusterRole).
- RoleBindings and ClusterRoleBindings: Assign roles to users, groups, or service accounts within a namespace (RoleBinding) or cluster-wide (ClusterRoleBinding).
- Principle of Least Privilege: Ensures that users and applications only have the permissions they need to function, reducing the risk of accidental or malicious actions (https://mogenius.com/blog-posts/enhancing-kubernetes-security-through-rbac-network-policies-and-kubernetes-policies).
Example:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
Purpose: Network Policies are used to control the communication between Pods and other network endpoints in the cluster.
Key Features:
- Ingress and Egress Rules: Define rules for incoming (ingress) and outgoing (egress) traffic at the Pod level.
- Label-Based: Use labels to select Pods and define traffic rules based on these labels.
- Isolation: Can be used to isolate Pods from each other or from external traffic, enhancing security by limiting the attack surface (https://dev.to/kubefeeds/how-to-secure-kubernetes-clusters-with-rbac-network-policies-and-encryption-1fc7).
Example:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-specific-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: myapp
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: allowed-app
      ports:
        - protocol: TCP
          port: 80
  egress:
    - to:
        - podSelector:
            matchLabels:
              app: allowed-app
      ports:
        - protocol: TCP
          port: 80
- Scope:
  - RBAC: Controls access to Kubernetes API resources (e.g., Pods, Services, ConfigMaps).
  - Network Policies: Control network traffic between Pods and other network endpoints.
- Granularity:
  - RBAC: Provides fine-grained control over who can perform specific actions on Kubernetes resources.
  - Network Policies: Provide fine-grained control over network communication paths between Pods.
- Use Cases:
  - RBAC: Useful for managing user permissions, service account permissions, and ensuring that only authorized users can perform specific actions.
  - Network Policies: Useful for securing network traffic, isolating applications, and preventing unauthorized communication between Pods.
Both RBAC and Network Policies are crucial for a secure Kubernetes environment. RBAC ensures that only authorized users and applications can access and modify resources, while Network Policies control the flow of network traffic to and from Pods, enhancing overall security.