Host Site From Home #60
Came to know about Traefik, which automates HTTPS and other processes needed to run a website from home. Created a repo for practice.
Traefik seems to automate a lot of things, including HTTPS certificate management with Let's Encrypt.
Tried to implement it from another source.
Could not make it work; the last attempt was adding a subdomain. I think I need to look into the Traefik documentation a bit more. Another, simpler solution could be the Caddy server: https://www.digitalocean.com/community/tutorials/how-to-host-a-website-with-caddy-on-ubuntu-18-04.
See https://docs.traefik.io/v2.0/user-guides/docker-compose/basic-example for the basics of Docker with Traefik.
CUPS seems to be running. Stopped and disabled it:

sudo systemctl stop cups
sudo systemctl disable cups

CUPS - Print Server: CUPS manages print jobs and queues and provides network printing using the standard Internet Printing Protocol (IPP), while offering support for a very large range of printers, from dot-matrix to laser and many in between. CUPS also supports PostScript Printer Description (PPD) and auto-detection of network printers, and features a simple web-based configuration and administration tool.
Ran a multi-node Kubernetes cluster. Control-plane node
Worker node
CoreDNS pods were running but were not ready, so I deleted the pods and the recreated pods ran fine. I think the other option would be to simply redeploy the CoreDNS pods.

kubectl logs --tail=100 -f : tail the last 100 lines

https://github.com/kubernetes/kubernetes/blob/master/cluster/kube-down.sh

To confirm things are working: the simplest way to test the setup is to create an Apache HTTP Server pod named http-page and expose it via a service named http-service with port 80 and type NodePort (a sketch of the expose step is at the end of this entry):

kubectl run http-page --image=httpd --port=80

Verify that it works. Set the host IP address as an external IP to expose it to the outside world.

Run another workload:

kubectl exec --stdin --tty nginx -- sh

And another workload with a deployment, and add a service to it.

List container images by pod:

kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.image}{", "}{end}{end}' | sort

Some commands: kill -15 5 sends signal -15 to the process with PID 5. This is how you tell a process that you would like it to terminate (SIGTERM) and have it take the time to clean up any open resources (temp files, rolled-back DB transactions, connections, whatever). Contrast this with -9 (SIGKILL), which kills the process immediately, not allowing it to clean up any open resources.

Resource
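A minimal sketch of the expose step described in this entry, assuming the http-page pod created above; the node port is whatever Kubernetes assigns:

```sh
# Expose the http-page pod through a NodePort service named http-service
kubectl expose pod http-page --name=http-service --port=80 --type=NodePort

# Find the assigned node port, then test from another machine on the LAN
kubectl get service http-service
curl http://<node-ip>:<node-port>/
```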
Setup Linkerd

Installed it and enabled the dashboard. Tried to access the dashboard, but it is only accessible on the admin server itself, and it is not accessible via the admin server's IP. So I listed the pods; the dashboard runs on the web pod. Then I checked whether any service exposes it.
The page suggests redeploying the deployment - https://linkerd.io/2.10/tasks/exposing-dashboard/#tweaking-host-requirement
My previous resources (pods, deployments, services, and the default cluster) were not injected, so I injected Linkerd (sketched below). To test a few things, I generated load with ab.
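A minimal sketch of the injection and load-generation steps, assuming the resources live in the default namespace (the exact resources and ab options from the original run are not recorded):

```sh
# Add the Linkerd proxy sidecar to existing deployments and re-apply them
kubectl get deploy -o yaml | linkerd inject - | kubectl apply -f -

# Generate some load against the exposed service with ApacheBench (IP is illustrative)
ab -n 1000 -c 10 http://192.168.1.100/
```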
Resource

https://linkerd.io/2.10/reference/cli/viz
Kubectl command cheatsheet
Issue: Linkerd Dashboard

The dashboard has some issues. The metrics and tap-injector pods were failing.
This is also not that helpful.
Nothing else comes to mind, so uninstall and reinstall.
Everything looks good
The config has now been reset.
Set the external IP (sketched below). Everything is working fine.

Resource
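A sketch of how that external-IP step might look, mirroring the kubectl patch used for Emojivoto further below; the dashboard service name (web in the linkerd-viz namespace) and the IP are assumptions:

```sh
# Expose the Linkerd dashboard service on a host IP so it is reachable
# from outside the admin server (values are illustrative)
kubectl patch service web -n linkerd-viz \
  -p '{"spec":{"externalIPs":["192.168.1.100"]}}'
```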
Deploying Emojivoto and using the Linkerd CLI tap, top, edges, and stat commands.

Emojivoto is a gRPC application that has three services:
Deploy the Emojivoto application to your cluster. Reviewed the YAML file and saw:
The default way to access it is to port-forward, but since we are NOT accessing it from the host, we need to add an external IP to the service. Before that, let's check the current service status.
kubectl patch service web-svc -n emojivoto -p '{"spec":{"externalIPs":["192.168.1.100"]}}'
This annotation is all we need to inform Linkerd to inject the proxies into pods in this namespace. However, simply adding the annotation won't affect existing resources. We'll also need to restart the Emojivoto deployments:
Things explored with the Linkerd CLI (the commands are sketched after the notes below):
- Tap all the traffic from the web deployment; tap traffic from deploy/web and output it as JSON.
- Use linkerd top to view traffic sorted by the most popular paths.
- See the golden metrics (latencies, success/error rates, and requests per second) that you saw in the dashboard, except this time in the terminal. Narrow the query to focus in on the traffic from the web deployment to the emoji deployment, and take a look at the traffic between the web and voting deployments to investigate.
- View the metrics for traffic to all deployments in the emojivoto namespace that comes from the web deployment.
- The Linkerd control plane is not only "aware" of the services that are meshed, it is also aware of which services communicate with each other. Get the edges for all the deployments in the emojivoto namespace, then zoom in on the service graph by looking at the edges between pods rather than deployments.

Notes
“containerPort” defines the port on which the app can be reached inside the container.
The
Create a pod running nginx that listens on port 80 (see the sketch below).

Resource
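Hedged sketches of the commands referenced above: the nginx pod from the note, plus the Linkerd viz commands used to explore the Emojivoto traffic. The exact invocations from the original run are not recorded; flags follow the Linkerd 2.10 CLI layout:

```sh
# Pod from the note above: nginx listening on port 80
kubectl run nginx --image=nginx --port=80

# Tap live traffic from the web deployment (add -o json for JSON output)
linkerd viz tap deploy/web -n emojivoto
linkerd viz tap deploy/web -n emojivoto -o json

# Traffic sorted by the most popular paths
linkerd viz top deploy/web -n emojivoto

# Golden metrics for traffic from web to the emoji deployment
linkerd viz stat deploy/emoji -n emojivoto --from deploy/web

# Metrics for all deployments in the namespace that receive traffic from web
linkerd viz stat deploy -n emojivoto --from deploy/web

# Edges between deployments, then between pods
linkerd viz edges deploy -n emojivoto
linkerd viz edges po -n emojivoto
```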
How to get per-route metrics using service profiles

By creating a ServiceProfile for a Kubernetes service, we can specify the routes available to the service and collect data. Use the Linkerd dashboard and the linkerd routes command to view per-route metrics for services in the Emojivoto application. A Linkerd service profile is implemented by instantiating a Kubernetes Custom Resource Definition (CRD) called, as you might expect, ServiceProfile. The ServiceProfile enumerates the routes that Linkerd should expect for that service. Use the linkerd profile command to generate ServiceProfile definitions. If the services in your application have OpenAPI or protobuf definitions, you can use those to create ServiceProfile definitions. Alternatively, Linkerd can observe requests in real time to generate the service profile for you. Finally, you can write the profile by hand. We create a per-route metric for the UI.

Viewing Per-Route Metrics with the Linkerd CLI:

linkerd viz top deploy/voting -n emojivoto
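A hedged sketch of generating a ServiceProfile from live traffic and then viewing per-route metrics; the service and deployment names follow the Emojivoto example, and the flags follow the Linkerd 2.10 viz extension (treat them as assumptions):

```sh
# Generate a ServiceProfile for web-svc by watching live traffic for 10 seconds
linkerd viz profile -n emojivoto web-svc --tap deploy/web --tap-duration 10s \
  | kubectl apply -f -

# Per-route metrics for the web deployment, and for its traffic to emoji-svc
linkerd viz routes deploy/web -n emojivoto
linkerd viz routes deploy/web -n emojivoto --to svc/emoji-svc
```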
Linkerd can help ensure reliability through features like traffic splitting, load balancing, retries, and timeouts. Each of these plays an important role in enhancing the overall reliability of the system. But what kind of reliability can these features add, exactly? The answer is that, ultimately, what Linkerd (or any service mesh!) can help protect against, is transient failures. If a service is completely down, or consistently returns a failure, or consistently hangs, no amount of retrying or load balancing can help. But if one single instance of a service is having an issue, or if the underlying issue is just a temporary one, well, that's where Linkerd can be useful. Happily—or unhappily?—these sorts of partial, transient failures are endemic to distributed systems! There are two important things to understand when it comes to Linkerd's core reliability featureset of load balancing, retries, and timeouts (traffic splitting, also a reliability feature, is a bit different—we'll be addressing that in a later chapter). They are:
Linkerd's actual and effective metrics can diverge in the presence of retries or timeouts: the actual numbers represent what actually hit the server, and the effective numbers represent what the client effectively got in response to its request, after Linkerd's reliability logic did its duty.

Requests with the HTTP POST method are not retryable in Linkerd today. This is for implementation reasons: POST requests almost always contain data in the request body, and retrying a request means that the proxy must store that data in memory. So, to maintain minimal memory usage, the proxy does not store POST request bodies, and they cannot be retried.

As we discussed in earlier chapters, Linkerd considers only 5XX status codes in responses as errors; both 2XX and 4XX are recognized as successful status codes. The difference is subtle, but important: 4XX status codes indicate that the server looked but couldn't find the resource, which is correct behavior on the part of the server, while 5XX status codes indicate that the server ran into an error while processing the request, which is incorrect behavior.
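Retries and timeouts in Linkerd are configured per route in a ServiceProfile. A hedged sketch for the Emojivoto web-svc, with the route, timeout, and retry-budget values as illustrative assumptions:

```sh
# Mark one GET route as retryable with a timeout, and cap retries with a budget
cat <<EOF | kubectl apply -f -
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: web-svc.emojivoto.svc.cluster.local
  namespace: emojivoto
spec:
  routes:
  - name: GET /api/list
    condition:
      method: GET
      pathRegex: /api/list
    isRetryable: true     # safe to retry: GET, not POST
    timeout: 300ms
  retryBudget:
    retryRatio: 0.2
    minRetriesPerSecond: 10
    ttl: 10s
EOF
```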
From the last setup, further research was done on Flannel, Linkerd, and the security of Kubernetes overall. Some conclusions were:
The next step to make the setup more practical would be to run an example microservices application end to end, with a full build and deploy process. For that, a CI/CD system would be needed. Also, as I am trying to move away from Docker, I looked into tools for building images. Kaniko and Buildah/Podman seem to be the best options; Kaniko fits better with its native ability to build images inside a container, so it will be used. Then a local image registry was looked for, and Harbor seems to be a good tool. Quay also seems good, as do Podman and Docker. But for now, Harbor will be tried first as the image registry; for trial purposes, a Dockerhub public image repository will be used. For CI, Jenkins running on Kubernetes will be implemented, and probably ArgoCD for CD. So the first step was to run Jenkins in Kubernetes. Running Jenkins on Kubernetes requires a PersistentVolumeClaim, so a better understanding of PVCs was needed. The steps would be:
Note

https://goharbor.io
https://www.projectquay.io
https://argoproj.github.io/argo-cd

Kubernetes Storage

A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.

hostPath volume type

If you are circumventing the hostPath single-node limitation via nodeSelector while running multiple pods on the same node, beware of the following issue:
Alternative Volume Types

If you have only one drive, which you can attach to one node, I think you should use the basic NFS volume type, as it does not require replication. If, however, you can afford another drive to plug into the second node, you can take advantage of GlusterFS's replication feature.

Volume Types
Converting a drive to a volume:
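The exact commands for this step are not recorded; a minimal sketch of one way to do it, assuming a spare drive at /dev/sdb exposed as a hostPath PersistentVolume (device, mount point, and sizes are illustrative):

```sh
# Format and mount the spare drive (destructive; device is an assumption)
sudo mkfs.ext4 /dev/sdb
sudo mkdir -p /mnt/k8s-data
sudo mount /dev/sdb /mnt/k8s-data

# Register the mounted path as a hostPath PersistentVolume plus a claim
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  storageClassName: manual
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/k8s-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
EOF
```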
Resource
Security Status

Spent some time understanding the security status of the Kubernetes cluster. Found a good tool: Kubescape. Kubescape is the first tool for testing whether Kubernetes is deployed securely as defined in the Kubernetes Hardening Guidance by the NSA and CISA. Tests are configured with YAML files, making this tool easy to update as test specifications evolve. Installed it and did a security scan:
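The exact invocation is not recorded; a hedged sketch using the install script and NSA framework scan documented in the Kubescape README at the time (the URL and flags are assumptions on my part):

```sh
# Install Kubescape via its install script, then scan the cluster against
# the NSA/CISA hardening framework, skipping the system namespaces
curl -s https://raw.githubusercontent.com/armosec/kubescape/master/install.sh | /bin/bash
kubescape scan framework nsa --exclude-namespaces kube-system,kube-public
```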
The summary listed many issues. The configuration used to access the cluster was the admin's, which was created while creating the cluster. Current context:
So the question: how can I create a user and restrict that user to a single namespace in Kubernetes? Kubernetes gives us a way to regulate access to Kubernetes clusters and resources based on the roles of individual users through a feature called Role-Based Access Control (RBAC), via the rbac.authorization.k8s.io/v1 API. The RBAC API declares four kinds of Kubernetes object: Role, ClusterRole, RoleBinding and ClusterRoleBinding. You can describe objects, or amend them, using tools such as kubectl, just like any other Kubernetes object. Therefore, to achieve complete isolation in Kubernetes, the concepts of namespaces and role-based access control are used together.
Create and limit a service account to a namespace in Kubernetes (a sketch of the commands follows below):
Confirm it -
Confirm it -
or
Need to confirm whether this has to be done manually or whether there is a CLI way to do it.
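The original commands are not shown above; a hedged sketch of one common CLI-only way to do it, with the namespace, account, and role names as illustrative assumptions:

```sh
# Namespace-scoped service account
kubectl create namespace dev
kubectl create serviceaccount dev-user -n dev

# Role limited to the dev namespace, bound to the service account
kubectl create role dev-role -n dev \
  --verb=get,list,watch,create,update,delete \
  --resource=pods,deployments,services
kubectl create rolebinding dev-role-binding -n dev \
  --role=dev-role --serviceaccount=dev:dev-user

# Confirm what the account can and cannot do
kubectl auth can-i list pods -n dev \
  --as=system:serviceaccount:dev:dev-user         # expect "yes"
kubectl auth can-i list pods -n kube-system \
  --as=system:serviceaccount:dev:dev-user         # expect "no"
```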
An example of a Role with limited permissions:
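The manifest itself is not shown above; a plausible sketch, assuming a Role that only allows read access to pods and services in an illustrative dev namespace:

```sh
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev        # illustrative namespace
  name: limited-role
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list", "watch"]
EOF
```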
RBAC Role vs ClusterRole

If you want to define a role within a namespace, use a Role; if you want to define a role cluster-wide, use a ClusterRole. An RBAC Role or ClusterRole contains rules that represent a set of permissions. Permissions are purely additive (there are no "deny" rules). A Role always sets permissions within a particular namespace; when you create a Role, you have to specify the namespace it belongs in. A ClusterRole, by contrast, is a non-namespaced resource. The resources have different names (Role and ClusterRole) because a Kubernetes object always has to be either namespaced or not namespaced; it can't be both. ClusterRoles have several uses. You can use a ClusterRole to:
A ClusterRole can be used to grant the same permissions as a Role. Because ClusterRoles are cluster-scoped, you can also use them to grant access to:
Here's an example Role in the "default" namespace that can be used to grant read access to pods:
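The manifest is missing above; this is the standard pod-reader example from the Kubernetes RBAC documentation, applied via a heredoc:

```sh
# Role "pod-reader": read-only access to pods in the "default" namespace
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
EOF
```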
Here is an example of a ClusterRole that can be used to grant read access to secrets in any particular namespace, or across all namespaces (depending on how it is bound):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      # "namespace" omitted since ClusterRoles are not namespaced
      name: secret-reader
    rules:
    - apiGroups: [""]
      resources: ["secrets"]
      verbs: ["get", "watch", "list"]
Resource
Issue: Pods/Deployment not ready, and log from deployment unavailable

    $ kubectl logs productpage-v1-84f77f8747-8zklx -c productpage

Deleted the entire namespace and recreated it.
Create ServiceAccount
Create storage volume
Create deployment
List images and containers from Containerd
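The original commands are not recorded; a hedged sketch using the containerd CLIs (either the CRI client crictl, or ctr with containerd's k8s.io namespace):

```sh
# With the CRI-compatible client
sudo crictl images
sudo crictl ps -a

# Or directly with containerd's ctr
sudo ctr -n k8s.io images ls
sudo ctr -n k8s.io containers ls
```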
Inject Linkerd into the namespace. Set the external IP.
Conclusion

The first attempt worked. I wanted to learn each step in more detail and redid the steps, which did not work out.

Resource
Conclusion

Re-installation worked. Default manifests used from https://devopscube.com/setup-jenkins-on-kubernetes-cluster.

https://www.magalix.com/blog/create-a-ci/cd-pipeline-with-kubernetes-and-jenkins
Objectives
Next Steps
Harbor is an open source registry that secures artifacts with policies and role-based access control, ensures images are scanned and free from vulnerabilities, and signs images as trusted. Harbor, a CNCF Graduated project, delivers compliance, performance, and interoperability to help you consistently and securely manage artifacts across cloud native compute platforms like Kubernetes and Docker.
Continuing from the last installed state of Jenkins. Change the namespace to jenkins and list the resources in the namespace (sketched below).
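A minimal sketch of those two steps (the exact commands from the original run are not recorded):

```sh
# Switch the current context to the jenkins namespace
kubectl config set-context --current --namespace=jenkins

# List the resources in the namespace
kubectl get all
```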
Set the external IP to access the UI.

Build image with Kaniko

Built an image with Kaniko and pushed it to Docker Hub (sketched below). For the demo, the process was implemented as a Kubernetes Job. To push the image to Docker Hub, authentication is required.
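The manifests used here are not shown; a hedged sketch of the registry credential and a minimal Kaniko Job, with the repository URL, image name, and secret name as illustrative placeholders:

```sh
# Docker Hub credentials, mounted by Kaniko as /kaniko/.docker/config.json
kubectl create secret docker-registry regcred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user> --docker-password=<password>

# Minimal Kaniko Job: clone a repo, build its Dockerfile, push to Docker Hub
cat <<EOF | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: kaniko-build
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: kaniko
        image: gcr.io/kaniko-project/executor:latest
        args:
        - --context=git://github.com/<user>/<repo>.git
        - --dockerfile=Dockerfile
        - --destination=<user>/<image>:latest
        volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker
      volumes:
      - name: docker-config
        secret:
          secretName: regcred
          items:
          - key: .dockerconfigjson
            path: config.json
EOF
```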
Although no configuration was set for AWS, an error was logged. But this did not stop the build from completing.
The tutorial, https://betterprogramming.pub/how-to-build-containers-in-a-kubernetes-cluster-with-kaniko-2d01cd3242a7, used FluxCD for deployment. FluxCD uses pull-based GitOps, i.e. no GitHub webhooks are required; FluxCD pulls and checks for changes periodically, like a cron job.

Resource
After further reading on DevOps practices based on Kubernetes, JenkinsX seemed more relevant. Most examples of service meshes include Istio, as in the Google example: https://github.com/GoogleCloudPlatform/microservices-demo. So Istio will be used for this purpose. Uninstall Linkerd: https://linkerd.io/2.10/tasks/uninstall, https://linkerd.io/2.10/reference/cli/uninject
linkerd viz uninstall | kubectl delete -f -
linkerd jaeger uninstall | kubectl delete -f -
linkerd multicluster uninstall | kubectl delete -f -
kubectl get deploy -o yaml | linkerd uninject - | kubectl apply -f -

A pod was stuck in the Terminating state, so it was deleted forcefully. The tekton-pipelines namespace was also deleted, as it was not useful, but it too was stuck in Terminating status. Tried another approach for that: removed
And reapplied:
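The exact steps are not recorded; a hedged sketch of the usual approach: force-delete the stuck pod, then strip the finalizers from the stuck namespace and push it back through the finalize endpoint (names are placeholders):

```sh
# Force-delete a pod stuck in Terminating
kubectl delete pod <pod-name> -n <namespace> --grace-period=0 --force

# For the stuck namespace: dump it, remove the entries under spec.finalizers,
# and reapply it via the finalize subresource
kubectl get namespace tekton-pipelines -o json > ns.json
# ...edit ns.json and empty the "spec": {"finalizers": [...]} list...
kubectl replace --raw "/api/v1/namespaces/tekton-pipelines/finalize" -f ns.json
```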
Flagger

https://docs.flagger.app

Flagger is a progressive delivery tool that automates the release process for applications running on Kubernetes. It reduces the risk of introducing a new software version in production by gradually shifting traffic to the new version while measuring metrics and running conformance tests.
State

The laptops with K8s installed had been turned off. Kubelet was not active, probably because swap was on. Re-instantiated kubelet by executing a shell script: https://github.com/SystemUtilities/utils/blob/main/k8s-init.sh (see the sketch below). Kubelet is active and the node joined the cluster.

Resource
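The linked script is not reproduced here; a minimal sketch of the steps it likely performs for this situation, based on the description above (disabling swap and restarting kubelet):

```sh
# Kubelet refuses to run with swap enabled; turn swap off and restart the service
sudo swapoff -a
sudo systemctl restart kubelet
sudo systemctl status kubelet   # confirm it is active
```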