Getting Started and Other Docs (#527)

* add provisioner crd page
* rearrange sidebar of docs
* remove fargate from get started guide
* Apply suggestions from code review
* update filenames and get started guide
* remove search.md
* revise based on changes from ellis in other pr
* fix link to prov crd
* bump prov crd api version to v1alpha3
* pesky newline eks-config.yaml
* remove merge text
* reset docs/ folder to main branch
* remove images
* move up compatability section of faq
* Apply suggestions from code review

Co-authored-by: Alex Kestner <[email protected]>

1 parent 8024273 · commit 05abd58

Showing 8 changed files with 1,396 additions and 2 deletions.

---
title: "Documentation"
linkTitle: "Documentation"
weight: 20
menu:
  main:
    weight: 20
---

Karpenter is an open-source autoscaling project built for Kubernetes. It improves availability for Kubernetes applications without requiring you to manually provision or over-provision compute resources. Karpenter is designed to provide the right compute resources to match your application’s needs in seconds, instead of minutes, by observing the aggregate resource requests of unschedulable pods and making decisions to launch and terminate nodes to minimize scheduling latencies.

Learn more about Karpenter and how to get started below.

---
title: "Development Guide"
linkTitle: "Development Guide"
weight: 80
---

## Dependencies

The following tools are required for contributing to the Karpenter project.

| Package                                                            | Version  | Install                |
| ------------------------------------------------------------------ | -------- | ---------------------- |
| [go](https://golang.org/dl/)                                       | v1.15.3+ | `brew install go`      |
| [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) |          | `brew install kubectl` |
| [helm](https://helm.sh/docs/intro/install/)                        |          | `brew install helm`    |
| Other tools                                                        |          | `make toolchain`       |

## Developing

### Setup / Teardown

Based on how you are running your Kubernetes cluster, follow the [Environment specific setup](#environment-specific-setup) to configure your environment before you continue. Once your environment is set up, run the following commands to install Karpenter in the Kubernetes cluster specified in your `~/.kube/config`:

```
make codegen # Create auto-generated YAML files.
make apply   # Install Karpenter
make delete  # Uninstall Karpenter
```

### Developer Loop
* Make sure dependencies are installed
* Run `make codegen` to make sure yaml manifests are generated
* Run `make toolchain` to install cli tools for building and testing the project
* You will need a personal development image repository (e.g. ECR)
* Make sure you have valid credentials to your development repository.
* `$KO_DOCKER_REPO` must point to your development repository
* Your cluster must have permissions to read from the repository
* If you created your cluster on version 1.19 or above, you may need to tag your subnets as mentioned [here](docs/aws/README.md) and sketched after this list. This is a temporary problem with our subnet discovery system, and is being tracked [here](https://github.com/awslabs/karpenter/issues/404#issuecomment-845283904).
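
A minimal sketch of that subnet tagging step, assuming hypothetical cluster name and subnet IDs, and assuming discovery keys off the standard `kubernetes.io/cluster/<cluster-name>` ownership tag:

```bash
# Hypothetical values; substitute your own cluster name and subnet IDs.
CLUSTER_NAME="my-dev-cluster"
SUBNET_IDS="subnet-0123456789abcdef0 subnet-0fedcba9876543210"

# Tag the subnets so Karpenter's subnet discovery can find them.
aws ec2 create-tags \
    --resources ${SUBNET_IDS} \
    --tags "Key=kubernetes.io/cluster/${CLUSTER_NAME},Value=owned"
```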

### Build and Deploy
```
make dev   # build and test code
make apply # deploy local changes to cluster
CLOUD_PROVIDER=<YOUR_PROVIDER> make apply # deploy for your cloud provider
```

### Testing
```
make test       # E2e correctness tests
make battletest # More rigorous tests run in CI environment
```

### Verbose Logging
```bash
kubectl patch deployment karpenter -n karpenter --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/args", "value": ["--verbose"]}]'
```

### Debugging Metrics
```bash
kubectl port-forward service/karpenter-metrics -n karpenter 8080 &
open http://localhost:8080/metrics
```

## Environment specific setup

### AWS
Set the `CLOUD_PROVIDER` environment variable to build cloud provider specific packages of Karpenter.

```
export CLOUD_PROVIDER=aws
```

For local development on Karpenter you will need a Docker repository that can host the images for Karpenter's components.
You can use the following command to provision an ECR repository.

```
aws ecr create-repository \
    --repository-name karpenter/controller \
    --image-scanning-configuration scanOnPush=true \
    --region ${AWS_DEFAULT_REGION}
```

Once you have your ECR repository provisioned, configure your Docker daemon to authenticate with your newly created repository.

```
export KO_DOCKER_REPO="${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/karpenter"
aws ecr get-login-password --region ${AWS_DEFAULT_REGION} | docker login --username AWS --password-stdin $KO_DOCKER_REPO
```

---
title: "FAQs"
linkTitle: "FAQs"
weight: 30
---

## General

### How does a Provisioner decide to manage a particular node?
Each node will have a set of predetermined Karpenter labels. Provisioners will use the `name` and `namespace` labels to distinguish between Provisioners. Furthermore, a Provisioner will only take action on a node based on the label that details what phase a node is in, e.g. a Provisioner will only consider a node for termination if its phase label says `"underutilized"`.

## Compatibility

### Which Kubernetes versions does Karpenter support?
Karpenter releases on a similar cadence to upstream Kubernetes releases. Currently, Karpenter is compatible with all Kubernetes versions greater than v1.16. However, this may change in the future as Karpenter takes dependencies on new Kubernetes features.

### Can I use Karpenter alongside another node management solution?
Provisioners are designed to work alongside static capacity management solutions like EKS Managed Node Groups and EC2 Auto Scaling Groups. Some customers may choose to (1) manage the entirety of their capacity using Provisioners, others may prefer (2) a mixed model with both dynamic and statically managed capacity, and some may prefer (3) a fully static approach. We anticipate that most customers will fall into bucket (2) in the short term, and (1) in the long term.

### Can I use Karpenter with the Kubernetes Cluster Autoscaler?
Yes, with side effects. Karpenter is a Cluster Autoscaler replacement. Both systems scale up nodes in response to unschedulable pods. If configured together, both systems will race to launch new instances for these pods. Since Provisioners make binding decisions, Karpenter will typically win the scheduling race. In this case, the Cluster Autoscaler will eventually scale down the unnecessary capacity. If the Cluster Autoscaler is configured with Node Groups that have constraints that aren’t supported by any Provisioner, its behavior will continue unimpeded.

## Provisioning

### How should I define scheduling constraints?
Karpenter takes a layered approach to scheduling constraints. Each Cloud Provider has its own set of global defaults, which are overridden by defaults specified in the Provisioner, which are in turn overridden by Pod scheduling constraints. This model requires minimal configuration for most use cases, and supports diverse workloads using a single Provisioner.

### Does Karpenter replace the Kube Scheduler?
No. Provisioners work in tandem with the Kube Scheduler. When capacity is unconstrained, the Kube Scheduler will schedule pods as usual. It may schedule pods to nodes managed by Provisioners or other types of capacity in the cluster. Provisioners only attempt to schedule pods when `type=PodScheduled,reason=Unschedulable`. In this case, they will make a provisioning decision, launch new capacity, and bind pods to the provisioned nodes. Provisioners do not wait for the Kube Scheduler to make a scheduling decision in this case, as the decision is already made by nature of making a provisioning decision. It's possible that a node from another management solution, like the Cluster Autoscaler, could create a race between the `kube-scheduler` and Karpenter. In this case, the first binding call will win, although Karpenter will often win these race conditions due to its performance characteristics. If Karpenter loses this race, the node will eventually be cleaned up.

### Does Karpenter support node selectors?
Yes. Node selectors are an opt-in mechanism which allows customers to specify the nodes on which a pod can be scheduled. Provisioners recognize well-known node selectors on incoming pods and use them to constrain the nodes they generate. You can read more about the well-known node selectors Karpenter supports in the [Concepts](/docs/concepts/#well-known-labels) documentation. For example, well-known selectors like `node.kubernetes.io/instance-type`, `topology.kubernetes.io/zone`, `kubernetes.io/os`, and `kubernetes.io/arch` are supported, and will ensure that provisioned nodes are constrained accordingly. Additionally, customers may specify arbitrary labels, which will be automatically applied to every node launched by the Provisioner. A minimal sketch follows below.
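
For illustration, a pod that uses two of the well-known selectors above to constrain where it can run (the zone, instance type, and container image are hypothetical placeholders):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: constrained-pod
spec:
  nodeSelector:
    topology.kubernetes.io/zone: us-west-2a      # hypothetical zone
    node.kubernetes.io/instance-type: m5.large   # hypothetical instance type
  containers:
    - name: app
      image: nginx
EOF
```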

### Does Karpenter support taints?
Yes. Taints are an opt-out mechanism which allows customers to specify the nodes on which a pod cannot be scheduled. Unlike labels, Provisioners do not automatically taint nodes in response to pod tolerations, since pod tolerations do not require that corresponding taints exist. However, similar to labels, customers may specify taints for their Provisioner, which will automatically be applied to every node in the group. This means that if a Provisioner is configured with taints, any incoming pods will not be provisioned unless they tolerate the taints.
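
For illustration, if a Provisioner were configured with a hypothetical taint such as `example.com/gpu=true:NoSchedule`, only pods with a matching toleration would be provisioned onto its nodes:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: tolerating-pod
spec:
  tolerations:
    - key: example.com/gpu   # hypothetical taint key
      operator: Equal
      value: "true"
      effect: NoSchedule
  containers:
    - name: app
      image: nginx
EOF
```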

### Does Karpenter support topology spread constraints?
Yes. Provisioners respect `pod.spec.topologySpreadConstraints`. Allocating pods with these constraints may yield highly fragmented nodes, due to their strict nature and the complexity of “online binpacking” algorithms.
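
A minimal sketch of such a constraint, assuming a hypothetical `app: web` label and a max skew of one pod per zone:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: spread-pod
  labels:
    app: web                 # hypothetical label used by the spread selector
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: web
  containers:
    - name: app
      image: nginx
EOF
```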

### Does Karpenter support affinity?
No. Karpenter intentionally does not support affinity due to the [scalability limitations](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) outlined by SIG Scalability. Instead, we recommend using node selectors or taints in place of node affinity, and pod topology spread in place of pod affinity. Do you have a use case for affinity that we're missing? Open an issue in our [GitHub repo](https://github.com/awslabs/karpenter/issues/new/choose) and tell us about it!

### Does Karpenter support custom resources like accelerators or HPC?
Yes. Support for specific custom resources can be implemented by your cloud provider.

### Does Karpenter support daemonsets?
Yes. Provisioners factor daemonset overhead into all allocation calculations. They also respect daemonset scheduling constraints, such as Nvidia’s GPU Driver Installer.

### Does Karpenter support multiple scheduling defaults?
Provisioners are heterogeneous, which means that the nodes they manage are spread across multiple availability zones, instance types, and capacity types. This flexibility reduces the need for a large number of groups. However, customers may find multiple groups to be useful for more advanced use cases. For example, customers can create multiple groups, and then use the node selector `karpenter.sh/provisioner-name` to target specific groups, as sketched below. This enables advanced use cases like resource isolation and sharding.
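
A minimal sketch of targeting a specific group this way, assuming a hypothetical Provisioner named `gpu-provisioner`:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: targeted-pod
spec:
  nodeSelector:
    karpenter.sh/provisioner-name: gpu-provisioner  # hypothetical Provisioner name
  containers:
    - name: app
      image: nginx
EOF
```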

### What if my pod is schedulable for multiple Provisioners?
It's possible that unconstrained pods could flexibly schedule in multiple groups. In this case, Provisioners will race to create a scheduling lease for the pod before launching new nodes, which avoids unnecessary scale out.

## Deprovisioning

### How does Karpenter decide which nodes it can terminate?
A Provisioner will only take action on nodes that it manages. This means that a node will only be considered for termination if it is labeled underutilized by the Provisioner that manages it.

### How do I know if a node is underutilized?
Nodes are labeled underutilized if they have 0 non-daemonset pods scheduled. We plan to include more use cases in the future. A node needs to be underutilized for a period of time before being considered for termination.

### How does Karpenter terminate nodes?
Karpenter annotates nodes that are underutilized with a time to live (TTL). If the node remains underutilized after the TTL expires, Karpenter then [cordons](https://kubernetes.io/docs/concepts/architecture/nodes/#manual-node-administration) the node and uses the [Kubernetes Eviction API](https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/#eviction-api) to evict all non-daemonset pods. Once the node is empty, the node is terminated.

### Does Karpenter support Pod Disruption Budgets?
Yes. The Kubernetes Eviction API will not delete pods that violate a [Pod Disruption Budget (PDB)](https://kubernetes.io/docs/tasks/run-application/configure-pdb/). It also disallows eviction of any pod covered by multiple PDBs, so most users will want to avoid overlapping selectors. See [this](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#pod-disruption-budgets) for more.

### Does Karpenter support scale to zero?
Yes. Provisioners start at zero and launch or terminate nodes as necessary. We recommend that customers maintain a small amount of static capacity to bootstrap system controllers, or run Karpenter outside of their cluster.