document issues that arise from namespace based provisioning selection
Closes aws#1493
tzneal committed Mar 14, 2022
1 parent 3266cdc commit 6815d68
Showing 1 changed file with 10 additions and 0 deletions.
website/content/en/preview/faq.md: 10 additions & 0 deletions
@@ -71,6 +71,16 @@ This is analogous to the default scheduler.
To select an alternative provisioner, use the node selector `karpenter.sh/provisioner-name: alternative-provisioner`.
You must either define a default provisioner or explicitly specify the `karpenter.sh/provisioner-name` node selector.
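
For example, a pod can target that alternative provisioner with a node selector. This is a minimal sketch; the pod name and container image are illustrative:

```yaml
# Hypothetical pod that opts into the provisioner named "alternative-provisioner"
apiVersion: v1
kind: Pod
metadata:
  name: workload
spec:
  nodeSelector:
    karpenter.sh/provisioner-name: alternative-provisioner
  containers:
    - name: app
      image: public.ecr.aws/docker/library/nginx:latest
```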

### How can I configure Karpenter to only provision pods for a particular namespace?

There is no native support for namespace-based provisioning.
Karpenter can be configured to provision a subset of pods based on a combination of taints/tolerations and node selectors.
This allows Karpenter to work in concert with the `kube-scheduler`: Karpenter applies the same mechanisms the `kube-scheduler` uses to determine whether a pod can be bound to a node.
This avoids scenarios where the `kube-scheduler` binds pods to Karpenter-provisioned nodes that Karpenter itself would not have chosen for them.
If that were to occur, a node could remain non-empty and have its lifetime extended by a pod that would not have caused the node to be provisioned had the pod been unschedulable.
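
The following is a minimal sketch of this pattern, assuming the v1alpha5 Provisioner API used by these preview docs; the provisioner name, label key, taint, and pod details are illustrative:

```yaml
# A provisioner that taints its nodes; only pods that tolerate the taint
# (and select the matching label) will be provisioned onto them.
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: restricted
spec:
  labels:
    team: restricted
  taints:
    - key: restricted
      value: "true"
      effect: NoSchedule
---
# A pod that opts into the restricted provisioner via a matching
# toleration and node selector.
apiVersion: v1
kind: Pod
metadata:
  name: restricted-workload
spec:
  nodeSelector:
    team: restricted
  tolerations:
    - key: restricted
      operator: Equal
      value: "true"
      effect: NoSchedule
  containers:
    - name: app
      image: public.ecr.aws/docker/library/busybox:latest
      command: ["sleep", "3600"]
```

Pods without the toleration are rejected by the taint, so the `kube-scheduler` and Karpenter reach the same placement decision for them.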

Some users of Karpenter have had success using mutating admission webhooks to assign tolerations to pods on admission in order to achieve namespace-based provisioning. An example of this can be seen [here](https://blog.mikesir87.io/2022/01/creating-tenant-node-pools-with-karpenter/).

### Can I set total limits of CPU and memory for a provisioner?
Yes, the setting is provider-specific.
See examples in the [Accelerators, GPU]({{< ref "./aws/provisioning/#accelerators-gpu" >}}) section of the Karpenter documentation.
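
For illustration, a hedged sketch assuming the v1alpha5 `spec.limits.resources` field; the resource names and values are examples, and provider-specific extended resources such as GPUs are what the linked page covers:

```yaml
# Hypothetical provisioner with aggregate resource limits; once the total
# resources of nodes launched by this provisioner reach a limit, Karpenter
# stops launching additional capacity for it.
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  limits:
    resources:
      cpu: "1000"          # assumed example values
      memory: 1000Gi
      nvidia.com/gpu: "2"  # provider-specific extended resource
```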