docs: Add more scheduling examples #5977

Merged 3 commits on Apr 8, 2024
88 changes: 88 additions & 0 deletions website/content/en/docs/concepts/scheduling.md
@@ -532,6 +532,94 @@ Based on the way that Karpenter performs pod batching and bin packing, it is not

## Advanced Scheduling Techniques

### Scheduling based on Node Resources

You may want pods to request node capabilities that Kubernetes does not natively expose as schedulable resources, such as high-performance networking or local NVMe storage. You can use Karpenter's well-known labels to accomplish this.

These labels can be applied at the NodePool level using requirements, or at the workload level using node selectors or node affinities.
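For instance, the simplest workload-level form is a plain node selector with an exact label value; a minimal sketch (the pod name, image, and chosen label value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-optimized-example
spec:
  # Exact-match scheduling on a Karpenter well-known label:
  # only memory-optimized (r-family) instances qualify.
  nodeSelector:
    karpenter.k8s.aws/instance-category: r
  containers:
    - name: app
      image: public.ecr.aws/docker/library/busybox:latest
```

Node selectors only support exact matches; for `Exists`, `Gt`, or `Lt` semantics, use node affinity as shown in the following examples.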

Pod example requiring any local NVMe disk:
```yaml
...
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: "karpenter.k8s.aws/instance-local-nvme"
              operator: "Exists"
...
```

NodePool Example:
```yaml
...
requirements:
  - key: "karpenter.k8s.aws/instance-local-nvme"
    operator: "Exists"
...
```
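For context, these requirement snippets live under `spec.template.spec.requirements` in a NodePool; a minimal sketch (the NodePool name and `nodeClassRef` target are illustrative):

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: nvme-pool
spec:
  template:
    spec:
      # Restrict this pool to instance types with local NVMe storage.
      requirements:
        - key: "karpenter.k8s.aws/instance-local-nvme"
          operator: "Exists"
      nodeClassRef:
        apiVersion: karpenter.k8s.aws/v1beta1
        kind: EC2NodeClass
        name: default
```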

Pod example requiring at least 100 GB of local NVMe disk:
```yaml
...
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: "karpenter.k8s.aws/instance-local-nvme"
              operator: Gt
              values: ["99"]
...
```

NodePool Example:
```yaml
...
requirements:
  - key: "karpenter.k8s.aws/instance-local-nvme"
    operator: Gt
    values: ["99"]
...
```

{{% alert title="Note" color="primary" %}}
Karpenter cannot yet take ephemeral-storage requests into account while scheduling pods; these examples request attributes of nodes, and the extra capacity arrives only as a side effect. You may need to tweak schedulable resources like CPU or memory to achieve the desired fit, especially if consolidation is enabled.

Your NodeClass will also need to support automatically formatting and mounting NVMe instance storage, if available.
{{% /alert %}}
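Since Karpenter bin-packs only on the resources it knows about, one way to steer the fit is to pair the node affinity with explicit CPU and memory requests; a sketch under that assumption (the pod name, image, and request sizes are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nvme-workload
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              # Require instance types with more than 99 GB of local NVMe.
              - key: "karpenter.k8s.aws/instance-local-nvme"
                operator: Gt
                values: ["99"]
  containers:
    - name: app
      image: public.ecr.aws/docker/library/busybox:latest
      # Requests guide Karpenter's bin packing toward an instance size
      # that leaves headroom even after consolidation.
      resources:
        requests:
          cpu: "2"
          memory: 4Gi
```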

Pod example requiring at least 50 Gbps of network bandwidth (the label value is expressed in Mbps):
```yaml
...
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: "karpenter.k8s.aws/instance-network-bandwidth"
              operator: Gt
              values: ["49999"]
...
```

NodePool Example:
```yaml
...
requirements:
  - key: "karpenter.k8s.aws/instance-network-bandwidth"
    operator: Gt
    values: ["49999"]
...
```

{{% alert title="Note" color="primary" %}}
The `Gt` and `Lt` operators are exclusive comparisons, so choose a value just below (or above) the threshold you actually want. For example, `Gt` with `"49999"` matches nodes whose bandwidth label is 50000 or more.
{{% /alert %}}

### `Exists` Operator

The `Exists` operator can be used on a NodePool to provide workload segregation across nodes.
88 changes: 88 additions & 0 deletions website/content/en/preview/concepts/scheduling.md
88 changes: 88 additions & 0 deletions website/content/en/v0.32/concepts/scheduling.md
88 changes: 88 additions & 0 deletions website/content/en/v0.33/concepts/scheduling.md