pod_anti_affinity does not force multiple nodes scheduling #4440
Labels: bug (Something isn't working)

Comments
Can you show the actual deployment spec from the API server as YAML? kube-scheduler schedules pods to nodes, so unless it has the same bug as well, there's some issue with the pod spec.
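For reference, the stored spec can be dumped straight from the API server (a minimal example, assuming the StatefulSet below, named web in the ss namespace):

```shell
kubectl get statefulset web -n ss -o yaml
```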
@tzneal here's the StatefulSet's config:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  creationTimestamp: "2023-08-16T12:31:35Z"
  generation: 1
  name: web
  namespace: ss
  resourceVersion: "622769"
  uid: ba3bd7cc-ceb9-487a-a2d2-7560ac2aa3d7
spec:
  podManagementPolicy: OrderedReady
  replicas: 3
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: nginx
  serviceName: nginx
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: karpenter.sh/provisioner-name
                operator: In
                values:
                - default
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - nginx
            topologyKey: topology.kubernetes.io/hostname
      automountServiceAccountToken: true
      containers:
      - image: registry.k8s.io/nginx-slim:0.8
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
          name: web
          protocol: TCP
        resources:
          limits:
            cpu: 50m
            memory: 64Mi
          requests:
            cpu: 50m
            memory: 64Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          mountPropagation: None
          name: www
      dnsPolicy: ClusterFirst
      enableServiceLinks: true
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      shareProcessNamespace: false
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      creationTimestamp: null
      name: www
      namespace: default
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      volumeMode: Filesystem
    status:
      phase: Pending
status:
  availableReplicas: 3
  collisionCount: 0
  currentReplicas: 3
  currentRevision: web-596fd8747
  observedGeneration: 1
  readyReplicas: 3
  replicas: 3
  updateRevision: web-596fd8747
  updatedReplicas: 3
```
Talked on Slack, I'll close this one, but feel free to re-open if you still run into issues.
Thank you for the fast reply.
Description
Observed Behavior:
All of the StatefulSet's pods are scheduled onto one node.
Expected Behavior:
Taking #942 into account, pod_anti_affinity should be honored, and the 3 StatefulSet pods should be scheduled on separate nodes. If I'm missing something, please let me know.
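One quick way to see where the replicas actually landed (a sketch, assuming the app=nginx label and ss namespace from the spec above) is to list the pods with their node assignments:

```shell
kubectl get pods -n ss -l app=nginx -o wide
```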
Reproduction Steps (Please include YAML):
provisioner:
workload:
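The provisioner and workload manifests weren't reproduced here. A purely illustrative sketch of a minimal provisioner such a workload could target, assuming the v1alpha5 Provisioner API used by this Karpenter release and the provisioner name default referenced in the node affinity above (the StatefulSet shown earlier stands in for the workload):

```yaml
# Hypothetical example -- not the reporter's actual manifest.
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: default
spec:
  requirements:
  - key: kubernetes.io/arch
    operator: In
    values: ["amd64"]
  limits:
    resources:
      cpu: "100"
  ttlSecondsAfterEmpty: 30
```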
Versions:
v0.30.0-rc.0
Kubernetes Version (kubectl version):