## Expected Behavior
We expected the pod to be deleted once it reaches the "Completed" status and its Job is deleted (according to the `successfulJobsHistoryLimit` parameter). The Job was deleted, but its pod stayed in the "Completed" status for about 12 minutes.
## Actual Behavior
The pod reaches the "Completed" status, the Job completes and is deleted, but its pod remains in the "Completed" status for about 12 minutes.
We use these parameters:

```yaml
pollingInterval: 30
cooldownPeriod: 30
```

so the delay should not be 12 minutes.
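As a sanity check, here is the worst-case cleanup delay we would expect from these two settings (this is our reading of the parameters, not a documented KEDA guarantee):

```python
# Worst case: one full polling interval passes before KEDA notices the queue
# is empty, then the cooldown period elapses before cleanup.
polling_interval = 30   # seconds between trigger checks (pollingInterval)
cooldown_period = 30    # seconds to wait after last activity (cooldownPeriod)

expected_max_delay = polling_interval + cooldown_period
observed_delay = 12 * 60  # the ~12 minutes we actually observe

print(expected_max_delay)                    # 60
print(observed_delay // expected_max_delay)  # 12 — an order of magnitude off
```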
Another problem concerns the rabbitmq trigger: if we have several jobs for the same queue, all pods are deleted only once the last pod of the last job has been in the "Completed" state for 12 minutes.
For example:

```
pod-1   0/1   Completed   0   21m
pod-2   0/1   Completed   0   20m
pod-3   0/1   Completed   0   18m
pod-4   0/1   Completed   0   14m
pod-5   0/1   Completed   0   11m
```

When pod-5 reaches an age of 12m, all of these pods are destroyed. It looks like KEDA checks the rabbitmq queue every 12 minutes (why not every 30 seconds?) and, if it is empty, deletes all completed pods; if the queue is never empty, these pods will stay in Kubernetes forever.
Our config:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledJob
metadata:
  name: job
spec:
  jobTargetRef:
    parallelism: 1
    completions: 1
    backoffLimit: 2
    template:
      spec:
        containers:
          # **something that can handle queue**
  pollingInterval: 30
  cooldownPeriod: 30
  minReplicaCount: 0
  maxReplicaCount: 100
  successfulJobsHistoryLimit: 0
  failedJobsHistoryLimit: 0
  advanced:
    horizontalPodAutoscalerConfig:
      resourceMetrics:
        - name: cpu  # Name of the metric to scale on
          target:
            type: utilization
            averageUtilization: 50
        - name: memory
          target:
            type: utilization
            averageUtilization: 50
  triggers:
    - type: rabbitmq
      metadata:
        protocol: amqp
        queueName: long
        queueLength: "1"
      authenticationRef:
        name: keda-trigger-auth-rabbitmq-conn
```
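For context, this is how we understand the trigger's scaling math with `queueLength: "1"`; a minimal sketch assuming roughly one job per `queueLength` messages (our reading, not KEDA's actual code):

```python
import math

def desired_jobs(messages, target_queue_length, max_replicas, running_jobs=0):
    """Hypothetical simplification of queue-based ScaledJob scaling:
    start one job per `target_queue_length` messages, capped at
    maxReplicaCount, minus jobs that are already running."""
    want = math.ceil(messages / target_queue_length)
    return max(0, min(want, max_replicas) - running_jobs)

# With queueLength: "1" and maxReplicaCount: 100 from the config above:
print(desired_jobs(5, 1, 100))    # 5   — one job per message
print(desired_jobs(250, 1, 100))  # 100 — capped at maxReplicaCount
print(desired_jobs(0, 1, 100))    # 0   — empty queue, jobs should wind down
```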
## Steps to Reproduce the Problem

1. Install KEDA via Helm.
2. Create a ScaledJob with a similar config.
3. Create a TriggerAuthentication to connect to your rabbitmq instance.
4. Publish any message to rabbitmq and watch the behavior of the pods.
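The steps above, roughly as commands (a sketch; the namespace, manifest file names, and the `rabbitmqadmin` invocation are illustrative):

```shell
# 1. Install KEDA via Helm (kedacore chart repo)
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace

# 2./3. Apply the ScaledJob and TriggerAuthentication from the config above
kubectl apply -f scaledjob.yaml
kubectl apply -f trigger-auth.yaml

# 4. Publish a test message to the "long" queue and watch pod behavior
rabbitmqadmin publish routing_key=long payload='hello'
kubectl get pods --watch
```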
## Specifications

- **KEDA Version:** 2.0.0-beta1.2
- **Platform & Version:** Linux
- **Kubernetes Version:** EKS v1.17
- **Scaler(s):** RabbitMQ