GMC: add UT for reconcile filters #383

Merged (10 commits, Sep 4, 2024)
18 changes: 9 additions & 9 deletions .github/workflows/scripts/e2e/gmc_xeon_test.sh
@@ -134,7 +134,7 @@ function validate_audioqa() {
     exit 1
   fi

-  pods_count=$(kubectl get pods -n $AUDIOQA_NAMESPACE -o jsonpath='{.items[*].metadata.name}' | wc -w)
+  pods_count=$(kubectl get pods -n $AUDIOQA_NAMESPACE --no-headers | grep -v "Terminating" | wc -l)
   # expected_ready_pods, expected_external_pods, expected_total_pods
   # pods_count-1 is to exclude the client pod in this namespace
   check_gmc_status $AUDIOQA_NAMESPACE 'audioqa' $((pods_count-1)) 0 7
Collaborator

Hi @KfreeZ, why do you use a precise number instead of pods_count?

Collaborator Author

Hi @zhlsunshine, because the pod count is sometimes inaccurate: a terminating pod that has not yet been cleaned up is still counted.

Collaborator

Okay, I thought it should be fine after calling wait_until_all_pod_ready. If so, could we just fix the bug in that function?

Collaborator Author

The latest wait_until_all_pod_ready already excludes the "Terminating" pods, so pods may still exist in the "Terminating" state after it returns. Yes, I could also count the pods while excluding the "Terminating" ones, but for these test cases the ready and total counts are fixed numbers; for chatqa, for instance, "8/0/8" is exactly what we expect after a deploy.

Collaborator

Okay, then let's exclude the Terminating pods when calculating pods_count.
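The counting approach agreed in this thread can be sketched with simulated `kubectl get pods --no-headers` output (the pod names below are made up for illustration; `grep -cv` is equivalent to `grep -v | wc -l`):

```shell
# Simulated `kubectl get pods --no-headers` output; pod names are hypothetical.
pod_list='client-pod   1/1  Running      0  5m
router-svc   1/1  Running      0  5m
old-embed    1/1  Terminating  0  9m'

# Count only the pods that are not Terminating, then subtract the client pod,
# mirroring the expected-count argument passed to check_gmc_status.
pods_count=$(printf '%s\n' "$pod_list" | grep -cv "Terminating")
echo "$pods_count"           # prints 2: pods excluding the Terminating one
echo "$((pods_count - 1))"   # prints 1: minus the client pod
```

One line per pod is why a line-based count is needed: a word-based count would see five columns per pod and overcount.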

@@ -187,7 +187,7 @@ function validate_chatqna() {
     exit 1
   fi

-  pods_count=$(kubectl get pods -n $CHATQNA_NAMESPACE -o jsonpath='{.items[*].metadata.name}' | wc -w)
+  pods_count=$(kubectl get pods -n $CHATQNA_NAMESPACE --no-headers | grep -v "Terminating" | wc -l)
   # expected_ready_pods, expected_external_pods, expected_total_pods
   # pods_count-1 is to exclude the client pod in this namespace
   check_gmc_status $CHATQNA_NAMESPACE 'chatqa' $((pods_count-1)) 0 9
@@ -257,7 +257,7 @@ function validate_chatqna_with_dataprep() {
     exit 1
   fi

-  pods_count=$(kubectl get pods -n $CHATQNA_DATAPREP_NAMESPACE -o jsonpath='{.items[*].metadata.name}' | wc -w)
+  pods_count=$(kubectl get pods -n $CHATQNA_DATAPREP_NAMESPACE --no-headers | grep -v "Terminating" | wc -l)
   # expected_ready_pods, expected_external_pods, expected_total_pods
   # pods_count-1 is to exclude the client pod in this namespace
   check_gmc_status $CHATQNA_DATAPREP_NAMESPACE 'chatqa' $((pods_count-1)) 0 10
@@ -350,7 +350,7 @@ function validate_chatqna_in_switch() {
     exit 1
   fi

-  pods_count=$(kubectl get pods -n $CHATQNA_SWITCH_NAMESPACE -o jsonpath='{.items[*].metadata.name}' | wc -w)
+  pods_count=$(kubectl get pods -n $CHATQNA_SWITCH_NAMESPACE --no-headers | grep -v "Terminating" | wc -l)
   # expected_ready_pods, expected_external_pods, expected_total_pods
   # pods_count-1 is to exclude the client pod in this namespace
   check_gmc_status $CHATQNA_SWITCH_NAMESPACE 'switch' $((pods_count-1)) 0 15
@@ -443,8 +443,8 @@ function validate_modify_config() {
     exit 1
   fi

-  pods_count=$(kubectl get pods -n $MODIFY_STEP_NAMESPACE -o jsonpath='{.items[*].metadata.name}' | wc -w)
-  check_gmc_status $MODIFY_STEP_NAMESPACE 'codegen' $pods_count 0 3
+  pods_count=$(kubectl get pods -n $MODIFY_STEP_NAMESPACE --no-headers | grep -v "Terminating" | wc -l)
+  check_gmc_status $MODIFY_STEP_NAMESPACE 'codegen' $((pods_count-1)) 0 3
   if [ $? -ne 0 ]; then
     echo "GMC status is not as expected"
     exit 1
@@ -461,7 +461,7 @@ function validate_modify_config() {
     exit 1
   fi

-  check_gmc_status $MODIFY_STEP_NAMESPACE 'codegen' $pods_count 0 3
+  check_gmc_status $MODIFY_STEP_NAMESPACE 'codegen' $((pods_count-1)) 0 3
   if [ $? -ne 0 ]; then
     echo "GMC status is not as expected"
     exit 1
@@ -494,8 +494,8 @@ function validate_remove_step() {
     exit 1
   fi

-  pods_count=$(kubectl get pods -n $DELETE_STEP_NAMESPACE -o jsonpath='{.items[*].metadata.name}' | wc -w)
-  check_gmc_status $DELETE_STEP_NAMESPACE 'codegen' $pods_count 0 3
+  pods_count=$(kubectl get pods -n $DELETE_STEP_NAMESPACE --no-headers | grep -v "Terminating" | wc -l)
+  check_gmc_status $DELETE_STEP_NAMESPACE 'codegen' $((pods_count-1)) 0 3
   if [ $? -ne 0 ]; then
     echo "GMC status is not as expected"
     exit 1
151 changes: 74 additions & 77 deletions microservices-connector/internal/controller/gmconnector_controller.go
@@ -797,100 +797,97 @@ func isMetadataChanged(oldObject, newObject *metav1.ObjectMeta) bool {
 	return false
 }

-// SetupWithManager sets up the controller with the Manager.
-func (r *GMConnectorReconciler) SetupWithManager(mgr ctrl.Manager) error {
-	// Predicate to ignore updates to status subresource
-	ignoreStatusUpdatePredicate := predicate.Funcs{
-		UpdateFunc: func(e event.UpdateEvent) bool {
-			// Cast objects to your GMConnector struct
-			oldObject, ok1 := e.ObjectOld.(*mcv1alpha3.GMConnector)
-			newObject, ok2 := e.ObjectNew.(*mcv1alpha3.GMConnector)
-			if !ok1 || !ok2 {
-				// Not the correct type, allow the event through
-				return true
-			}
+func isGMCSpecOrMetadataChanged(e event.UpdateEvent) bool {
+	oldObject, ok1 := e.ObjectOld.(*mcv1alpha3.GMConnector)
+	newObject, ok2 := e.ObjectNew.(*mcv1alpha3.GMConnector)
+	if !ok1 || !ok2 {
+		// Not the correct type, allow the event through
+		return true
+	}

-			specChanged := !reflect.DeepEqual(oldObject.Spec, newObject.Spec)
-			metadataChanged := isMetadataChanged(&(oldObject.ObjectMeta), &(newObject.ObjectMeta))
+	specChanged := !reflect.DeepEqual(oldObject.Spec, newObject.Spec)
+	metadataChanged := isMetadataChanged(&(oldObject.ObjectMeta), &(newObject.ObjectMeta))

-			fmt.Printf("\n| spec changed %t | meta changed: %t |\n", specChanged, metadataChanged)
+	fmt.Printf("\n| spec changed %t | meta changed: %t |\n", specChanged, metadataChanged)

-			// Compare the old and new spec, ignore metadata, status changes
-			// metadata change: name, namespace, such change should create a new GMC
-			// status change: depoyment status
-			return specChanged || metadataChanged
-		},
-		// Other funcs like CreateFunc, DeleteFunc, GenericFunc can be left as default
-		// if you only want to customize the UpdateFunc behavior.
-	}
+	// Compare the old and new spec, ignore metadata, status changes
+	// metadata change: name, namespace, such change should create a new GMC
+	// status change: deployment status
+	return specChanged || metadataChanged
+}

-	// Predicate to only trigger on status changes for Deployment
-	deploymentStatusChangePredicate := predicate.Funcs{
-		UpdateFunc: func(e event.UpdateEvent) bool {
-			oldDeployment, ok1 := e.ObjectOld.(*appsv1.Deployment)
-			newDeployment, ok2 := e.ObjectNew.(*appsv1.Deployment)
-			if !ok1 || !ok2 {
-				// Not the correct type, allow the event through
-				fmt.Printf("| status missing |\n")
-				return true
-			}
+func isDeploymentStatusChanged(e event.UpdateEvent) bool {
+	oldDeployment, ok1 := e.ObjectOld.(*appsv1.Deployment)
+	newDeployment, ok2 := e.ObjectNew.(*appsv1.Deployment)
+	if !ok1 || !ok2 {
+		// Not the correct type, allow the event through
+		return true
+	}

-			if len(newDeployment.OwnerReferences) == 0 {
-				// fmt.Printf("| %s:%s: no owner reference |\n", newDeployment.Namespace, newDeployment.Name)
-				return false
-			} else {
-				for _, owner := range newDeployment.OwnerReferences {
-					if owner.Kind == "GMConnector" {
-						// fmt.Printf("| %s:%s: owner is GMConnector |\n", newDeployment.Namespace, newDeployment.Name)
-						break
-					}
-				}
-			}
+	if len(newDeployment.OwnerReferences) == 0 {
+		// fmt.Printf("| %s:%s: no owner reference |\n", newDeployment.Namespace, newDeployment.Name)
+		return false
+	} else {
+		for _, owner := range newDeployment.OwnerReferences {
+			if owner.Kind == "GMConnector" {
+				// fmt.Printf("| %s:%s: owner is GMConnector |\n", newDeployment.Namespace, newDeployment.Name)
+				break
+			}
+		}
+	}

-			oldStatus := corev1.ConditionUnknown
-			newStatus := corev1.ConditionUnknown
-			if !reflect.DeepEqual(oldDeployment.Status, newDeployment.Status) {
-				if newDeployment.Status.Conditions != nil {
-					for _, condition := range newDeployment.Status.Conditions {
-						if condition.Type == appsv1.DeploymentAvailable {
-							newStatus = condition.Status
-						}
-					}
-				}
-				if oldDeployment.Status.Conditions != nil {
-					for _, condition := range oldDeployment.Status.Conditions {
-						if condition.Type == appsv1.DeploymentAvailable {
-							oldStatus = condition.Status
-						}
-					}
-				}
-				// Only trigger if the status has changed from true to false|unknown or vice versa
-				if (oldStatus == corev1.ConditionTrue && oldStatus != newStatus) ||
-					(newStatus == corev1.ConditionTrue && oldStatus != newStatus) {
-					fmt.Printf("| %s:%s: status changed from : %v to %v|\n",
-						newDeployment.Namespace, newDeployment.Name,
-						oldStatus, newStatus)
-					return true
-				}
-			}
-			return false
-		},
-		//ignore create and delete events, otherwise it will trigger the nested reconcile which is meaningless
-		CreateFunc: func(e event.CreateEvent) bool {
-			return false
-		}, DeleteFunc: func(e event.DeleteEvent) bool {
-			return false
-		},
-	}
+	oldStatus := corev1.ConditionUnknown
+	newStatus := corev1.ConditionUnknown
+	if !reflect.DeepEqual(oldDeployment.Status, newDeployment.Status) {
+		if newDeployment.Status.Conditions != nil {
+			for _, condition := range newDeployment.Status.Conditions {
+				if condition.Type == appsv1.DeploymentAvailable {
+					newStatus = condition.Status
+				}
+			}
+		}
+		if oldDeployment.Status.Conditions != nil {
+			for _, condition := range oldDeployment.Status.Conditions {
+				if condition.Type == appsv1.DeploymentAvailable {
+					oldStatus = condition.Status
+				}
+			}
+		}
+		// Only trigger if the status has changed from true to false|unknown or vice versa
+		if (oldStatus == corev1.ConditionTrue && oldStatus != newStatus) ||
+			(newStatus == corev1.ConditionTrue && oldStatus != newStatus) {
+			fmt.Printf("| %s:%s: status changed from : %v to %v|\n",
+				newDeployment.Namespace, newDeployment.Name,
+				oldStatus, newStatus)
+			return true
+		}
+	}
+	return false
+}

+// SetupWithManager sets up the controller with the Manager.
+func (r *GMConnectorReconciler) SetupWithManager(mgr ctrl.Manager) error {
+	// Predicate to ignore updates to status subresource
+	gmcfilter := predicate.Funcs{
+		UpdateFunc: isGMCSpecOrMetadataChanged,
+		// Other funcs like CreateFunc, DeleteFunc, GenericFunc can be left as default
+		// if you only want to customize the UpdateFunc behavior.
+	}
+
+	// Predicate to only trigger on status changes for Deployment
+	deploymentFilter := predicate.Funcs{
+		UpdateFunc: isDeploymentStatusChanged,
+	}
+
 	return ctrl.NewControllerManagedBy(mgr).
-		For(&mcv1alpha3.GMConnector{}, builder.WithPredicates(ignoreStatusUpdatePredicate)).
+		For(&mcv1alpha3.GMConnector{}, builder.WithPredicates(gmcfilter)).
 		Watches(
 			&appsv1.Deployment{},
 			&handler.EnqueueRequestForObject{},
-			builder.WithPredicates(deploymentStatusChangePredicate),
+			builder.WithPredicates(deploymentFilter),
 		).
 		Complete(r)
 }
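The refactor above is what enables the unit tests this PR adds: a named filter function can be called directly with a hand-built update event, rather than being trapped inside a closure in SetupWithManager. A minimal self-contained sketch of that pattern, using simplified stand-ins for the controller-runtime and GMConnector types (all names here are illustrative, not the project's actual API):

```go
package main

import "fmt"

// Simplified stand-ins for controller-runtime's event.UpdateEvent and the
// GMConnector CRD type; these are assumptions made for a runnable example.
type gmConnector struct {
	Spec string
}

type updateEvent struct {
	ObjectOld interface{}
	ObjectNew interface{}
}

// isSpecChanged mirrors the shape of isGMCSpecOrMetadataChanged: a plain
// function that a unit test can invoke with a constructed event.
func isSpecChanged(e updateEvent) bool {
	oldObj, ok1 := e.ObjectOld.(*gmConnector)
	newObj, ok2 := e.ObjectNew.(*gmConnector)
	if !ok1 || !ok2 {
		// Not the expected type, allow the event through
		return true
	}
	return oldObj.Spec != newObj.Spec
}

func main() {
	same := updateEvent{ObjectOld: &gmConnector{Spec: "a"}, ObjectNew: &gmConnector{Spec: "a"}}
	diff := updateEvent{ObjectOld: &gmConnector{Spec: "a"}, ObjectNew: &gmConnector{Spec: "b"}}
	fmt.Println(isSpecChanged(same), isSpecChanged(diff)) // prints: false true
}
```

Wiring the same function into `predicate.Funcs{UpdateFunc: ...}` and exercising it directly in a test are then two views of one piece of code, which is the point of extracting the closures into named functions.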