test: script for testing operator performance #202
Conversation
Force-pushed from ec59331 to 3681d32
test/performance/operf.go (Outdated)
}
_, err := c.CoreV1().PersistentVolumeClaims(*testNamespace).Create(context.TODO(), &pvc, metav1.CreateOptions{})
if err != nil {
	fmt.Printf("Error creating PVC <%s-%d>: %s\n", PVCName, i, err)
I suggest that the error should be handled by failing the entire process if there is any error creating the pods. Since we will be evaluating the output based on the number of resources (pods) we are creating, we should ensure that the desired number of pods is always running before printing the metrics.
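A minimal sketch of the fail-fast behaviour suggested here; the helper name, parameters, and PVC template handling are illustrative assumptions, not code from operf.go:

```go
package perf

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createPVCsOrDie creates `count` copies of the PVC template and aborts the
// whole run on the first failure, so metrics are only reported for a complete
// set of resources. (Hypothetical helper sketched for illustration.)
func createPVCsOrDie(c kubernetes.Interface, namespace, baseName string, template corev1.PersistentVolumeClaim, count int) {
	for i := 0; i < count; i++ {
		pvc := template
		pvc.Name = fmt.Sprintf("%s-%d", baseName, i)
		_, err := c.CoreV1().PersistentVolumeClaims(namespace).Create(context.TODO(), &pvc, metav1.CreateOptions{})
		if err != nil {
			// Fail the entire process: metrics over a partial resource set would be misleading.
			log.Fatalf("error creating PVC <%s>: %v", pvc.Name, err)
		}
	}
}
```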
test/performance/operf.go (Outdated)
}
_, err := c.CoreV1().Pods(*testNamespace).Create(context.TODO(), &pod, metav1.CreateOptions{})
if err != nil {
	fmt.Printf("Error creating Pod <%s-%d>: %s\n", PodName, i, err)
I suggest that the error should be handled by failing the entire process if there is any error creating the pods. Since we will be evaluating the output based on the number of resources (pods) we are creating, we should ensure that the desired number of pods is always running before printing the metrics.
This is a test script for metrics and not actual functional testing. IMO we should print such errors and continue with getting the metrics.
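For contrast, a sketch of the best-effort behaviour described in this reply, where failures are logged and the count of successfully created pods is returned so the metrics can still be interpreted; the function and parameter names are assumptions rather than the script's actual code:

```go
package perf

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createPodsBestEffort logs creation failures and keeps going, returning how
// many pods were actually created. (Hypothetical helper for illustration.)
func createPodsBestEffort(c kubernetes.Interface, namespace, baseName string, template corev1.Pod, count int) int {
	created := 0
	for i := 0; i < count; i++ {
		pod := template
		pod.Name = fmt.Sprintf("%s-%d", baseName, i)
		if _, err := c.CoreV1().Pods(namespace).Create(context.TODO(), &pod, metav1.CreateOptions{}); err != nil {
			fmt.Printf("Error creating Pod <%s>: %s\n", pod.Name, err)
			continue
		}
		created++
	}
	return created
}
```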
test/performance/operf.go (Outdated)
for i := 0; i < len(values); i++ {
	v, err := strconv.ParseFloat(fmt.Sprintf("%s", values[i][1]), 64)
	if err != nil {
		fmt.Println("error converting ", values[i][1], " to number")
Use fmt.Errorf here.
This error should be returned and handled correctly.
The script is designed to do its best and fetch whatever details it can. It panics only if it cannot proceed at all.
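As an illustration of the fmt.Errorf suggestion above, a sketch of the parsing loop returning a wrapped error instead of printing and moving on; the [][]interface{} shape of values is an assumption based on the snippet:

```go
package perf

import (
	"fmt"
	"strconv"
)

// parseSampleValues converts the second element of each sample to a float64.
// It returns a wrapped error on the first bad value so the caller can decide
// whether to abort or skip the sample. (Hypothetical helper for illustration.)
func parseSampleValues(values [][]interface{}) ([]float64, error) {
	out := make([]float64, 0, len(values))
	for i := range values {
		v, err := strconv.ParseFloat(fmt.Sprintf("%v", values[i][1]), 64)
		if err != nil {
			return nil, fmt.Errorf("converting %v to a number: %w", values[i][1], err)
		}
		out = append(out, v)
	}
	return out, nil
}
```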
test/performance/operf.go (Outdated)
kubeconfig = &val

// use the current context in kubeconfig
config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
There is a deploymanager created for the e2e tests.
Can that be reused here?
We can do that, but we need to make sure that we do not change anything in the future that will break this script. Moreover, it is only about 15 lines of code; replacing it would hardly save a few lines.
We can do that, but we need to make sure that we do not change anything in the future that will break this script.
Agreed. But this mostly looks like static code to get the client, which I don't expect to change.
Moreover, it is only about 15 lines of code; replacing it would hardly save a few lines.
The idea is to have a common utility that provides the client for connecting to k8s via kubeconfig. If we need to create the client again in the future, this utility can be reused rather than writing the same logic to generate the client again.
I just tried this. The deploy manager uses the client from "sigs.k8s.io/controller-runtime/pkg/client", while this script uses the client from "k8s.io/client-go/kubernetes". We cannot use the deploy manager client here directly.
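If sharing the client setup ever becomes worthwhile, one option (an assumption, not something this PR does) is to build both clients from the same rest.Config, since the deploymanager's controller-runtime client and this script's client-go clientset can both be constructed from it:

```go
package perf

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	crclient "sigs.k8s.io/controller-runtime/pkg/client"
)

// newClients builds a client-go clientset (what this script uses) and a
// controller-runtime client (what the deploymanager uses) from one kubeconfig.
// (Hypothetical helper sketched for illustration.)
func newClients(kubeconfig string) (kubernetes.Interface, crclient.Client, error) {
	// use the current context in kubeconfig
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, nil, err
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		return nil, nil, err
	}
	crc, err := crclient.New(config, crclient.Options{})
	if err != nil {
		return nil, nil, err
	}
	return clientset, crc, nil
}
```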
Force-pushed from c254d41 to c15f38b
Force-pushed from 640a177 to 22af454
for _, u := range unit.ObjectMetrics {
	fmt.Println(strings.Repeat("-", 80))
	fmt.Printf("Report for %s %s's %s %s between %s and %s\n", unit.Name, unit.WorkloadType, u.Type, u.Name, unit.Start, unit.End)
	fmt.Printf("\t CPU (min|max|avg) seconds: % 10.4f | % 10.4f | % 10.4f\n", u.Cpu.Min, u.Cpu.Max, u.Cpu.Avg)
I'm still looking into the query. Most online resources seem to use "sum(rate(container_cpu_usage_seconds_total..."
I'll get back on this.
Since the query matches the one used in the OpenShift dashboard, we can continue with this.
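For reference, a hedged example of the query pattern being discussed; the namespace and pod selector below are placeholders, not the exact expression used in operf.go:

```go
package perf

// exampleCPUQuery shows the common dashboard pattern of summing the per-second
// rate of container CPU seconds across a pod's containers. Label values are
// illustrative only.
const exampleCPUQuery = `sum(rate(container_cpu_usage_seconds_total{namespace="openshift-storage",pod=~"lvm-operator-.*"}[5m])) by (pod)`
```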
Signed-off-by: Juan Miguel Olmo Martínez <[email protected]>
Signed-off-by: Nitin Goyal <[email protected]>
/test lvm-operator-bundle-e2e-aws
@iamniting: The following test failed.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: iamniting, nbalacha.
LVM operator performance tests (sc usage + lvmcluster creation) based on #69
Signed-off-by: Juan Miguel Olmo Martínez [email protected]
Signed-off-by: Nitin Goyal [email protected]