feat: run integration tests concurrently #134

Merged (4 commits) on May 16, 2024
11 changes: 10 additions & 1 deletion Makefile
@@ -91,6 +91,8 @@ IMG ?= $(DEFAULT_IMG)
UNIT_DIRS := ./pkg/... ./api/...
INTEGRATION_TEST_SUITE_PATHS := ./controllers/...
INTEGRATION_COVER_PKGS := ./pkg/...,./controllers/...,./api/...
+ INTEGRATION_TEST_NUM_CORES ?= 4
+ INTEGRATION_TEST_NUM_PROCESSES ?= 10

# Limitador Operator replaced version
DEFAULT_REPLACES_VERSION = 0.0.0-alpha
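Since both new variables are declared with ?=, they can be overridden per invocation, for example (a usage sketch, not part of the diff; the example values are arbitrary, and the test-integration target is shown in the next hunk):

    # Constrain parallelism on a smaller machine:
    make test-integration INTEGRATION_TEST_NUM_CORES=2 INTEGRATION_TEST_NUM_PROCESSES=4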
@@ -228,8 +230,15 @@ test-integration: clean-cov generate fmt vet ginkgo ## Run Integration tests.
--coverpkg $(INTEGRATION_COVER_PKGS) \
--output-dir $(PROJECT_PATH)/coverage/integration \
--coverprofile cover.out \
- --fail-fast \

Contributor:
Why remove --fail-fast? Is it incompatible with concurrent tests?

@KevFan (Contributor, Author), May 14, 2024:
It's mainly to align with Kuadrant Operator, where the fail-fast flag was removed, and to follow the recommendation at https://onsi.github.io/ginkgo/#recommended-continuous-integration-configuration.

I assume the flag works with concurrent tests. I don't feel too strongly about this, so I can revert to keeping the flag if you feel strongly about it.

-v \
+ --compilers=$(INTEGRATION_TEST_NUM_CORES) \
+ --procs=$(INTEGRATION_TEST_NUM_PROCESSES) \

Contributor:
What about -p?

 -p 
    If set, ginkgo will run in parallel with an auto-detected number of nodes.

@KevFan (Contributor, Author), May 14, 2024:
I can give -p a try. The caveat is that https://onsi.github.io/ginkgo/#recommended-continuous-integration-configuration does not recommend using it in CI, although I'm not sure that is an issue when running via GitHub Actions CI 🤔

    -procs=N will run each suite in parallel. This can substantially speed up suites and you should experiment with different values of N. Note that it is not recommended that you run specs in parallel with -p on CI. Some CI services run on shared machines that will report (e.g.) 32 cores but will not actually give an individual account access to all those compute resources!

Contributor:
I see the issue with CI envs... OK, let's give it a try.

@KevFan (Contributor, Author):
Tried using -p in 5a59106, and it looks like it runs the tests with 4 processes - https://github.com/Kuadrant/limitador-operator/actions/runs/9081747820/job/24956406772?pr=134 - which matches the number specified at https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners/about-github-hosted-runners#standard-github-hosted-runners-for-public-repositories

It's about 1.5 minutes slower than the previous approach, where we were able to specify 10 processes instead 🤔

I'll probably revert to the previous commit, since it looks like GitHub CI is able to support it, unless you want to go with -p instead.

+ --randomize-all \
+ --randomize-suites \
+ --fail-on-pending \
+ --keep-going \

Contributor:
    --keep-going
        If set, failures from earlier test suites do not prevent later test suites
        from running.

Not a strong opinion here, but if one test fails, I prefer the test run to end. I do not need to know how many tests are failing: I fix the failing test and try the whole suite again.

@KevFan (Contributor, Author):
My preference is to use the keep-going flag instead of fail-fast. Although it may not be an issue to use the fail-fast flag now that the integration tests complete much faster, keep-going can show a developer all the tests affected by a change, rather than forcing them to fix failures sequentially 🤔

I can revert back to --fail-fast if you feel strongly about this. I don't see a major downside to keeping fail-fast if we can now run the integration tests fairly quickly.

Contributor:
Let's give it a try.

If the test duration were too long, I think --fail-fast would be best: I would not like to wait an hour just to see all the tests that are failing. If it is a matter of a few minutes, I am OK with keep-going.

+ --race \
+ --trace \
$(INTEGRATION_TEST_SUITE_PATHS)

ifdef TEST_NAME
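With the defaults substituted, the updated target boils down to roughly the following invocation (an illustrative sketch; the Makefile itself uses the variables and its own ginkgo binary path rather than literals, plus the coverage flags omitted here):

    ginkgo --compilers=4 --procs=10 \
        --randomize-all --randomize-suites \
        --fail-on-pending --keep-going \
        --race --trace \
        ./controllers/...

As the threads above conclude, an explicit --procs=10 keeps the worker count pinned, while -p auto-detects cores (4 on public GitHub-hosted runners) and measured about 1.5 minutes slower here.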
6 changes: 3 additions & 3 deletions controllers/limitador_controller_affinity_test.go
@@ -23,7 +23,7 @@ var _ = Describe("Limitador controller manages affinity", func() {

BeforeEach(func(ctx SpecContext) {
CreateNamespaceWithContext(ctx, &testNamespace)
- }, nodeTimeOut)
+ })

AfterEach(func(ctx SpecContext) {
DeleteNamespaceWithContext(ctx, &testNamespace)
@@ -55,7 +55,7 @@ var _ = Describe("Limitador controller manages affinity", func() {
limitadorObj.Spec.Affinity = affinity
Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("Should create a new deployment with the custom affinity", func(ctx SpecContext) {
deployment := appsv1.Deployment{}
@@ -98,7 +98,7 @@ var _ = Describe("Limitador controller manages affinity", func() {
limitadorObj = basicLimitador(testNamespace)
Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("Should modify the deployment with the affinity custom settings", func(ctx SpecContext) {
deployment := appsv1.Deployment{}
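Every test file in this PR carries the same two-line change shown above: the explicit nodeTimeOut argument is dropped from setup nodes, and timeouts are left to the SpecContext that Ginkgo injects. A minimal sketch of the resulting pattern (assuming Ginkgo v2 and Gomega dot-imports; basicLimitador, k8sClient, CreateNamespaceWithContext, and testLimitadorIsReady stand in for the suite helpers visible in this diff):

    package controllers

    import (
        . "github.com/onsi/ginkgo/v2"
        . "github.com/onsi/gomega"
    )

    var _ = Describe("Limitador controller", func() {
        var testNamespace string

        // Taking a SpecContext makes the node interruptible: Ginkgo cancels
        // ctx when the spec or suite times out, so no per-node timeout
        // argument is needed.
        BeforeEach(func(ctx SpecContext) {
            CreateNamespaceWithContext(ctx, &testNamespace)
        })

        It("eventually reports ready", func(ctx SpecContext) {
            limitadorObj := basicLimitador(testNamespace)
            Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
            // WithContext(ctx) ties Eventually's polling to the spec's
            // lifetime instead of a hard-coded time.Minute budget.
            Eventually(testLimitadorIsReady(ctx, limitadorObj)).
                WithContext(ctx).Should(Succeed())
        })
    })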
6 changes: 3 additions & 3 deletions controllers/limitador_controller_limits_sync_test.go
@@ -31,7 +31,7 @@ var _ = Describe("Limitador controller syncs limits to pod", func() {

BeforeEach(func(ctx SpecContext) {
CreateNamespaceWithContext(ctx, &testNamespace)
- }, nodeTimeOut)
+ })

AfterEach(func(ctx SpecContext) {
DeleteNamespaceWithContext(ctx, &testNamespace)
@@ -63,7 +63,7 @@ var _ = Describe("Limitador controller syncs limits to pod", func() {
limitadorObj.Spec.Limits = limits
Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("Should annotate limitador pods with annotation of limits cm resource version", func(ctx SpecContext) {
podList := &corev1.PodList{}
@@ -117,7 +117,7 @@ var _ = Describe("Limitador controller syncs limits to pod", func() {
limitadorObj.Spec.Limits = limits
Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("Should update limitador pods annotation and sync config map to pod", func(ctx SpecContext) {
// Check cm resource version of pods before update
6 changes: 3 additions & 3 deletions controllers/limitador_controller_limits_test.go
@@ -23,7 +23,7 @@ var _ = Describe("Limitador controller manages limits", func() {

BeforeEach(func(ctx SpecContext) {
CreateNamespaceWithContext(ctx, &testNamespace)
- }, nodeTimeOut)
+ })

AfterEach(func(ctx SpecContext) {
DeleteNamespaceWithContext(ctx, &testNamespace)
@@ -55,7 +55,7 @@ var _ = Describe("Limitador controller manages limits", func() {
limitadorObj.Spec.Limits = limits
Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("Should create configmap with the custom limits", func(ctx SpecContext) {
cm := &v1.ConfigMap{}
@@ -99,7 +99,7 @@ var _ = Describe("Limitador controller manages limits", func() {
limitadorObj = basicLimitador(testNamespace)
Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("Should modify configmap with the new limits", func(ctx SpecContext) {
cm := &v1.ConfigMap{}
4 changes: 2 additions & 2 deletions controllers/limitador_controller_pdb_test.go
@@ -24,7 +24,7 @@ var _ = Describe("Limitador controller manages PodDisruptionBudget", func() {

BeforeEach(func(ctx SpecContext) {
CreateNamespaceWithContext(ctx, &testNamespace)
- }, nodeTimeOut)
+ })

AfterEach(func(ctx SpecContext) {
DeleteNamespaceWithContext(ctx, &testNamespace)
limitadorObj.Spec.PodDisruptionBudget = pdbType
Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("Should create PodDisruptionBudget", func(ctx SpecContext) {
pdb := &policyv1.PodDisruptionBudget{}
6 changes: 3 additions & 3 deletions controllers/limitador_controller_ports_test.go
@@ -26,7 +26,7 @@ var _ = Describe("Limitador controller manages ports", func() {

BeforeEach(func(ctx SpecContext) {
CreateNamespaceWithContext(ctx, &testNamespace)
- }, nodeTimeOut)
+ })

AfterEach(func(ctx SpecContext) {
DeleteNamespaceWithContext(ctx, &testNamespace)
}
Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("Should configure k8s resources with the custom ports", func(ctx SpecContext) {
// Deployment ports
@@ -142,7 +142,7 @@ var _ = Describe("Limitador controller manages ports", func() {
limitadorObj = basicLimitador(testNamespace)
Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("Should modify the k8s resources with the custom ports", func(ctx SpecContext) {
deployment := appsv1.Deployment{}
8 changes: 4 additions & 4 deletions controllers/limitador_controller_replicas_test.go
@@ -22,7 +22,7 @@ var _ = Describe("Limitador controller manages replicas", func() {

BeforeEach(func(ctx SpecContext) {
CreateNamespaceWithContext(ctx, &testNamespace)
- }, nodeTimeOut)
+ })

AfterEach(func(ctx SpecContext) {
DeleteNamespaceWithContext(ctx, &testNamespace)
limitadorObj.Spec.Replicas = ptr.To(int(replicas))
Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("Should create a new deployment with the custom replicas", func(ctx SpecContext) {
deployment := appsv1.Deployment{}
BeforeEach(func(ctx SpecContext) {
limitadorObj = basicLimitador(testNamespace)
Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
- Eventually(testLimitadorIsReady(ctx, limitadorObj), time.Minute, 5*time.Second).Should(Succeed())
- }, nodeTimeOut)
+ Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
+ })

It("Should modify deployment replicas", func(ctx SpecContext) {
deployment := appsv1.Deployment{}
6 changes: 3 additions & 3 deletions controllers/limitador_controller_resources_test.go
@@ -23,7 +23,7 @@ var _ = Describe("Limitador controller manages resource requirements", func() {

BeforeEach(func(ctx SpecContext) {
CreateNamespaceWithContext(ctx, &testNamespace)
- }, nodeTimeOut)
+ })

AfterEach(func(ctx SpecContext) {
DeleteNamespaceWithContext(ctx, &testNamespace)
limitadorObj.Spec.ResourceRequirements = &resourceRequirements
Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("Should create a new deployment with the custom resource requirements", func(ctx SpecContext) {
deployment := appsv1.Deployment{}
@@ -70,7 +70,7 @@ var _ = Describe("Limitador controller manages resource requirements", func() {
limitadorObj = basicLimitador(testNamespace)
Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("Should modify deployment resource requirements", func(ctx SpecContext) {
deployment := appsv1.Deployment{}
20 changes: 10 additions & 10 deletions controllers/limitador_controller_test.go
@@ -38,7 +38,7 @@ var _ = Describe("Limitador controller", func() {

BeforeEach(func(ctx SpecContext) {
CreateNamespaceWithContext(ctx, &testNamespace)
- }, nodeTimeOut)
+ })

AfterEach(func(ctx SpecContext) {
DeleteNamespaceWithContext(ctx, &testNamespace)
limitadorObj = basicLimitador(testNamespace)
Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("Should create a Limitador service with default ports", func(ctx SpecContext) {
createdLimitadorService := corev1.Service{}
@@ -182,7 +182,7 @@ var _ = Describe("Limitador controller", func() {

Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("Should create a new deployment with rate limit headers command line arg", func(ctx SpecContext) {
createdLimitadorDeployment := appsv1.Deployment{}
@@ -220,7 +220,7 @@ var _ = Describe("Limitador controller", func() {

Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("Should modify the limitador deployment command line args", func(ctx SpecContext) {
updatedLimitador := limitadorv1alpha1.Limitador{}
@@ -271,7 +271,7 @@ var _ = Describe("Limitador controller", func() {

Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("Should modify the limitador deployment command line args", func(ctx SpecContext) {
Eventually(func(g Gomega) {
@@ -325,7 +325,7 @@ var _ = Describe("Limitador controller", func() {

Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("Should create a new deployment with verbosity level command line arg", func(ctx SpecContext) {
deployment := &appsv1.Deployment{}
@@ -371,7 +371,7 @@ var _ = Describe("Limitador controller", func() {

Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("Should modify the limitador deployment command line args", func(ctx SpecContext) {
deployment := &appsv1.Deployment{}
@@ -438,7 +438,7 @@ var _ = Describe("Limitador controller", func() {
limitadorObj = basicLimitador(testNamespace)
Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("User tries adding side-cars to deployment CR", func(ctx SpecContext) {
deploymentObj := appsv1.Deployment{}
@@ -507,7 +507,7 @@ var _ = Describe("Limitador controller", func() {
Eventually(func(g Gomega) {
g.Expect(k8sClient.Get(ctx, client.ObjectKeyFromObject(redisSecret), secret)).To(Succeed())
}).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("command line is correct", func(ctx SpecContext) {
limitadorObj := limitadorWithRedisStorage(client.ObjectKeyFromObject(redisSecret), testNamespace)
@@ -563,7 +563,7 @@ var _ = Describe("Limitador controller", func() {
Eventually(func(g Gomega) {
g.Expect(k8sClient.Get(ctx, client.ObjectKeyFromObject(redisSecret), secret)).To(Succeed())
}).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("with all defaults, the command line is correct", func(ctx SpecContext) {
limitadorObj := limitadorWithRedisCachedStorage(client.ObjectKeyFromObject(redisSecret), testNamespace)
6 changes: 3 additions & 3 deletions controllers/limitador_controller_version_test.go
@@ -23,7 +23,7 @@ var _ = Describe("Limitador controller manages image version", func() {

BeforeEach(func(ctx SpecContext) {
CreateNamespaceWithContext(ctx, &testNamespace)
- }, nodeTimeOut)
+ })

AfterEach(func(ctx SpecContext) {
DeleteNamespaceWithContext(ctx, &testNamespace)
limitadorObj.Spec.Version = ptr.To("otherversion")
Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
// Do not expect to have limitador ready
- }, nodeTimeOut)
+ })

It("Should create a new deployment with the custom image", func(ctx SpecContext) {
deployment := appsv1.Deployment{}

Expect(k8sClient.Create(ctx, limitadorObj)).Should(Succeed())
Eventually(testLimitadorIsReady(ctx, limitadorObj)).WithContext(ctx).Should(Succeed())
- }, nodeTimeOut)
+ })

It("Should modify the deployment with the custom image", func(ctx SpecContext) {
deployment := appsv1.Deployment{}