
Fix not enough space error for keycloak container image quay.io/keycloak/keycloak:25.0 #2067

Merged

Conversation

jcarranzan
Contributor

Summary

In our daily jobs, we experienced the warning: OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000006af400000, 4034920448, 0) failed; error='Not enough space' (errno=12).

Some improvements were made with quarkus-qe/quarkus-test-framework#1351.
Apart from that, the 'HttpAdvancedIT' test also needed to be updated, which I did in this PR.
I've tested it in our Jenkins job and the test passed.
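For context, here is a minimal standalone sketch of the workaround using plain Testcontainers. The suite itself wires Keycloak through the test framework's service declarations, so the container setup and admin credentials below are illustrative assumptions, not the exact code in this PR:

```java
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;

// Standalone reproduction sketch: start Keycloak 25 with an explicit heap cap so the
// container JVM does not try to commit ~4 GB of native memory on a constrained executor.
public class KeycloakHeapCapSketch {
    public static void main(String[] args) {
        try (GenericContainer<?> keycloak = new GenericContainer<>("quay.io/keycloak/keycloak:25.0")
                .withCommand("start-dev")
                .withEnv("KEYCLOAK_ADMIN", "admin")
                .withEnv("KEYCLOAK_ADMIN_PASSWORD", "admin")
                // JAVA_OPTS replaces Keycloak's default JVM options; -Xmx1g keeps the heap
                // well below the free physical memory observed on the Jenkins executor.
                .withEnv("JAVA_OPTS", "-Xms512m -Xmx1g")
                .withExposedPorts(8080)
                // Same readiness check Testcontainers uses in the failure traces below.
                .waitingFor(Wait.forLogMessage(".*started in.*\\s", 1))) {
            keycloak.start();
            System.out.println("Keycloak listening on port " + keycloak.getMappedPort(8080));
        }
    }
}
```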

Please select the relevant options.

  • Bug fix (non-breaking change which fixes an issue)
  • Dependency update
  • Refactoring
  • Backport
  • New scenario (non-breaking change which adds functionality)
  • This change requires a documentation update
  • This change requires execution against OCP (use run tests phrase in comment)

Checklist:

  • Methods and classes used in PR scenarios are meaningful
  • Commits are well encapsulated and follow the best practices

@jcarranzan jcarranzan requested a review from fedinskiy October 7, 2024 14:18
@fedinskiy
Contributor

I believe we need a release of the framework with this commit (quarkus-qe/quarkus-test-framework#1351), don't we?

@jcarranzan jcarranzan marked this pull request as ready for review October 7, 2024 14:42
@jcarranzan
Contributor Author

I believe we need a release of the framework with this commit (quarkus-qe/quarkus-test-framework#1351), don't we?

Nope, because those changes were already included in the 1.6.0.Beta8 version bump.

@jcarranzan jcarranzan force-pushed the fixes/keycloak-memory-container branch from 85c0942 to 346176d Compare October 9, 2024 06:14
@jcarranzan jcarranzan requested a review from fedinskiy October 9, 2024 06:25
@jcarranzan jcarranzan force-pushed the fixes/keycloak-memory-container branch 2 times, most recently from 09fe70b to 9c8e09a Compare October 9, 2024 13:35
@jcarranzan
Contributor Author

None of those changes actually worked. I misinterpreted the results because, in the HttpAdvancedIT test, the JAVA_OPTS property was set, which made the test pass. That led me to believe certain changes had fixed the problem, so the only way to fix it is what I did at the beginning of this PR: setting the JAVA_OPTS property.

PS: I could also add an annotation referencing the issue you mentioned here: quarkusio/quarkus#41813. WDYT?

@jcarranzan jcarranzan requested a review from fedinskiy October 9, 2024 14:05
@github-actions github-actions bot added the triage/flaky-test Signal that flaky tests were detected during CI run label Oct 9, 2024

github-actions bot commented Oct 9, 2024

Following jobs contain at least one flaky test:

  • PR - Linux - JVM build - Latest Version

Run summary: https://github.com/quarkus-qe/quarkus-test-suite/actions/runs/11256134835?pr=2067

Flaky tests:


io.quarkus.ts.sqldb.panacheflyway.dbpool.AgroalPoolTest.idleTimeoutTest

  • Failure message: agroalCheckIdleTimeout: Expected 1 active connections ==> expected: <1> but was: <3>
  • Failed in jobs:
    • PR - Linux - JVM build - Latest Version
Failure stacktrace
org.opentest4j.AssertionFailedError: agroalCheckIdleTimeout: Expected 1 active connections ==> expected: <1> but was: <3>
	at org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
	at org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
	at org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
	at org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:182)
	at org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:672)
	at io.quarkus.ts.sqldb.panacheflyway.dbpool.AgroalPoolTest.idleTimeoutTest(AgroalPoolTest.java:102)
	at java.base/java.lang.reflect.Method.invoke(Method.java:569)
	at io.quarkus.test.junit.QuarkusTestExtension.runExtensionMethod(QuarkusTestExtension.java:971)
	at io.quarkus.test.junit.QuarkusTestExtension.interceptTestMethod(QuarkusTestExtension.java:821)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)


@jcarranzan jcarranzan force-pushed the fixes/keycloak-memory-container branch 2 times, most recently from 23b451a to b71a444 Compare October 10, 2024 10:31
@jcarranzan
Contributor Author

Unfortunately, even removing the test that "seemed" to be the issue did not solve the Keycloak container error Not enough space... so it is still flaky, and the only workaround remains my initial proposal with JAVA_OPTS.

@jcarranzan jcarranzan force-pushed the fixes/keycloak-memory-container branch from 1f0c20e to 1a13e48 Compare October 10, 2024 11:16

Following jobs contain at least one flaky test:

  • PR - Linux - JVM build - Latest Version

Run summary: https://github.com/quarkus-qe/quarkus-test-suite/actions/runs/11272272041?pr=2067

Flaky tests:


io.quarkus.ts.messaging.kafka.reactive.streams.KafkaGratefulShutdownIT.testConnection

  • Failure message: Assertion condition defined as a Lambda expression in io.quarkus.test.utils.LogsVerifier Log does not contain any of '[ave]'. Full logs: [__ ____ __ _____ ___ __ ____ ______ , --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ , -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ , --\___\_\____/_/ |_/_/|_/_/|_|\____/___/ , 12:43:12,924 SRMSG18229: Configured topics for channel 'slow': [slow], 12:43:12,939 SRMSG18214: Key deserializer omitted, using String as default, 12:43:12,939 SRMSG18234: Auto-commit disabled for channel slow, 12:43:13,119 SRMSG18240: 'throttled' commit strategy used for channel 'slow', 12:43:13,120 SRMSG18232: Will commit for group 'kafka-streams-reactive-messaging' every 5000 milliseconds., 12:43:13,120 SRMSG18236: Will mark throttled commit strategy for group 'kafka-streams-reactive-messaging' as unhealthy if records go more than 60000 milliseconds without being processed., 12:43:13,148 SRMSG18248: Key serializer omitted, using String as default, 12:43:13,184 SRMSG18258: Kafka producer kafka-producer-slow-topic, connected to Kafka brokers 'localhost:32791', is configured to write records to 'slow', 12:43:13,224 SRMSG18257: Kafka consumer kafka-consumer-slow, connected to Kafka brokers 'localhost:32791', belongs to the 'kafka-streams-reactive-messaging' consumer group and is configured to poll records from [slow], 12:43:13,229 No Topology producer; Kafka Streams will not be started, 12:43:13,318 kafka-streams-reactive-messaging 1.0.0-SNAPSHOT on JVM (powered by Quarkus 999-SNAPSHOT) started in 1.312s. Listening on: http://0.0.0.0:1104, 12:43:13,319 Profile prod activated. , 12:43:13,319 Installed features: [cdi, kafka-client, kafka-streams, messaging, messaging-kafka, resteasy, resteasy-jsonb, smallrye-context-propagation, vertx], 12:43:13,463 [Consumer clientId=kafka-consumer-slow, groupId=kafka-streams-reactive-messaging] Error while fetching metadata with correlation id 2 : {slow=UNKNOWN_TOPIC_OR_PARTITION}, 12:43:16,958 SRMSG18209: Sending message org.eclipse.microprofile.reactive.messaging.Message$5@6239832f from channel 'slow-topic' to Kafka topic 'slow', 12:43:17,110 SRMSG18211: Message org.eclipse.microprofile.reactive.messaging.Message$5@6239832f from channel 'slow-topic' sent successfully to Kafka topic-partition 'slow-0', with offset 0] ==> expected: <true> but was: <false> within 30 seconds.
  • Failed in jobs:
    • PR - Linux - JVM build - Latest Version
Failure stacktrace
org.awaitility.core.ConditionTimeoutException: Assertion condition defined as a Lambda expression in io.quarkus.test.utils.LogsVerifier Log does not contain any of '[ave]'. Full logs: [__  ____  __  _____   ___  __ ____  ______ ,  --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ ,  -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \   , --\___\_\____/_/ |_/_/|_/_/|_|\____/___/   , 12:43:12,924 SRMSG18229: Configured topics for channel 'slow': [slow], 12:43:12,939 SRMSG18214: Key deserializer omitted, using String as default, 12:43:12,939 SRMSG18234: Auto-commit disabled for channel slow, 12:43:13,119 SRMSG18240: 'throttled' commit strategy used for channel 'slow', 12:43:13,120 SRMSG18232: Will commit for group 'kafka-streams-reactive-messaging' every 5000 milliseconds., 12:43:13,120 SRMSG18236: Will mark throttled commit strategy for group 'kafka-streams-reactive-messaging' as unhealthy if records go more than 60000 milliseconds without being processed., 12:43:13,148 SRMSG18248: Key serializer omitted, using String as default, 12:43:13,184 SRMSG18258: Kafka producer kafka-producer-slow-topic, connected to Kafka brokers 'localhost:32791', is configured to write records to 'slow', 12:43:13,224 SRMSG18257: Kafka consumer kafka-consumer-slow, connected to Kafka brokers 'localhost:32791', belongs to the 'kafka-streams-reactive-messaging' consumer group and is configured to poll records from [slow], 12:43:13,229 No Topology producer; Kafka Streams will not be started, 12:43:13,318 kafka-streams-reactive-messaging 1.0.0-SNAPSHOT on JVM (powered by Quarkus 999-SNAPSHOT) started in 1.312s. Listening on: http://0.0.0.0:1104, 12:43:13,319 Profile prod activated. , 12:43:13,319 Installed features: [cdi, kafka-client, kafka-streams, messaging, messaging-kafka, resteasy, resteasy-jsonb, smallrye-context-propagation, vertx], 12:43:13,463 [Consumer clientId=kafka-consumer-slow, groupId=kafka-streams-reactive-messaging] Error while fetching metadata with correlation id 2 : {slow=UNKNOWN_TOPIC_OR_PARTITION}, 12:43:16,958 SRMSG18209: Sending message org.eclipse.microprofile.reactive.messaging.Message$5@6239832f from channel 'slow-topic' to Kafka topic 'slow', 12:43:17,110 SRMSG18211: Message org.eclipse.microprofile.reactive.messaging.Message$5@6239832f from channel 'slow-topic' sent successfully to Kafka topic-partition 'slow-0', with offset 0] ==> expected: <true> but was: <false> within 30 seconds.
	at org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:167)
	at org.awaitility.core.AssertionCondition.await(AssertionCondition.java:119)
	at org.awaitility.core.AssertionCondition.await(AssertionCondition.java:31)
	at org.awaitility.core.ConditionFactory.until(ConditionFactory.java:1006)
	at org.awaitility.core.ConditionFactory.untilAsserted(ConditionFactory.java:790)
	at io.quarkus.test.utils.AwaitilityUtils.untilAsserted(AwaitilityUtils.java:134)
	at io.quarkus.test.utils.AwaitilityUtils.untilAsserted(AwaitilityUtils.java:124)
	at io.quarkus.test.utils.LogsVerifier.assertContains(LogsVerifier.java:28)
	at io.quarkus.ts.messaging.kafka.reactive.streams.KafkaGratefulShutdownIT.testConnection(KafkaGratefulShutdownIT.java:71)
	at java.base/java.lang.reflect.Method.invoke(Method.java:569)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
Caused by: org.opentest4j.AssertionFailedError: Log does not contain any of '[ave]'. Full logs: [__  ____  __  _____   ___  __ ____  ______ ,  --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ ,  -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \   , --\___\_\____/_/ |_/_/|_/_/|_|\____/___/   , 12:43:12,924 SRMSG18229: Configured topics for channel 'slow': [slow], 12:43:12,939 SRMSG18214: Key deserializer omitted, using String as default, 12:43:12,939 SRMSG18234: Auto-commit disabled for channel slow, 12:43:13,119 SRMSG18240: 'throttled' commit strategy used for channel 'slow', 12:43:13,120 SRMSG18232: Will commit for group 'kafka-streams-reactive-messaging' every 5000 milliseconds., 12:43:13,120 SRMSG18236: Will mark throttled commit strategy for group 'kafka-streams-reactive-messaging' as unhealthy if records go more than 60000 milliseconds without being processed., 12:43:13,148 SRMSG18248: Key serializer omitted, using String as default, 12:43:13,184 SRMSG18258: Kafka producer kafka-producer-slow-topic, connected to Kafka brokers 'localhost:32791', is configured to write records to 'slow', 12:43:13,224 SRMSG18257: Kafka consumer kafka-consumer-slow, connected to Kafka brokers 'localhost:32791', belongs to the 'kafka-streams-reactive-messaging' consumer group and is configured to poll records from [slow], 12:43:13,229 No Topology producer; Kafka Streams will not be started, 12:43:13,318 kafka-streams-reactive-messaging 1.0.0-SNAPSHOT on JVM (powered by Quarkus 999-SNAPSHOT) started in 1.312s. Listening on: http://0.0.0.0:1104, 12:43:13,319 Profile prod activated. , 12:43:13,319 Installed features: [cdi, kafka-client, kafka-streams, messaging, messaging-kafka, resteasy, resteasy-jsonb, smallrye-context-propagation, vertx], 12:43:13,463 [Consumer clientId=kafka-consumer-slow, groupId=kafka-streams-reactive-messaging] Error while fetching metadata with correlation id 2 : {slow=UNKNOWN_TOPIC_OR_PARTITION}, 12:43:16,958 SRMSG18209: Sending message org.eclipse.microprofile.reactive.messaging.Message$5@6239832f from channel 'slow-topic' to Kafka topic 'slow', 12:43:17,110 SRMSG18211: Message org.eclipse.microprofile.reactive.messaging.Message$5@6239832f from channel 'slow-topic' sent successfully to Kafka topic-partition 'slow-0', with offset 0] ==> expected: <true> but was: <false>
	at io.quarkus.test.utils.LogsVerifier.lambda$assertContains$0(LogsVerifier.java:30)
	at org.awaitility.core.AssertionCondition.lambda$new$0(AssertionCondition.java:53)
	at org.awaitility.core.ConditionAwaiter$ConditionPoller.call(ConditionAwaiter.java:248)
	at org.awaitility.core.ConditionAwaiter$ConditionPoller.call(ConditionAwaiter.java:235)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:840)



Following jobs contain at least one flaky test:

  • PR - Linux - JVM build - Latest Version

Run summary: https://github.com/quarkus-qe/quarkus-test-suite/actions/runs/11272267117?pr=2067

Flaky tests:


io.quarkus.ts.messaging.kafka.reactive.streams.KafkaGratefulShutdownIT.testConnection

  • Failure message: Assertion condition defined as a Lambda expression in io.quarkus.test.utils.LogsVerifier Log does not contain any of '[ave]'. Full logs: [__ ____ __ _____ ___ __ ____ ______ , --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ , -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ , --\___\_\____/_/ |_/_/|_/_/|_|\____/___/ , 13:01:37,155 SRMSG18229: Configured topics for channel 'slow': [slow], 13:01:37,170 SRMSG18214: Key deserializer omitted, using String as default, 13:01:37,170 SRMSG18234: Auto-commit disabled for channel slow, 13:01:37,341 SRMSG18240: 'throttled' commit strategy used for channel 'slow', 13:01:37,342 SRMSG18232: Will commit for group 'kafka-streams-reactive-messaging' every 5000 milliseconds., 13:01:37,343 SRMSG18236: Will mark throttled commit strategy for group 'kafka-streams-reactive-messaging' as unhealthy if records go more than 60000 milliseconds without being processed., 13:01:37,373 SRMSG18248: Key serializer omitted, using String as default, 13:01:37,406 SRMSG18258: Kafka producer kafka-producer-slow-topic, connected to Kafka brokers 'localhost:32791', is configured to write records to 'slow', 13:01:37,455 SRMSG18257: Kafka consumer kafka-consumer-slow, connected to Kafka brokers 'localhost:32791', belongs to the 'kafka-streams-reactive-messaging' consumer group and is configured to poll records from [slow], 13:01:37,459 No Topology producer; Kafka Streams will not be started, 13:01:37,556 kafka-streams-reactive-messaging 1.0.0-SNAPSHOT on JVM (powered by Quarkus 999-SNAPSHOT) started in 1.385s. Listening on: http://0.0.0.0:1104, 13:01:37,556 Profile prod activated. , 13:01:37,556 Installed features: [cdi, kafka-client, kafka-streams, messaging, messaging-kafka, resteasy, resteasy-jsonb, smallrye-context-propagation, vertx], 13:01:37,691 [Consumer clientId=kafka-consumer-slow, groupId=kafka-streams-reactive-messaging] Error while fetching metadata with correlation id 2 : {slow=UNKNOWN_TOPIC_OR_PARTITION}, 13:01:41,148 SRMSG18209: Sending message org.eclipse.microprofile.reactive.messaging.Message$5@420ee74c from channel 'slow-topic' to Kafka topic 'slow', 13:01:41,239 SRMSG18211: Message org.eclipse.microprofile.reactive.messaging.Message$5@420ee74c from channel 'slow-topic' sent successfully to Kafka topic-partition 'slow-0', with offset 0] ==> expected: <true> but was: <false> within 30 seconds.
  • Failed in jobs:
    • PR - Linux - JVM build - Latest Version
Failure stacktrace
org.awaitility.core.ConditionTimeoutException: Assertion condition defined as a Lambda expression in io.quarkus.test.utils.LogsVerifier Log does not contain any of '[ave]'. Full logs: [__  ____  __  _____   ___  __ ____  ______ ,  --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ ,  -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \   , --\___\_\____/_/ |_/_/|_/_/|_|\____/___/   , 13:01:37,155 SRMSG18229: Configured topics for channel 'slow': [slow], 13:01:37,170 SRMSG18214: Key deserializer omitted, using String as default, 13:01:37,170 SRMSG18234: Auto-commit disabled for channel slow, 13:01:37,341 SRMSG18240: 'throttled' commit strategy used for channel 'slow', 13:01:37,342 SRMSG18232: Will commit for group 'kafka-streams-reactive-messaging' every 5000 milliseconds., 13:01:37,343 SRMSG18236: Will mark throttled commit strategy for group 'kafka-streams-reactive-messaging' as unhealthy if records go more than 60000 milliseconds without being processed., 13:01:37,373 SRMSG18248: Key serializer omitted, using String as default, 13:01:37,406 SRMSG18258: Kafka producer kafka-producer-slow-topic, connected to Kafka brokers 'localhost:32791', is configured to write records to 'slow', 13:01:37,455 SRMSG18257: Kafka consumer kafka-consumer-slow, connected to Kafka brokers 'localhost:32791', belongs to the 'kafka-streams-reactive-messaging' consumer group and is configured to poll records from [slow], 13:01:37,459 No Topology producer; Kafka Streams will not be started, 13:01:37,556 kafka-streams-reactive-messaging 1.0.0-SNAPSHOT on JVM (powered by Quarkus 999-SNAPSHOT) started in 1.385s. Listening on: http://0.0.0.0:1104, 13:01:37,556 Profile prod activated. , 13:01:37,556 Installed features: [cdi, kafka-client, kafka-streams, messaging, messaging-kafka, resteasy, resteasy-jsonb, smallrye-context-propagation, vertx], 13:01:37,691 [Consumer clientId=kafka-consumer-slow, groupId=kafka-streams-reactive-messaging] Error while fetching metadata with correlation id 2 : {slow=UNKNOWN_TOPIC_OR_PARTITION}, 13:01:41,148 SRMSG18209: Sending message org.eclipse.microprofile.reactive.messaging.Message$5@420ee74c from channel 'slow-topic' to Kafka topic 'slow', 13:01:41,239 SRMSG18211: Message org.eclipse.microprofile.reactive.messaging.Message$5@420ee74c from channel 'slow-topic' sent successfully to Kafka topic-partition 'slow-0', with offset 0] ==> expected: <true> but was: <false> within 30 seconds.
	at org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:167)
	at org.awaitility.core.AssertionCondition.await(AssertionCondition.java:119)
	at org.awaitility.core.AssertionCondition.await(AssertionCondition.java:31)
	at org.awaitility.core.ConditionFactory.until(ConditionFactory.java:1006)
	at org.awaitility.core.ConditionFactory.untilAsserted(ConditionFactory.java:790)
	at io.quarkus.test.utils.AwaitilityUtils.untilAsserted(AwaitilityUtils.java:134)
	at io.quarkus.test.utils.AwaitilityUtils.untilAsserted(AwaitilityUtils.java:124)
	at io.quarkus.test.utils.LogsVerifier.assertContains(LogsVerifier.java:28)
	at io.quarkus.ts.messaging.kafka.reactive.streams.KafkaGratefulShutdownIT.testConnection(KafkaGratefulShutdownIT.java:71)
	at java.base/java.lang.reflect.Method.invoke(Method.java:569)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
Caused by: org.opentest4j.AssertionFailedError: Log does not contain any of '[ave]'. Full logs: [__  ____  __  _____   ___  __ ____  ______ ,  --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ ,  -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \   , --\___\_\____/_/ |_/_/|_/_/|_|\____/___/   , 13:01:37,155 SRMSG18229: Configured topics for channel 'slow': [slow], 13:01:37,170 SRMSG18214: Key deserializer omitted, using String as default, 13:01:37,170 SRMSG18234: Auto-commit disabled for channel slow, 13:01:37,341 SRMSG18240: 'throttled' commit strategy used for channel 'slow', 13:01:37,342 SRMSG18232: Will commit for group 'kafka-streams-reactive-messaging' every 5000 milliseconds., 13:01:37,343 SRMSG18236: Will mark throttled commit strategy for group 'kafka-streams-reactive-messaging' as unhealthy if records go more than 60000 milliseconds without being processed., 13:01:37,373 SRMSG18248: Key serializer omitted, using String as default, 13:01:37,406 SRMSG18258: Kafka producer kafka-producer-slow-topic, connected to Kafka brokers 'localhost:32791', is configured to write records to 'slow', 13:01:37,455 SRMSG18257: Kafka consumer kafka-consumer-slow, connected to Kafka brokers 'localhost:32791', belongs to the 'kafka-streams-reactive-messaging' consumer group and is configured to poll records from [slow], 13:01:37,459 No Topology producer; Kafka Streams will not be started, 13:01:37,556 kafka-streams-reactive-messaging 1.0.0-SNAPSHOT on JVM (powered by Quarkus 999-SNAPSHOT) started in 1.385s. Listening on: http://0.0.0.0:1104, 13:01:37,556 Profile prod activated. , 13:01:37,556 Installed features: [cdi, kafka-client, kafka-streams, messaging, messaging-kafka, resteasy, resteasy-jsonb, smallrye-context-propagation, vertx], 13:01:37,691 [Consumer clientId=kafka-consumer-slow, groupId=kafka-streams-reactive-messaging] Error while fetching metadata with correlation id 2 : {slow=UNKNOWN_TOPIC_OR_PARTITION}, 13:01:41,148 SRMSG18209: Sending message org.eclipse.microprofile.reactive.messaging.Message$5@420ee74c from channel 'slow-topic' to Kafka topic 'slow', 13:01:41,239 SRMSG18211: Message org.eclipse.microprofile.reactive.messaging.Message$5@420ee74c from channel 'slow-topic' sent successfully to Kafka topic-partition 'slow-0', with offset 0] ==> expected: <true> but was: <false>
	at io.quarkus.test.utils.LogsVerifier.lambda$assertContains$0(LogsVerifier.java:30)
	at org.awaitility.core.AssertionCondition.lambda$new$0(AssertionCondition.java:53)
	at org.awaitility.core.ConditionAwaiter$ConditionPoller.call(ConditionAwaiter.java:248)
	at org.awaitility.core.ConditionAwaiter$ConditionPoller.call(ConditionAwaiter.java:235)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:840)


io.quarkus.ts.sqldb.panacheflyway.dbpool.AgroalPoolTest.idleTimeoutTest

  • Failure message: agroalCheckIdleTimeout: Expected 1 active connections ==> expected: <1> but was: <2>
  • Failed in jobs:
    • PR - Linux - JVM build - Latest Version
Failure stacktrace
org.opentest4j.AssertionFailedError: agroalCheckIdleTimeout: Expected 1 active connections ==> expected: <1> but was: <2>
	at org.junit.jupiter.api.AssertionFailureBuilder.build(AssertionFailureBuilder.java:151)
	at org.junit.jupiter.api.AssertionFailureBuilder.buildAndThrow(AssertionFailureBuilder.java:132)
	at org.junit.jupiter.api.AssertEquals.failNotEqual(AssertEquals.java:197)
	at org.junit.jupiter.api.AssertEquals.assertEquals(AssertEquals.java:182)
	at org.junit.jupiter.api.Assertions.assertEquals(Assertions.java:672)
	at io.quarkus.ts.sqldb.panacheflyway.dbpool.AgroalPoolTest.idleTimeoutTest(AgroalPoolTest.java:102)
	at java.base/java.lang.reflect.Method.invoke(Method.java:569)
	at io.quarkus.test.junit.QuarkusTestExtension.runExtensionMethod(QuarkusTestExtension.java:971)
	at io.quarkus.test.junit.QuarkusTestExtension.interceptTestMethod(QuarkusTestExtension.java:821)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)



Following jobs contain at least one flaky test:

  • PR - Linux - JVM build - Latest Version

Run summary: https://github.com/quarkus-qe/quarkus-test-suite/actions/runs/11272929099?pr=2067

Flaky tests:


io.quarkus.ts.messaging.kafka.reactive.streams.KafkaGratefulShutdownIT.testConnection

  • Failure message: Assertion condition defined as a Lambda expression in io.quarkus.test.utils.LogsVerifier Log does not contain any of '[ave]'. Full logs: [__ ____ __ _____ ___ __ ____ ______ , --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ , -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \ , --\___\_\____/_/ |_/_/|_/_/|_|\____/___/ , 13:12:32,650 SRMSG18229: Configured topics for channel 'slow': [slow], 13:12:32,664 SRMSG18214: Key deserializer omitted, using String as default, 13:12:32,664 SRMSG18234: Auto-commit disabled for channel slow, 13:12:32,839 SRMSG18240: 'throttled' commit strategy used for channel 'slow', 13:12:32,840 SRMSG18232: Will commit for group 'kafka-streams-reactive-messaging' every 5000 milliseconds., 13:12:32,840 SRMSG18236: Will mark throttled commit strategy for group 'kafka-streams-reactive-messaging' as unhealthy if records go more than 60000 milliseconds without being processed., 13:12:32,872 SRMSG18248: Key serializer omitted, using String as default, 13:12:32,899 SRMSG18258: Kafka producer kafka-producer-slow-topic, connected to Kafka brokers 'localhost:32791', is configured to write records to 'slow', 13:12:32,941 SRMSG18257: Kafka consumer kafka-consumer-slow, connected to Kafka brokers 'localhost:32791', belongs to the 'kafka-streams-reactive-messaging' consumer group and is configured to poll records from [slow], 13:12:32,942 No Topology producer; Kafka Streams will not be started, 13:12:33,037 kafka-streams-reactive-messaging 1.0.0-SNAPSHOT on JVM (powered by Quarkus 999-SNAPSHOT) started in 1.288s. Listening on: http://0.0.0.0:1104, 13:12:33,038 Profile prod activated. , 13:12:33,038 Installed features: [cdi, kafka-client, kafka-streams, messaging, messaging-kafka, resteasy, resteasy-jsonb, smallrye-context-propagation, vertx], 13:12:33,173 [Consumer clientId=kafka-consumer-slow, groupId=kafka-streams-reactive-messaging] Error while fetching metadata with correlation id 2 : {slow=UNKNOWN_TOPIC_OR_PARTITION}, 13:12:36,721 SRMSG18209: Sending message org.eclipse.microprofile.reactive.messaging.Message$5@50aba9e1 from channel 'slow-topic' to Kafka topic 'slow', 13:12:36,822 SRMSG18211: Message org.eclipse.microprofile.reactive.messaging.Message$5@50aba9e1 from channel 'slow-topic' sent successfully to Kafka topic-partition 'slow-0', with offset 0] ==> expected: <true> but was: <false> within 30 seconds.
  • Failed in jobs:
    • PR - Linux - JVM build - Latest Version
Failure stacktrace
org.awaitility.core.ConditionTimeoutException: Assertion condition defined as a Lambda expression in io.quarkus.test.utils.LogsVerifier Log does not contain any of '[ave]'. Full logs: [__  ____  __  _____   ___  __ ____  ______ ,  --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ ,  -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \   , --\___\_\____/_/ |_/_/|_/_/|_|\____/___/   , 13:12:32,650 SRMSG18229: Configured topics for channel 'slow': [slow], 13:12:32,664 SRMSG18214: Key deserializer omitted, using String as default, 13:12:32,664 SRMSG18234: Auto-commit disabled for channel slow, 13:12:32,839 SRMSG18240: 'throttled' commit strategy used for channel 'slow', 13:12:32,840 SRMSG18232: Will commit for group 'kafka-streams-reactive-messaging' every 5000 milliseconds., 13:12:32,840 SRMSG18236: Will mark throttled commit strategy for group 'kafka-streams-reactive-messaging' as unhealthy if records go more than 60000 milliseconds without being processed., 13:12:32,872 SRMSG18248: Key serializer omitted, using String as default, 13:12:32,899 SRMSG18258: Kafka producer kafka-producer-slow-topic, connected to Kafka brokers 'localhost:32791', is configured to write records to 'slow', 13:12:32,941 SRMSG18257: Kafka consumer kafka-consumer-slow, connected to Kafka brokers 'localhost:32791', belongs to the 'kafka-streams-reactive-messaging' consumer group and is configured to poll records from [slow], 13:12:32,942 No Topology producer; Kafka Streams will not be started, 13:12:33,037 kafka-streams-reactive-messaging 1.0.0-SNAPSHOT on JVM (powered by Quarkus 999-SNAPSHOT) started in 1.288s. Listening on: http://0.0.0.0:1104, 13:12:33,038 Profile prod activated. , 13:12:33,038 Installed features: [cdi, kafka-client, kafka-streams, messaging, messaging-kafka, resteasy, resteasy-jsonb, smallrye-context-propagation, vertx], 13:12:33,173 [Consumer clientId=kafka-consumer-slow, groupId=kafka-streams-reactive-messaging] Error while fetching metadata with correlation id 2 : {slow=UNKNOWN_TOPIC_OR_PARTITION}, 13:12:36,721 SRMSG18209: Sending message org.eclipse.microprofile.reactive.messaging.Message$5@50aba9e1 from channel 'slow-topic' to Kafka topic 'slow', 13:12:36,822 SRMSG18211: Message org.eclipse.microprofile.reactive.messaging.Message$5@50aba9e1 from channel 'slow-topic' sent successfully to Kafka topic-partition 'slow-0', with offset 0] ==> expected: <true> but was: <false> within 30 seconds.
	at org.awaitility.core.ConditionAwaiter.await(ConditionAwaiter.java:167)
	at org.awaitility.core.AssertionCondition.await(AssertionCondition.java:119)
	at org.awaitility.core.AssertionCondition.await(AssertionCondition.java:31)
	at org.awaitility.core.ConditionFactory.until(ConditionFactory.java:1006)
	at org.awaitility.core.ConditionFactory.untilAsserted(ConditionFactory.java:790)
	at io.quarkus.test.utils.AwaitilityUtils.untilAsserted(AwaitilityUtils.java:134)
	at io.quarkus.test.utils.AwaitilityUtils.untilAsserted(AwaitilityUtils.java:124)
	at io.quarkus.test.utils.LogsVerifier.assertContains(LogsVerifier.java:28)
	at io.quarkus.ts.messaging.kafka.reactive.streams.KafkaGratefulShutdownIT.testConnection(KafkaGratefulShutdownIT.java:71)
	at java.base/java.lang.reflect.Method.invoke(Method.java:569)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
Caused by: org.opentest4j.AssertionFailedError: Log does not contain any of '[ave]'. Full logs: [__  ____  __  _____   ___  __ ____  ______ ,  --/ __ \/ / / / _ | / _ \/ //_/ / / / __/ ,  -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \   , --\___\_\____/_/ |_/_/|_/_/|_|\____/___/   , 13:12:32,650 SRMSG18229: Configured topics for channel 'slow': [slow], 13:12:32,664 SRMSG18214: Key deserializer omitted, using String as default, 13:12:32,664 SRMSG18234: Auto-commit disabled for channel slow, 13:12:32,839 SRMSG18240: 'throttled' commit strategy used for channel 'slow', 13:12:32,840 SRMSG18232: Will commit for group 'kafka-streams-reactive-messaging' every 5000 milliseconds., 13:12:32,840 SRMSG18236: Will mark throttled commit strategy for group 'kafka-streams-reactive-messaging' as unhealthy if records go more than 60000 milliseconds without being processed., 13:12:32,872 SRMSG18248: Key serializer omitted, using String as default, 13:12:32,899 SRMSG18258: Kafka producer kafka-producer-slow-topic, connected to Kafka brokers 'localhost:32791', is configured to write records to 'slow', 13:12:32,941 SRMSG18257: Kafka consumer kafka-consumer-slow, connected to Kafka brokers 'localhost:32791', belongs to the 'kafka-streams-reactive-messaging' consumer group and is configured to poll records from [slow], 13:12:32,942 No Topology producer; Kafka Streams will not be started, 13:12:33,037 kafka-streams-reactive-messaging 1.0.0-SNAPSHOT on JVM (powered by Quarkus 999-SNAPSHOT) started in 1.288s. Listening on: http://0.0.0.0:1104, 13:12:33,038 Profile prod activated. , 13:12:33,038 Installed features: [cdi, kafka-client, kafka-streams, messaging, messaging-kafka, resteasy, resteasy-jsonb, smallrye-context-propagation, vertx], 13:12:33,173 [Consumer clientId=kafka-consumer-slow, groupId=kafka-streams-reactive-messaging] Error while fetching metadata with correlation id 2 : {slow=UNKNOWN_TOPIC_OR_PARTITION}, 13:12:36,721 SRMSG18209: Sending message org.eclipse.microprofile.reactive.messaging.Message$5@50aba9e1 from channel 'slow-topic' to Kafka topic 'slow', 13:12:36,822 SRMSG18211: Message org.eclipse.microprofile.reactive.messaging.Message$5@50aba9e1 from channel 'slow-topic' sent successfully to Kafka topic-partition 'slow-0', with offset 0] ==> expected: <true> but was: <false>
	at io.quarkus.test.utils.LogsVerifier.lambda$assertContains$0(LogsVerifier.java:30)
	at org.awaitility.core.AssertionCondition.lambda$new$0(AssertionCondition.java:53)
	at org.awaitility.core.ConditionAwaiter$ConditionPoller.call(ConditionAwaiter.java:248)
	at org.awaitility.core.ConditionAwaiter$ConditionPoller.call(ConditionAwaiter.java:235)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:840)


Member

@michalvavrik michalvavrik left a comment


I have looked at quarkus-qe/quarkus-test-framework#1351 and quarkusio/quarkus#43630, and I am not sure why we need to set these parameters when there is KeycloakContainer#memoryLimitMiB. If I understand quarkusio/quarkus#43601 correctly, Keycloak Dev Services does exactly the same: it just reduces the container memory, because Keycloak grows up to that limit and no more.

But let's say we do: setting JAVA_OPTS to -Xms512m -Xmx1g means the heap can grow up to 1g, but the resident set size also includes metaspace, and the metaspace can push you to OOM as well.

TL;DR: we have the container memory limit set to 1000 MiB and you allow 1g for the heap, so what about the rest of the RSS? My proposal is to test this thoroughly across many runs and link a series of successful runs with the same configuration in Slack. Thanks.
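For illustration, a hedged variant of the standalone sketch shown earlier that bounds more of the resident set than just the heap, so that heap, metaspace, and thread stacks together stay inside a ~1000 MiB container limit. The flag values and the explicit memory limit are assumptions for the sake of the argument, not what this PR ships:

```java
// Building on the earlier GenericContainer sketch: size the JVM relative to the container
// limit instead of a fixed -Xmx, and cap the other native memory consumers explicitly.
keycloak
        .withEnv("JAVA_OPTS",
                "-XX:MaxRAMPercentage=60.0 "   // heap bounded to ~60% of the container limit
              + "-XX:MaxMetaspaceSize=192m "   // cap class metadata explicitly
              + "-Xss512k")                    // smaller thread stacks trim per-thread native use
        .withCreateContainerCmdModifier(cmd ->
                cmd.getHostConfig().withMemory(1000L * 1024 * 1024)); // ~1000 MiB memory limit
```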

Also, few questions:

  • can we simply set KeycloakContainer#memoryLimitMiB to a higher value for this test?
  • any idea why it happens only for this test? I can imagine it is hard to tell the reason, but I found 6 other tests with an identical Keycloak configuration just by a simple fulltext search. And the OOM happens on Keycloak startup, which means it has nothing to do with any @Test method at all, nor with the application properties or the Quarkus application itself (because it happens before Quarkus is started).

This ^^ leads me to think that maybe there is some resource leak before HttpAdvancedIT is even started, and by the time Keycloak starts, there is already so little memory left that it cannot start.

But you cannot investigate that by guessing; you need more data. Add hooks there and print out how much memory and disk space is available on the executor machine before Keycloak is started.
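A minimal sketch of such a hook, assuming JDK 14+ for the com.sun.management.OperatingSystemMXBean accessors (where exactly the suite would call it, e.g. from a @BeforeAll or a service lifecycle hook, is left open):

```java
import java.io.File;
import java.lang.management.ManagementFactory;

import com.sun.management.OperatingSystemMXBean;

// Diagnostic hook sketch: print physical memory and disk space on the executor machine
// right before the Keycloak container is started.
public class ExecutorResourceProbe {

    public static void printSystemResources() {
        OperatingSystemMXBean os =
                (OperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        long mb = 1024L * 1024L;

        System.out.println("=== System Memory Info ===");
        System.out.println("Total Physical Memory: " + os.getTotalMemorySize() / mb + " MB");
        System.out.println("Free Physical Memory: " + os.getFreeMemorySize() / mb + " MB");

        File root = new File("/");
        System.out.println("=== Disk Space Info ===");
        System.out.println("Total Disk Space: " + root.getTotalSpace() / mb + " MB");
        System.out.println("Free Disk Space: " + root.getFreeSpace() / mb + " MB");
    }
}
```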

@jcarranzan
Contributor Author

Thanks @michalvavrik, you've come to the same conclusion as me. I said this to Fedor yesterday on Slack: "Related to Keycloak container memory issues, I really think this has nothing to do with the tests."
I will try to investigate it by printing out the memory before Keycloak starts; that's a good point.

@jcarranzan
Contributor Author

OK, printing the total and free memory before starting the container seems to confirm the issue. Jenkins logs:

9:52:23 [INFO] Running io.quarkus.ts.http.advanced.HttpAdvancedIT
09:52:23 === System Memory Info ===
09:52:23 Total Physical Memory: 7695 MB
09:52:23 Free Physical Memory: 1330 MB
09:52:23 === Disk Space Info ===
09:52:23 Total Disk Space: 40847 MB
09:52:23 Free Disk Space: 32477 MB
09:52:24 07:52:24,522 INFO  ⭐  PKCS12 keystore and truststore generated successfully!
09:52:24 07:52:24,523 INFO  🔐  Key Store File: /tmp/quarkus-qe-certs3795654588981957403/quarkus-qe-keystore.p12
09:52:24 07:52:24,523 INFO  🔓  Trust Store File: /tmp/quarkus-qe-certs3795654588981957403/quarkus-qe-truststore.p12
09:52:24 07:52:24,563 INFO  [keycloak] Initialize service (quay.io/keycloak/keycloak:25.0)
09:52:24 07:52:24,590 Image pull policy will be performed by: DefaultPullPolicy()
09:52:24 07:52:24,593 Image name substitution will be performed by: DefaultImageNameSubstitutor (composite of 'ConfigurationFileImageNameSubstitutor' and 'PrefixingImageNameSubstitutor')
09:52:24 07:52:24,608 Testcontainers version: 1.20.2
09:52:25 07:52:25,223 Found Docker environment with Environment variables, system properties and defaults. Resolved dockerHost=unix:///run/user/600/podman/podman.sock
09:52:25 07:52:25,224 Docker host IP address is localhost
09:52:25 07:52:25,275 Connected to docker: 
09:52:25   Server Version: 4.9.4-rhel
09:52:25   API Version: 1.41
09:52:25   Operating System: rhel
09:52:25   Total Memory: 7695 MB
09:52:25 07:52:25,298 Pulling docker image: testcontainers/ryuk:0.9.0. Please be patient; this may take some time but only needs to be done once.
09:52:25 07:52:25,546 Starting to pull image
09:52:25 07:52:25,571 Pulling image layers:  2 pending,  1 downloaded,  0 extracted, (0 bytes/? MB)
09:52:25 07:52:25,577 Pulling image layers:  1 pending,  2 downloaded,  0 extracted, (0 bytes/? MB)
09:52:25 07:52:25,832 Pulling image layers:  0 pending,  3 downloaded,  0 extracted, (0 bytes/0 bytes)
09:52:25 07:52:25,835 Pulling image layers:  0 pending,  4 downloaded,  0 extracted, (0 bytes/0 bytes)
09:52:25 07:52:25,841 Pulling image layers:  0 pending,  4 downloaded,  0 extracted, (0 bytes/0 bytes)
09:52:25 07:52:25,842 Image testcontainers/ryuk:0.9.0 pull took PT0.543442853S
09:52:25 07:52:25,865 Creating container for image: testcontainers/ryuk:0.9.0
09:52:25 07:52:25,924 Container testcontainers/ryuk:0.9.0 is starting: e9b60ace3360744a689e7f627c6e1e3e0e388bd62904e4d84089d45d17f47d48
09:52:26 07:52:26,310 Container testcontainers/ryuk:0.9.0 started in PT0.44446503S
09:52:26 07:52:26,313 Ryuk started - will monitor and terminate Testcontainers containers on JVM exit
09:52:26 07:52:26,313 Checking the system...
09:52:26 07:52:26,314 ✔︎ Docker server version should be at least 1.6.0
09:52:26 07:52:26,317 Pulling docker image: quay.io/keycloak/keycloak:25.0. Please be patient; this may take some time but only needs to be done once.
09:52:26 07:52:26,853 Starting to pull image
09:52:26 07:52:26,859 Pulling image layers:  0 pending,  1 downloaded,  0 extracted, (0 bytes/0 bytes)
09:52:27 07:52:27,361 Pulling image layers:  2 pending,  2 downloaded,  0 extracted, (0 bytes/? MB)
09:52:28 07:52:28,589 Pulling image layers:  1 pending,  3 downloaded,  0 extracted, (121 MB/? MB)
09:52:31 07:52:31,536 Pulling image layers:  0 pending,  4 downloaded,  0 extracted, (121 MB/217 MB)
09:52:31 07:52:31,539 Pulling image layers:  0 pending,  5 downloaded,  0 extracted, (121 MB/217 MB)
09:52:31 07:52:31,562 Pulling image layers:  0 pending,  5 downloaded,  0 extracted, (121 MB/217 MB)
09:52:31 07:52:31,562 Pull complete. 5 layers, pulled in 4s (downloaded 121 MB at 30 MB/s)
09:52:31 07:52:31,562 Image quay.io/keycloak/keycloak:25.0 pull took PT5.24515508S
09:52:31 07:52:31,569 Creating container for image: quay.io/keycloak/keycloak:25.0
09:52:31 07:52:31,740 Container quay.io/keycloak/keycloak:25.0 is starting: d86dffbe330cd6c624d131d533018e183f721ac21b3dfc469ce85b7c6fcca82d
09:57:32 07:57:32,044 Could not start container: java.lang.IllegalStateException: Wait strategy failed. Container exited with code 1
09:57:32 	at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:533)
09:57:32 	at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:354)
09:57:32 	at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
09:57:32 	at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:344)
09:57:32 	at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:330)
09:57:32 	at io.quarkus.test.services.containers.DockerContainerManagedResource.doStart(DockerContainerManagedResource.java:122)
09:57:32 	at io.quarkus.test.services.containers.DockerContainerManagedResource.start(DockerContainerManagedResource.java:72)
09:57:32 	at io.quarkus.test.bootstrap.BaseService.doStart(BaseService.java:356)
09:57:32 	at io.quarkus.test.bootstrap.BaseService.start(BaseService.java:245)
09:57:32 	at io.quarkus.test.bootstrap.QuarkusScenarioBootstrap.launchService(QuarkusScenarioBootstrap.java:172)
09:57:32 	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
09:57:32 	at io.quarkus.test.bootstrap.QuarkusScenarioBootstrap.beforeAll(QuarkusScenarioBootstrap.java:73)
09:57:32 	at io.quarkus.test.bootstrap.QuarkusScenarioBootstrap.beforeAll(QuarkusScenarioBootstrap.java:50)
09:57:32 	at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.lambda$invokeBeforeAllCallbacks$12(ClassBasedTestDescriptor.java:396)
09:57:32 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:57:32 	at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.invokeBeforeAllCallbacks(ClassBasedTestDescriptor.java:396)
09:57:32 	at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:212)
09:57:32 	at org.junit.jupiter.engine.descriptor.ClassBasedTestDescriptor.before(ClassBasedTestDescriptor.java:85)
09:57:32 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:148)
09:57:32 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:57:32 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
09:57:32 	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:57:32 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
09:57:32 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:57:32 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
09:57:32 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
09:57:32 	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
09:57:32 	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:41)
09:57:32 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$6(NodeTestTask.java:155)
09:57:32 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:57:32 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:141)
09:57:32 	at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:137)
09:57:32 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$9(NodeTestTask.java:139)
09:57:32 	at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
09:57:32 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.executeRecursively(NodeTestTask.java:138)
09:57:32 	at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:95)
09:57:32 	at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.submit(SameThreadHierarchicalTestExecutorService.java:35)
09:57:32 	at org.junit.platform.engine.support.hierarchical.HierarchicalTestExecutor.execute(HierarchicalTestExecutor.java:57)
09:57:32 	at org.junit.platform.engine.support.hierarchical.HierarchicalTestEngine.execute(HierarchicalTestEngine.java:54)
09:57:32 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:198)
09:57:32 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:169)
09:57:32 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:93)
09:57:32 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.lambda$execute$0(EngineExecutionOrchestrator.java:58)
09:57:32 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.withInterceptedStreams(EngineExecutionOrchestrator.java:141)
09:57:32 	at org.junit.platform.launcher.core.EngineExecutionOrchestrator.execute(EngineExecutionOrchestrator.java:57)
09:57:32 	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:103)
09:57:32 	at org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:85)
09:57:32 	at org.junit.platform.launcher.core.DelegatingLauncher.execute(DelegatingLauncher.java:47)
09:57:32 	at org.apache.maven.surefire.junitplatform.LazyLauncher.execute(LazyLauncher.java:56)
09:57:32 	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.execute(JUnitPlatformProvider.java:184)
09:57:32 	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:148)
09:57:32 	at org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:122)
09:57:32 	at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:385)
09:57:32 	at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:162)
09:57:32 	at org.apache.maven.surefire.booter.ForkedBooter.run(ForkedBooter.java:507)
09:57:32 	at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:495)
09:57:32 Caused by: org.testcontainers.containers.ContainerLaunchException: Timed out waiting for log output matching '.*started in.*\s'
09:57:32 	at org.testcontainers.containers.wait.strategy.LogMessageWaitStrategy.waitUntilReady(LogMessageWaitStrategy.java:47)
09:57:32 	at org.testcontainers.containers.wait.strategy.AbstractWaitStrategy.waitUntilReady(AbstractWaitStrategy.java:52)
09:57:32 	at org.testcontainers.containers.GenericContainer.waitUntilContainerStarted(GenericContainer.java:909)
09:57:32 	at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:500)
09:57:32 	... 55 more
09:57:32 
09:57:32 07:57:32,060 Log output from the failed container:
09:57:32 OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000006af400000, 4034920448, 0) failed; error='Not enough space' (errno=12)
09:57:32 #
09:57:32 # There is insufficient memory for the Java Runtime Environment to continue.
09:57:32 # Native memory allocation (mmap) failed to map 4034920448 bytes. Error detail: committing reserved memory.
09:57:32 # An error report file with more information is saved as:
09:57:32 # /tmp/hs_err_pid84.log
09:57:32 
09:57:33 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 309.3 s <<< FAILURE! -- in io.quarkus.ts.http.advanced.HttpAdvancedIT
09:57:33 [ERROR] io.quarkus.ts.http.advanced.HttpAdvancedIT -- Time elapsed: 309.3 s <<< ERROR!
09:57:33 org.testcontainers.containers.ContainerLaunchException: Container startup failed for image quay.io/keycloak/keycloak:25.0
09:57:33 	at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:359)
09:57:33 	at org.testcontainers.containers.GenericContainer.start(GenericContainer.java:330)
09:57:33 	at io.quarkus.test.services.containers.DockerContainerManagedResource.doStart(DockerContainerManagedResource.java:122)
09:57:33 	at io.quarkus.test.services.containers.DockerContainerManagedResource.start(DockerContainerManagedResource.java:72)
09:57:33 	at io.quarkus.test.bootstrap.BaseService.doStart(BaseService.java:356)
09:57:33 	at io.quarkus.test.bootstrap.BaseService.start(BaseService.java:245)
09:57:33 	at io.quarkus.test.bootstrap.QuarkusScenarioBootstrap.launchService(QuarkusScenarioBootstrap.java:172)
09:57:33 	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
09:57:33 	at io.quarkus.test.bootstrap.QuarkusScenarioBootstrap.beforeAll(QuarkusScenarioBootstrap.java:73)
09:57:33 	at io.quarkus.test.bootstrap.QuarkusScenarioBootstrap.beforeAll(QuarkusScenarioBootstrap.java:50)
09:57:33 	at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
09:57:33 Caused by: org.rnorth.ducttape.RetryCountExceededException: Retry limit hit with exception
09:57:33 	at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:88)
09:57:33 	at org.testcontainers.containers.GenericContainer.doStart(GenericContainer.java:344)
09:57:33 	... 10 more
09:57:33 Caused by: org.testcontainers.containers.ContainerLaunchException: Could not create/start container
09:57:33 	at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:563)
09:57:33 	at org.testcontainers.containers.GenericContainer.lambda$doStart$0(GenericContainer.java:354)
09:57:33 	at org.rnorth.ducttape.unreliables.Unreliables.retryUntilSuccess(Unreliables.java:81)
09:57:33 	... 11 more
09:57:33 Caused by: java.lang.IllegalStateException: Wait strategy failed. Container exited with code 1
09:57:33 	at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:533)
09:57:33 	... 13 more
09:57:33 Caused by: org.testcontainers.containers.ContainerLaunchException: Timed out waiting for log output matching '.*started in.*\s'
09:57:33 	at org.testcontainers.containers.wait.strategy.LogMessageWaitStrategy.waitUntilReady(LogMessageWaitStrategy.java:47)
09:57:33 	at org.testcontainers.containers.wait.strategy.AbstractWaitStrategy.waitUntilReady(AbstractWaitStrategy.java:52)
09:57:33 	at org.testcontainers.containers.GenericContainer.waitUntilContainerStarted(GenericContainer.java:909)
09:57:33 	at org.testcontainers.containers.GenericContainer.tryStart(GenericContainer.java:500)
09:57:33 	... 13 more

@jcarranzan jcarranzan force-pushed the fixes/keycloak-memory-container branch from d4e271e to f0f1159 Compare October 18, 2024 08:56
@jcarranzan
Contributor Author

The issue opened in our test suite: #2106

@jcarranzan jcarranzan force-pushed the fixes/keycloak-memory-container branch from 457a308 to 9ff09eb Compare October 21, 2024 07:14
@fedinskiy fedinskiy merged commit 11c37ca into quarkus-qe:main Oct 21, 2024
7 checks passed
@michalvavrik michalvavrik deleted the fixes/keycloak-memory-container branch October 21, 2024 09:40
@michalvavrik michalvavrik removed the triage/flaky-test Signal that flaky tests were detected during CI run label Nov 20, 2024