
DockerTests.test600Interrupt failing due to stdout being too long #99508

Closed
pgomulka opened this issue Sep 13, 2023 · 6 comments
Labels
:Core/Infra/Core (Core issues without another label) · Team:Core/Infra (Meta label for core/infra team) · >test-failure (Triaged test failures from CI)

Comments

@pgomulka (Contributor)

CI Link

https://gradle-enterprise.elastic.co/s/kl67xjolc3cgc

Repro line

n/a

Does it reproduce?

Yes

Applicable branches

main

Failure history

No response

Failure excerpt

https://gradle-enterprise.elastic.co/scans/tests?search.timeZoneId=Europe/Warsaw&tests.container=org.elasticsearch.packaging.test.DockerTests&tests.test=test600Interrupt

The test fails because the test-infrastructure code that reads the Docker container's stdout gives up when the output is too long:



```
<<Too large to read (113516 bytes), truncated>>
  at __randomizedtesting.SeedInfo.seed([359E5782DDE88E6E:AF75E480BDF7F318]:0)
  at org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:18)
  at org.junit.Assert.assertThat(Assert.java:956)
  at org.elasticsearch.packaging.test.DockerTests.test600Interrupt(DockerTests.java:1237)
```


The limit is 100 * 1024 characters.
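
For context, here is a minimal sketch of the kind of guard that produces the `<<Too large to read ... truncated>>` marker in the excerpt above. The class, method, and constant names are assumptions for illustration, not the actual Elasticsearch test code:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical reconstruction: cap how much of the container's stdout is
// materialized as a String, and emit a truncation marker when the cap is hit.
class StdoutCapture {
    private static final int MAX_STDOUT_BYTES = 100 * 1024;

    static String render(byte[] raw) {
        if (raw.length > MAX_STDOUT_BYTES) {
            // Matches the shape of the message in the failure excerpt.
            return "<<Too large to read (" + raw.length + " bytes), truncated>>";
        }
        return new String(raw, StandardCharsets.UTF_8);
    }
}
```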

We should iteratively check the stdout instead of copying the full output.
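
As a minimal sketch of that iterative approach, assuming a hypothetical `InputStream` over the container's stdout rather than the actual packaging-test API, the idea is to scan line by line and return as soon as the expected message appears, so the full output is never held in memory:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

class StdoutScanner {
    // Returns true as soon as any line contains the expected substring,
    // without ever materializing the whole stdout as one String.
    static boolean stdoutContains(InputStream stdout, String expected) throws IOException {
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(stdout, StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.contains(expected)) {
                    return true;
                }
            }
        }
        return false;
    }
}
```

Several expected messages could be checked in the same single pass by keeping one boolean flag per message, which is what makes the refactoring discussed below non-trivial rather than impossible.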

pgomulka added the >test-failure (Triaged test failures from CI) and needs:triage (Requires assignment of a team area label) labels on Sep 13, 2023
pgomulka added a commit to pgomulka/elasticsearch that referenced this issue on Sep 13, 2023
pgomulka added a commit that referenced this issue on Sep 13, 2023
astefan added the :Delivery/Packaging (RPM and deb packaging, tar and zip archives, shell and batch scripts) label and removed the needs:triage label on Sep 13, 2023
elasticsearchmachine added the Team:Delivery (Meta label for Delivery team) label on Sep 13, 2023
@elasticsearchmachine (Collaborator)

Pinging @elastic/es-delivery (Team:Delivery)

pgomulka added the :Core/Infra/Core (Core issues without another label) label and removed the :Delivery/Packaging and Team:Delivery labels on Sep 13, 2023
@elasticsearchmachine (Collaborator)

Pinging @elastic/es-core-infra (Team:Core/Infra)

elasticsearchmachine added the Team:Core/Infra (Meta label for core/infra team) label on Sep 13, 2023
@mark-vieira (Contributor)

> We should iteratively check the stdout instead of copying the full output.

We should, but refactoring these assertions to work with streams is non-trivial, and it is complicated further by the fact that we have multiple assertions.

A 100 kB limit might be a bit conservative here. I know we don't want to store enormous strings in memory, but we could 10x this limit without any appreciable effect on test execution memory usage. @rjernst, thoughts?

@rjernst (Member) commented Sep 14, 2023

I'm fine with increasing the limit, but I want to draw a line in the sand: 100 kB was plenty before, so what changed? These are very short-lived instances, so they don't have a lot of log messages. At some point we have to fix this properly (it can be a new method, with callers migrated one by one to the iterative log assertions). For now, though, let's increase the limit to 1 MB to unblock this.
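
The stopgap described here amounts to a one-line bump of the cap. A sketch, with the class and constant names assumed rather than taken from the real code:

```java
class DockerTestsLimits {
    // Hypothetical constant name. Before the fix: 100 * 1024.
    // A Java char is 2 bytes, so even at 1 MB of chars the worst case is
    // roughly 2 MB of heap per captured String, negligible for a test JVM,
    // which is why a 10x bump has no appreciable effect on memory usage.
    static final int MAX_STDOUT_CHARS = 1024 * 1024;
}
```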

pgomulka added a commit to pgomulka/elasticsearch that referenced this issue on Sep 14, 2023:

> json logs used in docker tests are consuming more space than the previous 200KB
> setting the limit to 2MB
> relates elastic#99508
pgomulka added a commit that referenced this issue on Sep 14, 2023:

> json logs used in docker tests are consuming more space than the previous 100KB
> setting the limit to 1MB
> relates #99508
@mark-vieira (Contributor)

Most startup messages are simply INFO level stuff like loading modules, index templates, etc. These certainly grow over time as we add more modules to the distribution.

@williamrandolph (Contributor)

I created an issue for changing how the tests work so that we can close this test-failure ticket. We talked about it in a team meeting today, and concluded that bumping the limit counts as a temporary fix.

We can reopen this issue or create a new one if tests fail again in a similar way.
