
container logs lose line(s) (the last one) #17747

Closed
tnozicka opened this issue Dec 13, 2017 · 20 comments
Labels: component/containers, kind/bug, lifecycle/rotten, priority/P0, sig/containers

@tnozicka
Contributor

At least the last line of logs sometimes gets lost, causing flakes in various tests. The tests use the oc logs command.

I've found an old hack with sleep 1 at the end of the container command which appears to mostly mitigate the issue, but it still flakes sometimes.
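For illustration, a minimal sketch of the workaround (the image and command here are hypothetical, not taken from the affected tests); the trailing sleep gives the log driver a moment to flush the final line before the container exits:

# hypothetical example of the sleep 1 hack; without the sleep, the final
# echo is the line that sometimes goes missing from `oc logs`
$ docker run --rm centos:7 sh -c 'echo "last log line"; sleep 1'
last log line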

Here are some flakes caused by this:

cc: @mfojtik @soltysh @deads2k

@tnozicka
Contributor Author

This is likely related:
#12143 (comment)
moby/moby#29863

@soltysh
Contributor

soltysh commented Dec 13, 2017

So I was talking about #9285, but it looks like the bits mentioned in projectatomic/docker#218 should already be in.

@tnozicka
Contributor Author

tnozicka commented Dec 13, 2017

@soltysh
Contributor

soltysh commented Dec 13, 2017

We'd need to check the current docker, or get @runcom to look into it.

@runcom
Member

runcom commented Dec 13, 2017

We'd need to check the current docker, or get @runcom to look into it.

I guess we can re-open projectatomic/docker#218 for the latest docker(s) we support and have @nalind ack on it to get it in

@tnozicka
Contributor Author

tnozicka commented Dec 13, 2017

I've already resurrected it: projectatomic/docker#291

@stevekuznetsov
Contributor

In our test AMIs we are using

$ rpm -q docker
docker-1.12.6-68.gitec8512b.el7.x86_64

@tnozicka
Contributor Author

That version of docker already has that particular fix (https://github.com/projectatomic/docker/blob/ec8512ba42f349cbe5a63140c4d9c810f7afa527/daemon/logger/journald/read.go#L262-L269).

It must be something else, then.
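As a sanity check (a sketch, not a command anyone ran in this thread), one way to confirm whether a given docker build carries a journald-related backport is to search the package changelog:

# assumption: the backported fix is mentioned in the RPM changelog;
# the grep pattern is only illustrative
$ rpm -q --changelog docker | grep -i journald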

@stevekuznetsov
Contributor

We need to revert the journald limit disable PR.

@soltysh
Contributor

soltysh commented Dec 14, 2017

Steve and I discussed this yesterday; I'll open a PR which will enable journald for all our tests. For now, the revert in #17758 should stabilize the queue slightly.
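For context, journald's rate limiting is configured in /etc/systemd/journald.conf; the snippet below is only a sketch of where those knobs live (the values shown are illustrative, not what the PRs above set):

# assumption: RateLimitInterval/RateLimitBurst are the relevant settings on
# this systemd version; setting both to 0 disables rate limiting entirely
$ grep RateLimit /etc/systemd/journald.conf
RateLimitInterval=30s
RateLimitBurst=1000
$ sudo systemctl restart systemd-journald   # apply changes after editing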

@tnozicka
Contributor Author

Is this really just the journald rate limit? I find it suspicious that rate limiting would strip only the last line in the same specific test.

@stevekuznetsov
Contributor

I think it ends up being how reproducible our tests are -- we always run the same N tests in the same order and generate the same log lines, so if we trigger a rate limit it will fairly often be in the same place.
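If that theory holds, journald itself should say so; a sketch of how to check (assuming the suppression notice wording on this systemd version):

# assumption: when the rate limit fires, journald records a
# "Suppressed <N> messages from ..." entry in the journal
$ journalctl -b | grep -i 'suppressed'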

@tnozicka
Contributor Author

Last time it was the last line of a different test and we had to add sleep 1 there. Seems like too much of a coincidence.

@stevekuznetsov
Contributor

There was also some weirdness with the way that the logs were gathered in docker that Andy was working around by asking for them twice, but I don't think that ever made it in.
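A rough sketch of that double-read idea from the outside (entirely hypothetical, since the patch never landed; "mypod" is a placeholder): read the logs twice and keep the longer result.

# hypothetical approximation of "asking for them twice": fetch the logs two
# times and keep whichever copy has more lines
$ oc logs mypod > /tmp/logs.1
$ oc logs mypod > /tmp/logs.2
$ [ "$(wc -l < /tmp/logs.2)" -gt "$(wc -l < /tmp/logs.1)" ] && mv /tmp/logs.2 /tmp/logs.1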

@tnozicka
Contributor Author

@stevekuznetsov it is in the docker version you've pasted: #17747 (comment)

@stevekuznetsov
Contributor

No, @soltysh knows more about that patch from Andy

@stevekuznetsov stevekuznetsov changed the title container logs loose line(s) (the last one) container logs lose line(s) (the last one) Jan 24, 2018
@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot openshift-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 24, 2018
@openshift-bot
Contributor

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot openshift-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 25, 2018
@openshift-bot
Contributor

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close
