container logs lose line(s) (the last one) #17747
Comments
This is likely related:
So I was talking about #9285, but it looks like the bits mentioned in projectatomic/docker#218 should already be in.
@soltysh I don't think we have projectatomic/docker#218; that PR was closed. https://github.com/projectatomic/docker/blob/master/daemon/logger/journald/read.go#L255
We'd need to check the current docker or get @runcom to look into it.
I guess we can re-open projectatomic/docker#218 for the latest docker(s) we support and have @nalind ack it to get it in.
I've already resurrected it: projectatomic/docker#291
In our test AMIs we are using:
That version of docker already has that particular fix (https://github.com/projectatomic/docker/blob/ec8512ba42f349cbe5a63140c4d9c810f7afa527/daemon/logger/journald/read.go#L262-L269). It must be something else, then.
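For anyone reproducing this, a quick sanity check of which docker build a node is actually running (illustrative commands, assuming an RPM-based install) might look like:

```sh
# Show the installed docker package and the daemon's reported version,
# then compare against the projectatomic/docker commit that contains
# the journald reader fix linked above.
rpm -q docker
docker version --format '{{.Server.Version}}'
```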
We need to revert the
We discussed this with Steve yesterday; I'll open a PR that enables journald for all our tests. For now the revert in #17758 should stabilize the queue, slightly.
Is this really just the journald rate limit? I find it suspicious that rate limiting would strip only the last line in the same specific test.
I think it ends up being how reproducible our tests are -- we always run the same N tests in the same order and generate the same log lines, so if we trigger a rate limit it will fairly often be in the same place.
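As a rough way to rule journald rate limiting in or out on a test node (a sketch, not something we ship; option names vary slightly across systemd versions, e.g. newer systemd spells the first option RateLimitIntervalSec):

```sh
# Disable journald rate limiting via a drop-in and restart journald.
# Setting either value to 0 turns rate limiting off entirely.
# On older systemd without journald.conf.d support, edit
# /etc/systemd/journald.conf directly instead.
sudo mkdir -p /etc/systemd/journald.conf.d
sudo tee /etc/systemd/journald.conf.d/99-no-ratelimit.conf <<'EOF'
[Journal]
RateLimitInterval=0
RateLimitBurst=0
EOF
sudo systemctl restart systemd-journald
```

If the last line stops disappearing with rate limiting off, that points back at the limiter; if it keeps happening, the reader side is more likely at fault.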
Last time it was on the last line of a different test and we had to add
There was also some weirdness with the way that the logs were gathered in
@stevekuznetsov it is in the docker version you've pasted in #17747 (comment)
No, @soltysh knows more about that patch from Andy.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. /close
At least the last line of logs sometimes gets lost, causing flakes in various tests. The tests use the `oc logs` command. I've found some old hack with `sleep 1` at the end of the container command, which appears to mostly mitigate the issue, but it sometimes flakes as well.
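Roughly what the hack looks like (purely illustrative; the pod name and image below are made up, not taken from the failing tests):

```sh
# Without the trailing `sleep 1`, the final echo is sometimes missing
# from `oc logs`; with it, the flake mostly (but not always) goes away.
oc run log-test --restart=Never --image=busybox \
  --command -- sh -c 'echo first line; echo last line; sleep 1'
oc logs log-test
```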
Here are some flakes caused by this:
cc: @mfojtik @soltysh @deads2k