logs: k8s-file: fix race #10600
Conversation
I don't think this works.
libpod/container_log.go (outdated)
eventChannel := make(chan *events.Event)
errorChannel := make(chan error)
// Subscribe to the "died" event for this container only; FromStart and
// Stream ensure an event emitted before we attached is not missed.
eventOptions := events.ReadOptions{
	EventChannel: eventChannel,
	Filters:      []string{"event=died", "container=" + c.ID()},
	FromStart:    true,
	Stream:       true,
}
// Events blocks while streaming, so run it in a goroutine and surface
// its return value on errorChannel.
go func() {
	errorChannel <- c.runtime.Events(ctx, eventOptions)
}()
I don't understand this. You never read the event channel and this will just hang forever and never exit.
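For illustration, a minimal, self-contained sketch of what consuming both channels could look like; `Event` and `waitForDied` here are simplified stand-ins for the libpod types, not the actual patch:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// Event is a simplified stand-in for events.Event; the filter in the
// diff above already restricts the stream to this container's "died"
// event.
type Event struct{ Status string }

// waitForDied blocks until a "died" event arrives, the events reader
// fails, or the context is cancelled. Without a reader like this, the
// goroutine sending on the channels blocks forever.
func waitForDied(ctx context.Context, eventCh <-chan *Event, errCh <-chan error) error {
	for {
		select {
		case ev := <-eventCh:
			if ev.Status == "died" {
				return nil
			}
		case err := <-errCh:
			return err
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}

func main() {
	eventCh := make(chan *Event)
	errCh := make(chan error)
	go func() {
		// Simulate the events API delivering the "died" event.
		time.Sleep(50 * time.Millisecond)
		eventCh <- &Event{Status: "died"}
	}()
	if err := waitForDied(context.Background(), eventCh, errCh); err != nil {
		fmt.Println("events error:", err)
		return
	}
	fmt.Println("died event seen; safe to read the log file until EOF")
}
```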
Absolutely, thanks! Took a phone call and thought I added the remaining bits :^)
Also take a look at the following for a possible test: Lines 186 to 193 in 9a3a732
Good catch. With the recent fixes in the journald logger and this PR, we should be good to go. I'll take a look.
LGTM
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: Luap99, vrothberg
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/lgtm
/hold cancel
Fix a race in the k8s-file logs driver. When "following" the logs,
Podman will print the container's logs until the end. Previously,
Podman logged until the state transitioned into something non-running
which opened up a race with the container still running, possibly in
the "stopping" state.
To fix the race, log until we've seen the wait event for the specific
container. In that case, conmon will have finished writing all logs to
the file, and Podman will read it until EOF.
Further tweak the integration tests for testing `logs -f` on a running
container. Previously, the test only checked for one of two lines,
stating that there was a race. Indeed the race was in using `run --rm`,
where a log file may be removed before we could fully read it.
Fixes: #10596
Signed-off-by: Valentin Rothberg <[email protected]>
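As a rough sketch of the "read until EOF" step described in the commit message, assuming a hypothetical log path and leaving out the follow-mode plumbing; this is not the libpod implementation:

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"os"
)

// drainUntilEOF copies the remaining log lines once we know no more
// writes will happen, i.e. after the container's "died"/wait event has
// been observed and conmon has flushed the file.
func drainUntilEOF(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	r := bufio.NewReader(f)
	for {
		line, err := r.ReadString('\n')
		if len(line) > 0 {
			fmt.Print(line)
		}
		if err == io.EOF {
			return nil // everything conmon wrote has been read
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	// "/tmp/ctr.log" is a placeholder path for this sketch.
	if err := drainUntilEOF("/tmp/ctr.log"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```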