Elastic Log Driver: Support "docker logs" via local buffer #19371

Closed · fearful-symmetry opened this issue Jun 24, 2020 · 4 comments

@fearful-symmetry (Contributor)

This has been a long-standing issue (see #13990) that we'll want to address sooner rather than later.

Right now, the docker plugin has no support for the docker logs command, which is a must for cloud adoption. We need to re-implement this behavior entirely in the plugin, and it needs to be entirely local, since one of the primary use cases for docker logs is grabbing logs when upstream elasticsearch outputs are down. I've talked with @urso about this, and our best bet for implementing this quickly would be to generate a log file for each configured pipeline that we can spit back when the user asks for it. There's also the fs-backed buffer that @faec is working on, although it might not be ready yet. This leaves us with two remaining questions:

  • Where in the pipeline do we siphon logs?
  • How do we write and manage log files? Does a "reaper" process periodically prune them?

Keep in mind we need to support the following options (a rough sketch of serving these follows the list):

  • A "since" date that reports logs after a cutoff
  • A "follow" option
  • A count of logs to return
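
For illustration only, here's a minimal Go sketch of what serving the "since" and count options from a per-container JSON-lines buffer file might look like. The names (`LogEntry`, `ReadLogs`) and the buffer path are hypothetical, not existing plugin code:

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"time"
)

// LogEntry is one buffered log line, stored as a JSON object per line on disk.
type LogEntry struct {
	Timestamp time.Time `json:"ts"`
	Line      string    `json:"line"`
}

// ReadLogs replays entries from a per-container buffer file, honoring the
// "since" cutoff and the count ("tail") option of `docker logs`.
func ReadLogs(path string, since time.Time, tail int) ([]LogEntry, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var out []LogEntry
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		var e LogEntry
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip corrupt lines rather than failing the whole read
		}
		if e.Timestamp.Before(since) {
			continue // enforce the "since" cutoff
		}
		out = append(out, e)
	}
	if err := sc.Err(); err != nil {
		return nil, err
	}
	// Keep only the last `tail` entries, mirroring `docker logs --tail N`.
	if tail > 0 && len(out) > tail {
		out = out[len(out)-tail:]
	}
	return out, nil
}

func main() {
	// Hypothetical per-container buffer path, for demonstration only.
	entries, err := ReadLogs("/tmp/logdriver/demo-container/buffer.json",
		time.Now().Add(-time.Hour), 100)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, e := range entries {
		fmt.Printf("%s %s\n", e.Timestamp.Format(time.RFC3339), e.Line)
	}
}
```

A "follow" mode could then be layered on top by continuing to tail the buffer file after the initial replay, instead of returning.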
@fearful-symmetry added the enhancement and Team:Integrations labels on Jun 24, 2020
@fearful-symmetry self-assigned this on Jun 24, 2020
@elasticmachine (Collaborator)

Pinging @elastic/integrations (Team:Integrations)

@urso commented Jun 24, 2020

This leaves us with two remaining questions:

  • Where in the pipeline do we siphon logs?
  • How do we write and manage log files? Does a "reaper" process periodically prune them?

Why not follow the JSON logging driver's logic 1:1 here? We have a goroutine per container. Each would write to its own directory (named after the container) into a JSON file that is rotated after 10MB. After the goroutine has written the line to the file, it would use libbeat to publish the log line as an event.
When the container is deleted, we delete the directory.
For example see: https://github.com/cpuguy83/docker-log-driver-test/blob/master/

If we have a way to store the log file in the container's directory, then docker will take care of deleting all resources for us. If we decide that we need to keep the files in a separate directory, we need to make some limits configurable (e.g. delete log files after the container has been stopped for ).
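
To make the rotation idea concrete, here's a rough Go sketch of the per-container writer described above: append JSON lines into the container's own directory and rotate the file once it crosses 10MB. `rotatingWriter` and the file names are illustrative only; a real driver would also need to cap how many rotated files it keeps:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

const maxFileSize = 10 * 1024 * 1024 // rotate once the active file passes 10MB

// rotatingWriter appends log lines to current.json inside a per-container
// directory, rotating the file aside when it grows past maxFileSize.
type rotatingWriter struct {
	dir     string   // per-container directory, e.g. named after the container
	file    *os.File // currently active log file
	written int64    // bytes written to the active file so far
}

func newRotatingWriter(dir string) (*rotatingWriter, error) {
	if err := os.MkdirAll(dir, 0o750); err != nil {
		return nil, err
	}
	f, err := os.OpenFile(filepath.Join(dir, "current.json"),
		os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o640)
	if err != nil {
		return nil, err
	}
	return &rotatingWriter{dir: dir, file: f}, nil
}

// WriteLine appends one JSON-encoded log line, rotating first if needed.
// In the driver, the same goroutine would then hand the line to libbeat
// for publishing as an event.
func (w *rotatingWriter) WriteLine(line []byte) error {
	if w.written+int64(len(line))+1 > maxFileSize {
		if err := w.rotate(); err != nil {
			return err
		}
	}
	n, err := fmt.Fprintf(w.file, "%s\n", line)
	w.written += int64(n)
	return err
}

// rotate closes the active file and moves it aside; this toy version keeps
// only one rotated file, so roughly 20MB per container at most.
func (w *rotatingWriter) rotate() error {
	if err := w.file.Close(); err != nil {
		return err
	}
	if err := os.Rename(filepath.Join(w.dir, "current.json"),
		filepath.Join(w.dir, "rotated.json")); err != nil {
		return err
	}
	f, err := os.OpenFile(filepath.Join(w.dir, "current.json"),
		os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o640)
	if err != nil {
		return err
	}
	w.file, w.written = f, 0
	return nil
}

func main() {
	w, err := newRotatingWriter("/tmp/logdriver/demo-container")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer w.file.Close()
	_ = w.WriteLine([]byte(`{"ts":"2020-06-24T00:00:00Z","line":"hello"}`))
}
```

Container deletion then maps to a single os.RemoveAll on the directory, matching the "delete the directory" step above.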

@andresrc (Contributor)

Can we close this issue, or is there anything missing? Thanks!

@businessbean

Does this mean that we will have the Recent Log Entries section populated if the Elasticsearch nodes are running in Docker containers (on Kubernetes)?
[screenshot: Recent Log Entries section]
