Specify logging driver and options for docker driver #688
Please correct me if I am wrong, but I couldn't find in the documentation how to pass the log-driver and log-opt arguments to containers when running them as Nomad tasks, e.g.:

--log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=myLogGroup --log-opt awslogs-stream=myLogStream

I know I can configure the Docker daemon with these arguments, but then I can't specify different log streams for each container. If this is currently not possible, I would like to request it as a feature. Thank you.

Comments
@pajel Nomad is going to ship with logging support in the next release; users will be able to specify a syslog endpoint that Nomad will push logs to. The easiest way to push logs to CloudWatch would be to push them through a logging middleware such as Logstash, which has a syslog input and can forward logs to a number of destinations. Would that work? You will also be able to stream logs for running tasks directly via the Nomad CLI.
@diptanu thank you for your reply. Running Logstash would, however, seem to me like unnecessary overhead. Since 1.9, Docker has the awslogs driver built in, so all Nomad would need to do is pass the four above-mentioned arguments to docker run; everything else is taken care of by the Docker daemon.
+1 on configurable Docker log rotation, i.e. to prevent server disks from filling up. Now, regarding log dispatch, I agree with @pajel if you're only targeting Docker logging - which I think you're not, right?
@pires yes, I am targeting only the Docker logs; I am not interested in Nomad logs here. Does it make sense? :)
@pajel Totally understand that your setup is convenient and has fewer moving parts. It's a pain that CloudWatch, unlike other cloud logging-as-a-service providers, doesn't have a syslog endpoint; if it did, this would have worked out of the box. The reason we are doing log rotation and log streaming in Nomad is that we want users to be able to view the logs of running tasks straight from the Nomad CLI. Having said that, let me think about how we can solve this problem for you without needing you to run a syslog log forwarder.
Thanks for explaining. Yeah, my use case is only a small part of Nomad - the Docker task driver - and only on AWS, so I understand that it might not be a priority. However, it's a built-in Docker feature, so why not leverage it? The idea would be to let each task specify its own log driver and options (a rough sketch follows below); for json-file that would also give per-task log rotation. Possible values and their options would match the Docker built-in logging drivers described in the Docker documentation. @pires - I believe it would solve your Docker logs rotation problem as well. Thanks for your time. Much appreciated.
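(A rough, purely hypothetical sketch of the proposal above; the `log_driver` and `log_options` names are illustrative only and were not actual Nomad syntax at the time.)

```hcl
task "web" {
  driver = "docker"

  config {
    image = "myorg/web:latest"

    # Hypothetical passthrough of Docker's --log-driver / --log-opt flags
    log_driver = "json-file"
    log_options {
      max-size = "50m" # rotate once a log file reaches 50 MB
      max-file = "3"   # keep at most 3 rotated files
    }
  }
}
```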
@pajel yes, I'm aware of the drivers but if Nomad implements it, I don't need to configure Docker on every Docker host.
We also use Docker log drivers to push logs from container stdout to fluentd (which, in our case, runs in a Docker container). The point of using the Docker log driver is that containers can point to different log collectors, which keeps things flexible: rather than having syslog as a central log collector, logs can be distributed to different collectors simply by pointing each container at one when it is run. This makes a lot of sense to us because we are still experimenting with different ways to handle logs and container outputs. The above scenario can be used not only for logging but also for data processing pipelines where, for example, an application passes data streams to a container's stdout, the Docker log driver redirects them to fluentd, and fluentd sends them to Kafka (so the first container does not need to be a registered Kafka producer, for example).
@engine07 So your use case would work with our current design. Fluentd has a syslog input plugin which you can use; in the Nomad job configuration you will just have to mention the syslog input port, and Nomad will push all the logs from tasks to fluentd, and then you can do whatever you want with your logs. Does that make sense?
@diptanu Actually it does. I was not aware that the Nomad job configuration would allow specifying a syslog input.
@engine07 Yeah the PR hasn't landed yet. Working on it right now!
Any update on this?
@diptanu any updates on this?
@c4milo @marceldegraaf For 0.3 we haven't done any work on forwarding log messages to a remote syslog endpoint, but we do write all the task logs into a directory under the allocation directory. Would that work as a stop-gap workaround until we have the remote syslog endpoint? We just need some more bandwidth to do that work.
Thanks for your response @diptanu. I wasn't aware of that log directory. Currently I grab all Docker container logs with logspout, but if the logs are written to disk by Nomad I could pick them up from there instead.
@marceldegraaf All logs of tasks in an allocation are written into '/alloc/logs'.
Thanks, and are those rotated by Nomad? Is that location stable or is it expected to change in future Nomad versions?
Yes, they are rotated by Nomad! Please see the documentation for how to configure the rotation behaviour. And we don't think the location is going to change in the future.
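For reference, the rotation mentioned above is configured per task. A minimal sketch, assuming the `logs` stanza of the Nomad task configuration from that era; the image name and the numbers are illustrative:

```hcl
task "web" {
  driver = "docker"

  config {
    image = "myorg/web:latest"
  }

  # Nomad rotates the stdout/stderr files it captures under the
  # allocation's alloc/logs directory according to this stanza.
  logs {
    max_files     = 5  # keep at most 5 rotated files per stream
    max_file_size = 15 # rotate once a file reaches 15 MB
  }
}
```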
Hi @diptanu, how can I specify that in the job configuration?
@fieryvova for the
@dadgar, OK.
I have a question for you @dadgar... from what I can see, the intention is to have Nomad itself collect and forward task logs. The question: would we be able to use that to ship logs to a remote endpoint, or do we still need to run our own shipper?
@ketzacoatl That is a goal. Log forwarding did not make the 0.3.X cut but will come in the future. For now you can have a log shipper that runs in the same task group and ships logs where needed. This is what we do in our production.
@fieryvova I am not sure I understand; the STDOUT and STDERR are forwarded to Nomad.
@fieryvova I'm simply running filebeat to monitor Nomad's job logs (in my case under the Nomad allocation directory). Works like a charm, and filebeat's memory footprint is considerably smaller than that of Logstash.
@dadgar, yes, sure, but a path that looks like an allocation UUID doesn't tell me which job the logs came from. @marceldegraaf, thanks for the option. My use case is the following: I want to ship each container's logs to our centralized logging and still be able to tell which job and allocation they belong to. Does it make sense?
@fieryvova I see. AFAIK there's no way to do that automatically with Nomad and Docker right now. You could use Logstash on the container runners to add the job's UUID to the collected log events, but that may not be very useful.
@fieryvova we do something similar in production that may work for you. In our task group we have two tasks: (1) the application itself and (2) a log collector that ships its output.
Because we know what app is running in task 1, we can "tag" the logs with whatever data we want to help us identify them. In our case we "tag" them with the task group name and the allocation ID so they are easily recognizable in our centralized logging solution. We do this by passing environment variables in the Nomad job file, which a wrapper script then uses to write our centralized logging config; that way we use the same container for all logging jobs and just pass in environment variables for any dynamic config (a sketch of this layout follows below). I am not sure that solves your use case, as I do not use Logstash, but I hope it gives you a bit more detail on one possible solution. This does have the downside of needing to run a log collector coprocess for each task, but we're happy with it overall as our agent is fairly lightweight.
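A sketch of that layout as a job-file fragment; the image names, environment variable names, and wrapper-script behaviour are assumptions for illustration, not the commenter's actual setup:

```hcl
group "web" {
  # Both tasks share the allocation directory, so the shipper can read
  # the log files Nomad writes under alloc/logs for the app task.
  task "app" {
    driver = "docker"
    config {
      image = "myorg/app:1.0"
    }
  }

  task "log-shipper" {
    driver = "docker"
    config {
      # The same collector image is reused for every job; a wrapper script
      # inside it renders the logging config from the env vars below.
      image = "myorg/log-shipper:1.0"
    }
    env {
      LOG_TAG_GROUP = "web"               # "tag": task group name
      LOG_TAG_ALLOC = "${NOMAD_ALLOC_ID}" # "tag": allocation ID
    }
  }
}
```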
Dear all, I faced the same problem as well because I needed to integrate my Docker containers into our existing Splunk infrastructure with all its alerting, escalation management, you name it. I extended the docker driver configuration with a logging block through which all Docker logging options may be passed in as desired, and here at my company it works without any trouble. I also added support for mounting in folders from the host; that is a side feature I need in order to mount in credentials/certificates I don't want to see baked into my Docker files. I also don't agree with Alex's argument about the abstraction. From my point of view, the abstraction is the ability to plug in drivers, and the drivers should expose the feature set of the underlying technology, in this case Docker. Abstracting away features is not the way to go, I think; it simply doesn't take into account how people actually use the software and is based on assumptions and prettiness. For all those wanting to use Nomad 0.4.1 with the logging settings provided, I created a fork and released a new version (0.4.2) here: https://github.com/Fluxxo/nomad/releases As I don't think that the project's maintainers will merge the pull request, I won't even bother submitting one :)
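I can't vouch for the exact schema in the fork, but based on the description, the block presumably mirrors Docker's --log-driver/--log-opt pairs along these lines (shown here with Docker's splunk driver options; the URL and token are placeholders):

```hcl
task "app" {
  driver = "docker"

  config {
    image = "myorg/app:1.0"

    # Passed through to Docker as --log-driver / --log-opt
    logging {
      type = "splunk"
      config {
        splunk-url    = "https://splunk.example.com:8088"
        splunk-token  = "00000000-0000-0000-0000-000000000000"
        splunk-source = "nomad"
      }
    }
  }
}
```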
@diptanu, is it undesirable to consider accepting this type of PR and then, in the future, removing/updating/refactoring it into the more ideal design you desire?
Someone just gave me a hint that there's a bug with the default Docker values. I will fix it ASAP and release a new version on my fork. Sorry for this, but these are my first lines of Go code :)
@Fluxxo Good job! I'm all for fixing real problems you have! I hope down the line you will agree with me when one logging config applies to all drivers :) @ketzacoatl As for merging this: no, we will not be. Nomad 0.6 will bring plugin support, and one of the plugin types will be for logging.
@dadgar while I appreciate the work towards Nomad plugin support and the overall elegance of Nomad's design, I still feel strongly that this is a case where a misguided design philosophy or business strategy hurts people who actually use Nomad day-to-day. Docker currently has 9 logging drivers, many of which are highly specialized, like gcplogs or awslogs. Presumably, when Nomad plugins land in 3-6 months, none of these will be supported initially and will have to be reimplemented (or may only be available in some enterprise version). Given that the Docker ecosystem will always be much larger than the Nomad ecosystem, it's pretty much guaranteed that Nomad logging plugins will always be less varied and less functional than their Docker equivalents. Being able to use one logging config across all drivers is pretty neat, a nice elegant design, but it doesn't help me at all. Being able to use Docker's existing functionality, however, would help me greatly and immediately. There's simply no good reason not to allow operators to opt out and pass whatever options they want to the Docker API. Yes, maybe that would break things in certain cases, but if I opt out then I'm explicitly agreeing to take that risk. Btw, I really love all the work you guys are doing at HashiCorp. I attended HashiConf and have been using Consul and other HashiCorp tools in production for almost two years. I am really hoping you will reconsider this business/design strategy of wrapping Docker without allowing for any kind of opt-out. It seems obvious that users want this and that it would help speed Nomad adoption and grow the ecosystem.
As a datapoint from someone using multiple Nomad drivers together, I can appreciate the core team's decision to keep semantics and promises clean and tight (and uniform across drivers). I would rather configure logging uniformly at the Nomad level than deal with configuring drivers separately. Allowing the passing of arbitrary options to drivers might be a necessary evil though, perhaps guarded by an admin setting similar to the raw_exec driver?
I'm 100% for clean and elegant design; however, I cannot fathom why there is zero interest in making this available immediately so users have something meaningful here and now (and then removing it in the future when a more perfect design has been implemented). It's really hard to sell Nomad when support for volumes, logging, etc. is blocked like this. Kubernetes moves fast, is terribly designed, and is a complete mess (if you ask me), but its support for the basics means organizations choose it over better solutions like Nomad.
I personally prefer slow and well done over fast and unreliable. Take your time.
There is absolutely no reason why a solution for the immediate here and now has to be unreliable. It's totally possible to improve the immediate situation in core while playing well with future plans.
You can easily patch the Docker driver to do what you want while the feature makes its way upstream.
Maybe for some, but certainly not for all. Maintaining forks can carry significant costs (such as needing to also maintain the release distribution rather than relying on what upstream publishes).
I'm with @ketzacoatl here. While it's possible for me to maintain a Nomad fork with a patch on it until the desired functionality is in core, that doesn't mean the rest of my team can do it. There's a lot of overhead in packaging forked code into enterprise or other production environments, and many organisations will use that as a reason not to adopt the software. My company currently uses Serf, Terraform, and Consul, but not having control of Docker logging in Nomad is a deal breaker, and we're looking to go back to ECS or at least Kubernetes. Once a solution like that is implemented, it's almost impossible to move to something else, even if the desired functionality arrives later. Not having such a basic thing available now effectively blocks Nomad as a solution for many organisations until they re-evaluate their whole stack, regardless of how good Nomad becomes.
Hi there, as announced, I released a new version, v0.4.3, here: https://github.com/Fluxxo/nomad/releases/tag/v0.4.3 I hope you like the release and I welcome any input. Following up on the discussion here, I'd like to state something again - and I'll try to stay as neutral as I can. As already mentioned, I really love what HashiCorp is doing. In my opinion, HashiCorp stands for software that just works, with little overhead, solving real-world problems. Let me tell you a short story: we were using etcd (yes, my fault :D) in conjunction with confd (issue: kelseyhightower/confd#102), which is written and maintained by @kelseyhightower and is actually pretty similar to consul-template. Though I do respect Kelsey's opinion on that point, I think it disrespects the community using this piece of software - there is a need, and the argumentation is about cleanliness and the confd model, not about the problems to solve. This decision led to two points.
Let's port this issue to this discussion: @dadgar tells us the Docker logging driver option does not fit into the abstraction model Nomad tries to keep up. In another post it is mentioned that this might be a pro feature you have to pay for. Later still, we're talking about plugins. I do love plugins, as they promise a lot of flexibility, but they push the trouble out to the community. From a devops perspective, I am totally and absolutely fine when logging options are held in manifests and differ from team to team: different logging infrastructure needs different logging settings. Period. I know this sounds very emotional, but I hope you get the point: I want real-world problems to be solved, not software that wins a design contest. If both work together, that is clearly the silver bullet. In this case I don't see anything breaking. One more point towards @c4milo: maintaining a fork and patching the driver each time an update comes out just isn't the right solution to go for. As soon as you work in a company with structured IT governance processes, it becomes difficult to keep this working over a longer period of time. OK, so I hope you don't get me wrong. I just want the best for us all, users and maintainers alike :)
Why the resistance to using a sidecar solution to bridge the gap until plugins? Sure, it's a bit more work, but so is adding short-lived features and needing to deprecate them later. I'm sure you can already appreciate the extra work required even for "trivial" features yourself now.
Is there an example of how such a sidecar solution would work? Since I couldn't configure Nomad to use the Docker awslogs driver, I wrote my own awslogs appender for logback... but I would rather use something standard.
I'm not sure there's a canonical example anywhere, and logging setups tend to vary hugely. I'm planning on using fluentd as the sidecar, forwarding to central logging.
@erkki That's why I'm releasing on my own now. On deprecation, I'm with you.
Regarding sidecars: they may work for some, but they also add performance overhead that isn't always welcome or feasible. With a sidecar you run one per job, so if you have multiple jobs per host, you can end up spending more CPU/memory on logging than on doing actual work.
I've given this more thought and have decided to do the following: add an operator option that allows the docker driver to change logging behavior, and then add a config to the docker driver to set the logging driver. @Fluxxo could you please open a PR with what you have? I have also filed an issue against Docker so that we don't have to sacrifice log streaming through the Nomad CLI when another logging driver is used.
I'll submit the PR ASAP, currently busy getting into my weekend :)
I just created the pull request.
Fixed by #1797 |
I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues. |