how to set log stream name as pod_name and container name #16
Can we have something like Tag_Regex for the log stream as well, or some way to use the Tag value (as generated with the help of Tag_Regex) as the value for log_stream_name?
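(Background for this request: Fluent Bit's tail input already supports Tag_Regex for composing the tag from the log file path; the sketch below follows the pattern from the Fluent Bit documentation for Kubernetes container logs. What this issue asks for is a way to reuse that tag, or its captured fields, as the log_stream_name.)

```
[INPUT]
    Name       tail
    # compose the tag from fields captured out of the file name:
    # /var/log/containers/<pod>_<namespace>_<container>-<id>.log
    Tag        kube.<namespace_name>.<pod_name>.<container_name>
    Tag_Regex  (?<pod_name>[a-z0-9](?:[-a-z0-9]*[a-z0-9])?(?:\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)_(?<namespace_name>[^_]+)_(?<container_name>.+)-
    Path       /var/log/containers/*.log
    Parser     docker
```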
Also looking for a way to implement this.
@ld-singh this should do what you want
@swibrow your example does not work, as the plugin requires either log_stream_name or log_stream_prefix to be set. I actually think it's a bug, as there is no way to set the log_stream_name to the tag value; Fluent Bit doesn't support a tag variable. The expected default behavior would be to set the log_stream_name to the tag value. I can submit a PR if the maintainers agree with this.
This feature would be great. I'm thinking about how I can work it out at the moment.
While not ideal for solving this problem, I was able to use the nifty new rewrite_tag filter introduced in Fluent Bit 1.4 to customize how the CloudWatch plugin routes logs into distinct log streams. This works because the CloudWatch plugin currently determines each event's log stream based on its tag.

```
# =====
# This uses two distinct tag patterns to effectively create two
# processing streams:
#
# 1. the first stream aggregates kubernetes logs into events
#    tagged with the `kube.*` pattern;
#
# 2. the second stream uses the rewrite_tag filter to copy
#    each `kube.*` event, but tags the copy with a non-overlapping
#    pattern (`namespaces.*`) that matches the desired CloudWatch
#    log stream organization. In this example the rewrite_tag filter
#    is configured to drop the original kube.* event.
#
# Careful: the rewrite_tag filter re-emits the copied event at the
# start of the pipeline, so it will be reprocessed by any sections
# that have a wildcard Match (*)!
# =====

# =====
# Processing Stream 1 (kube.*): gather kubernetes logs
#
# Gather all container logs from each node. Any modifications you
# want to make to the streams need to happen while events still
# carry the kube.* tag, i.e. before the rewrite_tag filter below.
# =====
[INPUT]
    Name              tail
    Tag               kube.*
    Path              /var/log/containers/*.log
    Parser            docker
    DB                /var/log/flb_kube.db
    Mem_Buf_Limit     5MB
    Skip_Long_Lines   On
    Refresh_Interval  10

[FILTER]
    Name       kubernetes
    Match      kube.*
    Kube_URL   https://kubernetes.default.svc.cluster.local:443
    Merge_Log  On
    Keep_Log   Off

# =====
# Processing Stream 2 (namespaces.*): customize CloudWatch log stream routing
#
# Use the `rewrite_tag` filter to customize how the CloudWatch plugin routes
# events into log streams. In this case, events are grouped into streams based
# on their kubernetes namespace name.
#
# CAUTION: the rewrite_tag filter emits an entirely new event that will
# be re-processed by the entire pipeline!
#
# see: https://docs.fluentbit.io/manual/pipeline/filters/rewrite-tag
# =====
[FILTER]
    Name   rewrite_tag
    Match  kube.*
    # the `false` at the end of the rule drops the original `kube.*` event.
    Rule   $kubernetes['namespace_name'] ^(.*)$ namespaces.$1 false

[OUTPUT]
    Name               cloudwatch
    # Note: this will capture all events, but shouldn't capture the
    # original kube.* events since they're dropped by the rewrite_tag
    # filter.
    Match              *
    region             us-east-1
    log_group_name     fluent-bit-cloudwatch
    log_stream_prefix  your-log-stream-prefix.
    auto_create_group  true
```
David's recent contributions address this feature request; we will do a release for it in the next few days.
Has this gone out yet? I am using the
I've tried
If this has gone out, is there documentation for how things now work?
@lonnix This was released in AWS for Fluent Bit 2.7.0. Documentation is in the readme: https://github.com/aws/amazon-cloudwatch-logs-for-fluent-bit
I just updated to 2.7.0. According to that documentation this should work, specifically per the line "This value allows a template in the form of $(variable) where variable is a map key name in the log message."
but I get this in the logs:
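(For reference, the readme's $(variable) templating applied to an OUTPUT stanza looks roughly like the following; the group name and record keys here are illustrative assumptions:)

```
[OUTPUT]
    Name               cloudwatch
    Match              *
    region             us-east-1
    log_group_name     fluent-bit-cloudwatch
    # $(variable) is replaced per record; nested map keys use the
    # $(kubernetes['pod_name']) bracket form.
    log_stream_name    $(kubernetes['namespace_name'])-$(kubernetes['pod_name'])
    auto_create_group  true
```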
@lonnix Saw your ping in the Fluent slack... we can discuss here. So if you do
Referencing this k8s tutorial: https://aws.amazon.com/blogs/containers/kubernetes-logging-powered-by-aws-for-fluent-bit/
With k8s metadata added by Fluent Bit, logs might look like:
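(The sample record was lost from this capture; a hedged sketch of what the kubernetes filter typically adds, with made-up names:)

```
{
  "log": "some application output\n",
  "stream": "stdout",
  "kubernetes": {
    "namespace_name": "default",
    "pod_name": "my-app-6f8c4b9d7b-x2x9k",
    "container_name": "my-app",
    "host": "ip-10-0-1-23.ec2.internal"
  }
}
```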
So the $(variable) template keys have to reference that nested kubernetes map.
I think we need better error handling for this code; can you open a separate issue for that?
I bet you want something like:
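(The actual snippet is missing here; given the surrounding discussion, it presumably set log_stream_name with the nested-key template, something like:)

```
log_stream_name  $(kubernetes['namespace_name'])-$(kubernetes['pod_name'])-$(kubernetes['container_name'])
```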
Everything must be spelled correctly and must be the exact right name, of course.
Using this config
I tried
In my example mentioned earlier I was using this log (I got it by changing the OUTPUT to stdout instead of cloudwatch_logs):
Here you can see that
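(Switching the OUTPUT to stdout, as described here, is a handy way to see the exact record structure the templating operates on; a minimal sketch:)

```
[OUTPUT]
    Name    stdout
    Match   *
    # print each record as one JSON line so nested keys are visible
    Format  json_lines
```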
@lonnix Are you using 2.7.0?
I'm using a DaemonSet with
I tried tags
Are you using the correct plugin: cloudwatch vs cloudwatch_logs?
Good call... the Go plugin is cloudwatch. I'm very busy right now trying to get out_s3 working for the 1.6 release of Fluent Bit... will try to test this out myself in a bit...
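(To make the distinction concrete: cloudwatch is the Go plugin from this repo, which gained the $(variable) templating in 2.7.0, while cloudwatch_logs is the separate C plugin shipped in Fluent Bit core, which did not understand that syntax at the time. Illustrative stanzas, with placeholder values:)

```
# Go plugin (this repo): log_stream_name supports $(variable) templates
[OUTPUT]
    Name             cloudwatch
    Match            *
    region           us-east-1
    log_group_name   fluent-bit-cloudwatch
    log_stream_name  $(kubernetes['pod_name'])

# C plugin (Fluent Bit core): same-purpose output, different implementation
[OUTPUT]
    Name             cloudwatch_logs
    Match            *
    region           us-east-1
    log_group_name   fluent-bit-cloudwatch
    log_stream_name  my-static-stream-name
```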
I tested this, and it seems to be OK.
So setting up a k8s cluster would take me more time than I have free right now... I created a simple logger that just spits out the full JSON event that @lonnix posted, over and over again. I used a Fluent Bit config that parses the incoming log as JSON and then sends it to the CW output. It works:
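(A hedged sketch of such a test rig, using the dummy input to replay a fixed JSON record instead of a real logger; the record and names are placeholders, not the actual event from this thread:)

```
[INPUT]
    Name   dummy
    Tag    test
    # emit this JSON record repeatedly, standing in for real pod logs
    Dummy  {"log": "hello", "kubernetes": {"namespace_name": "default", "pod_name": "my-app-x2x9k", "container_name": "my-app"}}

[OUTPUT]
    Name               cloudwatch
    Match              *
    region             us-east-1
    log_group_name     fluent-bit-cloudwatch
    log_stream_name    $(kubernetes['pod_name'])
    auto_create_group  true
```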
I really do not think you are using the right version somehow... If I use a config where the keys don't exist:
The code seems to remove the
I was using the wrong plugin; I switched to cloudwatch.
Thanks for the help!
Possibly. Though it might use a different syntax, since core Fluent Bit has some code for this logic that unfortunately uses different templating. Our work will be based on actual customer need and feedback. Right now, I've gotten a very small number of complaints (like 3 customers total) about max throughput limitations in the Go plugins. The Go plugins do have limitations, but they seem to be good enough for the vast majority of folks. Let us know if you experience any performance limitations that impact your use cases. The Go plugins are easier to maintain and write features for, so new features will probably always come out in them first, and then be ported to the C plugins based on request.
Sounds good to me. I was having issues with the fluentd cloudwatch plugin, so as long as those don't show up anymore I'll be good. I know for sure we won't have throughput issues; we're not big enough for that yet :)
Hi,
How can I set the log_stream_name in <pod_name><container_name><namespace_name> format instead of the one we get with a prefix like kube.var.log.containers?
Or how can I remove the kube.var.log.containers prefix from the stream name?
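(As the thread above works out, this became possible with the $(variable) templating released in AWS for Fluent Bit 2.7.0; a sketch, assuming the kubernetes filter has added the metadata keys:)

```
[OUTPUT]
    Name               cloudwatch
    Match              *
    region             us-east-1
    log_group_name     fluent-bit-cloudwatch
    # compose the stream name from record keys added by the kubernetes filter
    log_stream_name    $(kubernetes['pod_name'])-$(kubernetes['container_name'])-$(kubernetes['namespace_name'])
    auto_create_group  true
```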