
Multiline Filter example does not work with Forward input #4173

Closed
PettitWesley opened this issue Oct 8, 2021 · 28 comments


@PettitWesley
Contributor

Bug Report

Describe the bug

I took the multiline example from the docs and turned it into one that uses the Forward input: https://docs.fluentbit.io/manual/pipeline/filters/multiline-stacktrace

However, it didn't work: I am not able to get the filter to actually concatenate multiline logs with the forward input. This is strange, since from the filter's point of view there shouldn't be any difference between my forward example and the tail example.

To Reproduce

First, create a container image that will output the log file from the example to stdout:

FROM public.ecr.aws/amazonlinux/amazonlinux:latest
ADD test.log /test.log

RUN yum upgrade -y && yum install -y python3 python3-pip

RUN pip3 install boto3

WORKDIR /usr/local/bin

COPY main.py .

CMD ["python3", "main.py"]

You could just cat the file... but I went for a python script that prints it line by line:

# test.log is added at /test.log in the image and WORKDIR is /usr/local/bin,
# so open it by absolute path
with open('/test.log', 'r') as file1:
    # Strip the newline character because print will add it back
    for line in file1:
        print(line.rstrip())

test.log is the file from the example in the docs.

You can run this container with the fluentd docker log driver:

docker run -it --log-driver fluentd app

And then capture the logs with this configuration:

[SERVICE]
    flush                 1
    grace                 1
    log_level             info
    parsers_file          parsers_multiline.conf

# [INPUT]
#     name                  tail
#     path                  /fluent-bit/etc/test.log
#     read_from_head        true
[INPUT]
    Name forward
    Listen 0.0.0.0
    Port 24224

[FILTER]
    Name    modify
    Match   *
    Remove_wildcard container_name
    Remove_wildcard source
    Remove_wildcard container_id


[FILTER]
    name                  multiline
    match                 *
    multiline.key_content log
    multiline.parser      go, multiline-regex-test



[OUTPUT]
    name                  stdout
    match                 *
    Format                json_lines
    json_date_key         false

The parsers file is the same as the one from the example.


Expected behavior

Multiline example should work with forward input.


Your Environment

  • Version used: 1.8.7
  • Environment name and version (e.g. Kubernetes? What version?): My MacBook

Additional context

Amazon ECS FireLens uses the Fluentd Docker Log Driver and forward input for logs, and so this issue has impacted many AWS customers: aws/aws-for-fluent-bit#100

@PettitWesley
Contributor Author

I tried things like running the tail example on the output of my python script; that worked, the multiline filter concatenated the logs, so the python script seems fine. Then I ran the tail example without the multiline filter, just to see what its output to stdout would be, and compared it with the output from the forward example... and it's the same. So I can't figure out why the filter isn't working; it's getting the same logs in each case.

My next course of investigation might be to log each incoming record to debug in the filter, to double check that in each case it really is getting the same logs.

@nokute78
Collaborator

nokute78 commented Oct 9, 2021

I found differences, but I don't know if they cause this issue.

  • The timestamp type of the docker log driver records is not msgpack "event time"; it is a uint32 unix epoch time.
  • All the timestamps in the docker log chunk have the same value.

| Input plugin | Type of timestamp | Value of timestamp |
| --- | --- | --- |
| forward | Integer (uint32) | Same for every line |
| tail | EventTime | Different for each line |
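Those raw dumps can be decoded by hand. As an illustrative check with plain Python (standard library only, independent of Fluent Bit): 0xce is the msgpack uint32 header, and 0xd7 0x00 is a fixext8 of type 0, i.e. a fluentd EventTime carrying 4 bytes of seconds followed by 4 bytes of nanoseconds:

```python
import struct

# forward input: 0xce = msgpack uint32, followed by 4 big-endian bytes of seconds
fwd = bytes.fromhex("ce61613776")
assert fwd[0] == 0xCE
seconds, = struct.unpack(">I", fwd[1:])
print(seconds)  # 1633761142, matching the forward.log dump below

# tail input: 0xd7 0x00 = fixext8 of type 0, i.e. fluentd EventTime
tail = bytes.fromhex("d7006161378f0536d20c")
assert tail[0] == 0xD7 and tail[1] == 0x00
sec, nsec = struct.unpack(">II", tail[2:])
print(sec, nsec)  # 1633761167 87478796 -> 2021-10-09 06:32:47.087478796 UTC
```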

I tested this with my out_detail plugin: https://github.com/nokute78/fluentbit-plugin-out-detail

forward.log:

        {"format":"uint 32", "header":"0xce", "raw":"0xce61613776", "value":1633761142},
        {"format":"fixmap", "header":"0x81", "length":1, "raw":"0x81a36c6f67af616e6f74686572206c696e652e2e2e", "value":
            [
                {"key":
                    {"format":"fixstr", "header":"0xa3", "raw":"0xa36c6f67", "value":"log"},
                 "value":
                    {"format":"fixstr", "header":"0xaf", "raw":"0xaf616e6f74686572206c696e652e2e2e", "value":"another line..."}
                }
            ]
        }

tail.log:

        {"format":"event time", "header":"0xd7", "type":0, "raw":"0xd7006161378f0536d20c", "value":"2021-10-09 15:32:47.087478796 +0900 JST"},
        {"format":"fixmap", "header":"0x81", "length":1, "raw":"0x81a36c6f67af616e6f74686572206c696e652e2e2e", "value":
            [
                {"key":
                    {"format":"fixstr", "header":"0xa3", "raw":"0xa36c6f67", "value":"log"},
                 "value":
                    {"format":"fixstr", "header":"0xaf", "raw":"0xaf616e6f74686572206c696e652e2e2e", "value":"another line..."}
                }
            ]
        }

@nokute78
Collaborator

nokute78 commented Oct 9, 2021

Oops, the timestamp type doesn't cause this issue after all.
I think the size of each flush is what matters.

I added the diff below.

diff --git a/plugins/filter_multiline/ml.c b/plugins/filter_multiline/ml.c
index 1ce9bc41..d7a118e4 100644
--- a/plugins/filter_multiline/ml.c
+++ b/plugins/filter_multiline/ml.c
@@ -171,7 +171,7 @@ static int cb_ml_filter(const void *data, size_t bytes,
     size_t tmp_size;
     struct ml_ctx *ctx = filter_context;
     struct flb_time tm;
-
+    flb_error("size=%d",bytes);
     /* reset mspgack size content */
     ctx->mp_sbuf.size = 0;

The output is different: in_tail hands the filter the entire contents of the file in one chunk, while in_forward delivers each line as a separate chunk.

in_tail:

$ ../bin/fluent-bit -c a.conf 
Fluent Bit v1.9.0
* Copyright (C) 2019-2021 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2021/10/09 18:53:30] [ info] [engine] started (pid=5256)
[2021/10/09 18:53:30] [ info] [storage] version=1.1.3, initializing...
[2021/10/09 18:53:30] [ info] [storage] in-memory
[2021/10/09 18:53:30] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2021/10/09 18:53:30] [ info] [cmetrics] version=0.2.1
[2021/10/09 18:53:30] [ info] [sp] stream processor started
[2021/10/09 18:53:30] [error] size=4097
[2021/10/09 18:53:30] [ info] [input:tail:tail.0] inotify_fs_add(): inode=1453725 watch_fd=1 name=test.log

in_forward:

$ ../bin/fluent-bit -c a.conf 
Fluent Bit v1.9.0
* Copyright (C) 2019-2021 The Fluent Bit Authors
* Copyright (C) 2015-2018 Treasure Data
* Fluent Bit is a CNCF sub-project under the umbrella of Fluentd
* https://fluentbit.io

[2021/10/09 18:53:13] [ info] [engine] started (pid=5137)
[2021/10/09 18:53:13] [ info] [storage] version=1.1.3, initializing...
[2021/10/09 18:53:13] [ info] [storage] in-memory
[2021/10/09 18:53:13] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2021/10/09 18:53:13] [ info] [cmetrics] version=0.2.1
[2021/10/09 18:53:13] [ info] [input:forward:forward.0] listening on 0.0.0.0:24224
[2021/10/09 18:53:13] [ info] [sp] stream processor started
[2021/10/09 18:53:16] [error] size=26
[2021/10/09 18:53:16] [error] size=119
[2021/10/09 18:53:16] [error] size=79
[2021/10/09 18:53:16] [error] size=83
[2021/10/09 18:53:16] [error] size=83
[2021/10/09 18:53:16] [error] size=80
[2021/10/09 18:53:16] [error] size=73
[2021/10/09 18:53:16] [error] size=27
[2021/10/09 18:53:16] [error] size=27
[2021/10/09 18:53:16] [error] size=12
[2021/10/09 18:53:16] [error] size=34
[2021/10/09 18:53:16] [error] size=37
[2021/10/09 18:53:16] [error] size=104
[2021/10/09 18:53:16] [error] size=41
[2021/10/09 18:53:16] [error] size=73
[2021/10/09 18:53:16] [error] size=28
[2021/10/09 18:53:16] [error] size=106
[2021/10/09 18:53:16] [error] size=32
[2021/10/09 18:53:16] [error] size=28
[2021/10/09 18:53:16] [error] size=12
[2021/10/09 18:53:16] [error] size=39
[2021/10/09 18:53:16] [error] size=85
[2021/10/09 18:53:16] [error] size=103
[2021/10/09 18:53:16] [error] size=86
[2021/10/09 18:53:16] [error] size=102
[2021/10/09 18:53:16] [error] size=72
[2021/10/09 18:53:16] [error] size=103
[2021/10/09 18:53:16] [error] size=49
[2021/10/09 18:53:16] [error] size=102
[2021/10/09 18:53:16] [error] size=23
[2021/10/09 18:53:16] [error] size=73
[2021/10/09 18:53:16] [error] size=26
[2021/10/09 18:53:16] [error] size=103
[2021/10/09 18:53:16] [error] size=28
[2021/10/09 18:53:16] [error] size=106
[2021/10/09 18:53:16] [error] size=12
[2021/10/09 18:53:16] [error] size=42
[2021/10/09 18:53:16] [error] size=73
[2021/10/09 18:53:16] [error] size=103
[2021/10/09 18:53:16] [error] size=77
[2021/10/09 18:53:16] [error] size=102
[2021/10/09 18:53:16] [error] size=35
[2021/10/09 18:53:16] [error] size=102
[2021/10/09 18:53:16] [error] size=28
[2021/10/09 18:53:16] [error] size=106
[2021/10/09 18:53:16] [error] size=37
[2021/10/09 18:53:16] [error] size=58
[2021/10/09 18:53:16] [error] size=12
[2021/10/09 18:53:16] [error] size=40
[2021/10/09 18:53:16] [error] size=77
[2021/10/09 18:53:16] [error] size=103
[2021/10/09 18:53:16] [error] size=69
[2021/10/09 18:53:16] [error] size=102
[2021/10/09 18:53:16] [error] size=41
[2021/10/09 18:53:16] [error] size=105
[2021/10/09 18:53:16] [error] size=28
[2021/10/09 18:53:16] [error] size=106
[2021/10/09 18:53:16] [error] size=39
[2021/10/09 18:53:16] [error] size=57
[2021/10/09 18:53:16] [error] size=39
^C[2021/10/09 18:53:21] [engine] caught signal (SIGINT)
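The effect of chunk size on concatenation can be illustrated with a toy model (plain Python, illustrative only; the regexes are simplified stand-ins for the example's multiline rules). A concatenator whose state does not survive across calls can only join a stack trace whose lines all arrive in the same chunk:

```python
import re

# Simplified stand-ins for the example's multiline rules (illustrative only)
START = re.compile(r"^\w{3} +\d+ [\d:]+ Exception")  # first line of a stack trace
CONT = re.compile(r"^\s+at ")                        # continuation lines

def concat_multiline(chunk_lines):
    """Concatenate stack traces within ONE chunk; state does not survive the
    call, mirroring a filter whose buffer is reset on every callback."""
    out, pending = [], None
    for line in chunk_lines:
        if START.match(line):
            if pending is not None:
                out.append(pending)
            pending = line
        elif pending is not None and CONT.match(line):
            pending += "\n" + line
        else:
            if pending is not None:
                out.append(pending)
                pending = None
            out.append(line)
    if pending is not None:
        out.append(pending)
    return out

trace = [
    "Oct 8 12:00:00 Exception in thread main",
    "    at com.myproject.module.MyProject.badMethod(MyProject.java:22)",
    "    at com.myproject.module.MyProject.main(MyProject.java:31)",
]

# tail-like delivery: the whole file arrives in one chunk -> one joined record
print(len(concat_multiline(trace)))                       # 1
# forward-like delivery: one chunk per line -> nothing can be joined
per_line = [r for l in trace for r in concat_multiline([l])]
print(len(per_line))                                      # 3
```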

@konoui

konoui commented Oct 9, 2021

I'm facing the same issue when using the multiline parser with the forward input.
I confirmed that the behavior differs with the multiline filter enabled versus disabled, but it does not look like the expected behavior of the multiline parser.

With the multiline filter enabled, the logs are as follows.
A line matched by the first rule in parsers_multiline.conf includes metadata such as source and container_id,
e.g. Dec 14 06:41:08 Exception in thread "main"
Lines matched by the next rule in parsers_multiline.conf have only a log field,
e.g. at com.myproject.module.MyProject.badMethod(MyProject.java:22)

logrouter    | [0] 77c094345af7: [1633774020.000000000, {"container_name"=>"/app", "source"=>"stdout", "log"=>"single line...", "container_id"=>"77c094345af733f9db78340ffbb1b725ebb29f3cc3fb7a4a1791401d14a8eb9b"}]
logrouter    | [1] 77c094345af7: [1633774020.000000000, {"source"=>"stdout", "log"=>"Dec 14 06:41:08 Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!", "container_id"=>"77c094345af733f9db78340ffbb1b725ebb29f3cc3fb7a4a1791401d14a8eb9b", "container_name"=>"/app"}]
logrouter    | [2] 77c094345af7: [1633774020.000000000, {"log"=>"    at com.myproject.module.MyProject.badMethod(MyProject.java:22)"}]
logrouter    | [3] 77c094345af7: [1633774020.000000000, {"log"=>"    at com.myproject.module.MyProject.oneMoreMethod(MyProject.java:18)"}]
(snip)
logrouter    | [7] 77c094345af7: [1633774020.000000000, {"container_name"=>"/app", "source"=>"stdout", "log"=>"another line...", "container_id"=>"77c094345af733f9db78340ffbb1b725ebb29f3cc3fb7a4a1791401d14a8eb9b"}]
logrouter    | [8] 77c094345af7: [1633774020.000000000, {"source"=>"stdout", "log"=>"panic: my panic", "container_id"=>"77c094345af733f9db78340ffbb1b725ebb29f3cc3fb7a4a1791401d14a8eb9b", "container_name"=>"/app"}]
(snip)

With the multiline filter disabled (by commenting out its FILTER section in fluent-bit.conf), the logs are as follows: every line includes metadata such as source and container_id.

logrouter    | [0] 77c094345af7: [1633774542.000000000, {"container_id"=>"77c094345af733f9db78340ffbb1b725ebb29f3cc3fb7a4a1791401d14a8eb9b", "container_name"=>"/app", "source"=>"stdout", "log"=>"single line..."}]
logrouter    | [1] 77c094345af7: [1633774542.000000000, {"container_id"=>"77c094345af733f9db78340ffbb1b725ebb29f3cc3fb7a4a1791401d14a8eb9b", "container_name"=>"/app", "source"=>"stdout", "log"=>"Dec 14 06:41:08 Exception in thread "main" java.lang.RuntimeException: Something has gone wrong, aborting!"}]
logrouter    | [2] 77c094345af7: [1633774542.000000000, {"source"=>"stdout", "log"=>"    at com.myproject.module.MyProject.badMethod(MyProject.java:22)", "container_id"=>"77c094345af733f9db78340ffbb1b725ebb29f3cc3fb7a4a1791401d14a8eb9b", "container_name"=>"/app"}]
(snip)

I used the fluentd log driver with https://github.com/konoui/multiline-with-forward-input.

@PettitWesley
Contributor Author

PettitWesley commented Oct 10, 2021

@nokute78 I found the same.

I added some debug logs here: https://github.com/PettitWesley/fluent-bit/tree/ecs-multiline-debug

I found that:

  • the log content received by the filter is the same in each case
  • the parsers seem to be working and matching logs
  • BUT the problem is that in the forward case, each log line is delivered to the filter one by one, and the filter returns after each log

I am not sure how to fix this. @edsiper It appears that the multiline filter only works with tail, or with other plugins that send all their logs in one big chunk. Any suggestions?

Attached are my logs which show the filter returning each time.

debug_logs_forward.txt

debug_logs_tail.txt

@PettitWesley
Contributor Author

I tried increasing Buffer_Max_Size and Buffer_Chunk_Size, and the Flush interval. It didn't help; the filter still receives each log line in a separate callback.

@PettitWesley
Contributor Author

@nokute78 Do you think this is a bug in the forward input, or in its interaction with the core? Shouldn't it ingest logs in chunks the same way tail does? Is it because the coroutine callbacks work differently for each plugin, since forward reads from connections while tail just reads from a file?

@marksumm

marksumm commented Oct 11, 2021

It's possibly worth highlighting that the forward input seems to work in combination with the multiline parser if the source of the data is another Fluent Bit instance configured with a forward output. However, I was not able to get multiline parsing working with the head input, regardless of the buffer size.

@nokute78
Collaborator

@PettitWesley I think it is necessary to add some kind of buffering mechanism to filter_multiline.

By the way, in the document, it is recommended to use in_tail to concatenate CRI logs.
https://docs.fluentbit.io/manual/pipeline/filters/multiline-stacktrace

If you aim to concatenate messages split originally by Docker or CRI container engines, we recommend doing the concatenation on Tail plugin,  this same functionality exists there.
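For reference, the tail-side concatenation the documentation recommends uses in_tail's own multiline.parser option rather than the filter; a config sketch (the path here is an example placeholder, not from this issue):

```
[INPUT]
    name              tail
    path              /var/log/containers/*.log
    read_from_head    true
    multiline.parser  docker, cri
```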

@PettitWesley
Contributor Author

@edsiper Do you agree with @nokute78's recommendation? I may be able to work on implementing this if you could provide some guidance. We have a great many users who are waiting for this (one of several multiline bugs) to be fixed.

@PettitWesley
Contributor Author

I figured out how to make the filter work as expected, by re-writing it to use in_emitter: aws/aws-for-fluent-bit#100 (comment)

I will post a PR and design next week.
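The in_emitter rework can be pictured roughly like this: instead of returning modified records from each filter callback, the filter buffers partial lines across callbacks and re-injects completed records into the pipeline through an emitter. A conceptual Python sketch of that idea (not the actual C implementation; all names here are illustrative):

```python
class Emitter:
    """Stand-in for in_emitter: re-injects completed records into the pipeline."""
    def __init__(self):
        self.emitted = []

    def emit(self, tag, record):
        self.emitted.append((tag, record))

class MultilineFilter:
    """Buffers lines across filter callbacks and emits joined records via the
    emitter, instead of returning a result from every callback."""
    def __init__(self, emitter):
        self.emitter = emitter
        self.pending = {}  # tag -> buffered lines of an unfinished record

    def filter(self, tag, line, is_continuation):
        if is_continuation and tag in self.pending:
            self.pending[tag].append(line)
        else:
            self.flush(tag)              # a new start completes the old record
            self.pending[tag] = [line]

    def flush(self, tag):
        if self.pending.get(tag):
            self.emitter.emit(tag, "\n".join(self.pending.pop(tag)))

em = Emitter()
f = MultilineFilter(em)
f.filter("app", "Exception in thread main", False)
f.filter("app", "    at com.myproject.module.MyProject.badMethod(MyProject.java:22)", True)
f.filter("app", "single line...", False)  # flushes the pending stack trace
f.flush("app")                            # e.g. on timeout or shutdown
print(len(em.emitted))                    # 2
```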

@github-actions
Contributor

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

@github-actions github-actions bot added the Stale label Dec 10, 2021
@PettitWesley
Contributor Author

Implementation of the fix mostly done but still in progress: #4309

@prettykingking

> It's possibly worth highlighting that the forward input seems to work in combination with the multiline parser if the source of the data is another Fluent Bit instance configured with a forward output. However, I was not able to get multiline parsing working with the head input, regardless of the buffer size.

Yes, it should be highlighted. I haven't dived deeply into the code. For now, if one needs to concatenate Docker logs with the multiline FILTER, a dedicated peer with a forward INPUT can be placed next to the Docker instance, which then forwards the logs to the next central peer collector; the multiline FILTER is applied on the central peer side. This design is also compatible with a production workflow. But I still wonder how the forward INPUT's buffering works differently with the multiline FILTER.

(attached image: logging-flow diagram)

@PettitWesley
Contributor Author

@prettykingking Once this #4383 is merged then the multiline filter will work as expected :)

@prettykingking

> @prettykingking Once this #4383 is merged then the multiline filter will work as expected :)

Thank you for the redesign and its implementation of the improved multiline FILTER. I see that it has been merged.

For those who face the same issue on releases prior to that feature: a workaround for version 1.8.x is to split the multiline FILTER and the other filter types into separate logging stages. It's also simple and efficient.

@squalou

squalou commented Mar 4, 2022

Hi,
I'm trying this and it apparently works fine (context: AWS ECS FireLens, the INPUT is forward), but I have a doubt about the flush mechanism: it doesn't seem to work.

I read the docs from this PR https://github.com/fluent/fluent-bit-docs/pull/675/files and added flush_ms 1000 (even though it's supposed to have a default value, just in case).

What happens is that the "last" log never reaches the output until a new log comes in.

I also set a flush_timeout in the MULTILINE_PARSER section, with no better result.

Am I missing something obvious?

(using Fluent Bit v1.8.12)

@PettitWesley
Contributor Author

@squalou to be clear, in your case, is the log lost because Fluent Bit shut down?

If you are using the latest release, any buffered multiline logs should be purged from the buffer once flush_ms has expired. However, I am aware that this basically breaks on shutdown, and the last log in the buffer can be lost. Setting a lower flush_ms should help. I have put fixing this on my personal backlog; unfortunately, it's non-trivial work.
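The intended flush_ms semantics can be sketched like this (conceptual Python, not Fluent Bit's implementation): a pending concatenation is held in a buffer and emitted once flush_ms has elapsed since the last appended line, rather than waiting for the next incoming log:

```python
import time

class MultilineBuffer:
    """Conceptual model of flush_ms: a pending concatenation is emitted once
    flush_ms has elapsed since the last appended line."""
    def __init__(self, flush_ms):
        self.flush_s = flush_ms / 1000.0
        self.pending = []
        self.last_append = None

    def append(self, line):
        self.pending.append(line)
        self.last_append = time.monotonic()

    def poll(self):
        """Called periodically by a timer callback; returns a record or None."""
        if self.pending and time.monotonic() - self.last_append >= self.flush_s:
            record, self.pending = "\n".join(self.pending), []
            return record
        return None

buf = MultilineBuffer(flush_ms=50)
buf.append("Exception in thread main")
buf.append("    at com.myproject.module.MyProject.badMethod(MyProject.java:22)")
print(buf.poll() is None)  # True: flush_ms has not yet elapsed
time.sleep(0.06)
print(buf.poll() is None)  # False: the buffered lines are flushed without
                           # waiting for another incoming log
```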

@squalou

squalou commented Mar 16, 2022

I'm afraid not: it's running. It is a docker container running side by side with an application; if one of them is shut down, the other is killed too. (That's on purpose. Without giving too many details, it's "one task" in the ECS sense, with two running containers.)

The application logs absolutely nothing for a few minutes after it has started (on purpose).
Then, when the application logs again, the last message (with its original timestamp from those minutes ago) is sent to the output.

@PettitWesley
Contributor Author

PettitWesley commented Mar 16, 2022

> Then, when the application logs again, last message (with its original timestamp from these minutes ago) is sent to output.

@squalou Hmmm, I am confused... this was the old behavior before I added the buffer setting. With the buffer and flush_ms settings it should now flush that pending log. If you can provide more details or some easy way for me to repro this bug report, that'd really help.

> that's on purpose, not to give too much details,its "one task" in ECS sense, with two running containers

I understand; I work for ECS and I created FireLens (which I assume is what you are using). You may want to open an issue in the AWS repo or contact AWS Support for further investigation: https://github.com/aws/aws-for-fluent-bit

@squalou

squalou commented Mar 16, 2022

Thank you for the details, I'll get in touch with them.
Maybe I'm out of luck and the version I use is not the one I assume is in use, or maybe it has been built with an incomplete patch, or whatever.

I use public.ecr.aws/aws-observability/aws-for-fluent-bit:2.22.0 since stable tag did not yet include the multiline 'working' behaviour at all.

I should try another one when ready and stable I guess.

@PettitWesley
Contributor Author

@squalou Hmmm, 2.22.0 should work. Again, please contact AWS Support, but if you can also come up with any sort of simplified case that repros this and send it to me, that'd help. If you need to send details that are confidential and can't be shared on GitHub, AWS Support can facilitate that.

@mahima145

I am trying to read a whole multiline message via the systemd input plugin.

Jan 06 13:03:00 edge-Virtual-Machine 9e46f1b5f888[13342]: trce: IoTEdge.Services.IoTHubDeviceApiV2Service[0]
Jan 06 13:03:00 edge-Virtual-Machine 9e46f1b5f888[13342]: [2022-01-06T07:33:00.5895681Z]: OnReceiveMessage({
Jan 06 13:03:00 edge-Virtual-Machine 9e46f1b5f888[13342]: "iothub-ack": "full",
Jan 06 13:03:00 edge-Virtual-Machine 9e46f1b5f888[13342]: "msgType": "ack",
Jan 06 13:03:00 edge-Virtual-Machine 9e46f1b5f888[13342]: "correlationId": "147b8b26-3a20-4299-9562-fa5f3dc1a123",
Jan 06 13:03:00 edge-Virtual-Machine 9e46f1b5f888[13342]: "action": "model.query",
Jan 06 13:03:00 edge-Virtual-Machine 9e46f1b5f888[13342]: "content-encoding": "utf-8"
Jan 06 13:03:00 edge-Virtual-Machine 9e46f1b5f888[13342]: })

With the current configuration, the message is trimmed after one line.

Jan 06 14:41:40 edge-Virtual-Machine fae510e87e43[13342]: [26] proxy_log: [1641460295.254838000, {"PRIORITY"=>"6", "_TRANSPORT"=>"journal", "_SELINUX_CONTEXT"=>"unconfined
Jan 06 14:41:40 edge-Virtual-Machine fae510e87e43[13342]: ", "_COMM"=>"dockerd", "_EXE"=>"/usr/bin/dockerd",
"_CMDLINE"=>"/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock", "_SYSTEMD_CGROUP"=>"/system.slice/docker.service", "_SYSTEMD_UNIT"=>"docker.service",
"_SYSTEMD_INVOCATION_ID"=>"97055008b941455fb40f432fada2445c", "IMAGE_NAME"=>"abc.azurecr.io/edge/fluent-bit:v1", "CONTAINER_TAG"=>"fae510e87e43", "SYSLOG_IDENTIFIER"=>"fae510e87e43",
"CONTAINER_ID"=>"fae510e87e43","MESSAGE"=>"trce: IoTEdge.Services.IoTHubDeviceApiV2Service[0]"]", "_SOURCE_REALTIME_TIMESTAMP"=>"1641460290275066"}]", "_SOURCE_REALTIME_TIMESTAMP"=>"1641460295254181"}]

The MESSAGE field should contain the whole message.

The regex patterns I am using seem to work with the tail input plugin, but not with systemd.

parser.conf

[MULTILINE_PARSER]
    name     multiline-regex-test
    type     regex
    flush_ms 1000
    rule "start_state" "/(^[A-Z][a-z]\s[\d-]+\s[\d:]+\s+Virtual-Machine\s+\d+[a-z0-9]+[[a-z0-9]]:)/" "cont"
    rule "cont"        "/([a-z]:\s+[A-Z].)/" "cont"
    rule "cont"        "/([\d*-\d\d-\d*\S*])/" "cont"
    rule "cont"        "/("[a-z][A-Z]\S:\s\S*)(\S*)/" "cont"

fluent-bit.conf

[SERVICE]
    Flush        1
    Log_Level    info
    Parsers_File parser.conf

[INPUT]
    Name           systemd
    Tag            systemd_log
    Path           /var/log/journal
    Systemd_Filter _SYSTEMD_UNIT=docker.service

[FILTER]
    Name                  multiline
    Match                 *
    multiline.key_content log
    multiline.parser      multiline-regex-test

[OUTPUT]
    Name  stdout
    Match *

Please help me with this issue. What am I doing wrong here?

@PettitWesley
Contributor Author

@mahima145 Use https://rubular.com/ to test your regex against your logs, AFAICT, the start state doesn't match your first log.

@mahima145

@PettitWesley that's a mistake I made while copying. This is the start state that matches the log when using tail:
rule "start_state" "/(^[A-Z][a-z]\s[\d-]+\s[\d:]+\s+edge-Virtual-Machine\s+\d+[a-z0-9]+[[a-z0-9]]:)/" "cont"
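For anyone following along, that rule can be checked locally against the quoted journal line. A quick sanity check with Python's re module (Fluent Bit itself uses the Onigmo regex engine, which should behave the same for this pattern) suggests the start state still doesn't match, because [A-Z][a-z] allows only two letters before the first \s while the month token "Jan" has three:

```python
import re

line = 'Jan 06 13:03:00 edge-Virtual-Machine 9e46f1b5f888[13342]: trce: IoTEdge.Services.IoTHubDeviceApiV2Service[0]'
start_state = re.compile(r'(^[A-Z][a-z]\s[\d-]+\s[\d:]+\s+edge-Virtual-Machine\s+\d+[a-z0-9]+[[a-z0-9]]:)')
print(bool(start_state.search(line)))  # False: "[A-Z][a-z]" consumes only "Ja", then "\s" hits "n"
```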

@lubingfeng

Which Fluent Bit release is this fix merged into, 1.8.x or 1.9.x?

@PettitWesley
Contributor Author

This was released in 1.8.12: https://fluentbit.io/announcements/v1.8.12/

Due to a mistake, we accidentally didn't include it in the 1.9 series at first, but 1.9.3 adds it back in: https://fluentbit.io/announcements/v1.9.3/

@PettitWesley PettitWesley changed the title Multiline Filter example not working with Forward input Multiline Filter example does not work with Forward input May 2, 2023