
AWS for Fluent Bit 2.31.12

@PettitWesley released this 19 Jun 19:54
· 93 commits to mainline since this release

This release includes:

  • Fluent Bit 1.9.10
  • Amazon CloudWatch Logs for Fluent Bit 1.9.4
  • Amazon Kinesis Streams for Fluent Bit 1.10.2
  • Amazon Kinesis Firehose for Fluent Bit 1.7.2

Compared to 2.31.11, this release adds:

  • Bug - Fix incorrect decrementing of s3_key_format $INDEX on Multipart Upload failure aws-for-fluent-bit:653
  • Bug - cloudwatch go plugin: fix UTF-8 calculation of payload length to account for invalid Unicode bytes that are replaced with the 3-byte Unicode replacement character. This bug could lead to an InvalidParameterException from CloudWatch when the payload was calculated to be over the limit after character replacement. cloudwatch:330
  • Bug - fix SIGSEGV crash issue in exec input due to memory initialization aws-for-fluent-bit:661
  • Bug - record_accessor/rewrite_tag fix to allow single character rules fluent-bit:7330
  • Bug - filter_modify: fix memory clean up that can lead to crash fluent-bit:7368
  • Bug - http input: fix memory initialization issue and enable on Windows fluent-bit:7008
  • Enhancement - this release also includes a number of patches from upstream 2.x releases and library upgrades that are not associated with any end-user-impacting issue but should improve stability.
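
The s3_key_format fix above applies to the S3 output plugin, where $INDEX is a sequential counter embedded in the object key and incremented per uploaded file. A minimal configuration sketch that uses it (bucket name, region, and key path are illustrative, not from this release):

```
[OUTPUT]
    Name             s3
    Match            *
    # illustrative bucket and region
    bucket           my-example-bucket
    region           us-east-1
    total_file_size  50M
    # $INDEX increments for each uploaded object; the fix prevents it
    # from being wrongly decremented when a Multipart Upload fails
    s3_key_format    /app-logs/$TAG/$INDEX.gz
```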

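The CloudWatch fix above concerns payload sizing: before sending, invalid UTF-8 bytes are replaced with the 3-byte replacement character U+FFFD, so the payload length must be computed on the replaced form rather than the raw bytes. A minimal Go sketch of the idea (effectiveLength is a hypothetical helper for illustration, not the plugin's actual function):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// effectiveLength returns the byte length a message will have after
// every invalid UTF-8 byte is replaced with U+FFFD (3 bytes).
func effectiveLength(s string) int {
	n := 0
	for i := 0; i < len(s); {
		r, size := utf8.DecodeRuneInString(s[i:])
		if r == utf8.RuneError && size == 1 {
			n += 3 // invalid byte becomes the 3-byte replacement character
		} else {
			n += size // valid rune keeps its encoded size
		}
		i += size
	}
	return n
}

func main() {
	fmt.Println(effectiveLength("hello"))       // 5: all ASCII
	fmt.Println(effectiveLength("caf\xc3\xa9")) // 5: valid 2-byte é
	fmt.Println(effectiveLength("bad\xff"))     // 6: \xff grows to 3 bytes
}
```

Sizing the payload against the raw byte count instead of this replaced length is what could push a batch over CloudWatch's limit and trigger the InvalidParameterException.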
We ran the newly released image in our ECS load testing framework; the results below benchmark aws-for-fluent-bit under different input loads. Learn more about the load test.

| plugin | source | | 20 MB/s | 25 MB/s | 30 MB/s |
|---|---|---|---|---|---|
| kinesis_firehose | stdstream | Log Loss | ✅ | ✅ | ✅ |
| kinesis_firehose | stdstream | Log Duplication | ✅ | ✅ | 0%(5500) |
| kinesis_firehose | tcp | Log Loss | ✅ | ✅ | 0%(5557) |
| kinesis_firehose | tcp | Log Duplication | ✅ | 0%(4000) | 0%(6000) |
| kinesis_streams | stdstream | Log Loss | ✅ | ✅ | 0%(10462) |
| kinesis_streams | stdstream | Log Duplication | ✅ | ✅ | 0%(117769) |
| kinesis_streams | tcp | Log Loss | ✅ | ✅ | ✅ |
| kinesis_streams | tcp | Log Duplication | ✅ | 0%(1000) | 0%(9915) |
| s3 | stdstream | Log Loss | ✅ | ✅ | ✅ |
| s3 | stdstream | Log Duplication | ✅ | ✅ | ✅ |
| s3 | tcp | Log Loss | ✅ | ✅ | ✅ |
| s3 | tcp | Log Duplication | ✅ | ✅ | ✅ |

| plugin | source | | 1 MB/s | 2 MB/s | 3 MB/s |
|---|---|---|---|---|---|
| cloudwatch_logs | stdstream | Log Loss | ✅ | ✅ | ✅ |
| cloudwatch_logs | stdstream | Log Duplication | ✅ | ✅ | ✅ |
| cloudwatch_logs | tcp | Log Loss | ✅ | ✅ | ✅ |
| cloudwatch_logs | tcp | Log Duplication | ✅ | ✅ | ✅ |

Note:

  • The green check ✅ in the table means no log loss and no log duplication.
  • The number in parentheses is the number of affected records out of the total input records. For example, 0%(1064/1.8M) under 30 MB/s throughput means 1064 duplicate records out of 1.8M input records, so the log duplication percentage rounds to 0%.
  • Log loss is the percentage of data lost, and log duplication is the percentage of duplicate logs received at the destination. Your results may differ because they are influenced by many factors, such as different configurations and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, which means it is random.