write-only SAS token fails sending data to blob #9459

Closed
uristernik opened this issue Oct 2, 2024 · 0 comments · Fixed by #9457

Bug Report

Describe the bug

We use SAS token authentication and want to ensure that all fluent-bit clients have write-only permissions.

When we create a SAS token with write-only permissions, we see the following errors:

[2024/10/01 15:40:33] [error] [engine] chunk '17-1727797215.111211921.flb' cannot be retried: task_id=1, input=tail.0 > output=azure_blob.0
[2024/10/01 15:40:35] [ warn] [engine] failed to flush chunk '17-1727797225.102833884.flb', retry in 7 seconds: task_id=0, input=tail.0 > output=azure_blob.0 (out_id=0)
[2024/10/01 15:40:42] [error] [engine] chunk '17-1727797225.102833884.flb' cannot be retried: task_id=0, input=tail.0 > output=azure_blob.0
[2024/10/01 15:40:45] [ warn] [engine] failed to flush chunk '17-1727797235.100613583.flb', retry in 9 seconds: task_id=0, input=tail.0 > output=azure_blob.0 (out_id=0)

To Reproduce

    [OUTPUT]
        name                  azure_blob
        match                 *
        account_name          abcdefg
        auth_type             sas
        sas_token             ${STORAGE_ACCOUNT_SAS_TOKEN}
        path                  kubernetes
        container_name        logs
        auto_create_container off
        tls                   on
        blob_type             blockblob
        endpoint              https://abcdefg/

Expected behavior

Setting auto_create_container to off should make the plugin assume the container already exists, so it should not need any container-level (read/list) permissions.
This is especially important when you want to hand clients a write-only SAS token, since a single account SAS token cannot be scoped to allow read on containers but write on blobs/objects: https://learn.microsoft.com/en-us/rest/api/storageservices/create-account-sas#blob-service
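
For context, a write-only account SAS carries `sp=w` in its query string. The sketch below shows roughly how such a token is built (illustrative only; the function name is hypothetical, and the string-to-sign shown follows the account-SAS scheme for service version 2018-11-09; check the Azure docs for the exact layout for your version):

```python
import base64
import hashlib
import hmac
import urllib.parse

def write_only_account_sas(account_name: str, account_key_b64: str, expiry: str) -> str:
    """Build a write-only account SAS query string (sp=w).

    Illustrative sketch of the account-SAS signing scheme, not Fluent Bit code.
    """
    permissions = "w"      # write only: no read (r) or list (l)
    services = "b"         # blob service
    resource_types = "co"  # container + object, so Put Blob is allowed
    version = "2018-11-09"
    # Account-SAS string-to-sign: account, permissions, services, resource
    # types, start, expiry, IP range, protocol, version, trailing newline.
    string_to_sign = "\n".join([
        account_name, permissions, services, resource_types,
        "", expiry, "", "https", version, "",
    ])
    key = base64.b64decode(account_key_b64)
    sig = base64.b64encode(
        hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    ).decode()
    return urllib.parse.urlencode({
        "sv": version, "ss": services, "srt": resource_types,
        "sp": permissions, "se": expiry, "spr": "https", "sig": sig,
    })
```

A token like this can write blobs but cannot list or read containers, which is exactly the shape of token that trips the plugin's container check.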

Your Environment

  • Version used: 3.1.7.1
  • Configuration: attached below
  • Environment name and version (e.g. Kubernetes? What version?): Kubernetes EKS v1.28.3
  • Server type and version: -
  • Operating System and version: -
  • Filters and plugins: kubernetes, modify, parse, out_blob

Additional context

    [SERVICE]
        Daemon Off
        Flush 10
        Log_Level warn
        Parsers_File /fluent-bit/etc/parsers.conf
        Parsers_File /fluent-bit/etc/conf/custom_parsers.conf
        HTTP_Server On
        HTTP_Listen 0.0.0.0
        HTTP_Port 2020
        Health_Check On

    [INPUT]
        Name tail
        multiline.parser docker, cri
        Path /var/log/containers/*.log
        Tag kube.*
        # Support log lines up to 1MB
        Buffer_Max_Size 1MB
        Buffer_Chunk_Size 1MB
        Mem_Buf_Limit 1024MB
        Skip_Long_Lines On
        Refresh_Interval 1
        db logs.db

    [FILTER]
        Name kubernetes
        Match kube.*
        Merge_Log On
        Keep_Log On
        K8S-Logging.Parser Off
        K8S-Logging.Exclude On
        Buffer_Size 0
        Merge_Log_Key parsed_message
        Kube_URL https://${KUBERNETES_SERVICE_HOST}:${KUBERNETES_SERVICE_PORT}
        tls.vhost kubernetes.default.svc

    [FILTER]
        Name record_modifier
        Match *
        Record cluster_name ${cluster_name}
        Record environment ${env}

    [FILTER]
        Name parser
        Match *
        Key_Name log
        Parser klog_parser
        Parser kube_generic_parser
        Parser datadog_parser
        Parser coredns_parser
        Parser fluentbit_parser
        Parser grafana_parser
        Reserve_Data On
        Preserve_Key On

    # We lift these fields because the modify plugin does not support nested fields. We want to end up with only msg, not message
    [FILTER]
        Name nest
        Match *
        Operation lift
        Wildcard ^(level|msg|message)$
        Nested_under parsed_message
        Add_prefix p_

    [FILTER]
        Name modify
        Match *
        Condition Key_does_not_exist $p_msg
        Rename p_message p_msg

    [FILTER]
        Name modify
        Match *
        Condition Key_does_not_exist $p_time
        Rename time p_time

    # If we didn't match any of the above, we send the raw log as msg
    [FILTER]
        Name modify
        Match *
        Condition Key_does_not_exist $p_msg
        Rename log p_msg

    [FILTER]
        Name nest
        Match *
        Operation nest
        Wildcard p_*
        Nest_under parsed_message
        Remove_prefix p_
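
For readers unfamiliar with the lift/rename/nest pattern, here is a rough Python sketch of what this filter chain does to a single record (illustrative only, not Fluent Bit internals; the function name is made up):

```python
def lift_rename_nest(record: dict) -> dict:
    """Approximate the nest(lift) -> modify -> nest(nest) filter chain."""
    # Lift: pull level/msg/message out of parsed_message with a p_ prefix.
    parsed = record.pop("parsed_message", {})
    for key in ("level", "msg", "message"):
        if key in parsed:
            record["p_" + key] = parsed.pop(key)
    # Modify: prefer p_msg; fall back to p_message, then the raw log line.
    if "p_msg" not in record:
        if "p_message" in record:
            record["p_msg"] = record.pop("p_message")
        elif "log" in record:
            record["p_msg"] = record.pop("log")
    if "p_time" not in record and "time" in record:
        record["p_time"] = record.pop("time")
    # Nest: fold every p_* key back under parsed_message, dropping the prefix.
    nested = {k[2:]: record.pop(k) for k in list(record) if k.startswith("p_")}
    record["parsed_message"] = nested
    return record
```

So a record that matched a parser ends up with parsed_message.msg from the parsed fields, while an unparsed record falls back to the raw log line.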

    [FILTER]
        Name record_modifier
        Match *
        Allowlist_key parsed_message
        Allowlist_key kubernetes
        Allowlist_key cluster_name
        Allowlist_key environment
        Allowlist_key host_info
        Uuid_key log_id