
S3 Output Plugin not working with IMDSv1 and kube2iam #6479

Closed
joshbranham opened this issue Nov 29, 2022 · 8 comments


joshbranham commented Nov 29, 2022

Bug Report

Describe the bug
When using the AWS S3 output plugin together with IAM assumed roles in Kubernetes, Fluent-bit fails to obtain credentials, seemingly because it does not support IMDSv1. To work around this, we can explicitly set the role_arn value to the role that kube2iam would otherwise have assumed automatically on the Pod's behalf, had Fluent-bit hit the IMDSv1 metadata endpoint for credentials.

To Reproduce
Run Fluent-bit on an EC2 instance with IMDSv1 only, or in a Kubernetes cluster using kube2iam or similar (which currently supports only IMDSv1), and attempt to write logs to an S3 bucket.
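
A quick way to confirm which IMDS version is actually reachable from the pod is something like this (a sketch against the standard EC2 metadata endpoints):

# IMDSv1: a plain GET returns the instance profile / kube2iam role name
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

# IMDSv2: a session token has to be requested first and passed as a header
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/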

Expected behavior
When provided no credentials, Fluent-bit should make a request to the metadata service to retrieve the instance profile credentials.

Your Environment

  • Version used: 1.9.4
  • Configuration:
  • Environment name and version (e.g. Kubernetes? What version?): Kubernetes, using kube2iam for IAM roles
  • Server type and version:
  • Operating System and version:
  • Filters and plugins:

Additional context
Is this expected to work only with IMDSv2? I noticed the filter_s3 plugin supports setting imds_version v1, but output_s3 does not.
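
For reference, the filter-side option looks something like this (assuming it is the aws metadata filter that exposes imds_version):

[FILTER]
    Name          aws
    Match         *
    imds_version  v1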

Here is the error output:

[2022/11/29 18:55:08] [error] [aws_client] auth error, refreshing creds
[2022/11/29 18:55:08] [error] [aws_credentials] Shared credentials file /root/.aws/credentials does not exist
[2022/11/29 18:55:08] [error] [output:s3:s3.0] PutObject API responded with error='AccessDenied', message='Access Denied'
[2022/11/29 18:55:08] [error] [output:s3:s3.0] Raw PutObject response: HTTP/1.1 403 Forbidden
x-amz-request-id: 5S6TFXEWGJEF2T70
x-amz-id-2: <redacted>
Content-Type: application/xml
Transfer-Encoding: chunked
Date: Tue, 29 Nov 2022 18:55:07 GMT
Server: AmazonS3
Connection: close

I would have expected the above log output to show an attempt to connect to IMDS.

This might be related, although we are running 1.9.4: #4388

@joshbranham joshbranham changed the title S3 Output Issue not working with IMDSv1 and kube2iam S3 Output Plugin not working with IMDSv1 and kube2iam Nov 29, 2022
@sebsthiel

I am having the same problem. We have a fluentbit sidecar that needs to write to an S3 bucket.
Even though we have specified the role_arn, we are also getting:
[2022/12/02 13:26:00] [error] [aws_client] auth error, refreshing creds
[2022/12/02 13:26:00] [error] [aws_credentials] Shared credentials file /.aws/credentials does not exist

@joshbranham (Author)

> I am having the same problem. We have a fluentbit sidecar that needs to write to an S3 bucket.
>
> Even though we have specified the role_arn, we are also getting:
>
> [2022/12/02 13:26:00] [error] [aws_client] auth error, refreshing creds
> [2022/12/02 13:26:00] [error] [aws_credentials] Shared credentials file /.aws/credentials does not exist

To clarify, it only works for us if we specify the role_arn. With kube2iam, when a process makes a request to the metadata service, kube2iam intercepts that request, assumes the role we define in the Pod annotations, and hands the STS credentials back to the process.
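
For context, kube2iam picks the role to assume from a Pod annotation, along these lines (names are placeholders, not our actual setup):

apiVersion: v1
kind: Pod
metadata:
  name: fluent-bit
  annotations:
    # kube2iam intercepts IMDS calls from this pod and assumes this role
    iam.amazonaws.com/role: role-name
spec:
  containers:
    - name: fluent-bit
      image: fluent/fluent-bit:1.9.4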


github-actions bot commented Mar 3, 2023

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days. Maintainers can add the exempt-stale label.

@github-actions github-actions bot added the Stale label Mar 3, 2023
@joshbranham (Author)

Still an issue.

@github-actions github-actions bot removed the Stale label Mar 4, 2023

github-actions bot commented Jun 2, 2023

This issue is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 5 days. Maintainers can add the exempt-stale label.

@github-actions github-actions bot added the Stale label Jun 2, 2023

github-actions bot commented Jun 7, 2023

This issue was closed because it has been stalled for 5 days with no activity.

@github-actions github-actions bot closed this as not planned Jun 7, 2023
@manojkal

@joshbranham @sebsthiel
I am facing the same issue as well. Did you figure out a solution?

@joshbranham (Author)

> @joshbranham @sebsthiel I am facing the same issue as well. Did you figure out a solution?

We "fixed" it by expliciting specifying the role to assume in the [OUTPUT] config as shown below. In theory this shouldn't be required, since the sdk should be reaching out to the metadata service, where kube2iam will hijack the request and assume the correct role based on pod annotations.

[OUTPUT]
    Name            s3
    Match           ......
    bucket          ${SOME_BUCKET}
    region          us-east-1
    compression     gzip
    use_put_object  on
    total_file_size 5M

    # For some reason, kube2iam isn't working out of the box, so we need to explicitly
    # assume this role in the config.
    role_arn        arn:aws:iam::ACCOUNT:role/role-name
   ........
