
efs-csi-controller won't start if IMDS access is blocked #313

Closed
korbin opened this issue Jan 27, 2021 · 19 comments · Fixed by #681
Labels
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


korbin commented Jan 27, 2021

/kind bug

What happened?

With IMDS access disabled on Bottlerocket hosts, per best practices (https://docs.aws.amazon.com/eks/latest/userguide/best-practices-security.html), pods from the efs-csi-controller Deployment will not start.

We need a similar workaround for the controller (the node DaemonSet already has one; see below), or for the controller to simply not need IMDS access at all.

The following is emitted to the log and the controller crashes:

F0127 18:13:01.145009 1 driver.go:54] could not get metadata from AWS: EC2 instance metadata is not available

What you expected to happen?

I expected efs-csi-controller to start. Being able to pass the region, instance ID, and other IMDS-sourced information explicitly would be an acceptable alternative.

How to reproduce it (as minimally and precisely as possible)?

  • Block IMDS access
  • Deploy efs-csi-controller
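
A minimal sketch of how IMDS access typically gets blocked, assuming nodes are launched from an EC2 launch template (the CloudFormation resource name below is illustrative): with a hop limit of 1, containers in pods that do not use hostNetwork cannot reach 169.254.169.254.

```yaml
# Hypothetical CloudFormation fragment: keep IMDS on for the node itself,
# but stop pods without hostNetwork from reaching it.
Resources:
  NodeLaunchTemplate:            # illustrative resource name
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        MetadataOptions:
          HttpEndpoint: enabled        # IMDS stays available to the host
          HttpTokens: required         # require IMDSv2 tokens
          HttpPutResponseHopLimit: 1   # one hop: pods off the host network are cut off
```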

Anything else we need to know?:

The DaemonSet uses hostNetwork: true to regain access to the IMDS (#188)

Environment

  • Kubernetes version (use kubectl version):
    EKS 1.18
  • Driver version:
    master
@k8s-ci-robot added the kind/bug label Jan 27, 2021

korbin commented Feb 1, 2021

I was able to get the efs-csi-controller running by removing the liveness check and switching to hostNetwork: true until the IMDS dependency can be resolved.
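
As a rough sketch, assuming the upstream manifest layout (a Deployment named efs-csi-controller in kube-system with a container named efs-plugin), the workaround amounts to a strategic-merge patch like the following; the field names should be checked against the version actually deployed.

```yaml
# Hypothetical patch for the efs-csi-controller Deployment: run on the host
# network so IMDS is reachable again, and drop the failing liveness probe
# until the IMDS dependency is removed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: efs-csi-controller
  namespace: kube-system
spec:
  template:
    spec:
      hostNetwork: true
      containers:
        - name: efs-plugin
          livenessProbe: null   # setting a field to null removes it in a strategic-merge patch
```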


wongma7 commented Feb 1, 2021

Yes, it seems we'll have to add hostNetwork: true. Thank you for trying this and verifying that it fixes the issue.

(I am looking into ways to avoid talking to instance metadata altogether, since we only use it for basic things like the instance ID, but I am not yet sure whether that is feasible.)

@davidshtian

Tried this in an on-premises physical-server environment (not an AWS environment), and it still throws the error below. Does any extra configuration need to be done for this scenario? Thanks.

could not get metadata from AWS: EC2 instance metadata is not available

@davidshtian

@korbin Hi Korbin, I got the same issue in an on-premises physical-server Kubernetes environment. For this workaround, I tried removing the livenessProbe section of the efs-plugin container in the controller Deployment (https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/master/deploy/kubernetes/base/controller-deployment.yaml), but the error still exists. Do I also need to remove the liveness-probe container? Thanks.

I was able to get the efs-csi-controller running by removing the liveness check and switching to hostNetwork: true until the IMDS dependency can be resolved.


groodt commented Apr 26, 2021

I've also been running into this.

Another thing to consider is how to ensure that the ports used by the aws-efs-csi-driver do not conflict with the ports used by the aws-ebs-csi-driver. Both applications seem to use a similar approach, with a Deployment and a DaemonSet that require hostNetwork and hostPort to function correctly when IMDS access is blocked.
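
To make the conflict concrete, here is a hypothetical excerpt of what de-conflicting the health-check port on the EFS node DaemonSet could look like when hostNetwork is in play; the container names, sidecar flag, and port value are assumptions and should be checked against the actual manifests.

```yaml
# Hypothetical excerpt of the efs-csi-node DaemonSet pod spec: with hostNetwork,
# the healthz port binds on the node, so it must not collide with the port the
# aws-ebs-csi-driver pods already use.
containers:
  - name: efs-plugin
    ports:
      - name: healthz
        containerPort: 9810        # example value, chosen to avoid the EBS driver's port
        protocol: TCP
    livenessProbe:
      httpGet:
        path: /healthz
        port: healthz
  - name: liveness-probe
    args:
      - --csi-address=/csi/csi.sock
      - --health-port=9810         # must match the healthz containerPort above
```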


wongma7 commented May 3, 2021

@groodt Yes, the poor choice of default port definitely needs fixing: https://github.com/kubernetes-sigs/aws-efs-csi-driver/pull/437/files

Regarding the need for instance metadata in general, we arrived at a fix in EBS and will probably copy it over here: kubernetes-sigs/aws-ebs-csi-driver#855. The trade-off is that the driver will need permission to get Nodes. But that is a read-only permission and can be included in the RBAC artifacts, so it won't require any extra work on the part of users.
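
For illustration, the extra permission amounts to a read-only RBAC rule roughly like the one below (the name is a sketch; the real rule would ship with the driver's RBAC manifests):

```yaml
# Sketch of the read-only permission that lets the controller read instance and
# topology information from its own Node object instead of IMDS.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: efs-csi-node-reader   # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
```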


groodt commented May 4, 2021

Thanks! I think that sharing a common approach with the EBS driver makes sense if possible. Normalising the use of IRSA wherever possible can only be a good thing, particularly for the AWS-provided add-ons and utilities.
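
For context, IRSA boils down to annotating the driver's ServiceAccount with an IAM role ARN, so the pod gets AWS credentials via the EKS OIDC provider instead of IMDS; the account ID and role name below are placeholders.

```yaml
# Sketch of an IRSA-enabled ServiceAccount for the controller; the annotation
# is what ties the pod's projected token to an IAM role.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: efs-csi-controller-sa
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/EFSCSIControllerRole  # placeholder ARN
```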


groodt commented Jun 27, 2021

@wongma7 Thanks for making progress on the ebs-csi-driver (kubernetes-sigs/aws-ebs-csi-driver#821). I've been able to successfully remove hostNetwork access for the controller. Any updates on a similar approach for the efs-csi-driver?

I would love to remove hostNetwork access for both EFS and EBS (node and controllers: 4 workloads total). So far, I've only been able to remove hostNetwork for the ebs-csi-controller. (1/4 workloads).


groodt commented Sep 20, 2021

I have some updates here. I can confirm that aws-ebs-csi-driver as of v1.3.0 is able to run successfully without hostNetwork using IRSA. kubernetes-sigs/aws-ebs-csi-driver#821 (comment)

@wongma7 Is it reasonable to expect that the same will be possible with the aws-efs-csi-driver in future?


wongma7 commented Sep 20, 2021

Yes, that is totally reasonable; the EFS driver needs to be able to run without hostNetwork/IMDS for exactly the same reasons as EBS. The effort entails copying the code and the test (an end-to-end test on a "real" EKS cluster whose nodes have IMDS disabled) from EBS over to here. I don't have an ETA, but that is the plan.


groodt commented Sep 20, 2021

That sounds awesome! I'll follow this issue for any updates. 🚀


Quarky9 commented Dec 8, 2021

@wongma7 Any updates on this issue? Really looking forward to removing hostNetwork... ;-)

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Mar 8, 2022
@niranjan94

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Mar 8, 2022
@jonathanrainer

I have raised a PR that I think should resolve this issue: #681

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jul 23, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Aug 22, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Sep 21, 2022