controller.yaml is inconsistently using volumes to mount csi.sock - PR with potential fix #213

Closed
frittentheke opened this issue Feb 14, 2019 · 6 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@frittentheke
Contributor

/kind bug

What happened?
When using the provided deployment YAML files to set up the controller, the driver-registrar container crashes.

What you expected to happen?
The whole pod to start up successfully and then register the CSI driver with the kubelet.

How to reproduce it (as minimally and precisely as possible)?
Simply apply https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/1b8d9d76b5ad775845aacd7533ce28309a72e03a/deploy/kubernetes/controller.yaml

Anything else we need to know?:
I attempted a fix - see #212

Environment

  • AWS region: eu-central-1
  • EC2 type: t3.medium
  • OS: Container Linux
  • Kubernetes version (use kubectl version): 1.13.3 // 1.14.0-alpha.3
  • Driver version: 0.3.0-alpha ("latest")

@k8s-ci-robot added the kind/bug label on Feb 14, 2019
@leakingtapan
Contributor

@frittentheke thanks for the issue and fix.
How did you create your cluster? And which feature flags are enabled? Surprisingly, I haven't seen this issue on my 1.13.0 cluster, which was created by kops.

@frittentheke
Contributor Author

@leakingtapan ... thanks for getting back to me this quickly!
I used kubeadm (1.13.3) and set

"feature-gates": "CSINodeInfo=true,CSIPersistentVolume=true,MountPropagation=true,VolumeSnapshotDataSource=true,KubeletPluginsWatcher=true,CSIDriverRegistry=true"

for both the kubelet and the kube-apiserver.
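
For context, a minimal sketch of how gates like these can be wired through a kubeadm config file; the API versions and the particular subset of gates shown are illustrative assumptions, not taken from this thread:

```yaml
# Hypothetical kubeadm config sketch: pass feature gates to the kube-apiserver
# and the kubelet in one file. Trim or extend the gate list for your cluster.
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.3
apiServer:
  extraArgs:
    feature-gates: "CSINodeInfo=true,CSIDriverRegistry=true,VolumeSnapshotDataSource=true"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSINodeInfo: true
  CSIDriverRegistry: true
```

kubeadm writes the KubeletConfiguration document into the kubelet's config file, so the gates reach both components without editing unit files by hand.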

@frittentheke
Contributor Author

Reading https://github.com/kubernetes/community/blob/master/contributors/design-proposals/storage/container-storage-interface.md#recommended-mechanism-for-deploying-csi-drivers-on-kubernetes over and over again, I am quite convinced that, for the controller pod, all that is required is a common, shared emptyDir for the driver to place its csi.sock socket in, so that the helpers (csi-provisioner, ...) are able to talk to it.
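
To illustrate the pattern (a sketch only, not the exact diff in #212; the volume name, socket path, endpoint wiring, and image tags are assumptions, and serviceAccount/RBAC wiring is omitted):

```yaml
# Sketch: the controller pod shares one emptyDir; the driver creates csi.sock
# inside it and each sidecar dials that same socket via --csi-address.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: ebs-csi-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ebs-csi-controller
  template:
    metadata:
      labels:
        app: ebs-csi-controller
    spec:
      containers:
        - name: ebs-plugin
          image: amazon/aws-ebs-csi-driver:latest   # illustrative tag
          env:
            - name: CSI_ENDPOINT
              value: unix:///var/lib/csi/sockets/pluginproxy/csi.sock
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
        - name: csi-provisioner
          image: quay.io/k8scsi/csi-provisioner:v1.0.1   # illustrative tag
          args:
            - --csi-address=/var/lib/csi/sockets/pluginproxy/csi.sock
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
      volumes:
        - name: socket-dir
          emptyDir: {}
```

No hostPath should be needed on the controller side; the kubelet plugin registration path only matters for the node DaemonSet.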

@leakingtapan
Contributor

I looked at the cluster-driver-registrar example. I am convinced too. It's much cleaner this way for the controller manifest.

@frittentheke thanks for sending out the fix.

@leakingtapan
Contributor

leakingtapan commented Feb 15, 2019

BTW, I noticed you are using a t3.medium instance; this will require #178 to be implemented, since Nitro instances use NVMe for EBS volumes.

@frittentheke
Contributor Author

@leakingtapan thanks for accepting my PR.

Yeah, NVMe-type devices on Nitro instances are currently still an issue, right. The idea stated in the corresponding issue, to NOT rely on any udev mapping of devices, is great. Container Linux already does things differently, and an EBS CSI driver should be fully independent of how the OS handles its device naming / mapping.

jsafrane pushed a commit to jsafrane/aws-ebs-csi-driver that referenced this issue on Jul 18, 2023: OCPBUGS-4185: Fix nodeService.getVolumesLimit() adding more instance …