support host iscsi simultaneously with kubelet iscsi (pvc) #1846
Another issue: during rke updates, new kubelet images get a new IQN. This can lead to some pretty weird scenarios, I think.
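For context, open-iscsi reads the initiator IQN from `/etc/iscsi/initiatorname.iscsi`, so a kubelet image that ships its own copy of that file will present a different IQN after every image update. A quick way to see whether the host and the kubelet container agree, as a sketch (assuming RKE's default container name `kubelet`):

```sh
# Compare the initiator IQN the host uses with the one inside the kubelet
# container; if these differ, the two iscsi stacks register as separate
# initiators, and target ACLs set up for one will not match the other.
cat /etc/iscsi/initiatorname.iscsi
docker exec kubelet cat /etc/iscsi/initiatorname.iscsi
```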
So I've prototyped this by adding a wrapper script on the host and subsequently adding the matching bind mounts to the kubelet service. I've confirmed that sessions are accessible from host/kubelet/etc and all things are shared as appropriate. Would be great to consider some proper integration with rke.
This issue/PR has been automatically marked as stale because it has not had activity (commit/comment/label) for 60 days. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.
Not stale.
I was just about to set up iSCSI both on the host and for k8s PVCs. Thanks for keeping this issue open, it saved me from a lot of headaches :)
I've found some OpenEBS install notes that also require mounting iSCSI from the host under RKE: https://docs.openebs.io/docs/next/prerequisites.html#rancher

```yaml
services:
  kubelet:
    extra_binds:
      - "/etc/iscsi:/etc/iscsi"
      - "/sbin/iscsiadm:/sbin/iscsiadm"
      - "/var/lib/iscsi:/var/lib/iscsi"
      - "/lib/modules"
      - "/var/openebs/local:/var/openebs/local"
      - "/usr/lib64/libcrypto.so.10:/usr/lib/libcrypto.so.10"
      - "/usr/lib64/libopeniscsiusr.so.0.2.0:/usr/lib/libopeniscsiusr.so.0.2.0"
```

I haven't tried setting this up (at least not yet), but I'm leaving it here as it might come in handy if anyone runs into problems later.
Can you clarify the ask for this issue? Are you asking for the solution you described to be integrated as a default, with a single flag that sets all of that up? Or is it not fully solved yet?
I’m hoping for some ‘supported’ way of dealing with this generally. The only sane way to handle it is ‘everything host’, but that may be a breaking change for people (though probably a good breaking change).
As it stands, the host `iscsid` and the `iscsid` inside the `kubelet` container will conflict with each other (whichever is launched first wins). If iscsi is not used for PVCs but is used by the host generally, all will be well. If the host does not use iscsi but the cluster uses iscsi PVCs, all will be well. If you need both, things get messy.

I think there are 2 use-cases where this is important:

1. `csi` drivers which use `iscsid` (ie: NetApp trident et al)
2. in-cluster legacy iscsi workloads (`csi` drivers tend to leverage the host daemon/binaries)

I'm currently working on a `csi` driver and bumped into this situation when deploying it to a cluster which already has a 'legacy' iscsi provisioner installed. Both work independently, but once deployed jointly in the same cluster things blow up.

I'm not entirely sure how to solve this. I think it can be solved with the following:

1. mount `/var/lib/iscsi` and `/etc/iscsi` into the `kubelet` container
2. an `iscsiadm` wrapper script (as noted in the blog entry below) which simply invokes the host `iscsiadm` in a chroot inside the container (apparently the client binary needs to match the version of the running daemon)

Given that step 2 generally requires a full host mount of the root (`/`), step 1 may not be required.

Interested in hearing other thoughts/feedback to hopefully find a solution.