use delegated zfs permissions without sudo access #151

Closed · bjornbouetsmith opened this issue Feb 20, 2022 · 86 comments


bjornbouetsmith commented Feb 20, 2022

Hi,

I really like this project - but would it be possible for the driver to read its credentials from a secret stored inside the Kubernetes cluster?

Right now the credentials are stored in plain text inside the values file, which seems like a really bad idea.

Edit: Also, I cannot seem to get a non-root user to work at all.

System info:

  • TrueNAS 12
  • Raw Kubernetes (k8s)

If I use username/password for httpConnection like this:

    httpConnection:
      protocol: http
      host: 192.168.0.201
      port: 80
      allowInsecure: true
      username: k8s
      password: xys

Then I get a 401 from the API - my guess is that non-root users are not allowed to use the API.

With the apiKey present instead

    httpConnection:
      protocol: http
      host: 192.168.0.201
      port: 80
      allowInsecure: true
      apiKey: "2-qEjchKvqI7QimbvONfM4vbNopomKuHpNC0ffnNY8QOPdXlyr0gxPDhSv1BgzXC05"

I get errors from the SSH connection:

failed to provision volume with StorageClass "freenas-nfs-csi": rpc error: code = Internal desc = Error: cannot create 'fast/k8s/nfs/vols/pvc-5e7afe1e-b3c2-4118-a413-5c6c17f92193': permission denied

If I add:

    zfs:
      cli:
        sudoEnabled: true

The error becomes different:

failed to provision volume with StorageClass "freenas-nfs-csi": rpc error: code = Internal desc = Error: sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper sudo: a password is required

Thanks for any hints - or even a message stating that it's not supported properly without root.

@travisghansen
Member

Can you send over the full values files (cleansed of secrets of course)? Generally you can run as non-root for ssh connection/operations but the api is root only. The apiKey setup you’re currently using is ideal for the api atm.

The values file does turn the config data into a k8s secret so it is stored as a secret in k8s. If that’s troublesome you can either create the secret manually without using the chart or you can use a project like helm secrets to encrypt your values file(s).
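If you want to verify that yourself, a rough sketch (the namespace and secret name here are assumptions - list the secrets in your release namespace to find the real one):

    # list the secrets the chart/release created, then inspect the driver config
    kubectl -n democratic-csi get secrets
    kubectl -n democratic-csi get secret <release-name>-driver-config -o yaml   # data values are base64-encoded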

@travisghansen
Member

Also what version of TrueNAS are you currently running?

@bjornbouetsmith
Author

Thanks - I am using version:

FreeBSD vmnas.root.dom 12.2-RELEASE-p6 FreeBSD 12.2-RELEASE-p6 df578562304(HEAD) TRUENAS  amd6

And if I SSH to the TrueNAS server and try to create a dataset, e.g.

zfs create fast/k8s/nfs/vols/test

TrueNAS asks me to authenticate via its "sudo" functionality, and when I type in the password the dataset is created.

Regarding storing the values file inside k8s - I did not realise it was stored as a secret. I just saw the values and they looked like plain text to me, but that might just have been my "UI" that decoded it.

csiDriver:
  # should be globally unique for a given cluster
  name: "org.democratic-csi.freenas-nfs"

storageClasses:
- name: freenas-nfs-csi
  defaultClass: true
  reclaimPolicy: Retain
  volumeBindingMode: Immediate
  allowVolumeExpansion: true
  parameters:
    fsType: nfs
  mountOptions:
  - noatime
  secrets:
    provisioner-secret:
    controller-publish-secret:
    node-stage-secret:
    node-publish-secret:
    controller-expand-secret:

driver:
  config:
    driver: freenas-nfs
    instance_id:
    httpConnection:
      protocol: http
      host: 192.168.0.201
      port: 80
      allowInsecure: true
      apiKey: "2-qEjchKvqI7QimbvONfM4vbNopomKuHpNC0ffnNY8QOPdXlyr0gxPDhSv1BgzXC05"
      apiVersion: 2
    sshConnection:
      host: 192.168.0.201
      port: 22
      username: k8s
      password: "<masked>"
    zfs:
      cli:
        sudoEnabled: true
      datasetParentName: fast/k8s/nfs/vols
      detachedSnapshotsDatasetParentName: fast/k8s/nfs/snaps
      datasetEnableQuotas: true
      datasetEnableReservation: false
      datasetPermissionsMode: "0777"
      datasetPermissionsUser: k8s
      datasetPermissionsGroup: wheel
    nfs:
      shareHost: 192.168.0.201
      shareAlldirs: false
      shareAllowedHosts: []
      shareAllowedNetworks: []
      shareMaprootUser: ""
      shareMaprootGroup: ""
      shareMapallUser: k8s
      shareMapallGroup: wheel

@travisghansen
Member

I guess helm itself stores the full values in the cluster as secrets as well yes (unless you've configured helm to use configmaps which I wouldn't recommend).

My guess is you haven't enabled passwordless sudo for the k8s user. This bit from the README may help:

# if on CORE 12.0-u3+ you should be able to do the following
# which will ensure it does not get reset during reboots etc
# at the command prompt
cli

# after you enter the truenas cli and are at that prompt
account user query select=id,username,uid,sudo_nopasswd

# find the `id` of the user you want to update (note, this is distinct from the `uid`)
account user update id=<id> sudo=true
account user update id=<id> sudo_nopasswd=true
# optional if you want to disable password
#account user update id=<id> password_disabled=true

# exit cli by hitting ctrl-d

# confirm sudoers file is appropriate
cat /usr/local/etc/sudoers
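As a quick sanity check (a sketch reusing the host/user from the example config above), you can confirm passwordless sudo works non-interactively, which is closer to how the driver invokes it:

    # -n makes sudo fail instead of prompting, mimicking the non-interactive behaviour
    ssh k8s@192.168.0.201 'sudo -n zfs list'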

@bjornbouetsmith
Author

I guess helm itself stores the full values in the cluster as secrets as well yes (unless you've configured helm to use configmaps which I wouldn't recommend).

My guess is you haven't enabled passwordless sudo for the k8s user. This bit from the README may help:

True - I did not notice that - it would be nice to have as a comment in the "values.yaml" example, so people know it's a hard requirement that passwordless sudo is set up.

But it seems like it does not change a thing - at least not for me:

From my "cli"

 {'id': 35, 'sudo': True, 'sudo_nopasswd': True, 'username': 'k8s'}]

From my sudoers

k8s ALL=(ALL) NOPASSWD: ALL
%k8s ALL=(ALL) ALL

But still my k8s cluster logs:

failed to provision volume with StorageClass "truenas-nfs": rpc error: code = Internal desc = Error: sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper sudo: a password is required

But to be fair - even though it seems like it's set up correctly on the truenas server - I cannot do it passwordless either when ssh'ing to the server manually and logging on.

So it seems like the passwordless sudo on truenas is not working as expected.

I will dig into this more - but at least I got a bit further.

Thanks

@travisghansen
Member

travisghansen commented Feb 21, 2022

The issue is the group...sudo has strange behavior in this regard. If you're going to enable sudo on the group as well do the following:

  • remove the sudo_nopasswd from the user
  • add the sudo_nopasswd to the group

It should then work as desired.

EDIT: alternatively, just remove sudo access from the group and you should also get the behavior you are after
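For what it's worth, the sudoers semantics behind this are that the last matching entry wins, so a group entry without NOPASSWD can override the user entry. A minimal illustration (generic sudoers, not TrueNAS-specific):

    # the user entry grants passwordless sudo ...
    k8s  ALL=(ALL) NOPASSWD: ALL
    # ... but this later group entry also matches and wins, so sudo prompts again
    %k8s ALL=(ALL) ALL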

@bjornbouetsmith
Author

It's very strange - now I can ssh in manually and do:

sudo zfs create fast/k8s/nfs/vols/test

Without getting asked for a password - but still I get the same error from k8s.

I will try your suggestion to only give it on the group or the user.

@travisghansen
Member

That is strange yeah :( let me know how it goes.

@bjornbouetsmith
Author

That is strange yeah :( let me know how it goes.

It works now - when I removed the sudo from the group.

Now I just need to figure out if I can prevent my k8s user from destroying my entire pool - just in case there is a bug in the csi framework.

But that is more a question for the truenas/zfs forum.

Thanks a lot for your support.

@travisghansen
Member

That's a very fair concern. The short answer is you can't really (not with how the driver works currently). When using sudo the commands are invoked with root privs anyway, so limiting the k8s user's access won't really help.

At some point I may look into leveraging the zfs delegation but nothing is in place yet :(

@bjornbouetsmith
Author

I have asked the question on the TrueNAS forum - whether or not it's possible to limit access to datasets - so a configured user gets full access to a certain dataset and nothing else.

And looking at that link you sent, it seems like ZFS at least supports it - so perhaps it's possible if I do not use sudo and simply use the access control features in TrueNAS, so the user only gets access to the explicit dataset.

Fingers crossed - and if it turns out it's already possible once I stop using sudo, then I will report back here - and it should probably become the "best practice", just like not using the root user :-)

@travisghansen
Member

Yeah. There are a couple scenarios where sudo is still required but please do test and see how far you get.

Without sudo enabled it will fail to change the ownership and permissions on the share dir (for nfs) and will fail some volume expansion operations (for iscsi). For testing nfs you’ll want to disable the datasetPermission config options to avoid those failures. It’s mostly due to these reasons that I haven’t fully explored the delegated deployment yet…it’s certainly feasible to not use sudo for zfs but still use it for the other scenarios, it simply hasn’t been developed yet.
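For that kind of test, a sketch of the relevant zfs section might look like this (dataset names follow the values file earlier in this thread; the permission options are left commented out so the driver never attempts chown/chmod):

    zfs:
      cli:
        sudoEnabled: false
      datasetParentName: fast/k8s/nfs/vols
      detachedSnapshotsDatasetParentName: fast/k8s/nfs/snaps
      # leave these unset to avoid chown/chmod without sudo
      # datasetPermissionsMode: "0777"
      # datasetPermissionsUser: k8s
      # datasetPermissionsGroup: wheel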

@bjornbouetsmith
Author

bjornbouetsmith commented Feb 21, 2022

I just tried with delegated permissions - and now I can fully control what the k8s user can do - but it does not have permission to mount the dataset, which I assume is required to expose it via NFS.

So I am kind of back to square one.

Perhaps one solution - although it requires changes in the codebase - is to document that any dataset set up for this should use delegated permissions (with documentation of which delegations the CSI driver requires), and then only do the ZFS commands via SSH and the rest via the API, i.e.

If the CSI driver needs to create a dataset, it happens via SSH, but the mounting, sharing via NFS etc. happens via the API - then it might work with a regular user without sudo access and with delegated permissions to its own dataset?

Does what I propose make sense?
I am fully aware that "root" API access is just as capable of messing up the entire system/pools/datasets etc. - but if we limit "root" access to the non-ZFS things, it should make the integration more secure: if a bug creeps in that could destroy something it should not when the code cleans up datasets, it would only affect the delegated dataset.

Also, if bugs sneak into the API integration code, they would only affect the more "innocent" things, like sharing, mounting etc.

Of course this "split" responsibility will only work for TrueNAS installations - but if you can make one implementation more "secure" against potential bugs, I think it could be worth it - if it's possible.

Alternatively - on the TrueNAS forums they also suggested using a jail for the dataset. I am not sure if that would make the permissions easier, or whether it is even possible to interact with the jail from an SSH session in any decent way - but perhaps a jail with a dataset inside it could be managed with the generic zfs-generic-nfs implementation, since everything could be done over SSH towards the jail with full "su" access, and even if bugs come up they would only affect the jail and the dataset mounted into that jail.

So basically run an NFS server in a jail, with the dataset mounted into that jail, with full delegation - then whatever runs in the jail can do whatever it wants to the dataset: mount datasets, share them etc. - but obviously it would all happen via SSH, since there is probably no API access to jails in TrueNAS.

@travisghansen
Member

The mount should happen automatically, inheriting the mount point logic from the parent, no?

I’m with you on making it as secure as possible. It just hasn’t been done and requires some thought to make it as robust as possible.

@bjornbouetsmith
Author

The mount should happen automatically inheriting the mount point logic from the parent no?

Unfortunately it does not happen - at least not on truenas.

The unprivileged user is allowed to create a sub-dataset in the delegated dataset, but the mount does not happen as far as I can see - it even emits a message about this.

@travisghansen
Member

Can you get all the properties on the dataset and send them here?

@bjornbouetsmith
Author

bjornbouetsmith commented Feb 22, 2022

I have solved it:

zfs allow -u k8s clone,create,destroy,mount,promote,receive,rename,rollback,send,share,snapshot,sharenfs,mountpoint  fast/k8s

Add sysctl setting:
vfs.usermount=1

So now when I ssh in with my k8s user - I can do whatever I want with the dataset fast/k8s and below - including mounting it - which happens automatically when I do:

zfs create fast/k8s/test

Then it gets mounted to /mnt/fast/k8s/test
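A quick way to confirm the delegation and non-root mount behaviour from the k8s account (a sketch; the dataset name is just an example):

    ssh k8s@192.168.0.201
    zfs create fast/k8s/test                      # should succeed without sudo
    zfs get -H -o value mounted fast/k8s/test     # expect "yes"
    zfs destroy fast/k8s/test                     # clean up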

So I think it might be possible to do this with just the "generic zfs csi" driver, using this approach.

I can probably remove many of the delegations - I just delegated everything I thought the csi might use.

So I definitely think this might be the way forward - a pure SSH solution is possible just by using delegation and allowing non-root mounts.

I will definitely try it out :-)

@travisghansen
Member

Pure ssh is not possible as http is required to manage the shares (ie: insert stuff into the TrueNAS db, etc). Also note that sudo will still be required for some non-zfs operations (such as reloading some services, etc). However all zfs operations could be done over ssh in an underprivileged (non-sudo) fashion, so that is great news!

Regarding the specific permissions, there are actually quite a few required to cover the full breadth of what the driver does, so I think the list above is probably a great start. If anything fails we can add from there.

@bjornbouetsmith
Author

bjornbouetsmith commented Feb 22, 2022

I see - but I think that is a decent compromise - since then the driver only needs root access to the API, and not directly to the ZFS commands.

Of course that still allows a bug in the api code to somehow destroy something, but at least it will not impact the zfs pools - which should be the most precious part of a truenas installation, since that's where the data is stored.

Everything else can be recreated - data is harder to recreate - unless you have a very good backup strategy and take snapshots very often.

P.S. I just tested - with sudo=false in my values.yaml - and it does not work:
failed to provision volume with StorageClass "truenas-nfs": rpc error: code = Internal desc = Error: cannot create 'fast/k8s/vols/pvc-e3061070-b0b1-4af2-9bd3-f85e8f813495': permission denied

Which is strange, since I can just ssh as the k8s user and do:

zfs create fast/k8s/vols/pvc-e3061070-b0b1-4af2-9bd3-f85e8f813495

Is that because the driver assumes something, like using sudo is required?

Even though my values contains:

    zfs:
      cli:
        sudoEnabled: false

What is even more strange is that it did manage to create the dataset:
fast/k8s/vols
It was just the last one, with the guid in its name, that it failed to create.

Perhaps it's quotas or something else it fails on - let me try to delegate more :-)
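To see which delegations are actually in effect on the parent dataset while narrowing this down, something like:

    zfs allow fast/k8s    # prints the permissions delegated locally and to descendents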

@travisghansen
Member

I agree 100% it's a decent compromise. While not ideal to still require sudo for some of the other purposes, there's not really any way around that, and having protection/insurance at the zfs level is indeed highly desirable.

@bjornbouetsmith
Author

Is there anywhere I can see which zfs properties get set when the dataset gets created?

I have tried with:

zfs allow -u k8s clone,create,destroy,mount,promote,receive,rename,rollback,send,share,snapshot,sharenfs,mountpoint,quota,volsize,snapdir,reservation,readonly,exec,copies,compression fast/k8s

And still it fails.

I even removed the "vols" dataset beneath fast/k8s - and that got created again - so I am thinking it's some property or setting the driver tries to set that it gets the permission denied from.

@bjornbouetsmith
Author

bjornbouetsmith commented Feb 22, 2022

Progress - I added more delegations and now the pvc dataset gets created, but then it fails when it tries to set some properties:
failed to provision volume with StorageClass "truenas-nfs": rpc error: code = AlreadyExists desc = volume has already been created with a different size, existing size: 0, required_bytes: 10737418240, limit_bytes: 0

And

failed to provision volume with StorageClass "truenas-nfs": rpc error: code = Internal desc = Error: cannot set property for 'fast/k8s/vols/pvc-8c417bd8-f729-47c5-853e-06aa27dcc3be': permission denied

@bjornbouetsmith
Author

output from zfs get all fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114.

So properties are getting set by the driver - I wonder where it fails to do something.

I have looked into the code - but JavaScript is not my strong suit - I am more of a C++/C/C# kind of guy.

vmnas# zfs get all fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114
NAME                                                    PROPERTY                                          VALUE                                                        SOURCE
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  type                                              filesystem                                                   -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  creation                                          Tue Feb 22 20:04 2022                                        -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  used                                              96K                                                          -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  available                                         6.51T                                                        -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  referenced                                        96K                                                          -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  compressratio                                     1.00x                                                        -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  mounted                                           yes                                                          -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  quota                                             none                                                         default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  reservation                                       none                                                         default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  recordsize                                        16K                                                          inherited from fast
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  mountpoint                                        /mnt/fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  sharenfs                                          off                                                          default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  checksum                                          on                                                           default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  compression                                       lz4                                                          inherited from fast
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  atime                                             off                                                          inherited from fast
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  devices                                           on                                                           default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  exec                                              on                                                           default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  setuid                                            on                                                           default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  readonly                                          off                                                          default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  jailed                                            off                                                          default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  snapdir                                           hidden                                                       default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  aclmode                                           passthrough                                                  inherited from fast/k8s
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  aclinherit                                        passthrough                                                  inherited from fast
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  createtxg                                         57196865                                                     -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  canmount                                          on                                                           default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  xattr                                             sa                                                           inherited from fast/k8s
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  copies                                            1                                                            inherited from fast/k8s
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  version                                           5                                                            -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  utf8only                                          off                                                          -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  normalization                                     none                                                         -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  casesensitivity                                   sensitive                                                    -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  vscan                                             off                                                          default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  nbmand                                            off                                                          default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  sharesmb                                          off                                                          default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  refquota                                          none                                                         default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  refreservation                                    none                                                         default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  guid                                              12125938536443656666                                         -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  primarycache                                      all                                                          default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  secondarycache                                    all                                                          default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  usedbysnapshots                                   0B                                                           -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  usedbydataset                                     96K                                                          -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  usedbychildren                                    0B                                                           -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  usedbyrefreservation                              0B                                                           -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  logbias                                           latency                                                      default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  objsetid                                          35688                                                        -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  dedup                                             off                                                          default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  mlslabel                                          none                                                         default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  sync                                              standard                                                     default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  dnodesize                                         legacy                                                       default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  refcompressratio                                  1.00x                                                        -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  written                                           96K                                                          -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  logicalused                                       42.5K                                                        -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  logicalreferenced                                 42.5K                                                        -
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  volmode                                           default                                                      default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  filesystem_limit                                  none                                                         default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  snapshot_limit                                    none                                                         default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  filesystem_count                                  none                                                         default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  snapshot_count                                    none                                                         default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  snapdev                                           hidden                                                       default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  acltype                                           nfsv4                                                        default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  context                                           none                                                         default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  fscontext                                         none                                                         default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  defcontext                                        none                                                         default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  rootcontext                                       none                                                         default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  relatime                                          off                                                          default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  redundant_metadata                                all                                                          default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  overlay                                           on                                                           default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  encryption                                        off                                                          default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  keylocation                                       none                                                         default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  keyformat                                         none                                                         default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  pbkdf2iters                                       0                                                            default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  special_small_blocks                              0                                                            default
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  democratic-csi:volume_context_provisioner_driver  freenas-nfs                                                  local
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  democratic-csi:csi_volume_name                    pvc-e3561ccf-05e6-4db3-ac92-901656c1a114                     local
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  democratic-csi:managed_resource                   true                                                         local
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  org.truenas:managedby                             192.168.17.200                                               inherited from fast/k8s
fast/k8s/vols/pvc-e3561ccf-05e6-4db3-ac92-901656c1a114  org.freebsd.ioc:active                            yes                                                          inherited from fast

@bjornbouetsmith
Author

These are my current delegations:

zfs allow -u k8s clone,create,destroy,mount,promote,receive,rename,rollback,send,share,snapshot,sharenfs,mountpoint,quota,volsize,snapdir,reservation,readonly,exec,copies,compression,userquota,aclmode,exec,readonly,groupquota,groupused,userprop,userquota,userused,atime,canmount,checksum,compression,devices,nbmand,normalization,readonly,recordsize,refreservation,reservation,setuid,utf8only,version,volsize,volblocksize,vscan,xattr fast/k8s

@bjornbouetsmith
Author

Is there any way I can turn on debug logging of some kind, so it logs more about what commands it tries to run?

@travisghansen
Member

# values.yaml
controller:
  driver:
    logLevel: debug

node:
  driver:
    logLevel: debug

Regarding the error, perhaps what happened is the driver tried to set the refquota when you first deployed and the user didn't have permissions (but it failed silently). Try to delete it entirely and start fresh with the new set of permissions and see how it does.
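Assuming the chart was installed with helm, re-applying the values with those debug settings would be along these lines (the release name, namespace and chart reference are assumptions - adjust to your install):

    helm upgrade zfs-nfs democratic-csi/democratic-csi \
      --namespace democratic-csi \
      --values values.yaml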

@bjornbouetsmith
Author

# values.yaml
controller:
  driver:
    logLevel: debug

node:
  driver:
    logLevel: debug

Can I do this in my values.yaml that I give to helm? I have tried editing via kubectl - but it does not seem to give more logging output.

Regarding the error, perhaps what happened is the driver tried to set the refquota when you first deployed and the user didn't have permissions (but it failed silently). Try to delete it entirely and start fresh with the new set of permissions and see how it does.

I already did that - I have removed the dataset, reapplied delegations, uninstalled the helm chart - reinstalled it - but still I get this permission error after it creates the dataset.

Edit:
I have now added refquota to the list of delegations and it works now with my non-root user.

So my full delegation command is:

zfs allow -u k8s clone,create,destroy,mount,promote,receive,rename,rollback,send,share,snapshot,sharenfs,mountpoint,quota,volsize,snapdir,reservation,readonly,exec,copies,compression,userquota,aclmode,exec,readonly,groupquota,groupused,userprop,userquota,userused,atime,canmount,checksum,compression,devices,nbmand,normalization,readonly,recordsize,refreservation,reservation,setuid,utf8only,version,volsize,volblocksize,vscan,xattr,refquota fast/k8s

And the most important bit, I think, is that it requires the sysctl setting:
sysctl vfs.usermount=1

Which can be added via the TrueNAS gui - this allows non-root users to mount filesystems.
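For reference, the tunable can also be checked/applied at the shell on TrueNAS CORE (adding it as a sysctl tunable in the UI is what makes it survive reboots):

    sysctl vfs.usermount      # show the current value
    sysctl vfs.usermount=1    # allow non-root users to mount filesystems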

So success - sudo is no longer needed for the ssh connection :-)

Thank you for the help.

For future reference it would be awesome if it could be documented precisely which delegations are required, so users only need to delegate the required bits and not do trial and error like I have done.

@travisghansen
Member

The debugging must be added to your helm values yes. I would appreciate any contributions to the documentation to get the delegated setup properly documented and explained for easy use.

I think in the meantime, I'll go rework some bits of the code to unconditionally use sudo if the user is not root. Doing so will ensure those things that require sudo will continue to work while also disabling sudo for zfs operations.

@bjornbouetsmith
Author

bjornbouetsmith commented Feb 23, 2022

The debugging must be added to your helm values yes. I would appreciate any contributions to the documentation to get the delegated setup properly documented and explained for easy use.

Sure - I don't mind helping with the documentation part - if you can just tell me what delegations I should include :-)

I think in the meantime, I'll go rework some bits of the code to unconditionally use sudo if the user is not root. Doing so will ensure those things that require sudo will continue to work while also disabling sudo for zfs operations.

I am not sure what you mean by this - will this not run counter to what I have been trying to achieve?

With my recipe the SSH connection does not require sudo - if you have set up proper delegation and the correct sysctl setting.

So perhaps add a setting instead of sudo?

i.e.

driver:
  config:
    driver: freenas-nfs
    sshConnection:
      host: 192.168.0.201
      port: 22
      delegation: true
      sudo: false

And then in the driver you check whether or not delegation or sudo is in use.

If you have not set up correct delegation, then obviously it requires passwordless sudo to be set up correctly - unless you run as root. But even running as root, in theory it does not guarantee that root has the required permissions - root is just a username - although root usually is the "super user" with all access.

Unless I misunderstand you?

Right now, my user is not allowed to do any sudo, so if you change the code to always require sudo if the username is not root, then my work would be in vain - and I am back to being a possible "victim" of bugs that can destroy my pool.

@travisghansen
Member

I honestly don't have a comprehensive list of which delegations are required to cover the feature set. I suspect you've probably given enough at this point. The required set also has the potential to change as the driver evolves over time. I think we'll just need to run it for a bit and see if we run into any situations where something fails due to lack of permissions.

Regarding sudo here are a few points to consider:

  • sudo is always required in the broad sense (executing chmod, chown, etc) to properly run the driver
  • if using delegated zfs permissions sudo may not be required for zfs operations

I'm updating the code to decouple the scenarios in point 1 from point 2. The situations in point 1 require the use of sudo when the user is not root. You cannot delegate chmod, etc (unless you do some really really really stupid stuff).
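If you want to keep the blast radius small for those non-zfs bits too, one option (purely an illustration, not something the driver requires in this exact form) is to scope the passwordless sudo entry to specific commands instead of ALL:

    # instead of: k8s ALL=(ALL) NOPASSWD: ALL
    # binary paths differ between FreeBSD and Linux - check with `which chown` etc.
    k8s ALL=(ALL) NOPASSWD: /usr/sbin/chown, /bin/chmod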

@reefland

reefland commented Mar 1, 2022

I reviewed the freenas-iscsi.yaml and didn't see where to set numeric values; I don't have any user or group names defined.

I completely removed the ssh section, just left the API section:

driver:
  config:
    driver: freenas-iscsi
    instance_id:
    httpConnection:
      protocol: https
      host: truenas.rich-durso.us
      port: 443
      apiKey: [ REDACTED ]
      allowInsecure: False
      #apiVersion: 2

Test claim went fine:

$ kubectl -n democratic-csi create -f test-claim-iscsi.yaml
persistentvolumeclaim/test-claim-iscsi created

Events:
  Type    Reason                 Age                From                                                                                                               Message
  ----    ------                 ----               ----                                                                                                               -------
  Normal  Provisioning           35s                org.democratic-csi.iscsi_zfs-iscsi-democratic-csi-controller-7cf8c844f-wh89s_cc168fe8-c63e-42f6-9859-3ae2ae823530  External provisioner is provisioning volume for claim "democratic-csi/test-claim-iscsi"
  Normal  ExternalProvisioning   32s (x3 over 35s)  persistentvolume-controller                                                                                        waiting for a volume to be created, either by external provisioner "org.democratic-csi.iscsi" or manually created by system administrator
  Normal  ProvisioningSucceeded  31s                org.democratic-csi.iscsi_zfs-iscsi-democratic-csi-controller-7cf8c844f-wh89s_cc168fe8-c63e-42f6-9859-3ae2ae823530  Successfully provisioned volume pvc-12e8d579-5e2f-4004-9841-8db4f0ab8309

Tried to edit the claim:

$ kubectl edit pvc test-claim-iscsi -n democratic-csi 
persistentvolumeclaim/test-claim-iscsi edited

Conditions:
  Type                      Status  LastProbeTime                     LastTransitionTime                Reason  Message
  ----                      ------  -----------------                 ------------------                ------  -------
  FileSystemResizePending   True    Mon, 01 Jan 0001 00:00:00 +0000   Tue, 01 Mar 2022 09:21:24 -0500           Waiting for user to (re-)start a pod to finish file system resize of volume on node.
Events:
  Type     Reason                    Age                   From                                                                                                               Message
  ----     ------                    ----                  ----                                                                                                               -------
  Normal   Provisioning              7m1s                  org.democratic-csi.iscsi_zfs-iscsi-democratic-csi-controller-7cf8c844f-wh89s_cc168fe8-c63e-42f6-9859-3ae2ae823530  External provisioner is provisioning volume for claim "democratic-csi/test-claim-iscsi"
  Normal   ExternalProvisioning      6m58s (x3 over 7m1s)  persistentvolume-controller                                                                                        waiting for a volume to be created, either by external provisioner "org.democratic-csi.iscsi" or manually created by system administrator
  Normal   ProvisioningSucceeded     6m57s                 org.democratic-csi.iscsi_zfs-iscsi-democratic-csi-controller-7cf8c844f-wh89s_cc168fe8-c63e-42f6-9859-3ae2ae823530  Successfully provisioned volume pvc-12e8d579-5e2f-4004-9841-8db4f0ab8309
  Warning  ExternalExpanding         47s                   volume_expand                                                                                                      Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
  Normal   Resizing                  47s                   external-resizer org.democratic-csi.iscsi                                                                          External resizer is resizing volume pvc-12e8d579-5e2f-4004-9841-8db4f0ab8309
  Normal   FileSystemResizeRequired  46s                   external-resizer org.democratic-csi.iscsi                                                                          Require file system resize of volume on node

I noticed the condition to restart the node, so I tried this:

$ kubectl -n democratic-csi rollout restart daemonsets,deployment
daemonset.apps/zfs-iscsi-democratic-csi-node restarted
deployment.apps/zfs-iscsi-democratic-csi-controller restarted

But the condition remains...

Conditions:
  Type                      Status  LastProbeTime                     LastTransitionTime                Reason  Message
  ----                      ------  -----------------                 ------------------                ------  -------
  FileSystemResizePending   True    Mon, 01 Jan 0001 00:00:00 +0000   Tue, 01 Mar 2022 09:21:24 -0500           Waiting for user to (re-)start a pod to finish file system resize of volume on node.
Events:
  Type     Reason                    Age                From                                                                                                               Message
  ----     ------                    ----               ----                                                                                                               -------
  Normal   Provisioning              32m                org.democratic-csi.iscsi_zfs-iscsi-democratic-csi-controller-7cf8c844f-wh89s_cc168fe8-c63e-42f6-9859-3ae2ae823530  External provisioner is provisioning volume for claim "democratic-csi/test-claim-iscsi"
  Normal   ExternalProvisioning      32m (x3 over 32m)  persistentvolume-controller                                                                                        waiting for a volume to be created, either by external provisioner "org.democratic-csi.iscsi" or manually created by system administrator
  Normal   ProvisioningSucceeded     32m                org.democratic-csi.iscsi_zfs-iscsi-democratic-csi-controller-7cf8c844f-wh89s_cc168fe8-c63e-42f6-9859-3ae2ae823530  Successfully provisioned volume pvc-12e8d579-5e2f-4004-9841-8db4f0ab8309
  Warning  ExternalExpanding         26m                volume_expand                                                                                                      Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
  Normal   Resizing                  26m                external-resizer org.democratic-csi.iscsi                                                                          External resizer is resizing volume pvc-12e8d579-5e2f-4004-9841-8db4f0ab8309
  Normal   FileSystemResizeRequired  26m                external-resizer org.democratic-csi.iscsi                                                                          Require file system resize of volume on node

I don't see any obvious errors... Suggestions?

@travisghansen
Member

You can't remove the ssh section. All zfs operations still run over ssh. The FileSystemResizePending does not mean you need to restart the csi pods; it simply means a pod must be using the volume before the node resize process will happen (it will resize the filesystem on the lun). When that condition appears (despite the message) just sit and exercise patience...it will resize on its own (assuming a pod is actively using the volume).
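For example, a minimal pod that keeps the claim mounted so the node expansion can complete might look like this (names are illustrative and match the test claim used elsewhere in this thread):

    apiVersion: v1
    kind: Pod
    metadata:
      name: task-pv-pod
    spec:
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: test-claim-iscsi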

Regarding the user/group those settings are only applicable to nfs so not an issue with iscsi. I'm referring to these config options:

...
  datasetPermissionsMode: "0777"
  datasetPermissionsUser: root # must be numeric when using api to set permissions
  datasetPermissionsGroup: wheel # must be numeric when using api to set permissions
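So for a delegated (non-sudo) nfs setup where the api applies the permissions, the sketch would use numeric ids instead, e.g. (the uid here is a placeholder):

    ...
      datasetPermissionsMode: "0777"
      datasetPermissionsUser: 1001   # numeric uid of the k8s user
      datasetPermissionsGroup: 0     # numeric gid (wheel is 0 on FreeBSD)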

@reefland

reefland commented Mar 1, 2022

Went back over my notes: while I removed the SSH section from the yaml, I did not reinstall the helm chart... so that edit didn't do anything.

After some more testing I created a new claim, created an nginx pod to use the claim, then resized the claim with the nginx pod running, waited a few seconds, and I think everything worked as expected.

The 2nd claim bumped to 2Gi

kubectl get pvc -A
NAMESPACE        NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
democratic-csi   test-claim-iscsi   Bound    pvc-12e8d579-5e2f-4004-9841-8db4f0ab8309   1Gi        RWO            freenas-iscsi-csi   132m
default          test-claim-iscsi   Bound    pvc-1c25c12a-b338-49eb-81c4-e413417e0627   2Gi        RWO            freenas-iscsi-csi   10m
Capacity:      2Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       task-pv-pod
Events:
  Type     Reason                      Age    From                                                                                                                Message
  ----     ------                      ----   ----                                                                                                                -------
  Normal   ExternalProvisioning        7m26s  persistentvolume-controller                                                                                         waiting for a volume to be created, either by external provisioner "org.democratic-csi.iscsi" or manually created by system administrator
  Normal   Provisioning                7m26s  org.democratic-csi.iscsi_zfs-iscsi-democratic-csi-controller-59f5f77864-fqj29_5ddcb17a-4c73-4dd8-b616-c6d60fa405b5  External provisioner is provisioning volume for claim "default/test-claim-iscsi"
  Normal   ProvisioningSucceeded       7m22s  org.democratic-csi.iscsi_zfs-iscsi-democratic-csi-controller-59f5f77864-fqj29_5ddcb17a-4c73-4dd8-b616-c6d60fa405b5  Successfully provisioned volume pvc-1c25c12a-b338-49eb-81c4-e413417e0627
  Warning  ExternalExpanding           43s    volume_expand                                                                                                       Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
  Normal   Resizing                    43s    external-resizer org.democratic-csi.iscsi                                                                           External resizer is resizing volume pvc-1c25c12a-b338-49eb-81c4-e413417e0627
  Normal   FileSystemResizeRequired    42s    external-resizer org.democratic-csi.iscsi                                                                           Require file system resize of volume on node
  Normal   FileSystemResizeSuccessful  15s    kubelet                                                                                                             MountVolume.NodeExpandVolume succeeded for volume "pvc-1c25c12a-b338-49eb-81c4-e413417e0627"

Inside the pod:

# df -h /usr/share/nginx/html
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd        2.0G   35M  2.0G   2% /usr/share/nginx/html

Anything else to check?

@travisghansen
Member

travisghansen commented Mar 1, 2022

Great! Not really, other than to make sure the api call was used to reload ctld vs a command over ssh. If the user does not have sudo then it's pretty well a given that it worked, so we're probably good.

If you wish to confirm with certainty you would need to review the logs of the controller pod (csi-driver container) and look for the reload service api request.
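Something along these lines should surface it (the deployment/container names are taken from the event output earlier in this thread and may differ in your install):

    kubectl -n democratic-csi logs deployment/zfs-iscsi-democratic-csi-controller \
      -c csi-driver | grep -i "reloading iscsi"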

@reefland

reefland commented Mar 1, 2022

I see this on the TrueNAS side; I assume that isn't important?
Mar 1 11:17:39 truenas 1 2022-03-01T11:17:39.677313-05:00 truenas.[REDACTED] ctld 91366 - - no LUNs defined for target "iqn.2005-10.org.freenas.ctl:csi-pvc-1c25c12a-b338-49eb-81c4-e413417e0627-clustera"

Found in the csi-driver logs:

{"level":"verbose","message":"FreeNAS reloading iscsi daemon using api","service":"democratic-csi"}
{"level":"debug","message":"FREENAS HTTP REQUEST: {\"method\":\"POST\",\"url\":\"https://truenas.[REDACTED]/api/v2.0/service/reload\",\"headers\":{\"Accept\":\"application/json\",\"User-Agent\":\"democratic-csi-driver\",\"Content-Type\":\"application/json\"},\"json\":true,\"body\":{\"service\":\"iscsitarget\",\"service-control\":{\"ha_propagate\":true}},\"agentOptions\":{\"rejectUnauthorized\":true}}","service":"democratic-csi"}

Details:

{"level":"info","message":"new response - driver: FreeNASSshDriver method: ControllerGetCapabilities response: {\"capabilities\":[{\"rpc\":{\"type\":\"CREATE_DELETE_VOLUME\"}},{\"rpc\":{\"type\":\"LIST_VOLUMES\"}},{\"rpc\":{\"type\":\"GET_CAPACITY\"}},{\"rpc\":{\"type\":\"CREATE_DELETE_SNAPSHOT\"}},{\"rpc\":{\"type\":\"LIST_SNAPSHOTS\"}},{\"rpc\":{\"type\":\"CLONE_VOLUME\"}},{\"rpc\":{\"type\":\"EXPAND_VOLUME\"}},{\"rpc\":{\"type\":\"GET_VOLUME\"}},{\"rpc\":{\"type\":\"SINGLE_NODE_MULTI_WRITER\"}}]}","service":"democratic-csi"}
{"level":"info","message":"new request - driver: FreeNASSshDriver method: ControllerExpandVolume call: {\"_events\":{},\"_eventsCount\":1,\"call\":{},\"cancelled\":false,\"metadata\":{\"_internal_repr\":{\"user-agent\":[\"grpc-go/1.40.0\"]},\"flags\":0},\"request\":{\"secrets\":\"redacted\",\"volume_id\":\"pvc-1c25c12a-b338-49eb-81c4-e413417e0627\",\"capacity_range\":{\"required_bytes\":\"2147483648\",\"limit_bytes\":\"0\"},\"volume_capability\":{\"access_mode\":{\"mode\":\"SINGLE_NODE_MULTI_WRITER\"},\"mount\":{\"mount_flags\":[],\"fs_type\":\"xfs\",\"volume_mount_group\":\"\"},\"access_type\":\"mount\"}}}","service":"democratic-csi"}
{"level":"debug","message":"operation lock keys: [\"volume_id_pvc-1c25c12a-b338-49eb-81c4-e413417e0627\"]","service":"democratic-csi"}
{"level":"verbose","message":"ZfsProcessManager command: /usr/local/sbin/zfs get -Hp -o name,property,value,received,source volblocksize main/k8s/iscsi/v/pvc-1c25c12a-b338-49eb-81c4-e413417e0627","service":"democratic-csi"}
{"level":"verbose","message":"ZfsProcessManager command: /usr/local/sbin/zfs set volsize=\"2147483648\" main/k8s/iscsi/v/pvc-1c25c12a-b338-49eb-81c4-e413417e0627","service":"democratic-csi"}
{"level":"verbose","message":"FreeNAS reloading iscsi daemon using api","service":"democratic-csi"}
{"level":"debug","message":"FREENAS HTTP REQUEST: {\"method\":\"POST\",\"url\":\"https://truenas.[REDACTED]/api/v2.0/service/reload\",\"headers\":{\"Accept\":\"application/json\",\"User-Agent\":\"democratic-csi-driver\",\"Content-Type\":\"application/json\"},\"json\":true,\"body\":{\"service\":\"iscsitarget\",\"service-control\":{\"ha_propagate\":true}},\"agentOptions\":{\"rejectUnauthorized\":true}}","service":"democratic-csi"}
{"level":"debug","message":"FREENAS HTTP ERROR: null","service":"democratic-csi"}
{"level":"debug","message":"FREENAS HTTP STATUS: 200","service":"democratic-csi"}
{"level":"debug","message":"FREENAS HTTP HEADERS: {\"server\":\"nginx\",\"date\":\"Tue, 01 Mar 2022 16:24:21 GMT\",\"content-type\":\"text/plain; charset=utf-8\",\"content-length\":\"4\",\"connection\":\"close\",\"strict-transport-security\":\"max-age=63072000; includeSubDomains; preload\",\"x-content-type-options\":\"nosniff\",\"x-xss-protection\":\"1; mode=block\",\"permissions-policy\":\"geolocation=(),midi=(),sync-xhr=(),microphone=(),camera=(),magnetometer=(),gyroscope=(),fullscreen=(self),payment=()\",\"referrer-policy\":\"strict-origin\",\"x-frame-options\":\"SAMEORIGIN\"}","service":"democratic-csi"}
{"level":"debug","message":"FREENAS HTTP BODY: true","service":"democratic-csi"}
{"level":"info","message":"new response - driver: FreeNASSshDriver method: ControllerExpandVolume response: {\"capacity_bytes\":2147483648,\"node_expansion_required\":true}","service":"democratic-csi"}
{"level":"info","message":"new request - driver: FreeNASSshDriver method: Probe call: {\"_events\":{},\"_eventsCount\":1,\"call\":{},\"cancelled\":false,\"metadata\":{\"_internal_repr\":{\"user-agent\":[\"grpc-node/1.24.0-pre1 grpc-c/8.0.0 (linux; chttp2; game)\"]},\"flags\":0},\"request\":{}}","service":"democratic-csi"}
{"level":"debug","message":"performing exec sanity check..","service":"democratic-csi"}
{"level":"info","message":"new response - driver: FreeNASSshDriver method: Probe response: {\"ready\":{\"value\":true}}","service":"democratic-csi"}

@travisghansen
Member

I don't know why you would see that warning...I'm assuming there is indeed a lun defined for the target, otherwise you wouldn't be able to attach to it and use the volume.

In any case, the logs look exactly like what we're after so that part is good.

@reefland

reefland commented Mar 1, 2022

It shows lun 0 -- perhaps it assigns 0 if one is not specified?

I just did:

$ kubectl apply -f pv-pod.yaml 
persistentvolumeclaim/test-claim-iscsi created
pod/task-pv-pod created

TrueNAS Console then shows:
Mar 1 12:57:19 truenas 1 2022-03-01T12:57:19.778710-05:00 truenas ctld 91366 - - no LUNs defined for target "iqn.2005-10.org.freenas.ctl:csi-pvc-914daf4b-2c59-4fe6-913b-f107198f12e8-clustera"

Is there an annotation (or something else) which can be used in the claim to populate the extent's "Description" field?

@travisghansen
Copy link
Member

Well, was that error from when the volume was provisioned? You should only see that during initial creation of the volume as the target gets created and then later the lun assigned (never to be removed unless/until the volume is deleted).

                comment: "", // TODO: allow this to be templated

I guess I never allowed for setting the comment on the extent field..

@reefland
Copy link

reefland commented Mar 1, 2022

                comment: "", // TODO: allow this to be templated

I guess I never allowed for setting the comment on the extent field..

Should I open an issue to request this?

@travisghansen
Member

Sure.

@zrav

zrav commented May 16, 2022

Has anyone been able to get rootless NFS shares working on Linux?

@travisghansen
Member

Can you elaborate more on what you want to achieve?

@zrav

zrav commented May 16, 2022

I want to avoid having to give root access to the CSI driver on the storage box. The problem seems to be that new NFS exports cannot be created by non-root users on Linux:

Unable to create temporary file: Permission denied
cannot share 'tank/k8s/ds: system error': NFS share creation failed
property may be set but unable to reshare filesystem

@travisghansen
Copy link
Member

Which driver are you using? Can you send over the logs?

@zrav

zrav commented May 16, 2022

This is before even using the driver, just manually setting the sharenfs property as a non-root user. I can reproduce this on different hosts. I just wanted to confirm that this is expected and that no workaround exists.

@travisghansen
Copy link
Member

I see so you're using delegated permissions and trying to set the sharenfs property on the dataset as a non-root user and that failure is what you see?

@zrav

zrav commented May 16, 2022

Correct. Environment is Ubuntu 22.04.

@travisghansen
Member

Do you have all the deps installed? openzfs/zfs#4534

@zrav

zrav commented May 16, 2022

Yes of course. I'm perfectly able to create the export as root. Are you able to create it as non-root on Linux?

@travisghansen
Member

I've never actually tried to run a fully delegated setup no. Just wanted to make sure it was specific to delegation and not a general issue.

Honestly that behavior seems odd to me, but I don't know exactly how the internals of it are handled by zfsd or whatever it is that handles those. Do the exports generally show up in /etc/exports or similar when run successfully as root?

@zrav

zrav commented May 16, 2022

Zfs doesn't use /etc/exports to create the share. Instead it alters /var/lib/nfs/etab and calls exportfs. I tried changing owners of the directory and files, but that doesn't help. The error seems to come from https://github.com/openzfs/zfs/blob/master/lib/libshare/nfs.c#L102, however I don't know where it's trying to create the temp file.

@travisghansen
Member

Well you certainly know more than me about the matter! I've just asked in the irc channel to see if anyone has some input they can provide.

@travisghansen
Member

Can you possibly run the zfs command with strace to figure out what folder/file it's trying to create?
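For example, something like this run as the non-root user should show which path the failing open/create is aimed at (the dataset name is from the earlier error):

    strace -f -e trace=file zfs set sharenfs=on tank/k8s/ds 2>&1 | grep -i denied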

@zrav

zrav commented May 17, 2022

Thanks for the hint. With strace I found that it was trying to write to /etc/exports.d/zfs.exports. After giving permissions on this file (and directory), setting the sharenfs property did not error anymore, however the share was still not created. I additionally had to give permissions to /var/lib/nfs/etab (and its containing directory), and then it works! :)
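A sketch of what that implies on the storage host (the group name is an assumption - use whatever group your delegated user belongs to):

    # let the delegated user write the export files zfs maintains
    chgrp k8s /etc/exports.d /etc/exports.d/zfs.exports
    chmod g+w /etc/exports.d /etc/exports.d/zfs.exports
    chgrp k8s /var/lib/nfs /var/lib/nfs/etab
    chmod g+w /var/lib/nfs /var/lib/nfs/etab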

@travisghansen
Member

We definitely need to get that documented!

@travisghansen
Member

It would be great to test the same for sharesmb as well and document what is needed there. The upcoming release will debut support for Windows nodes/clusters, and so smb has far greater test coverage than previously.

@zrav

zrav commented May 17, 2022

I managed to get non-root sharesmb going, too:
Apart from the zfs allow permissions, it also requires write access to /etc/exports.d/zfs.exports. Additionally, smb.conf needs usershare allow guests = yes and the user must be part of the sambashare group.
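A rough sketch of those prerequisites on the storage host (the user name is an assumption):

    # add the delegated user to the samba usershare group
    usermod -aG sambashare k8s

    # in /etc/samba/smb.conf, [global] section:
    #   usershare allow guests = yes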

@travisghansen
Member

targetcli was made more robust to support sudo here: f626a93

I think this was the last place that required sudo on something other than zfs: bd08538

Essentially that should fix it to use sudo on the zfs commands individually instead of a whole sh -c ... command.

Released in v1.7.0
