ZFS + LUKS not correctly mounted on boot #2043

Closed · grigio opened this issue Jan 12, 2014 · 13 comments

grigio commented Jan 12, 2014

Hi,
I have a LUKS(ZFS) partition which is decrypted correctly; the problem is that the "zfs mount" command seems to be executed before the LUKS container is decrypted.

Steps

  • boot
  • zpool status shows DEGRADED, or sometimes ONLINE, but I get an empty directory
  • I run zpool export cpool and zpool import cpool, then it works

Is it possible to avoid these additional commands at every reboot? The OS is Ubuntu 12.04.3.
Thanks


grigio commented Jan 12, 2014

Probably related to #599.
I also have to use -f because sometimes I get "pool may be in use from other system".

@behlendorf (Contributor)

@grigio For now you're probably going to need to manually add a delay to postpone the mount until the LUKS container is open and the devices are available.


grigio commented Jan 14, 2014

I imagined so. Any hint where to add the delay in Ubuntu/Upstart?
I tried with rc.local but I have to force the zfs mount.

@behlendorf (Contributor)

For Ubuntu you can try increasing the ZFS_INITRD_PRE_MOUNTROOT_SLEEP value in /etc/default/zfs. For most people 30 seconds is enough.
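
A minimal sketch of that change, assuming the Ubuntu packaging reads /etc/default/zfs from the initramfs; the 30-second value is only an example:

# /etc/default/zfs
# Seconds the initramfs waits before trying to mount the root pool,
# giving the LUKS container time to be opened.
ZFS_INITRD_PRE_MOUNTROOT_SLEEP='30'

# regenerate the initramfs so the new value is picked up
sudo update-initramfs -u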


grigio commented Jan 14, 2014

@behlendorf I removed my hacks in rc.local and tried it, but I always get "no pools available".

zfs-mountall is installed; to see the volume after boot I have to run:

$ sudo zpool import cpool
cannot mount '/cpool': directory is not empty
$ sudo zpool status
  pool: cpool
 state: ONLINE
  scan: scrub repaired 79K in 0h0m with 0 errors on Sun Jan 12 13:06:21 2014
config:

    NAME          STATE     READ WRITE CKSUM
    cpool         ONLINE       0     0     0
      sda3_crypt  ONLINE       0     0     0

errors: No known data errors

It's weird, I get "cannot mount..." but then it is mounted correctly.

@behlendorf (Contributor)

Are you sure it mounted? It looks like it imported but didn't mount because something was already at the mount point. You might refer to the following for some troubleshooting tips. Sorry, I'm not familiar with getting things working with LUKS.

https://github.com/zfsonlinux/pkg-zfs/wiki/Ubuntu-ZFS-mountall-FAQ-and-troubleshooting
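
A quick way to check whether the pool really mounted rather than just imported, using standard zfs commands (the dataset name cpool is taken from the output above):

# show whether the top-level dataset is mounted and where
zfs get mounted,mountpoint cpool

# see what is actually mounted at that path
mount | grep cpool

# once the mountpoint directory is empty, mount all datasets again
sudo zfs mount -a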


grigio commented Jan 14, 2014

Yes, it mounted and the volumes and data are correctly shown inside the directory. I read the LUKS part, but it isn't my situation: my root / is LUKS(ext4) and /cpool is LUKS(ZFS). The LUKS(ZFS) container is correctly unlocked at boot, but then something goes wrong; sometimes zpool status says ONLINE but it isn't.

My hacky solution is a /etc/rc.local with:

# HACK: zfs crypto mount
zpool export cpool
sleep 1
zpool import cpool -f

exit 0

I have to export (because sometimes it is incorrectly imported) and use -f. @behlendorf, do you think I risk damaging the data this way?

@behlendorf (Contributor)

@grigio That's safe, but it shouldn't be needed either. Is it possible the vdev device name is different after each boot? That would cause some trouble.


grigio commented Jan 15, 2014

No, it's always dm-1.

@behlendorf (Contributor)

@grigio I ask because the output from the command above says sda3_crypt


grigio commented Jan 15, 2014

I think it's just a label generated by LUKS.

@behlendorf (Contributor)

@grigio As long as it's consistent, that's OK. If it's possible that it changes between reboots, that could cause your issue.
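
If the name could change between boots, one common workaround, sketched here with standard zpool commands rather than anything confirmed in this issue, is to import using the persistent /dev/mapper paths:

# re-import using the stable device-mapper names instead of dm-N / sdX
sudo zpool export cpool
sudo zpool import -d /dev/mapper cpool

Depending on the version, the init scripts may also honor a ZPOOL_IMPORT_PATH variable in /etc/default/zfs that serves the same purpose.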

@behlendorf (Contributor)

For distributions which use systemd, this style of configuration will be better supported in 0.6.4 thanks to commits like 4f6a147, 07a3312, and d94fd5f. For non-systemd systems this will need to be handled in the distribution-specific init process.
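
For reference, on a systemd distribution the ordering can be expressed with a drop-in along these lines; the unit names assume the stock zfs-import-cache.service from 0.6.4 and a crypttab entry named sda3_crypt, so treat this as an illustrative sketch rather than something taken from this issue:

# /etc/systemd/system/zfs-import-cache.service.d/luks.conf
[Unit]
Requires=systemd-cryptsetup@sda3_crypt.service
After=systemd-cryptsetup@sda3_crypt.service

# reload unit files so the drop-in takes effect
sudo systemctl daemon-reload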

behlendorf modified the milestone: 0.6.5 on Nov 8, 2014