Feature request: balancer with new option to free a specified disk. #78
Comments
Could this not be done with mergerfs.dup by duplicating all files on one disk to the other disks according to regular mergerfs rules and then deleting them from the disk to be removed?
Yes, maybe, but this would require a really big include/exclude list (really big).
Would using the mounted disk as a starting directory for the script not work? On my system all of the individual disks are mounted in addition to the pooled mount that mergerfs provides.
No. You are not understanding how the tool works. It reads the full list of branches from mergerfs and then, using the include/exclude arguments, finds a file and transfers it to the drive with the most free space. If you want to "empty" a drive you'd need to explicitly exclude that drive from the list of target branches but keep it in the list of source branches. It's not hard to do, but I just haven't gotten around to it. In part because when I need that behavior I just mount a new pool with that drive excluded and rsync into that pool. That's preferable in my case because I only empty a drive when I'm looking to retire it, and I want to ensure the data is safely on the pool before clearing the drive.
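A minimal sketch of that workflow, assuming the branches are mounted at /mnt/disk1 through /mnt/disk3 and /mnt/disk3 is the drive being emptied (paths and options here are illustrative, not something the tool does for you):

```sh
# Mount a temporary pool that excludes the drive being emptied.
# category.create=mfs sends new files to the branch with the most free space.
mergerfs -o category.create=mfs,allow_other /mnt/disk1:/mnt/disk2 /mnt/pool-drain

# Copy the drive's contents into the reduced pool; -a preserves permissions and times.
rsync -a --progress /mnt/disk3/ /mnt/pool-drain/
```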
To those who liked this issue... can you explain the use case, rather than the solution, to me? Are you trying to retire a drive or otherwise remove it from the pool? Or move data off it while keeping the drive in the pool for another reason? I think it matters because, with respect to retiring, it's a bit risky to just move the data, especially if the drive is not known to be fully working. You'd really want to do as I mention above and copy the data first... then delete it when you know things are OK.
The idea is to free a disk so that it can be replaced later, i.e. balance all files from this disk onto the others. The disk is still good, but its reallocated-sector count is growing (predictive failure), so I normally replace such a disk before it becomes a problem. It would be great if the tool could do this for me.
But why would you want to put more stress on the drive if you think it's in a bad state? It is safer to copy the data, probably with the drive mounted read-only to ensure only reads are occurring. Once you confirm the drive was successfully copied you can remove the drive from the pool and erase it if need be.
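As a sketch of the read-only suggestion (the path below is hypothetical), remounting the suspect branch before the copy guarantees nothing writes to it while it is being evacuated:

```sh
# Remount the suspect branch read-only so the copy only ever reads from it
mount -o remount,ro /mnt/disk3
```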
Also in the case where you want to replace a disk with a bigger one without too much downtime (SMB/NFS export). This way it's: balancer frees the disk; replace the disk's entry in fstab; shut down; replace the disk; start up. The other way would be: remount without the disk (as you described); manually copy the files (during which time the files are missing from the SMB/NFS export); then edit fstab, shut down, replace the disk, start up, and change the mount to include the new disk. In short, too many ways to make an error. Hence this request to free a disk with the balancer.
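For illustration only: assuming the pool is defined in /etc/fstab with an explicit, colon-separated branch list, the "replace the disk's entry in fstab" step amounts to editing one line (device paths below are hypothetical):

```
# /etc/fstab before: pool includes the failing disk3
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  fuse.mergerfs  defaults,allow_other  0 0

# /etc/fstab after: disk3 replaced by the new, larger disk4
/mnt/disk1:/mnt/disk2:/mnt/disk4  /mnt/pool  fuse.mergerfs  defaults,allow_other  0 0
```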
It would seem to me that the safest way to remove a drive, especially one that is considered at risk of dying, is to remove it from the pool and rsync the data to the pool at least twice to ensure nothing got missed. If you really want to leave it in the pool while copying, then I could enhance the dup tool so it can more easily include just the one branch.
I don't really understand. You don't need to remove the drive from the pool to copy data. I do it all the time: create another pool and rsync from the drive into that second pool. Then you get complete control over the balancing algorithm, which I'm not going to add to this tool because the logic is already in mergerfs.
Regardless... it doesn't make sense to put this in the "balance" tool. It's not a balance, nor does it fit the logic of the tool. At most it should be a separate tool that takes a pool directory and a branch to drain. Something like:
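Purely as a hypothetical illustration of that shape (no such tool exists; the name and arguments below are invented, not a real mergerfs-tools command):

```sh
# Hypothetical: drain a single branch of a pool into the remaining branches
mergerfs.drain /mnt/pool /mnt/disk3
```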
I find myself in a similar situation as many of the users here. I like the tool, but I am serving my pool as an NFS share, Samba share, scp destination, and/or CDN, so moving the files off the pool for any amount of time is not ideal. Having the balance tool take a mount point as an argument to "drain" or "clean" (or some other terminology), so that the pool and all data remain intact while the drive is emptied for whatever reason, would be awesome.
I'm not exactly following. Are you suggesting it create a new mount on the fly and remove that one drive? The problem with doing these things in a tool is that the tool then needs to support all the permutations people want. When I want to drain a drive in the pool I create a new pool, minus the drives in question, and rsync from the drive into that pool. I suppose I can try to make the tool do that for you, but if anyone ever wants to change their options that complicates things again and you might as well just do it manually.

Balancing and draining aren't the same behavior. Draining a drive has an end condition: the drive is empty*. Balancing is, currently, when the drives get near one another percentage-wise. Adding "draining" to the balance tool would seem confusing to me. If you exclude a drive, does that mean it drains or not? Some people don't want draining; they just want balance. I'd prefer specific tools per behavior.

*I very much do not suggest deleting files after copying them. That can lead to lost data. It's better to copy the data and then, when you've confirmed the transfer, erase the drive.
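A sketch of that "confirm, then erase" step, with hypothetical paths: a checksum-based rsync dry run reports any file that differs between the drive and the pool before anything is deleted.

```sh
# Re-compare drive and pool by checksum; -n (dry run) makes no changes,
# -i itemizes anything that still differs or is missing on the pool side.
rsync -acni /mnt/disk3/ /mnt/pool-drain/

# Only once the dry run reports nothing should the drive be wiped or retired.
```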
As the title already says: could a new option be added to free a specified disk?
In the case of an impending hard disk failure, it would be great to move all the files off that disk and balance them onto the remaining ones.
Yes, there are other tools (SnapRAID, ...), but this would be a great addition to the tool.