mergerfs.balance Or mergerfs.EVACUATE ?! #139

Open
wbrione opened this issue Jun 27, 2023 · 1 comment

wbrione commented Jun 27, 2023

I would like the tool to balance the data across all the disks in my pool, but what it is actually doing is moving all the contents of the disk that had the most data onto the rest of the pool, leaving the original disk, where all the data was, practically empty at the end. My understanding was: if I have, for example, 3 disks, with disk 1 holding 3 TB of data and disks 2 and 3 empty, then at the end of the process each disk should hold 1 TB, totaling 3 TB. Shouldn't that be the behavior?!

I have a balance process underway right now, and this is the result so far:

```
~# du -sh /srv/dev-disk-by-uuid-*/ISO
70G     /srv/dev-disk-by-uuid-…/ISO
271G    /srv/dev-disk-by-uuid-…/ISO
272G    /srv/dev-disk-by-uuid-…/ISO
311G    /srv/dev-disk-by-uuid-…/ISO
```

As I said, the disk currently at 70 GB is the one where all the files were stored; the process is, in fact, "evacuating" its contents to the other disks :-(

Note(1) Command => mergerfs.balance /srv/mergerfs/DATA/ISO/

Note(2) MergerFS options => cache.files=partial,dropcacheonclose=true,category.create=mfs

Note(3) Before balancing, I changed the pool's create policy from "epmfs" to "mfs", followed by a pool restart
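For reference on Note(3): mergerfs also documents changing options at runtime through extended attributes on the hidden `.mergerfs` control file in the mount point, which should avoid the restart. A minimal sketch in Python, assuming the pool path from this issue and that the xattr write takes effect like a remount would:

```python
import os

# Pool path taken from this issue; ".mergerfs" is mergerfs's
# runtime-configuration control file at the root of the mount.
POOL = "/srv/mergerfs/DATA"
CTRL = os.path.join(POOL, ".mergerfs")

# Read the current create policy, then switch it to "mfs".
current = os.getxattr(CTRL, "user.mergerfs.category.create")
print("create policy was:", current.decode())
os.setxattr(CTRL, "user.mergerfs.category.create", b"mfs")
```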


wbrione commented Jun 29, 2023

So, I realized that the "balancing" here means balancing free space, not data. When I pointed the tool at a specific folder, with the idea of distributing that folder's data among the disks in the pool, and the disk in question was the one that contained all the data, the side effect was to move all of that folder's data to the other disks, which were empty :-(
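To illustrate the misunderstanding, here is a minimal sketch of that free-space-only selection logic. This is not mergerfs.balance's actual code; the helper names and the single-file move are made up for illustration:

```python
import os
import shutil

def free_fraction(path):
    """Fraction of this filesystem's blocks that are still free."""
    st = os.statvfs(path)
    return st.f_bavail / st.f_blocks

def pick_any_file(branch):
    """Relative path of some file on this branch, or None if it is empty."""
    for root, _dirs, files in os.walk(branch):
        if files:
            return os.path.relpath(os.path.join(root, files[0]), branch)
    return None

def balance_step(branches):
    """One step of a free-space balance: move a file from the branch with
    the least free space to the branch with the most.  The choice never
    looks at where the data lives, only at free-space percentages; a
    branch holding all the files is therefore exactly the one drained."""
    src = min(branches, key=free_fraction)
    dst = max(branches, key=free_fraction)
    if src == dst:
        return False
    rel = pick_any_file(src)
    if rel is None:
        return False
    target = os.path.join(dst, rel)
    os.makedirs(os.path.dirname(target), exist_ok=True)
    shutil.move(os.path.join(src, rel), target)
    return True
```

Because the source branch is chosen purely by free space, the disk that happened to hold all the data kept being selected as the source until the percentages evened out, which matches what I saw.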
