Data Leak on ZFS #4485
What does it show if you export the pool and re-import it?
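(For anyone following along, the check being suggested is roughly the sequence below; the pool and dataset names `tank` and `tank/data` are placeholders, since the actual names aren't shown in this report.)

```
# Placeholder pool/dataset names -- substitute your own.
zfs list -o space tank/data   # note USED / USEDDS before the cycle
zpool export tank             # unmount and export the pool
zpool import tank             # re-import it
zfs list -o space tank/data   # compare the space accounting after re-import
```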
@kernelOfTruth your suggestion is totally right, the space reappears after an export->import. Before the export:
Just after the import:
The space does come back:
However, this doesn't help me with the fact that the filesystem fills up completely during my mongo resync... Is this a known bug/problem? Thanks!
@kernelOfTruth could this be #1548?
@write0nly yes, it looks that way. Closing as a duplicate of #1548.
Hi All,
I'm seeing a really strange situation where ZFS seems to be writing twice as much data as it should, or occupying twice the space it should.
The system is running CentOS with kernel 3.10.0-327.4.5.el7.x86_64 and the following packages:
libzfs2-0.6.5.3-1.el7.centos.x86_64
zfs-dkms-0.6.5.3-1.el7.centos.noarch
zfs-0.6.5.3-1.el7.centos.x86_64
spl-0.6.5.3-1.el7.centos.x86_64
spl-dkms-0.6.5.3-1.el7.centos.noarch
On the ZFS dataset, copies is 1 and compression is lz4.
The zpool is built on a partition, /dev/sda3 (I know, not recommended); unfortunately I can't give it a full disk. /dev/sda3 starts on a 4K-aligned boundary, and ashift has been set to 12 to match.
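For what it's worth, the layout and properties described above can be double-checked with something like the following; the pool name `tank` and dataset `tank/data` are assumptions, not taken from the report:

```
zpool get ashift tank                               # should report 12
zfs get copies,compression,compressratio tank/data  # expect copies=1, compression=lz4
parted /dev/sda unit s print                        # confirm sda3 starts on a 4K-aligned sector
```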
If I run zfs list, it reports 1.24T used. There are no snapshots whatsoever; I just created the pool and ran mongo on it to replicate some data:
However, du -sh on the filesystem /local/data says it only holds 615GB, i.e. half of the space seen above.
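One way to line the two views up, assuming the dataset is mounted at /local/data (the dataset name `tank/data` is again a placeholder):

```
zfs list -o space tank/data          # breaks USED into dataset / snapshots / children / refreservation
zfs get used,referenced,logicalused,logicalreferenced,compressratio tank/data
du -sh /local/data                   # on-disk usage as seen from the file level
du -sh --apparent-size /local/data   # logical file sizes, ignoring compression and holes
```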
The zpool shows more than 10% fragmentation.
zfs get all shows that written is 1.24T and referenced is also 1.24T, but du only sees half of that, and the filesystem fills up completely since my dataset is 1.2T in size.
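To rule out the usual suspects for space that du can't see, a quick check along these lines might help (placeholder names again, and the lsof step is just a general troubleshooting guess, not something the thread confirms):

```
zfs list -t snapshot            # confirm there really are no snapshots holding blocks
zpool list                      # pool-level SIZE / ALLOC / FREE / FRAG / CAP view
lsof +L1 | grep /local/data     # deleted-but-still-open files kept alive by mongod
```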
Can anyone shed some light on this? Could the cause be that I'm using /dev/sda3 instead of the whole device, some alignment issue, or block-size overhead?
Cheers,
Eduardo.