```
Linux Controller 3.19.0-58-generic #64~14.04.1-Ubuntu SMP Fri Mar 18 19:05:43 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

filename:       /lib/modules/3.19.0-58-generic/updates/dkms/zfs.ko
version:        0.6.5.11-1~trusty
license:        CDDL
author:         OpenZFS on Linux
description:    ZFS
srcversion:     B9DC860740C2225772C1B4F
```
I experimented with a raidz1 pool and looked at space usage on disks with a 4K block size (it may reproduce on other disks too; I have no others at the moment). I have a problem (possibly just a misunderstanding) with how pool space is consumed by a zvol.
In the first case I created a raidz1 pool on 4 disks, then a zvol, and got this result:
```
# zpool create -o ashift=12 test raidz1 ...
# zfs create -V 50G test/vol
# dd if=/dev/urandom of=/dev/zd48 bs=128K
dd: error writing ‘/dev/zd48’: No space left on device
409601+0 records in
409600+0 records out
53687091200 bytes (54 GB) copied, 4089.8 s, 13.1 MB/s
```
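As a sanity check (my own arithmetic, not part of the transcript), the dd figures show that exactly the 50G volsize was written before ENOSPC, so the question is purely about how the pool accounts for that data:

```python
# Verify that the complete dd records add up to the zvol's volsize (-V 50G).
records = 409600        # complete records reported by dd
bs = 128 * 1024         # dd block size in bytes
written = records * bs
print(written)          # 53687091200 bytes, matching the dd output
print(written / 2**30)  # 50.0 GiB, i.e. exactly volsize
```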
```
# zpool status test
  pool: test
 state: ONLINE
  scan: none requested
config:
errors: No known data errors
```
```
# zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
test      73.8G  2.99T   140K  /test
test/vol  73.8G  2.99T  73.8G  -

# zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
test  4.34T   102G  4.24T         -    1%   2%  1.00x  ONLINE  -

# zpool get all test
NAME  PROPERTY       VALUE  SOURCE
test  size           4.34T  -
test  capacity       2%     -
test  dedupratio     1.00x  -
test  free           4.24T  -
test  allocated      102G   -
test  ashift         12     local
test  fragmentation  1%     -
test  leaked         0      default

# zfs get all test
NAME  PROPERTY              VALUE       SOURCE
test  type                  filesystem  -
test  used                  73.8G       -
test  available             2.99T       -
test  referenced            140K        -
test  compressratio         1.00x       -
test  recordsize            128K        default
test  compression           off         default
test  copies                1           default
test  version               5           -
test  normalization         none        -
test  casesensitivity       sensitive   -
test  primarycache          all         default
test  secondarycache        all         default
test  usedbysnapshots       0           -
test  usedbydataset         140K        -
test  usedbychildren        73.8G       -
test  usedbyrefreservation  0           -
test  logbias               latency     default
test  dedup                 off         default
test  mlslabel              none        default
test  sync                  standard    default
test  refcompressratio      1.00x       -
test  written               140K        -
test  logicalused           50.4G       -
test  logicalreferenced     40K         -
test  redundant_metadata    all         default

# zfs get all test/vol
NAME      PROPERTY              VALUE     SOURCE
test/vol  type                  volume    -
test/vol  used                  73.8G     -
test/vol  available             2.99T     -
test/vol  referenced            73.8G     -
test/vol  compressratio         1.00x     -
test/vol  reservation           none      default
test/vol  volsize               50G       local
test/vol  volblocksize          8K        -
test/vol  checksum              on        default
test/vol  compression           off       default
test/vol  copies                1         default
test/vol  refreservation        51.6G     local
test/vol  primarycache          all       default
test/vol  secondarycache        all       default
test/vol  usedbysnapshots       0         -
test/vol  usedbydataset         73.8G     -
test/vol  usedbychildren        0         -
test/vol  usedbyrefreservation  0         -
test/vol  logbias               latency   default
test/vol  dedup                 off       default
test/vol  mlslabel              none      default
test/vol  sync                  standard  default
test/vol  refcompressratio      1.00x     -
test/vol  written               73.8G     -
test/vol  logicalused           50.4G     -
test/vol  logicalreferenced     50.4G     -
test/vol  redundant_metadata    all       default
```
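As a back-of-envelope check (this is my own rough model of raidz1 allocation, not something from the transcript, and it may not match the implementation exactly): at ashift=12 an 8K volblock takes 2 data sectors plus 1 parity sector, padded up to a multiple of nparity + 1 = 2 sectors, and the `used` figure appears to be that raw allocation deflated by the parity overhead of an ideal 128K block. That predicts roughly the 73.8G / 50.4G ≈ 1.46 I see:

```python
import math

def raidz1_used_per_block(ndisks, ashift=12, psize=8 * 1024, nparity=1):
    """Rough model (my assumptions) of what `zfs list` charges per
    volblock on raidz1: data sectors + parity sectors, padded to a
    multiple of nparity + 1, then deflated by the overhead of an
    ideal 128K block on the same vdev."""
    sector = 1 << ashift

    def asize(psz):  # raw on-disk bytes allocated for one psz-byte block
        s = math.ceil(psz / sector)                       # data sectors
        s += nparity * math.ceil(s / (ndisks - nparity))  # parity sectors
        s += -s % (nparity + 1)                           # pad to multiple of 2
        return s * sector

    deflate = (128 * 1024) / asize(128 * 1024)
    return asize(psize) * deflate

# 4-disk raidz1: raw 16K per 8K block, deflated by 128/176
print(raidz1_used_per_block(4) / (8 * 1024))  # ~1.45; observed 73.8/50.4 ~ 1.46
```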
In the second case I created a raidz1 pool on 5 disks, then a zvol, and got this result:
```
# dd if=/dev/zero of=/dev/zd48 bs=128K
dd: error writing ‘/dev/zd48’: No space left on device
409601+0 records in
409600+0 records out
53687091200 bytes (54 GB) copied, 169.021 s, 318 MB/s

# zpool status test -P -L
  pool: test
 state: ONLINE
  scan: none requested
config:
errors: No known data errors

# zpool get all test
NAME  PROPERTY       VALUE  SOURCE
test  size           5.44T  -
test  capacity       1%     -
test  dedupditto     0      default
test  dedupratio     1.00x  -
test  free           5.34T  -
test  allocated      101G   -
test  ashift         12     local
test  freeing        0      default
test  fragmentation  1%     -
test  leaked         0      default
```
```
# zfs get all test
NAME  PROPERTY              VALUE       SOURCE
test  type                  filesystem  -
test  used                  80.5G       -
test  available             4.13T       -
test  referenced            153K        -
test  compressratio         1.00x       -
test  recordsize            128K        default
test  copies                1           default
test  version               5           -
test  primarycache          all         default
test  secondarycache        all         default
test  usedbysnapshots       0           -
test  usedbydataset         153K        -
test  usedbychildren        80.5G       -
test  usedbyrefreservation  0           -
test  logbias               latency     default
test  dedup                 off         default
test  mlslabel              none        default
test  sync                  standard    default
test  refcompressratio      1.00x       -
test  written               153K        -
test  logicalused           50.2G       -
test  logicalreferenced     40K         -
test  redundant_metadata    all         default

# zfs get all test/vol
NAME      PROPERTY              VALUE                  SOURCE
test/vol  type                  volume                 -
test/vol  creation              Wed Dec 13 15:48 2017  -
test/vol  used                  80.5G                  -
test/vol  available             4.13T                  -
test/vol  referenced            80.5G                  -
test/vol  compressratio         1.00x                  -
test/vol  reservation           none                   default
test/vol  volsize               50G                    local
test/vol  volblocksize          8K                     -
test/vol  checksum              on                     default
test/vol  compression           off                    default
test/vol  copies                1                      default
test/vol  refreservation        51.6G                  local
test/vol  primarycache          all                    default
test/vol  secondarycache        all                    default
test/vol  usedbysnapshots       0                      -
test/vol  usedbydataset         80.5G                  -
test/vol  usedbychildren        0                      -
test/vol  usedbyrefreservation  0                      -
test/vol  logbias               latency                default
test/vol  dedup                 off                    default
test/vol  mlslabel              none                   default
test/vol  sync                  standard               default
test/vol  refcompressratio      1.00x                  -
test/vol  written               80.5G                  -
test/vol  logicalused           50.2G                  -
test/vol  logicalreferenced     50.2G                  -
test/vol  redundant_metadata    all                    default
```
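Applying the same rough model (again my own assumption, not from the transcript) to the 5-disk layout: the raw allocation per 8K block is still 4 sectors (16K), but the deflation factor changes, which would predict exactly the larger figure I see:

```python
# Same model as above, inlined for comparison (assumptions are mine).
sector = 4096                  # ashift=12
raw = 4 * sector               # 2 data + 1 parity + 1 pad sector = 16K per 8K block
deflate_4 = 128.0 / 176.0      # ideal 128K block occupies 176K on 4-disk raidz1
deflate_5 = 128.0 / 160.0      # ideal 128K block occupies 160K on 5-disk raidz1
print(raw * deflate_4 / 8192)  # ~1.45 -> observed 73.8G / 50.4G ~ 1.46
print(raw * deflate_5 / 8192)  # ~1.60 -> observed 80.5G / 50.2G ~ 1.60
```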
Why was 73.8G of pool space used for the zvol in the first case, but 80.5G in the second? I expected that adding disks to the raidz1 would decrease used relative to logicalused, not increase it. Where is my mistake, and why?