
Problem (maybe a misunderstanding) with how pool space is used for a zvol #6954

Closed
Ufynjy opened this issue Dec 13, 2017 · 1 comment
Ufynjy commented Dec 13, 2017

Linux Controller 3.19.0-58-generic #64~14.04.1-Ubuntu SMP Fri Mar 18 19:05:43 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

filename: /lib/modules/3.19.0-58-generic/updates/dkms/zfs.ko
version: 0.6.5.11-1~trusty
license: CDDL
author: OpenZFS on Linux
description: ZFS
srcversion: B9DC860740C2225772C1B4F

I experimented with a pool and explored space usage on raidz1 built from disks with a 4K block size (it may reproduce on other disks too; I just don't have any others at the moment). I have a problem (maybe a misunderstanding) with how pool space is used for a zvol.

In the first case I created a raidz1 pool on 4 disks, then a zvol, and got this result:

zpool create -o ashift=12 test raidz1 ...

zfs create -V 50G test/vol

dd if=/dev/urandom of=/dev/zd48 bs=128K

dd: error writing ‘/dev/zd48’: No space left on device
409601+0 records in
409600+0 records out
53687091200 bytes (54 GB) copied, 4089.8 s, 13.1 MB/s

zpool status test

pool: test
state: ONLINE
scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
test        ONLINE       0     0     0
  raidz1-0  ONLINE       0     0     0
    sdn1    ONLINE       0     0     0
    sdo1    ONLINE       0     0     0
    sdp1    ONLINE       0     0     0
    sdq1    ONLINE       0     0     0

errors: No known data errors

zfs list

NAME       USED  AVAIL  REFER  MOUNTPOINT
test      73.8G  2.99T   140K  /test
test/vol  73.8G  2.99T  73.8G  -

zpool list

NAME   SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
test  4.34T   102G  4.24T         -    1%   2%  1.00x  ONLINE  -

zpool get all test

NAME PROPERTY VALUE SOURCE
test size 4.34T -
test capacity 2% -
test dedupratio 1.00x -
test free 4.24T -
test allocated 102G -
test ashift 12 local
test fragmentation 1% -
test leaked 0 default

zfs get all test

NAME PROPERTY VALUE SOURCE
test type filesystem -
test used 73.8G -
test available 2.99T -
test referenced 140K -
test compressratio 1.00x -
test recordsize 128K default
test compression off default
test copies 1 default
test version 5 -
test normalization none -
test casesensitivity sensitive -
test primarycache all default
test secondarycache all default
test usedbysnapshots 0 -
test usedbydataset 140K -
test usedbychildren 73.8G -
test usedbyrefreservation 0 -
test logbias latency default
test dedup off default
test mlslabel none default
test sync standard default
test refcompressratio 1.00x -
test written 140K -
test logicalused 50.4G -
test logicalreferenced 40K -
test redundant_metadata all default

zfs get all test/vol

NAME PROPERTY VALUE SOURCE
test/vol type volume -
test/vol used 73.8G -
test/vol available 2.99T -
test/vol referenced 73.8G -
test/vol compressratio 1.00x -
test/vol reservation none default
test/vol volsize 50G local
test/vol volblocksize 8K -
test/vol checksum on default
test/vol compression off default
test/vol copies 1 default
test/vol refreservation 51.6G local
test/vol primarycache all default
test/vol secondarycache all default
test/vol usedbysnapshots 0 -
test/vol usedbydataset 73.8G -
test/vol usedbychildren 0 -
test/vol usedbyrefreservation 0 -
test/vol logbias latency default
test/vol dedup off default
test/vol mlslabel none default
test/vol sync standard default
test/vol refcompressratio 1.00x -
test/vol written 73.8G -
test/vol logicalused 50.4G -
test/vol logicalreferenced 50.4G -
test/vol redundant_metadata all default

In the second case I created a raidz1 pool on 5 disks, then a zvol, and got this result:

dd if=/dev/zero of=/dev/zd48 bs=128K
dd: error writing ‘/dev/zd48’: No space left on device
409601+0 records in
409600+0 records out
53687091200 bytes (54 GB) copied, 169.021 s, 318 MB/s

zpool status test -P -L

pool: test
state: ONLINE
scan: none requested
config:

NAME           STATE     READ WRITE CKSUM
test           ONLINE       0     0     0
  raidz1-0     ONLINE       0     0     0
    /dev/sdn1  ONLINE       0     0     0
    /dev/sdo1  ONLINE       0     0     0
    /dev/sdp1  ONLINE       0     0     0
    /dev/sdq1  ONLINE       0     0     0
    /dev/sdt1  ONLINE       0     0     0

errors: No known data errors

zpool get all test

NAME PROPERTY VALUE SOURCE
test size 5.44T -
test capacity 1% -
test dedupditto 0 default
test dedupratio 1.00x -
test free 5.34T -
test allocated 101G -
test ashift 12 local
test freeing 0 default
test fragmentation 1% -
test leaked 0 default

zfs get all test

NAME PROPERTY VALUE SOURCE
test type filesystem -
test used 80.5G -
test available 4.13T -
test referenced 153K -
test compressratio 1.00x -
test recordsize 128K default
test copies 1 default
test version 5 -
test primarycache all default
test secondarycache all default
test usedbysnapshots 0 -
test usedbydataset 153K -
test usedbychildren 80.5G -
test usedbyrefreservation 0 -
test logbias latency default
test dedup off default
test mlslabel none default
test sync standard default
test refcompressratio 1.00x -
test written 153K -
test logicalused 50.2G -
test logicalreferenced 40K -
test redundant_metadata all default

zfs get all test/vol

NAME PROPERTY VALUE SOURCE
test/vol type volume -
test/vol creation Wed Dec 13 15:48 2017 -
test/vol used 80.5G -
test/vol available 4.13T -
test/vol referenced 80.5G -
test/vol compressratio 1.00x -
test/vol reservation none default
test/vol volsize 50G local
test/vol volblocksize 8K -
test/vol checksum on default
test/vol compression off default
test/vol copies 1 default
test/vol refreservation 51.6G local
test/vol primarycache all default
test/vol secondarycache all default
test/vol usedbysnapshots 0 -
test/vol usedbydataset 80.5G -
test/vol usedbychildren 0 -
test/vol usedbyrefreservation 0 -
test/vol logbias latency default
test/vol dedup off default
test/vol mlslabel none default
test/vol sync standard default
test/vol refcompressratio 1.00x -
test/vol written 80.5G -
test/vol logicalused 50.2G -
test/vol logicalreferenced 50.2G -
test/vol redundant_metadata all default

Why was 73.8G of pool space used for the zvol in the first case, but 80.5G in the second? I expected that adding disks to the raidz would decrease used/logicalused.
Where is my mistake, and why?

@richardelling
Contributor

This discussion should move to the mailing list.

Begin with https://www.delphix.com/blog/delphix-engineering/zfs-raidz-stripe-width-or-how-i-learned-stop-worrying-and-love-raidz
Then realize your volblocksize to physical block size ratio is 2 (row value=2 on the spreadsheet). Then rethink your experiment.
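
To make that concrete, here is a rough back-of-the-envelope sketch of where the two reported figures come from. The per-block allocation rule (parity plus padding to a multiple of parity+1 sectors) follows the model in that post; the final scaling against a full 128K block is my reading of how raidz space accounting normalizes allocations, so treat the numbers as approximate rather than authoritative.

```python
import math

SECTOR   = 4096          # ashift=12 -> 4K physical sectors
VOLBLOCK = 8 * 1024      # volblocksize=8K on this zvol
VOLSIZE  = 50 * 2**30    # 50G zvol, fully overwritten by dd
PARITY   = 1             # raidz1

def raidz_asize_sectors(nbytes, disks):
    """Raw sectors one block of nbytes occupies on a raidz1 vdev
    (data + parity, padded to a multiple of parity + 1 sectors)."""
    data = math.ceil(nbytes / SECTOR)
    parity = math.ceil(data / (disks - PARITY)) * PARITY
    total = data + parity
    return math.ceil(total / (PARITY + 1)) * (PARITY + 1)

for disks in (4, 5):
    per_block = raidz_asize_sectors(VOLBLOCK, disks)     # raw sectors per 8K block
    ideal     = raidz_asize_sectors(128 * 1024, disks)   # raw sectors for a full 128K block
    deflate   = (128 * 1024 // SECTOR) / ideal           # data fraction of an "ideal" block
    used      = (VOLSIZE / VOLBLOCK) * per_block * SECTOR * deflate
    print(f"{disks} disks: {per_block * SECTOR // 1024}K raw per 8K block, "
          f"reported used ~ {used / 2**30:.1f}G")
```

This prints roughly 72.7G for the 4-disk layout and 80.0G for the 5-disk layout, close to the observed 73.8G and 80.5G (the remainder is presumably metadata such as indirect blocks). Under this model both layouts burn 16K of raw disk for every 8K volume block, so widening the raidz does not reduce the real overhead; the reported used even grows on the 5-disk pool because the accounting assumes a more efficient ideal stripe. A larger volblocksize relative to the 4K sectors is what brings the ratio down.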

It is possible there is a bug in the dataset accounting. However, per above, the configuration is impractical and not a best practice.
