
Slow read speed on 0.6.5.3 #3950

Closed

OverLocker opened this issue Oct 24, 2015 · 10 comments
Labels
Status: Inactive (not being actively updated) · Type: Performance (performance improvement or performance problem)

Comments

@OverLocker commented Oct 24, 2015

Hello,
On Gentoo with the 4.0.5 kernel and ZFS 0.6.5.3 I get slow read speeds (50-60 MB/s).

Write speed is fine: 112 MB/s. The disks are not overloaded, and CPU and memory are mostly idle.

It is specifically the ZFS read speed, because copying from a single ext4 disk in the same system runs at 90+ MB/s.

The pool is RAIDZ2, 7+2, 3 TB disks.

Is it possible to speed up read performance?

$ uname -a
Linux NAS01 4.0.5-gentoo #1 SMP Sat Oct 24 11:32:43 MSK 2015 x86_64 Intel(R) Pentium(R) CPU G630 @ 2.70GHz GenuineIntel GNU/Linux

$ lsb_release -a

LSB Version: n/a
Distributor ID: Gentoo
Description: Gentoo Base System release 2.2
Release: 2.2
Codename: n/a

$ dmesg | grep -i zfs
[ 8.785441] ZFS: Loaded module v0.6.5.3-r0-gentoo, ZFS pool version 5000, ZFS filesystem version 5

$ zpool status

pool: ZFS01
state: ONLINE
scan: scrub canceled on Sat Oct 24 15:25:37 2015
config:

    NAME                                          STATE     READ WRITE CKSUM
    ZFS01                                         ONLINE       0     0     0
      raidz2-0                                    ONLINE       0     0     0
        ata-WDC_WD30EFRX-68AX9N0_WD-XXXXXXXXXXX  ONLINE       0     0     0
        ata-WDC_WD30EFRX-68AX9N0_WD-XXXXXXXXXXX  ONLINE       0     0     0
        ata-WDC_WD30EZRX-00SPEB0_WD-XXXXXXXXXXX  ONLINE       0     0     0
        ata-WDC_WD30EZRX-00DC0B0_WD-XXXXXXXXXXX  ONLINE       0     0     0
        ata-WDC_WD30EZRX-00D8PB0_WD-XXXXXXXXXXX  ONLINE       0     0     0
        ata-WDC_WD30EZRX-00DC0B0_WD-XXXXXXXXXXX  ONLINE       0     0     0
        ata-WDC_WD30EZRX-00MMMB0_WD-XXXXXXXXXXX  ONLINE       0     0     0
        ata-WDC_WD30EZRX-00MMMB0_WD-XXXXXXXXXXX  ONLINE       0     0     0
        ata-WDC_WD30EFRX-68EUZN0_WD-XXXXXXXXXXX  ONLINE       0     0     0

errors: No known data errors
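
As a baseline, the local sequential read speed can be measured directly on the pool, taking Samba and the network out of the picture. A minimal sketch, assuming a hypothetical large file on the pool (it should be much larger than RAM, since dropping the page cache does not empty the ARC):

    $ sync && echo 3 | sudo tee /proc/sys/vm/drop_caches   # drop the Linux page cache
    $ dd if=/ZFS01/path/to/large.file of=/dev/null bs=1M   # sequential read; dd reports MB/s at the end

Comparing this number against the same dd run on the ext4 disk separates ZFS behaviour from network effects.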

@igsol commented Oct 24, 2015

I can confirm.
Before the upgrade to 0.6.5.3, downloading files via Samba saturated the 1 Gbps link completely (~112 MB/s).
Now I observe only ~80-90 MB/s.

$ uname -a
Linux nas 3.13.0-66-generic #108~precise1-Ubuntu SMP Thu Oct 8 10:07:36 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

$  lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 12.04.5 LTS
Release:    12.04
Codename:   precise

$ dmesg | grep -i zfs
[    1.235286] ZFS: Loaded module v0.6.5.3-1~precise, ZFS pool version 5000, ZFS filesystem version 5

Pool:
    NAME        STATE     READ WRITE CKSUM
    ZP          ONLINE       0     0     0
      raidz1-0  ONLINE       0     0     0
        sdf     ONLINE       0     0     0
        sde     ONLINE       0     0     0
        sdb     ONLINE       0     0     0
        sdc     ONLINE       0     0     0
        sdd     ONLINE       0     0     0

$ zpool list ZP
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
ZP    13.6T  9.06T  4.56T         -     7%    66%  1.00x  ONLINE  -

@igsol commented Oct 25, 2015

In addition to the above:
I have another machine, with Ubuntu 15.04, running one virtual machine on a zvol. Before the upgrade to ZoL 0.6.5.3 the VM was quite fast; now it crawls like a slug. What a disaster! 😞

I'd like to downgrade to ZoL 0.6.5.2, but it seems the Ubuntu repository on Launchpad doesn't keep previous versions of ZoL. Could anybody suggest a way to downgrade?
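
If the configured repositories still carry an older build, apt can install a specific version directly. A minimal sketch, assuming the package is named zfsutils (check "dpkg -l | grep zfs" for the names actually installed on your system):

    $ apt-cache madison zfsutils                 # list every version the configured repos still offer
    $ sudo apt-get install zfsutils=<version>    # substitute the exact 0.6.5.2 version string from the output

If the PPA really only publishes the latest build, building from the release tarballs (see the sketch further down) is the fallback.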

@stevenj commented Oct 26, 2015

Can you check whether you are getting the same errors in dmesg as I am? I also see the same slow read speeds.

@OverLocker (Author)

I can't see the same errors in dmesg, only the ZFS module load message; nothing about txg.

@igsol commented Oct 26, 2015

I've grepped the dmesg output as well. Nothing related to txg.

@dimez commented Dec 15, 2015

I can confirm this problem on Ubuntu 14.04.3 LTS (kernels 3.16.0-x and 4.2.0-19/4.2.0-21) with ZoL 0.6.5.3.

@behlendorf (Contributor)

It would be tremendously helpful if someone could determine where this performance issue was introduced. All previous releases are available from GitHub as tarballs, and they will work with the 14.04 LTS kernel.
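
For anyone willing to bisect, a minimal sketch of building one release from its tarballs (the 0.6.4.2 version and the URLs are illustrative; on 0.6.x a matching spl must be built and installed before zfs, and the releases pages of the zfsonlinux/spl and zfsonlinux/zfs GitHub repos list the real download links):

    $ wget https://github.com/zfsonlinux/spl/releases/download/spl-0.6.4.2/spl-0.6.4.2.tar.gz
    $ wget https://github.com/zfsonlinux/zfs/releases/download/zfs-0.6.4.2/zfs-0.6.4.2.tar.gz
    $ tar xf spl-0.6.4.2.tar.gz && cd spl-0.6.4.2
    $ ./configure && make -j$(nproc) && sudo make install
    $ cd .. && tar xf zfs-0.6.4.2.tar.gz && cd zfs-0.6.4.2
    $ ./configure && make -j$(nproc) && sudo make install

Repeating the same benchmark on each release between 0.6.4.2 and 0.6.5.3 should narrow down where the regression landed.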

@mailinglists35

I'm getting this too, on a zvol in a simple two-way mirror.

> It would be tremendously helpful if someone could determine where this performance issue was introduced

I was running 0.6.4-something from the official repo on Debian Jessie until the upgrade process broke (zfsonlinux/pkg-zfs#184, #3807, zfsonlinux/pkg-zfs#181), so I manually built the 0.6.5.3 debs as instructed on the main ZoL webpage.

If someone manages to prebuild all those debs mentioned by @behlendorf for Debian Jessie's linux-image-amd64 version 3.16+63 and uploads the binaries somewhere, I can test all of them, starting with 0.6.3.

@OverLocker (Author)

Speed is slow on 0.6.5.3 when copying from the network: about 50-60 MB/s. On earlier versions it was 100-112 MB/s.
I am hoping it will be fixed at some point...

@behlendorf added the Type: Performance label on Dec 28, 2015
@koplover commented Jan 7, 2016

I have also recently upgraded from 0.6.4.2-1 to 0.6.5.3-1 and have likewise experienced a marked slowdown in IO performance since the upgrade.

We run a virtualized environment (under Xen) with ZFS zvols as the volumes backing the individual VMs. Benchmarking shows read and write performance down by 50%, and a simple iostat on the zvol taking the brunt of the benchmark agrees with a 50% throughput drop for both reads and writes (of course, the writes could be held back by not being able to read quickly enough).

The zpool configuration is as follows:

NAME                                STATE     READ WRITE CKSUM
mypool                              ONLINE       0     0     0
  mirror-0                          ONLINE       0     0     0
    ata-MB4000GCWDC_Z1Z2YB3F-part4  ONLINE       0     0     0
    ata-MB4000GCWDC_Z1Z2XLZ3-part4  ONLINE       0     0     0
  mirror-1                          ONLINE       0     0     0
    ata-MB4000GCWDC_Z1Z2Y4DJ-part4  ONLINE       0     0     0
    ata-MB4000GCWDC_Z1Z2Y4XS-part4  ONLINE       0     0     0

and running under Ubuntu:

$ uname -a
Linux zdiskdd0000-0013-00-00 3.19.0-26-zdomu #27~14.04.1 SMP Thu Nov 26 05:23:55 GMT 2015 x86_64 x86_64 x86_64 GNU/Linux

As I say, the only change between the two systems being tested is the ZFS version, as above.
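
For reproducing the zvol numbers, a minimal fio sketch (the zvol path is hypothetical; point it at a scratch zvol, and note the read job is non-destructive while a write job would not be):

    $ sudo fio --name=seqread --filename=/dev/zvol/mypool/testvol \
          --rw=read --bs=1M --ioengine=libaio --direct=1 \
          --iodepth=16 --runtime=60 --time_based

Running the same job against the same zvol on 0.6.4.2 and 0.6.5.3 gives a like-for-like measurement of the regression.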
