
RAID & SSD performance #142

Closed

robnagler opened this issue Mar 25, 2018 · 1 comment
robnagler commented Mar 25, 2018

Documenting performance of the PERC H310 (non-RAID) vs the motherboard S110 in AHCI and RAID modes. The summary is that SSD performance varies wildly (2x). Perhaps it's about wear, because most of these disks had been used 3K+ hours; however, there doesn't seem to be a correlation.

  • Newer drives are faster, presumably because they have newer technology
  • Smaller drives seem to be faster
  • LUKS and mdadm have almost no effect, considering the work they do

The SMART stats have been saved in full on f1's db.
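
The stats were presumably collected with smartctl; a minimal sketch (the device glob and output path are assumptions, not the actual tooling):

# Hypothetical: dump full SMART output for each SATA disk
for d in /dev/sd?; do
    smartctl -a "$d" > "/tmp/smart-$(hostname)-$(basename "$d").txt"
done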

Benchmark code:

lvcreate -y --wipesignatures y -L 20G -n test centos
mkfs.xfs /dev/mapper/centos-test
mount /dev/mapper/centos-test /mnt
cd /mnt
# sync on close: one fdatasync at the end (the whole 10G fits in 64G of RAM)
i=$(date +%s); dd bs=1M count=10240 if=/dev/zero of=test conv=fdatasync; echo $(( $(date +%s) - $i ))
rm -f test
sync
# sync on every write (each 1M block)
i=$(date +%s); dd bs=1M count=10240 if=/dev/zero of=test oflag=dsync; echo $(( $(date +%s) - $i ))
cd
umount /mnt
lvremove -y /dev/mapper/centos-test
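
For reference, the MBps values in the list below track the 10240 MB written divided by the (rounded) elapsed seconds; a quick sanity check:

# e.g. the H310 fdatasync run: 10240 MB over ~26s
echo $(( 10240 / 26 ))   # ~394, in line with the 384MBps reported
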
  • H310 E2650 64g 850 256g: 384MBps (26s), 156MBps (64s)
  • S110 E2660 128g 860 256g: 222MBps (45s), 131MBps (76s)
  • S110 E2660 64g 850 256g: 222MBps (45s), 129MBps (77s)
  • f1 S110 E2660 64g 850 2TB luks mdadm: 232MBps (43s), 85MBps (117s)
  • AHCI E2660 64g 850 256g: 217MBps (46s), 75MBps (133s)
  • f8 E2660 64g 850 88s
  • f13 2660 128g 1TB S110 no-mdadm no-luks: dsync 125s 129s
  • f13 2660 128g luks 1TB 119s
  • f14 H310 megaraid sas 2008 850 256g 128g 2650 dsync 63s 67s
  • f2 CY1 E2660 H310 64g 1TB luks no-mdadm: fdatasync 91s, 94s; dsync: 113s, 124s
  • f11 E2650 AHCI 192g 860 256g no-mdadm: fdatasync 44s; dsync 77s
  • f4 E2660 S110 64g 2x1TB no-luks mdadm: fdatasync 44s; dsync: 101s, 102s
  • f2-CY1 E2660 H310 MegaRAID SAS 2008 64g 2x1TB luks mdadm: fdatasync 98s,89s,63s; dsync: 137s,131s,126s,124s,125s
  • f3-QV1 E2660 H310 MegaRAID SAS 2008 64g 2x1TB luks mdadm: fdatasync 40s, 33s; dsync: 84s,89s,86s
  • f4 E2660 AHCI 64g 2x1TB luks mdadm: fdatasync 43s; dsync: 111s, 112s
  • f7 E2660 AHCI 64g 2x1TB luks mdadm: dsync: 166s, 188s
  • f17 1x500g WD Scorpio Black (WD5000BPKT-0) AHCI E2660 64g luks: fdatasync: 133s; dsync: 386s,386s
  • f16 1x500g WD Scorpio Black (WD5000BPKT-7) AHCI E2660 64g luks: fdatasync: 99s; dsync: 388s,386s
  • f4 E2660 H310 64g 2x1TB luks mdadm: dsync: 87s,85s
  • f7 E2660 H310 64g 2x1TB luks mdadm: dsync: 146s,147s
  • f3-1Y1 E2660 H310 MegaRAID SAS 2008 64g 2x1TB luks mdadm: fdatasync: 35s,35s; dsync: 86s

Swapped slots and disks f2 & f3:

  • f2-QV1 f2-sda&b: dsync: 130s, 135s
  • f3-CY1 f3-sda&b: dsync: 89s, 90s

Swapped disks f2 & f3:

  • f3-CY1 f2-sda&b: dsync: 136s

Removed sdb from f2:

  • f3-CY1 f2-sda: dsync: 121s

Removed sda and put sdb in the sda slot on f2:

  • f3-CY1 f2-sdb: dsync: 108s, 106s, 108s

Removed the H310 from f2, with the 136s disk in slot 0:

  • f2-CY1 AHCI f2-sda: dsync: 142s, 148s, 145s

Some configuration:

  • mdadm: raid1
  • luks: aes xts-plain64 sha256
  • Intel e5-2650 v2 & e5-2660 v2
  • Samsung EVO 850/860, 64g/128g; Write through, no read ahead
  • Some 1TB 850s bought two years ago
  • f2 & f3: identical config (compared in bios and hdparm, lshw)
  • f4 same as f2 & f3, but AHCI or S110
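
A sketch of how the mdadm raid1 + LUKS stack above would be assembled (device names and partition layout are assumptions, not copied from these hosts):

# Hypothetical reconstruction; /dev/sda2 and /dev/sdb2 are assumed partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
cryptsetup luksFormat --cipher aes-xts-plain64 --hash sha256 /dev/md0
cryptsetup luksOpen /dev/md0 crypt0
pvcreate /dev/mapper/crypt0
vgcreate centos /dev/mapper/crypt0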

SMART info for drives (Writes=Total_LBAs_Written, Hours=Power_On_Hours):

Mach-Dev Writes Hours
f11-sda   119E6   306
f2-sda   2857E6  3620
f2-sdb   2709E6  3616
f3-sda   4237E6  3658
f3-sdb    263E6  3658
f4-sda    533E6  3659
f4-sdb   6459E6  3659
f7-sda   1615E6 36350
f7-sdb   4610E6 35015
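
Assuming the usual 512-byte LBA unit on these Samsungs, the Writes column converts to bytes as LBAs * 512:

# e.g. f2-sda: 2857E6 LBAs * 512 B ≈ 1.46 TB written over its 3620 hours
echo "2857 * 10^6 * 512 / 10^12" | bc -l   # ≈ 1.46 (TB)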

vmstat

Bit of a red herring. vmstat 1 while running the dsync test on f2:

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 58743840   3380 6405120    0    0     0 74830 3545 6096  0  1 99  0  0
 0  0      0 58668548   3380 6480976    0    0     0 73805 3302 6540  0  1 99  0  0
 0  0      0 58591740   3380 6558976    0    0     0 75860 3718 5751  0  1 99  1  0
 0  1      0 58511940   3380 6637740    0    0     0 76879 3618 6019  0  1 99  1  0
 1  0      0 58433568   3380 6716860    0    0     0 76881 3540 6884  0  1 99  0  0
 0  0      0 58349632   3380 6800896    0    0     0 82005 3768 7112  0  1 99  1  0
 0  1      0 58260360   3380 6888652    0    0     0 85079 3977 6599  0  1 99  1  0
 0  1      0 58182408   3380 6966532    0    0     0 75860 3582 5736  0  1 99  1  0
 0  0      0 58101428   3380 7048000    0    0     0 79956 3783 6722  0  1 99  1  0
 0  0      0 58019780   3380 7130376    0    0     0 79955 3867 6587  0  1 99  1  0
 0  0      0 57939324   3380 7209944    0    0     0 77905 3813 6205  0  1 99  1  0
 0  1      0 57852468   3380 7297048    0    0     0 84054 4001 7072  0  1 99  1  0
 0  0      0 57774636   3380 7374944    0    0     0 75861 3556 6544  0  1 99  1  0

on f3:

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 62882420   3356 2266916    0    0     0 117881 5315 10353  0  1 98  1  0
 0  1      0 62759968   3356 2390028    0    0     0 119934 5155 10148  0  1 98  1  0
 0  0      0 62638844   3356 2510308    0    0     0 116862 5148 9382  0  1 98  1  0
 0  1      0 62514976   3356 2634508    0    0     0 120959 5285 10501  0  1 98  1  0
 1  0      0 62399292   3356 2751804    0    0     0 113786 4940 9064  0  1 98  1  0
 0  0      0 62278528   3356 2871444    0    0     0 116855 5264 9215  0  1 98  1  0
 0  1      0 62160376   3356 2989464    0    0     0 114809 5124 9057  0  1 98  1  0
 0  0      0 62040028   3356 3110248    0    0     0 117887 5236 10282  0  1 98  1  0
 0  1      0 61914328   3356 3235740    0    0     0 121984 5156 10772  0  1 98  1  0
 0  1      0 61798476   3356 3351284    0    0     0 112760 5134 9541  0  1 98  1  0
 0  0      0 61674668   3356 3475632    0    0     0 120982 5354 9564  0  1 98  1  0
 1  1      0 61553560   3356 3595340    0    0     0 116859 5027 10058  0  1 98  1  0
 0  1      0 61435180   3356 3715728    0    0     0 116860 5252 9298  0  1 98  1  0
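
Assuming vmstat's default 1 KiB unit for bo, f2 is writing roughly 75-85 MB/s and f3 roughly 113-122 MB/s, which lines up with their dsync times (10240/125 ≈ 82 MBps vs 10240/86 ≈ 119 MBps).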

References on performance of hardware RAID vs mdadm:

@robnagler robnagler self-assigned this Mar 25, 2018
robnagler commented

We are stable.
