DellRAID
Our RAID disks are really configured as Just a Bunch Of Disks (JBOD), since it is easier to manage disks from the file system: ZFS and Linux RAID want to talk to the disks directly, and it's easier to move such disks around.
We've encapsulated all the m620 setup code in idrac-m620-setup.sh, which is a sequence of functions you can selectively execute, including idrac.
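The idrac helper itself isn't reproduced here; a minimal sketch, assuming it just maps a node index to an iDRAC address and forwards everything else to racadm (the addressing scheme and credentials below are placeholders, not our real ones):

```shell
# Hypothetical wrapper: node index -> iDRAC IP, remaining args passed to racadm.
# The 10.0.0.10x addressing and root/calvin credentials are assumptions.
idrac() {
  local node=$1; shift
  racadm -r "10.0.0.$((100 + node))" -u root -p calvin "$@"
}
```

With such a wrapper, `idrac 1 getsysinfo` becomes `racadm -r 10.0.0.101 -u root -p calvin getsysinfo`.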
RAID And Storage Configuration using RACADM Commands in iDRAC7 is a short manual documenting the RAID commands. RACADM Command Line Reference Guide for iDRAC7 1.50.50 and CMC 4.5 (p106) is the complete reference.
We first have to reset the RAID config:
$ idrac 1 storage resetconfig:RAID.Integrated.1-1
$ idrac 1 jobqueue create RAID.Integrated.1-1 -r pwrcycle -s TIME_NOW -e TIME_NA
After this completes, we follow the ThornLabs instructions:
$ for p in $(idrac 1 raid get pdisks); do
idrac 1 raid createvd:RAID.Integrated.1-1 -rl r0 -wp wt -rp nra -ss 64k "-pdkey:$p"
done
$ idrac 1 job RAID.Integrated.1-1
This sets up all the physical disks (pdisks) as independent virtual disks with RAID0 (r0), write through (wt), no read ahead (nra), and a 64k stripe size (which seems to be what people recommend for SSDs).
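With a representative pdisk key (the key below is a made-up example of Dell's naming, not actual output from our hosts), one iteration of the loop expands to:

```shell
# Example pdisk key; real keys come from `idrac 1 raid get pdisks`.
p="Disk.Bay.0:Enclosure.Internal.0-1:RAID.Integrated.1-1"
# echo the command instead of running it, to show the expansion:
echo idrac 1 raid createvd:RAID.Integrated.1-1 -rl r0 -wp wt -rp nra -ss 64k "-pdkey:$p"
```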
Wait a bit and verify that the vdisks have been created:
$ idrac 1 raid get vdisks -o
Disk.Virtual.0:RAID.Integrated.1-1
Status = Ok
DeviceDescription = Virtual Disk 0 on Integrated RAID Controller 1
Name = Virtual Disk 0
RollupStatus = Ok
State = Online
OperationalState = Not applicable
Layout = Raid-0
Size = 931.00 GB
SpanDepth = 1
AvailableProtocols = SATA
MediaType = SSD
ReadPolicy = No Read Ahead
WritePolicy = Write Through
StripeSize = 64K
DiskCachePolicy = Enabled
BadBlocksFound = NO
Secured = NO
RemainingRedundancy = 0
EnhancedCache = Not Applicable
T10PIStatus = Disabled
BlockSizeInBytes = 512
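To spot-check a single property across output like the above, simple awk filtering on the `key = value` lines works; the sample line here is copied from the listing:

```shell
# Extract the value after "= " for a given property name.
printf '%s\n' 'Layout = Raid-0' |
  awk -F'= ' '/Layout/ { print $2 }'
```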
Testing performance of the PERC H310 (non-RAID) vs the motherboard S110:
lvcreate -y --wipesignatures y -L 20G -n test centos
mkfs.xfs /dev/mapper/centos-test
mount /dev/mapper/centos-test /mnt
cd /mnt
# sync on close (every 10G with enough RAM (64G))
i=$(date +%s); dd bs=1M count=10240 if=/dev/zero of=test conv=fdatasync; echo $(( $(date +%s) - $i ))
rm -f test
sync
# sync on write (every 1M)
i=$(date +%s); dd bs=1M count=10240 if=/dev/zero of=test oflag=dsync; echo $(( $(date +%s) - $i ))
cd
umount /mnt
lvremove /dev/mapper/centos-test
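The MBps figures below presumably come from dd's own transfer-rate line, which times with fractional seconds, so they don't exactly match 10240 MB divided by the echoed whole seconds (e.g. 10240/26 ≈ 393, vs the reported 384). A trivial helper for the whole-second conversion:

```shell
# Convert a transfer size in MB and elapsed whole seconds to MBps.
# Integer shell arithmetic, so the result is truncated.
mbps() { echo "$(( $1 / $2 ))"; }

mbps 10240 26
```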
- H310 E2650 64g 850 256g: 384MBps (26s), 156MBps (64s)
- S110 E2660 128g 860 256g: 222MBps (45s), 131MBps (76s)
- S110 E2660 64g 850 256g: 222MBps (45s), 129MBps (77s)
- f1 S110 E2660 64g 850 2TB luks mdadm: 232MBps (43s), 85MBps (117s)
- AHCI E2660 64g 850 256g: 217MBps (46s), 75MBps (133s)
- f8 E2660 64g 850 88s
- f13 2660 128g 1TB S110 no-mdadm no-luks: dsync 125s 129s
- f13 2660 128g luks 1TB 119s
- f14 H310 megaraid sas 2008 850 256g 128g 2650 dsync 63s 67s
- f2 E2660 H310 64g 1TB (newer) luks no-mdadm: fdatasync 91s, 94s; dsync: 113s, 124s
- f11 E2650 AHCI 192g 860 256g no-mdadm: fdatasync 44s; dsync 77s
- f4 E2660 S110 64g 2x1TB (newer) no-luks mdadm: fdatasync 44s; dsync: 101s, 102s
- f2 E2660 H310 MegaRAID SAS 2008 64g 2x1TB (newer) luks mdadm: fdatasync 98s,89s,63s; dsync: 137s,131s,126s
- f3 E2660 H310 MegaRAID SAS 2008 64g 2x1TB (newer) luks mdadm: fdatasync 40s, 33s; dsync: 84s,89s
- f4 E2660 AHCI 64g 2x1TB (newer) luks mdadm: fdatasync 43s; dsync: 111s, 112s
Some configuration:
- mdadm: raid1
- luks: aes xts-plain64 sha256
- Intel e5-2650 v2 & e5-2660 v2
- Samsung EVO 850/860, 64g/128g; Write through, no read ahead
- Some 1TB 850s bought two years ago
- f2 & f3: identical config (compared in bios and hdparm, lshw)
- f4 same as f2 & f3, but AHCI or S110
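A sketch of the raid1 + luks layering the bullets above describe, assuming two partitions (the device names and the md/LUKS names are placeholders, not taken from our hosts):

```shell
# Hypothetical commands for the mdadm raid1 + luks (aes xts-plain64, sha256) stack.
# /dev/sda2, /dev/sdb2, /dev/md0 and the "crypt" mapping name are placeholders.
setup_raid1_luks() {
  local a=$1 b=$2
  mdadm --create /dev/md0 --level=1 --raid-devices=2 "$a" "$b"
  cryptsetup luksFormat --cipher aes-xts-plain64 --hash sha256 /dev/md0
  cryptsetup open /dev/md0 crypt
}
```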
vmstat 1 while running the dsync test on f2:
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 58743840 3380 6405120 0 0 0 74830 3545 6096 0 1 99 0 0
0 0 0 58668548 3380 6480976 0 0 0 73805 3302 6540 0 1 99 0 0
0 0 0 58591740 3380 6558976 0 0 0 75860 3718 5751 0 1 99 1 0
0 1 0 58511940 3380 6637740 0 0 0 76879 3618 6019 0 1 99 1 0
1 0 0 58433568 3380 6716860 0 0 0 76881 3540 6884 0 1 99 0 0
0 0 0 58349632 3380 6800896 0 0 0 82005 3768 7112 0 1 99 1 0
0 1 0 58260360 3380 6888652 0 0 0 85079 3977 6599 0 1 99 1 0
0 1 0 58182408 3380 6966532 0 0 0 75860 3582 5736 0 1 99 1 0
0 0 0 58101428 3380 7048000 0 0 0 79956 3783 6722 0 1 99 1 0
0 0 0 58019780 3380 7130376 0 0 0 79955 3867 6587 0 1 99 1 0
0 0 0 57939324 3380 7209944 0 0 0 77905 3813 6205 0 1 99 1 0
0 1 0 57852468 3380 7297048 0 0 0 84054 4001 7072 0 1 99 1 0
0 0 0 57774636 3380 7374944 0 0 0 75861 3556 6544 0 1 99 1 0
on f3:
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
0 0 0 62882420 3356 2266916 0 0 0 117881 5315 10353 0 1 98 1 0
0 1 0 62759968 3356 2390028 0 0 0 119934 5155 10148 0 1 98 1 0
0 0 0 62638844 3356 2510308 0 0 0 116862 5148 9382 0 1 98 1 0
0 1 0 62514976 3356 2634508 0 0 0 120959 5285 10501 0 1 98 1 0
1 0 0 62399292 3356 2751804 0 0 0 113786 4940 9064 0 1 98 1 0
0 0 0 62278528 3356 2871444 0 0 0 116855 5264 9215 0 1 98 1 0
0 1 0 62160376 3356 2989464 0 0 0 114809 5124 9057 0 1 98 1 0
0 0 0 62040028 3356 3110248 0 0 0 117887 5236 10282 0 1 98 1 0
0 1 0 61914328 3356 3235740 0 0 0 121984 5156 10772 0 1 98 1 0
0 1 0 61798476 3356 3351284 0 0 0 112760 5134 9541 0 1 98 1 0
0 0 0 61674668 3356 3475632 0 0 0 120982 5354 9564 0 1 98 1 0
1 1 0 61553560 3356 3595340 0 0 0 116859 5027 10058 0 1 98 1 0
0 1 0 61435180 3356 3715728 0 0 0 116860 5252 9298 0 1 98 1 0
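Averaging the bo (blocks out, KB/s) column of the per-second samples above makes the f2 vs f3 gap concrete: f2 hovers around 75-85 MB/s while f3 sustains roughly 113-122 MB/s. The lines below are copied from the first f2 samples:

```shell
# Average field 10 (bo) of vmstat output fed on stdin.
avg_bo() { awk '{ s += $10; n++ } END { printf "%.0f\n", s / n }'; }

avg_bo <<'EOF'
0 0 0 58743840 3380 6405120 0 0 0 74830 3545 6096 0 1 99 0 0
0 0 0 58668548 3380 6480976 0 0 0 73805 3302 6540 0 1 99 0 0
0 0 0 58591740 3380 6558976 0 0 0 75860 3718 5751 0 1 99 1 0
EOF
```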
References on performance of hardware RAID vs mdadm:
- https://serverfault.com/a/685328
- http://en.community.dell.com/support-forums/servers/f/906/t/19475037
- https://delightlylinux.wordpress.com/2016/05/24/motherboard-raid-or-linux-mdadm-which-is-faster/
- https://linuxaria.com/pills/how-to-properly-use-dd-on-linux-to-benchmark-the-write-speed-of-your-disk
- https://romanrm.net/dd-benchmark
- https://www.dell.com/community/PowerEdge-HDD-SCSI-RAID/PE-T320-and-S110-RAID-10-Server-2012-Abysmal-Disk-Subsystem/td-p/4220489