Re-name
Shrink
Disk removal & cleanup
https://www.digitalocean.com/community/tutorials/how-to-create-raid-arrays-with-mdadm-on-ubuntu-16-04
Follow the how-to, except use parted (or gdisk), not fdisk, to GPT-format the disk(s).
cheat-sheet
email-alert
create (degraded) raid1, to transfer existing data :: ref_01
- create an empty (broken) mirror with the new disk
- copy the existing data onto it
- VERIFY USABLE DATA
- create mountpoint
- edit fstab to use new array
- VERIFY USABLE DATA
- format old drive
- add old drive to array
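The migration steps above can be sketched as one dry-run script. Device names, the source path, and the mountpoint are assumptions; substitute your own. RUN=echo prints each command instead of executing it; set RUN= (empty) to run for real.

```shell
RUN=echo             # dry run: print commands instead of executing them
NEW=/dev/sdc1        # partition on the new disk (hypothetical)
OLD=/dev/sdb1        # partition on the old disk (hypothetical)
MNT=/storage/share   # mountpoint used later in this document

# build the degraded mirror on the new disk, format, mount, copy
$RUN sudo mdadm --create --verbose /dev/md0 --level=raid1 --raid-devices=2 "$NEW" missing
$RUN sudo mkfs.ext4 /dev/md0
$RUN sudo mkdir -p "$MNT"
$RUN sudo mount /dev/md0 "$MNT"
$RUN sudo rsync -avhW --progress --no-compress /old/mount/ "$MNT"/   # /old/mount is a placeholder

# ...VERIFY USABLE DATA and update /etc/fstab, then fold in the old disk:
$RUN sudo wipefs -a "$OLD"
$RUN sudo mdadm /dev/md0 --add "$OLD"
```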
Tools
du -sh file_path
The du command estimates the space used by file_path.
The -sh options are (from man du):
-s, --summarize
display only a total for each argument
-h, --human-readable
print sizes in human readable format (e.g., 1K 234M 2G)
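For example, measuring a throwaway file of known size (paths are illustrative):

```shell
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/blob" bs=1M count=2 status=none   # 2 MiB scratch file
du -sh "$tmp/blob"                      # prints e.g. "2.0M  /tmp/.../blob"
kb=$(du -sk "$tmp/blob" | cut -f1)      # same figure in KiB, handy in scripts
rm -r "$tmp"
```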
- Install disk &...
- Identify the device names of your new hard drives
sudo fdisk -l
or lsblk
- Determine whether there is any existing raid configuration
mdadm -E /dev/sd[x,y]
- Create partition.
- The partition table needs to be GPT because desired volume > 2TB; use parted, not fdisk.
parted /dev/sd(x)
- At the (parted) prompt, create the partition table:
mklabel gpt
- Check the free space on the drive by typing
print free
- Create the partition
mkpart primary 1M 3001GB
This starts the partition at a 1M offset, giving 4096-byte alignment. This may or may not be necessary, but won't hurt if it's not.
p    # display the partition setup
q    # quit
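The same session can be scripted non-interactively with parted's --script mode; /dev/sdX is a placeholder, and RUN=echo keeps this a dry run:

```shell
RUN=echo             # set RUN= (empty) to execute for real
DISK=/dev/sdX        # placeholder: your new disk
$RUN sudo parted --script "$DISK" mklabel gpt
$RUN sudo parted --script "$DISK" mkpart primary 1MiB 100%
$RUN sudo parted --script "$DISK" print free
```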
- initialize array /dev/md(#) using /dev/sd(x#) and a missing disk
sudo mdadm --create --verbose /dev/md0 --level=raid1 --raid-devices=2 /dev/sd(x#) missing
- Create a file system on the array:
sudo mkfs.ext4 /dev/md0
- Create a mountpoint for the array, & mount the volume
sudo mkdir -p /storage/share
sudo mount /dev/md0 /storage/share
- & copy old stuff to new /dev/md0 :: rsync_01 :: rsync_if_again
rsync -avhW --progress --no-compress /src/ /dst/
- Next, manually save the RAID configuration to /etc/mdadm.conf:
mdadm --detail --scan --verbose >> /etc/mdadm.conf
- All good? Add entry to /etc/fstab: [dev/ID]
/dev/md0 /storage/share auto defaults 0 0
- All good? Add entry to /etc/fstab: [UUID]
- find device
cat /proc/mdstat
- find uuid of device
sudo blkid /dev/md0
- add entry to fstab
UUID=4a2b3c6d-0ada-4228-8043-7a2f40a13d4a /storage/share auto defaults 0 0
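The UUID entry can be generated rather than typed; the UUID below is the placeholder from the example above (on the real system, take it from blkid as shown in the comment):

```shell
MNT=/storage/share
# On the real system: UUID=$(sudo blkid -s UUID -o value /dev/md0)
UUID=4a2b3c6d-0ada-4228-8043-7a2f40a13d4a   # placeholder from the example above
LINE="UUID=$UUID $MNT auto defaults 0 0"
echo "$LINE"
# Append it for real with: echo "$LINE" | sudo tee -a /etc/fstab
```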
- find device
perform 1a through 2f on the old drive (3TB WD Black)
- let’s add the other drive...
mdadm /dev/md0 --add /dev/sd(y#)
- Save the raid configuration manually to /etc/mdadm.conf
mdadm --detail --scan --verbose >> /etc/mdadm.conf
- Now you can start using your array. Bear in mind, however, that before it is fully operational it will need to complete its initial sync.
- Track/Watch sync progress:
watch -n1 sudo mdadm --detail /dev/md0
cat /proc/mdstat
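To block a script until the sync finishes, poll /proc/mdstat; a sketch, which exits immediately on machines with no active resync/recovery (or no md arrays at all):

```shell
# Wait while any md device reports a resync or recovery in progress.
while grep -Eq 'resync|recover' /proc/mdstat 2>/dev/null; do
    sleep 60
done
synced=yes
echo "no sync in progress"
```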
- fail disk
sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
- stop array
sudo mdadm --stop /dev/md0
- wipe the filesystem
wipefs -a /dev/sd(x#)
- Shrink (resize) the filesystem to just larger than the space used by the content
resize2fs /dev/md# [size]
- since the data is intact, the array spends time rebuilding
watch -n1 cat /proc/mdstat
- Reshape the RAID volume to just larger than the (shrunken) filesystem
mdadm --grow /dev/md# --size [size+]
- Modify partition(s) as needed
- Grow the RAID volume to fill the re-worked/new partition
mdadm --grow /dev/md# -z max
- Resize the filesystem to occupy the entire RAID volume (restores the filesystem to the full RAID volume)
resize2fs /dev/md#
shrink filesystem: resize2fs to-size
shrink (reshape) raid vol: mdadm --grow
shrink partition by delete/recreation: gdisk
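Putting the shrink steps together; all sizes and device names below are made-up examples, and RUN=echo keeps it a dry run:

```shell
RUN=echo                 # set RUN= (empty) to execute for real
MD=/dev/md0              # hypothetical array
FS_SIZE=400G             # new fs size: comfortably larger than the used space
MD_KIB=440401920         # new component size in KiB (420G): larger than FS_SIZE

$RUN sudo umount "$MD"
$RUN sudo e2fsck -f "$MD"                     # resize2fs requires a checked fs
$RUN sudo resize2fs "$MD" "$FS_SIZE"
$RUN sudo mdadm --grow "$MD" --size="$MD_KIB"
# ...shrink/recreate the partition(s) with gdisk, then reclaim everything:
$RUN sudo mdadm --grow "$MD" -z max
$RUN sudo resize2fs "$MD"
```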
Helps to communicate array purpose/function/intent.
Links the array to /dev/md/newname, and applies newname to its listing in YaST and elsewhere.
- unmount, then stop the array
umount /dev/md#
mdadm --stop /dev/md#
or: mdadm --stop --scan
- Define & use "newname"
mdadm --assemble /dev/md/newname --name=newname --update=name /dev/sd[xy]#
edit /etc/mdadm.conf; change oldname to newname
- Make persistent across reboots
dracut -f
This is somewhat faster, but forfeits grub-rollbacks (thumbs-down).
System (root) on a raid-0 volume seems to have exposed a limitation of the Dell OptiPlex 3010 UEFI firmware: the corresponding EFI and boot partitions are expected to be on the same physical disk.
If root is on soft-raid0, then /boot may need to be on a separate (non-raid) partition, because of firmware limitations similar to the one above.
EFI and /boot cannot share a partition because EFI needs a FAT filesystem, while /boot needs a POSIX-compliant filesystem (which FAT is not).
Config used:
partition | sdX | sdY | raid-0 |
---|---|---|---|
1 | 260M [FAT] /boot/efi | 260M | - |
3 | 500M [XFS] /boot | 500M | - |
4 | 20G -> | 20G -> | [Btrfs] / |
5 | remainder -> | remainder -> | [XFS] /home |
2 | 2G -> | 2G -> | [swap] |
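The layout above could be reproduced per disk with sgdisk (the scriptable companion to gdisk); a dry-run sketch, with GPT type codes EF00 = EFI system, 8300 = Linux filesystem, FD00 = Linux RAID member:

```shell
RUN=echo            # set RUN= (empty) to execute for real
DISK=/dev/sdX       # placeholder: repeat for each of the two disks
$RUN sudo sgdisk --zap-all "$DISK"
$RUN sudo sgdisk -n 1:0:+260M -t 1:EF00 "$DISK"   # /boot/efi  [FAT]
$RUN sudo sgdisk -n 2:0:+2G   -t 2:FD00 "$DISK"   # swap member
$RUN sudo sgdisk -n 3:0:+500M -t 3:8300 "$DISK"   # /boot      [XFS]
$RUN sudo sgdisk -n 4:0:+20G  -t 4:FD00 "$DISK"   # / member   [Btrfs on raid-0]
$RUN sudo sgdisk -n 5:0:0     -t 5:FD00 "$DISK"   # /home member, remainder
```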