How to set up RAID
fdisk the drives that you would like to have in your array. Let's assume we're using /dev/hde and /dev/hdg. I create a primary partition on each disk and set its type to "fd" (Linux Raid Autodetect).
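Roughly, the interactive fdisk session for each disk looks like this (keystrokes on the left, what they do on the right):

# fdisk /dev/hde    (repeat for /dev/hdg)
   n         create a new partition
   p         primary
   1         partition number 1
   <Enter>   accept the default first cylinder
   <Enter>   accept the default last cylinder (use the whole disk)
   t         change the partition type
   fd        Linux raid autodetect
   w         write the table and exit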
Check whether you already have any md devices (we use the next available device in the next step):

# ls -l /dev/md*
drwxr-xr-x 2 root root 80 2008-02-16 21:07 /dev/md
lrwxrwxrwx 1 root root  4 2008-02-16 21:07 /dev/md0 -> md/0

I already have a /dev/md0 device, so I'll use /dev/md1 in the next step.
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hd[eg]1
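Once created, the two halves of the mirror begin an initial sync in the background; you can watch its progress in /proc/mdstat:

# cat /proc/mdstat
# watch cat /proc/mdstat    # refreshes every 2 seconds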
I also have /dev/hdf, a 250GB drive which currently has data on it. I want to raid this disk too, so I'll create another RAID device using a spare 250GB drive (/dev/hdh). I'll mark the second disk as "missing", copy the data onto the new 250GB RAID device, and then add /dev/hdf to the RAID device (thus replacing the "missing" disk).

# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hdh1 missing
Next I add the new RAID device to LVM as a physical volume, extend the volume group, and grow the logical volume (lvextend -l +N grows the volume by N extents):

# pvcreate /dev/md2
# vgextend raidvg0 /dev/md2
# lvextend -l +59651 /dev/raidvg0/wspace_media
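The extent count passed to lvextend comes from how many free extents the volume group gained; vgdisplay reports this as "Free PE" (run it after the vgextend above, before extending):

# vgdisplay raidvg0 | grep Free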
Then grow the filesystem to fill the new space:

# resize2fs /dev/raidvg0/wspace_media
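Now the data can be copied off /dev/hdf. A minimal sketch, assuming the data sits on /dev/hdf1 and the logical volume is mounted at /mnt/media (both the partition and the mount points are assumptions, not from the original setup):

# mkdir /mnt/hdf
# mount /dev/hdf1 /mnt/hdf
# rsync -a /mnt/hdf/ /mnt/media/    # -a preserves permissions, ownership, and timestamps
# umount /mnt/hdf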
Once the data is safely copied off, I repartition /dev/hdf with type fd (Linux Raid Autodetect), just like the other disks, and add it to the /dev/md2 device:

# mdadm --add /dev/md2 /dev/hdf1
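The array then rebuilds onto the newly added disk. /proc/mdstat shows the resync progress, and once it finishes mdadm --detail should list both members as "active sync":

# cat /proc/mdstat
# mdadm --detail /dev/md2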
Here's a RAID 5 example from another machine, using four 750GB drives. Partition each drive with type fd:

root@artemis:/home/john# fdisk /dev/sda    (repeat for sdb, sdc, sdd)

The number of cylinders for this disk is set to 91201.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       91201   732572001   fd  Linux raid autodetect

Command (m for help):
root@artemis:/home/john# mdadm --create /dev/md4 --level=5 --raid-devices=4 /dev/sd[abc]1 missing
root@artemis:/home/john# mdadm --detail /dev/md4
/dev/md4:
        Version : 00.90.03
  Creation Time : Sun Jul 27 19:13:23 2008
     Raid Level : raid5
     Array Size : 2197715712 (2095.91 GiB 2250.46 GB)
  Used Dev Size : 732571904 (698.64 GiB 750.15 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 4
    Persistence : Superblock is persistent

    Update Time : Sun Jul 27 19:13:23 2008
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 3190d1da:092e295e:959943bc:2f00e666
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       0        0        3      removed
Record the array in mdadm.conf so it is assembled automatically at boot, then create the filesystem:

root@artemis:/home/john# mdadm --detail --scan >> /etc/mdadm.conf
root@artemis:/home/john# mkfs.ext3 /dev/md4
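To put the array into service, mount it and add an fstab entry so it comes back after a reboot. A sketch, assuming /storage as the mount point (the mount point and fstab options are assumptions):

root@artemis:/home/john# mkdir /storage
root@artemis:/home/john# mount /dev/md4 /storage

# /etc/fstab
/dev/md4    /storage    ext3    defaults    0    2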
From Ferg's Gaff: Growing a RAID 5 Array:
mdadm --add /dev/md1 /dev/sdf1
mdadm --grow /dev/md1 --raid-devices=4
Reshaping the array then took about 3 hours.
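The reshape runs in the background; /proc/mdstat shows a progress bar and an estimated time to completion while it works:

watch cat /proc/mdstat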
The filesystem then needs to be expanded to fill up the new space:
fsck.ext3 /dev/md1
resize2fs /dev/md1
To fail a disk, remove it from an array, or stop an array entirely:

# mdadm --manage /dev/mdfoo --fail /dev/sdfoo
# mdadm --manage /dev/mdfoo --remove /dev/sdfoo
# mdadm --stop /dev/mdfoo
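If a removed disk will be reused elsewhere, it's worth wiping its RAID superblock so the kernel doesn't later auto-detect it as an array member (the device name is a placeholder, as above):

# mdadm --zero-superblock /dev/sdfoo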
You can do a lot of performance tuning by adjusting the array's stripe (chunk) size and the filesystem's block size.
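For example, ext3 can be aligned to the RAID geometry at mkfs time. For the md4 array above (64K chunk, 4-device RAID 5, so 3 data disks), 4K blocks give stride = 64K / 4K = 16 and stripe-width = 16 * 3 = 48. A sketch only; it reformats the device, and stripe-width needs a reasonably recent e2fsprogs:

# mkfs.ext3 -b 4096 -E stride=16,stripe-width=48 /dev/md4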
To check the current array and filesystem parameters:

# mdadm --detail /dev/md4
# dumpe2fs -h /dev/device#
http://www.pcguide.com/ref/hdd/perf/raid/concepts/perfStripe-c.html
http://www.pcguide.com/ref/hdd/perf/raid/levels/single_Level0.htm