====== RAID info ======
How to set up RAID.
===== RAID 1 (mirrored) =====
* [[http://linux-raid.osdl.org/index.php/Main_Page|mdadm wiki]]
* [[http://www.slacksite.com/slackware/raid.html|Some older slackware specific info]]
* [[http://pumpump.blogspot.com/2007/07/installing-slackware-12-on-linux.html|slackware 12 raid setup]]
* [[http://tldp.org/HOWTO/Software-RAID-HOWTO-5.html|tldp raid info]]
* [[wp>RAID]]
==== Slackware RAID 1 and LVM ====
- First, you must ''fdisk'' the drives that you would like to have in your array. Let's assume we're using ''/dev/hde'' and ''/dev/hdg'': create a primary partition on each disk and set its type to ''fd'' (Linux Raid Autodetect).
- Check to see if you have any ''md'' devices already (we use the next available device in the next step):<code>
# ls -l /dev/md*
drwxr-xr-x 2 root root 80 2008-02-16 21:07 /dev/md
lrwxrwxrwx 1 root root  4 2008-02-16 21:07 /dev/md0 -> md/0
</code>I already have a ''/dev/md0'' device, so I'll use ''/dev/md1'' in the next step.
- Now create the RAID device:\\ # mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hd[eg]1
- I also have a disk, ''/dev/hdf'', a 250GB drive which currently has data on it. I want to mirror this disk as well, so I'll create another RAID device using a spare 250GB drive (''/dev/hdh''): mark the second disk as ''missing'', copy the data onto the new RAID device, and then add ''/dev/hdf'' to the RAID device (thus replacing the ''missing'' disk).
- Create the degraded mirror with the second member ''missing'':\\ # mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/hdh1 missing
- Add the new RAID device to LVM and extend the logical volume:<code>
# pvcreate /dev/md2
# vgextend raidvg0 /dev/md2
# lvextend -l +59651 /dev/raidvg0/wspace_media
</code>(the ''-l +N'' form extends the logical volume by ''N'' extents)
- Now grow the filesystem to fill the extended logical volume:\\ # resize2fs /dev/raidvg0/wspace_media
- Now add the second drive to the ''/dev/md2'' device (a sketch for checking the rebuild follows this list):
- set the partition type to ''fd'' (Linux Raid Autodetect)
- add the new device:\\ # mdadm --add /dev/md2 /dev/hdf1
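Once ''/dev/hdf1'' has been added, the mirror rebuilds itself in the background. A minimal way to keep an eye on it, using the ''/dev/md2'' array from above:
<code>
# Kernel's view of all md arrays, with a progress bar during resync
cat /proc/mdstat

# Per-array detail; State should go from "clean, degraded, recovering" to "clean"
mdadm --detail /dev/md2
</code>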
===== RAID 5 =====
==== Slackware RAID 5 ====
- Set up the disks (I have one disk missing from the array; repeat for ''/dev/sdb'', ''/dev/sdc'', and ''/dev/sdd''):<code>
root@artemis:/home/john# fdisk /dev/sda

The number of cylinders for this disk is set to 91201.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
   (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 750.1 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       91201   732572001   fd  Linux raid autodetect

Command (m for help):
</code>
- Create the RAID 5 array with the fourth slot ''missing'' (check [[#filesystem_raid_tweaking|Filesystem/RAID Tweaking]] for fine-tuning; adding the missing disk later is sketched after this list):<code>
root@artemis:/home/john# mdadm --create /dev/md4 --level=5 --raid-devices=4 /dev/sd[abc]1 missing
root@artemis:/home/john# mdadm --detail /dev/md4
/dev/md4:
        Version : 00.90.03
  Creation Time : Sun Jul 27 19:13:23 2008
     Raid Level : raid5
     Array Size : 2197715712 (2095.91 GiB 2250.46 GB)
  Used Dev Size : 732571904 (698.64 GiB 750.15 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 4
    Persistence : Superblock is persistent

    Update Time : Sun Jul 27 19:13:23 2008
          State : clean, degraded
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 3190d1da:092e295e:959943bc:2f00e666
         Events : 0.1

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       0        0        3      removed
</code>
- Save the RAID info to the config file (optional, but it lets ''mdadm --assemble --scan'' find the array later):\\ root@artemis:/home/john# mdadm --detail --scan >> /etc/mdadm.conf
- Create the filesystem:\\ root@artemis:/home/john# mkfs.ext3 /dev/md4
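The array above was created with one slot ''missing'', so it runs degraded until a fourth disk is added. Presumably that looks like the RAID 1 case earlier: partition the new disk as type ''fd'', then (assuming it shows up as ''/dev/sdd''):
<code>
root@artemis:/home/john# mdadm --add /dev/md4 /dev/sdd1
root@artemis:/home/john# cat /proc/mdstat      # watch the rebuild onto the new disk
</code>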
===== Growing RAID 1/5/6/10 Arrays =====
From [[http://scotgate.org/?p=107|Ferg's Gaff: Growing a RAID 5 Array]]:
<code>
# mdadm --add /dev/md1 /dev/sdf1
# mdadm --grow /dev/md1 --raid-devices=4
</code>
This then took about 3 hours to reshape the array. The filesystem then needs to be expanded to fill the new space (''resize2fs'' insists on a full forced check first):
<code>
# fsck.ext3 -f /dev/md1
# resize2fs /dev/md1
</code>
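The reshape runs in the background while the array stays online, so you can watch its progress as it goes:
<code>
# Re-runs every two seconds; the reshape shows up as a progress bar
watch cat /proc/mdstat

# Or ask mdadm directly for the reshape status
mdadm --detail /dev/md1 | grep -i reshape
</code>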
[[http://linux-raid.osdl.org/index.php/Growing|mdadm wiki page on growing RAID arrays]]
===== Failing RAID Arrays =====
* Fail all the devices, then remove them, then stop the RAID (a sketch for wiping the old superblocks follows), e.g.:<code>
# mdadm --manage /dev/mdfoo --fail /dev/sdfoo
# mdadm --manage /dev/mdfoo --remove /dev/sdfoo
# mdadm --stop /dev/mdfoo
</code>
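If the member disks are going to be reused elsewhere, it's worth wiping the md superblock afterwards so the kernel doesn't try to autodetect and re-assemble the old array (same placeholder device name as above):
<code>
# Destroys only the RAID metadata on this device, nothing else
mdadm --zero-superblock /dev/sdfoo
</code>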
===== Monitoring =====
* [[http://prefetch.net/articles/linuxsoftwareraid.html]]
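''mdadm'' can also do the monitoring itself and send mail when an array degrades; a minimal sketch (the mail address here is an assumption, adjust to taste):
<code>
# Quick one-off check
cat /proc/mdstat

# Run as a daemon, polling every 30 minutes and mailing on failure events
mdadm --monitor --scan --daemonise --delay=1800 --mail=root@localhost
</code>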
===== Filesystem/RAID Tweaking =====
You can do a lot of tweaking by changing the stripe/chunk size and the filesystem block size.
* stripe/chunk size: size of each stripe segment (RAID level) - a 64KB chunk size means each disk's segment of each stripe is 64KB
  * a smaller chunk size apparently suits a few large files better: each file spans more disks, so big sequential transfers run in parallel
  * a larger chunk size apparently suits many small files better: each small request is served by a single disk, so independent requests can run in parallel
  * To check the stripe/chunk size, run:\\ # mdadm --detail /dev/md4
* block size: size of each filesystem block - this should be less than or equal to the stripe/chunk size
  * the two are tied together by the filesystem's ''stride'' setting (see the sketch below)
  * To check the block size, run:\\ # dumpe2fs -h /dev/device#
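One concrete link between the two sizes: ext2/3 can be told how many filesystem blocks make up one chunk (the ''stride''), so it can spread its own metadata evenly across the member disks. A sketch using the 64KB chunk and a 4KB block from the examples above:
<code>
# stride = chunk size / block size = 64KB / 4KB = 16
mkfs.ext3 -b 4096 -E stride=16 /dev/md4
</code>
Newer e2fsprogs also understand a ''stripe-width'' extended option (stride times the number of data disks) for the same purpose.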
* [[http://www.pcguide.com/ref/hdd/perf/raid/concepts/perfStripe-c.html]]
* [[http://www.pcguide.com/ref/hdd/perf/raid/levels/single_Level0.htm]]
{{tag>:linux :linux:server}}