RAID 1
bla bla
RAID 5
Create the RAID
mdadm --create --verbose /dev/md0 --raid-devices=4 --level=raid5 /dev/sda /dev/sdb /dev/sdc /dev/sdd
Format the RAID
mkfs.ext3 /dev/md0
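For reference, a RAID 5 array of n disks yields (n-1) disks' worth of usable space, since one disk's worth of capacity goes to parity. A quick sketch of the arithmetic (the 500 GB disk size is a made-up figure, not from this setup):

```shell
# RAID 5 usable capacity: (n - 1) * size of the smallest member disk.
# Hypothetical figures: 4 disks of 500 GB each.
disks=4
disk_gb=500
usable_gb=$(( (disks - 1) * disk_gb ))
echo "usable: ${usable_gb} GB"
```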
RAID installation on Debian
http://www.debian-administration.org/articles/512
http://www.esdebian.org/forum/viewtopic.php?forum=12&showtopic=103488
One thing should be added to this nice article in case the installation is being done on brand-new, pristine disks.
If Grub is being installed on the RAID1 boot sector rather than the MBR and you are on x86 or x86_64, the Debian installer will probably prompt you about having an MBR installed (as this is required for the BIOS to initially access the disk).
At this step you can only pick one of the physical devices, not the RAID partitions. So the MBR should be manually installed on the other disks as a post-installation task, to ensure that no disk is left MBR-less and therefore unusable by the BIOS.
This should be true with PATA hardware and is something I went through when performing RAID sanity tests after an etch install (a year ago or so).
Most of the time I have no specific requirements for an MBR, so I usually install the bootloader on the MBR and then duplicate it by hand on the other disks.
For the record, here's how I do the MBR replication:
# grub --no-floppy
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
device (hd0) /dev/sdc
root (hd0,0)
setup (hd0)
… and so on.
Notes:
* --no-floppy speeds up grub's loading
* the 'device' trick ensures that the 2nd stage and the kernel are loaded from the same disk as the MBR, which provides some independence from the BIOS settings (I've seen some voodoo cases where this was required)
* after the first disk, the grub-shell history is of great use: 3xup, bksp, b, enter, 3xup, enter, 3xup, enter, and so on ;)
* take great care that the RAID1 is in sync, to ensure that all the required files are in their final position on disk
* thanks to grub's architecture, this only has to be done when upgrading grub or when changing a disk, not on every reconfiguration or kernel upgrade.
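The repetitive device/root/setup sequence can also be generated with a small loop instead of retyping it in the grub shell. This is a sketch (not from the article) that builds the batch input you would pipe into `grub --no-floppy --batch`; the disk list is an assumption, adjust it to your hardware:

```shell
# Sketch: generate the grub batch commands for each member disk.
# In real use: printf '%s' "$batch" | grub --no-floppy --batch
batch=""
for disk in /dev/sda /dev/sdb /dev/sdc; do
  batch="${batch}device (hd0) ${disk}
root (hd0,0)
setup (hd0)
"
done
printf '%s' "$batch"
```

Mapping each disk to (hd0) in turn is the same 'device' trick as above: stage2 is then loaded from the same disk as the MBR.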
Replace a disk
Show the disks currently in the array:
# clear;cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid5 sda2[0] sdc2[2] sdb2[1]
      957232896 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
# mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90.03
  Creation Time : Thu Oct 25 21:16:03 2007
     Raid Level : raid5
     Array Size : 957232896 (912.89 GiB 980.21 GB)
    Device Size : 478616448 (456.44 GiB 490.10 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Fri Oct 26 11:48:40 2007
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 141d4151:1b6badaa:ac063430:591eaac6
         Events : 0.10

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
All three disks in the RAID 5 are working correctly.
Now force a failure on one of them:
# mdadm --manage --set-faulty /dev/md1 /dev/sdb2
mdadm: set /dev/sdb2 faulty in /dev/md1
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid5 sda2[0] sdc2[2] sdb2[3](F)
      957232896 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
# clear;mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90.03
  Creation Time : Thu Oct 25 21:16:03 2007
     Raid Level : raid5
     Array Size : 957232896 (912.89 GiB 980.21 GB)
    Device Size : 478616448 (456.44 GiB 490.10 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Fri Oct 26 12:04:08 2007
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 1
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 141d4151:1b6badaa:ac063430:591eaac6
         Events : 0.16

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       0        0        1      removed
       2       8       34        2      active sync   /dev/sdc2

       3       8       18        -      faulty spare   /dev/sdb2
In /var/log/syslog we see these lines:
Oct 26 12:04:03 servidor kernel: --- rd:3 wd:2 fd:1
Oct 26 12:04:03 servidor kernel: disk 0, o:1, dev:sda2
Oct 26 12:04:03 servidor kernel: disk 1, o:0, dev:sdb2
Oct 26 12:04:03 servidor kernel: disk 2, o:1, dev:sdc2
Oct 26 12:04:03 servidor kernel: RAID5 conf printout:
Oct 26 12:04:03 servidor kernel: --- rd:3 wd:2 fd:1
Oct 26 12:04:03 servidor kernel: disk 0, o:1, dev:sda2
Oct 26 12:04:03 servidor kernel: disk 2, o:1, dev:sdc2
Oct 26 12:04:03 servidor mdadm: Fail event detected on md device /dev/md1, component device /dev/sdb2
Remove the disk from the RAID 5. It can be hot-removed as long as it is no longer active in the array:
# mdadm /dev/md1 --remove /dev/sdb2
mdadm: hot removed /dev/sdb2
It now shows as removed:
# mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90.03
  Creation Time : Thu Oct 25 21:16:03 2007
     Raid Level : raid5
     Array Size : 957232896 (912.89 GiB 980.21 GB)
    Device Size : 478616448 (456.44 GiB 490.10 GB)
   Raid Devices : 3
  Total Devices : 2
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Fri Oct 26 12:14:21 2007
          State : clean, degraded
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 141d4151:1b6badaa:ac063430:591eaac6
         Events : 0.62

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       0        0        1      removed
       2       8       34        2      active sync   /dev/sdc2
Add it back:
# mdadm /dev/md1 -a /dev/sdb2
mdadm: re-added /dev/sdb2
# clear;mdadm --detail /dev/md1
/dev/md1:
        Version : 00.90.03
  Creation Time : Thu Oct 25 21:16:03 2007
     Raid Level : raid5
     Array Size : 957232896 (912.89 GiB 980.21 GB)
    Device Size : 478616448 (456.44 GiB 490.10 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Fri Oct 26 12:15:12 2007
          State : active, degraded, recovering
 Active Devices : 2
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 0% complete

           UUID : 141d4151:1b6badaa:ac063430:591eaac6
         Events : 0.65

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       3       8       18        1      spare rebuilding   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
If we run the command again, the rebuild has progressed:
Rebuild Status : 18% complete
In /proc/mdstat we can watch the disk being rebuilt:
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid5 sdb2[3] sda2[0] sdc2[2]
      957232896 blocks level 5, 64k chunk, algorithm 2 [3/2] [U_U]
      [>....................]  recovery =  0.8% (3914236/478616448) finish=112.2min speed=70464K/sec
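If you want to poll just the completion figure instead of eyeballing the whole file, the percentage can be pulled out of /proc/mdstat with sed. A sketch, run here against a captured copy of the output above so it also works without a live array:

```shell
# Sketch: extract the recovery percentage from mdstat output.
# On a live system, replace the saved text with: mdstat=$(cat /proc/mdstat)
mdstat='      [>....................]  recovery =  0.8% (3914236/478616448) finish=112.2min speed=70464K/sec'
pct=$(printf '%s\n' "$mdstat" | sed -n 's/.*recovery = *\([0-9.]*\)%.*/\1/p')
echo "recovery: ${pct}%"
```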
Configuration
To check the RAID status:
#cat /proc/mdstat
md1 : active raid5 sda2[0] sdc2[2] sdb2[1]
      957232896 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md0 : active raid1 sda1[0] sdc1[2]
      9767424 blocks [3/2] [U_U]

unused devices: <none>
Add a disk
If we have a RAID 5 with 3 disks and want to add a fourth, first we add the new disk to the array.
The RAID currently looks like this:
#cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid1 sda1[0] sdd1[2] sdc1[1]
      9767424 blocks [3/3] [UUU]
#mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Thu Oct 25 21:15:28 2007
     Raid Level : raid1
     Array Size : 9767424 (9.31 GiB 10.00 GB)
    Device Size : 9767424 (9.31 GiB 10.00 GB)
   Raid Devices : 3
  Total Devices : 3
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sat Nov 3 15:07:36 2007
          State : clean
 Active Devices : 3
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 0

           UUID : a912d356:3a213509:fb13e982:631824f5
         Events : 0.1284

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1
Add the disk:
#mdadm /dev/md0 -a /dev/sdb1
mdadm: added /dev/sdb1
Now the disk shows up as a spare:
servidor:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Thu Oct 25 21:15:28 2007
     Raid Level : raid1
     Array Size : 9767424 (9.31 GiB 10.00 GB)
    Device Size : 9767424 (9.31 GiB 10.00 GB)
   Raid Devices : 3
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sat Nov 3 15:12:17 2007
          State : clean
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

           UUID : a912d356:3a213509:fb13e982:631824f5
         Events : 0.1284

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       33        1      active sync   /dev/sdc1
       2       8       49        2      active sync   /dev/sdd1

       3       8       17        -      spare   /dev/sdb1
Grow the RAID so it uses the new disk:
#mdadm --grow /dev/md0 --raid-devices=4
md0 : active raid1 sdb1[4] sda1[0] sdd1[2] sdc1[1]
      9767424 blocks [4/3] [UUU_]
        resync=DELAYED

md1 : active raid5 sda2[0] sdb2[3] sdd2[2] sdc2[1]
      957232896 blocks super 0.91 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  reshape =  1.4% (7020736/478616448) finish=539.3min speed=14570K/sec
To make the array use the full size of the disks:
#mdadm --grow /dev/md1 --size=max
The physical volume has not yet picked up the new size:
#pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               servidor
  PV Size               912.89 GB / not usable 0
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              233699
  Free PE               0
  Allocated PE          233699
  PV UUID               FWHDaX-piDe-3962-ThyA-xUoX-I49J-v2qOoF
Tell LVM to take all the new space:
# pvresize /dev/md1
  Physical volume "/dev/md1" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
servidor:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               servidor
  PV Size               1.34 TB / not usable 0
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              350549
  Free PE               116850
  Allocated PE          233699
  PV UUID               FWHDaX-piDe-3962-ThyA-xUoX-I49J-v2qOoF
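The Free PE figure is what bounds the coming lvresize: free extents times PE size gives the space available to grow into. With the numbers from this pvdisplay (PE size 4096 KiB = 4 MiB), that works out to roughly 456 GiB, which is where the ~457G figure used with lvresize comes from:

```shell
# Free space in the VG = Free PE * PE size (4096 KiB = 4 MiB here).
free_pe=116850
pe_mib=4
free_gib=$(( free_pe * pe_mib / 1024 ))
echo "free: ${free_gib} GiB"
```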
First do a dry run:
#lvresize -v -d -t -L +457G /dev/servidor/servidor_home
Now do it for real:
#lvresize -v -d -L +457G /dev/servidor/servidor_home
    Found volume group "servidor"
    Loading servidor-servidor_home table
    Suspending servidor-servidor_home (253:3)
    Found volume group "servidor"
    Resuming servidor-servidor_home (253:3)
    Logical volume servidor_home successfully resized
Once the logical volume has grown, resize the filesystem online. This needs the ext2resize package:
#ext2online /dev/servidor/servidor_home