====== Specifications ======

https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c03793258

====== Network ======
LACP
  
  
===== Set a static IP =====

  /etc/network/interfaces
<code>
allow-hotplug eno1
iface eno1 inet static
address 192.168.1.76
netmask 255.255.255.0
gateway 192.168.1.1
dns-nameservers 192.168.1.1 8.8.8.8 8.8.4.4
</code>
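
The new address can be applied without a reboot by bringing the interface down and up again (a quick sketch, assuming ifupdown as configured above):
<code>
ifdown eno1 && ifup eno1       # re-read /etc/network/interfaces for eno1
# or restart networking as a whole
systemctl restart networking
</code>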
  
====== ILO ======
  
====== NAS ======
To boot from the SSD installed in the CD-ROM bay, the BIOS has to be changed:

Press F9 to enter the BIOS.

Switch to Legacy mode and set the controller to number 2, which is the CD-ROM, instead of the 4 disks.

{{:informatica:bios1.png|}}

{{:informatica:bios2.png|}}

{{:informatica:bios3.png|}}

{{:informatica:bios4.png|}}

{{:informatica:bios5.png|}}

{{:informatica:bios6.png|}}

{{:informatica:bios7.png|}}

===== RAID configuration =====
==== Format the disks ====
Check the current partition layout:
  fdisk -l

<code>
Disk /dev/sda: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: ST8000DM004-2CX1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 02ED88CD-1EBC-445B-89DB-6522BEB7EA03

Device     Start         End     Sectors  Size Type
/dev/sda1   2048 15628053134 15628051087  7.3T Linux RAID
</code>
<code>
Disk /dev/sdc: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: ST8000DM004-2CX1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
</code>
<code>
Disk /dev/sde: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Disk model: KINGSTON SA400S3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x57311578

Device     Boot     Start       End   Sectors  Size Id Type
/dev/sde1  *         2048 200959999 200957952 95.8G 83 Linux
/dev/sde2       200962046 234440703  33478658   16G  5 Extended
/dev/sde5       200962048 234440703  33478656   16G 82 Linux swap / Solaris
</code>
<code>
Disk /dev/sdd: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: ST8000DM004-2CX1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
</code>
<code>
Disk /dev/sdb: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: ST8000DM004-2CX1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
</code>
<code>
Disk /dev/sdf: 29.3 GiB, 31444697088 bytes, 61415424 sectors
Disk model: Internal SD-CARD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xeefb95d3

Device     Boot Start      End  Sectors  Size Id Type
/dev/sdf1  *    16384 61415423 61399040 29.3G 83 Linux
</code>

We have 4 disks of 8 TB. Each one needs a GPT disklabel and a single partition of type Linux RAID.

Create the GPT label by selecting g:
  fdisk /dev/sda
<code>
Command (m for help): g
Created a new GPT disklabel (GUID: 99B4091D-BC19-D542-9331-B99666D7F464).
The old dos signature will be removed by a write command.
</code>

Now create the partition and then change its type to Linux RAID:
  root@nas:~# fdisk /dev/sda
<code>
Welcome to fdisk (util-linux 2.33.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/sdd: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: ST8000DM004-2CX1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 99B4091D-BC19-D542-9331-B99666D7F464

Command (m for help): n
Partition number (1-128, default 1):
First sector (2048-15628053134, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-15628053134, default 15628053134):

Created a new partition 1 of type 'Linux filesystem' and of size 7.3 TiB.

Command (m for help): t
Selected partition 1
Partition type (type L to list all types): 29
Changed type of partition 'Linux filesystem' to 'Linux RAID'

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
</code>
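
The same layout has to be applied to the other three disks. Instead of repeating the interactive fdisk session, an sgdisk loop like this should give an equivalent result (a sketch, assuming the gdisk package is installed and that the remaining disks really are sdb, sdc and sdd; it wipes whatever is on them):
<code>
apt-get install gdisk
for d in /dev/sdb /dev/sdc /dev/sdd; do
  sgdisk --zap-all "$d"             # remove the old partition table
  sgdisk -n 1:0:0 -t 1:fd00 "$d"    # one whole-disk partition, type Linux RAID
done
</code>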

The disks should end up like this:
  root@nas:~# blkid
<code>
/dev/sde1: UUID="d89fcee2-25a7-4c9f-a307-f84d9eb5269d" TYPE="ext4" PARTUUID="57311578-01"
/dev/sde5: UUID="ec8c87b5-7c08-4552-8c4a-189a29c0220c" TYPE="swap" PARTUUID="57311578-05"
/dev/sda1: UUID="fe89990a-d658-a1bc-0f69-c4cb06191398" UUID_SUB="c4914342-9da4-1485-cf6a-23fc22bb65cd" LABEL="nas:0" TYPE="linux_raid_member" PARTUUID="861fdab6-092b-554e-94ad-cc6904040338"
/dev/sdb1: UUID="fe89990a-d658-a1bc-0f69-c4cb06191398" UUID_SUB="6f3cad1b-1c99-f179-6aef-4b7944bff122" LABEL="nas:0" TYPE="linux_raid_member" PARTUUID="8b3890a4-39e0-9344-bf3c-2564f2178cf8"
/dev/sdc1: UUID="fe89990a-d658-a1bc-0f69-c4cb06191398" UUID_SUB="d8fa217c-cbb5-a06a-7282-2167bc504ca7" LABEL="nas:0" TYPE="linux_raid_member" PARTUUID="b6c7c5d5-ef51-574f-8932-46b7094af9c8"
/dev/sdd1: UUID="fe89990a-d658-a1bc-0f69-c4cb06191398" UUID_SUB="c0a4c476-0869-c721-1c41-cd0616840a41" LABEL="nas:0" TYPE="linux_raid_member" PARTUUID="e02f4317-a109-fd43-94fc-f68f28cf232a"
/dev/sdf1: LABEL="REAR-000" UUID="952ad047-3dd0-44f8-ad2a-61c2b6c324c7" SEC_TYPE="ext2" TYPE="ext3" PARTUUID="eefb95d3-01"
</code>

==== Create the RAID ====
In this case there was already an old RAID on the disks, and it has to be stopped first because it comes up hung (inactive):
  root@nas:~# cat /proc/mdstat
<code>
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sda1[0](S)
      7813893447 blocks super 1.2

unused devices: <none>
</code>

Stop it:
  root@nas:~# mdadm --stop /dev/md127
  mdadm: stopped /dev/md127
Now the array can be created:
  root@nas:~# mdadm --create --verbose /dev/md0 --raid-devices=4 --level=raid5 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

<code>
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: /dev/sda1 appears to be part of a raid array:
       level=raid5 devices=4 ctime=Wed Nov 25 15:41:12 2020
mdadm: size set to 7813893120K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
</code>
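
The blkid output below already shows an ext4 filesystem on /dev/md0, so a mkfs was run at this point even though it is not captured here. Roughly like this, together with making the array assemble at boot (a sketch):
<code>
mkfs.ext4 /dev/md0                                # filesystem on the new array
mkdir -p /mnt/raid                                # mount point used in /etc/fstab below
mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # persist the array definition
update-initramfs -u                               # so it is assembled at boot
</code>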

Mount the RAID by its UUID:
  blkid

<code>
/dev/md0: UUID="955edf36-f785-441e-95e6-ff7cd77fc510" TYPE="ext4"
/dev/sda1: UUID="ba93d654-1e00-4b85-b2f1-f9930af9cc43" UUID_SUB="f61e84e9-271d-a311-9ae4-6eca19a84c10" LABEL="nas:0" TYPE="linux_raid_member" PARTUUID="b638f829-b354-4953-9e08-f96c8f4f031d"
/dev/sdb1: UUID="ba93d654-1e00-4b85-b2f1-f9930af9cc43" UUID_SUB="6984a8d2-694a-b00b-0f23-809b2c123924" LABEL="nas:0" TYPE="linux_raid_member" PARTUUID="c9f7459b-cef8-434c-8a41-a471989eee60"
/dev/sdc1: UUID="ba93d654-1e00-4b85-b2f1-f9930af9cc43" UUID_SUB="12d795a6-a34e-feec-4c8f-6ad962a59536" LABEL="nas:0" TYPE="linux_raid_member" PARTUUID="eebd20a6-6f32-46a9-9015-adc50649514a"
/dev/sde1: UUID="a7edb0b3-d69b-43da-9dc6-66d046c4e344" TYPE="ext4" PARTUUID="c3c3e823-01"
/dev/sde5: UUID="b5c2a2a5-7217-4ab0-bdd9-55469ddcfaf9" TYPE="swap" PARTUUID="c3c3e823-05"
/dev/sdd1: UUID="ba93d654-1e00-4b85-b2f1-f9930af9cc43" UUID_SUB="cfd1a1fd-d4c7-a1f8-0779-c235b8784b5b" LABEL="nas:0" TYPE="linux_raid_member" PARTUUID="ca58c1f5-abc7-4b18-b5ae-f738788cb1ea"
/dev/sdf1: PARTUUID="0e2b0ddc-a8e9-11e9-a82e-d0bf9c45d8b4"
/dev/sdf2: LABEL="freenas-boot" UUID="15348038225366585637" UUID_SUB="12889063831144199016" TYPE="zfs_member" PARTUUID="0e4dff28-a8e9-11e9-a82e-d0bf9c45d8b4"
</code>

Add it to /etc/fstab:
  UUID=955edf36-f785-441e-95e6-ff7cd77fc510 /mnt/raid ext4 defaults 0 2
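
A quick way to check the fstab entry without rebooting (a minimal sketch):
<code>
mount -a            # mounts everything in fstab, including the new entry
df -h /mnt/raid     # the ~22 TB filesystem should show up here
</code>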
  
From 192.168.1.32:
  mkdir /nfs
  mount 192.168.1.250:/mnt/dades/media /nfs

Install the NFS server:
  root@nas:/mnt/raid# apt-get install nfs-kernel-server

Export the directory:
  root@nas:/mnt/raid# cat /etc/exports
  /mnt/raid/nfs 192.168.1.0/255.255.255.0(rw,async,subtree_check,no_root_squash)

Reload the exports:
  root@nas:/mnt/raid# exportfs -rav
  exporting 192.168.1.0/255.255.255.0:/mnt/raid/nfs

On the client, install the NFS tools:
  apt-get install nfs-common

Check that the client sees the export:
  root@avtp239:~# showmount -e 192.168.1.250
  Export list for 192.168.1.250:
  /mnt/raid/nfs 192.168.1.0/255.255.255.0

Mount it:
  root@avtp239:/mnt# mount -t nfs 192.168.1.250:/mnt/raid/nfs /mnt/nfs

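To make the client mount survive a reboot, an fstab entry such as this should work (a sketch, using the same paths as above):
<code>
# /etc/fstab on the client
192.168.1.250:/mnt/raid/nfs  /mnt/nfs  nfs  defaults,_netdev  0  0
</code>
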
===== Recovery tests =====
===== Operating system recovery =====
The boot disk is a 120 GB SSD.

The recovery disk is a 32 GB microSD card fitted internally.

Format the microSD with the label REAR-000. First find out which disk it is:
  fdisk -l
<code>
Disk /dev/sda: 29.3 GiB, 31444697088 bytes, 61415424 sectors
Disk model: Internal SD-CARD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xffbcc21c

Device     Boot Start      End  Sectors  Size Id Type
/dev/sda1  *    16384 61415423 61399040 29.3G 83 Linux
</code>

Format it:
  rear format /dev/sda

  USB device /dev/sda is not formatted with ext2/3/4 or btrfs filesystem
  Type exactly 'Yes' to format /dev/sda with ext3 filesystem
  (default 'No' timeout 300 seconds)

  Yes

Check that it was created correctly:
  blkid
  /dev/sda1: LABEL="REAR-000" UUID="6065120e-3477-485d-9e99-84227f44a7d2" TYPE="ext3" PARTUUID="3c4e9100-01"

Check that it is empty:
  mount /dev/sda1 /mnt/sdcard/
  ls /mnt/sdcard/

  lost+found

Install rear:
  apt-get install rear
Configure it and exclude the /mnt/raid partition:
    /etc/rear/local.conf

<code>
### write the rescue initramfs to USB and update the USB bootloader
OUTPUT=USB
### create a backup using the internal NETFS method, using 'tar'
BACKUP=NETFS
### write both rescue image and backup to the device labeled REAR-000
BACKUP_URL=usb:///dev/disk/by-label/REAR-000
</code>
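
The exclusion of /mnt/raid mentioned above is not visible in this snippet; with rear it is normally done by appending to BACKUP_PROG_EXCLUDE in the same local.conf (a sketch, not copied from the original file):
<code>
### do not back up the data RAID, only the operating system
BACKUP_PROG_EXCLUDE+=( '/mnt/raid/*' )
</code>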

Create the backup, which takes about 4 minutes. If the card is still mounted it has to be unmounted first; rear prints an error saying so:
  rear -v mkbackup
<code>
Relax-and-Recover 2.4 / Git
Using log file: /var/log/rear/rear-nas.log
Using backup archive '/tmp/rear.W9D4MwcWoV2EzuJ/outputfs/rear/nas/20201108.1342/backup.tar.gz'
Creating disk layout
Using guessed bootloader 'EFI' (found in first bytes on /dev/sdb)
Creating root filesystem layout
Cannot include keyboard mappings (no keymaps default directory '')
Copying logfile /var/log/rear/rear-nas.log into initramfs as '/tmp/rear-nas-partial-2020-11-08T13:42:16+01:00.log'
Copying files and directories
Copying binaries and libraries
Copying kernel modules
Copying all files in /lib*/firmware/
Creating recovery/rescue system initramfs/initrd initrd.cgz with gzip default compression
Created initrd.cgz with gzip default compression (67642238 bytes) in 17 seconds
Saved /var/log/rear/rear-nas.log as rear/nas/20201108.1342/rear-nas.log
Copying resulting files to usb location
Saving /var/log/rear/rear-nas.log as rear-nas.log to usb location
Creating tar archive '/tmp/rear.W9D4MwcWoV2EzuJ/outputfs/rear/nas/20201108.1342/backup.tar.gz'
Archived 529 MiB [avg 5263 KiB/sec] OK
Archived 529 MiB in 104 seconds [avg 5212 KiB/sec]
Exiting rear mkbackup (PID 1753) and its descendant processes
Running exit tasks
</code>

Check what has been written to the card:
<code>
lost+found
boot
boot/syslinux
boot/syslinux/hdt.c32
boot/syslinux/ldlinux.c32
boot/syslinux/cat.c32
boot/syslinux/libgpl.c32
boot/syslinux/kbdmap.c32
boot/syslinux/sysdump.c32
boot/syslinux/chain.c32
boot/syslinux/lua.c32
boot/syslinux/cmd.c32
boot/syslinux/disk.c32
boot/syslinux/ldlinux.sys
boot/syslinux/reboot.c32
boot/syslinux/libmenu.c32
boot/syslinux/config.c32
boot/syslinux/libutil.c32
boot/syslinux/libcom32.c32
boot/syslinux/rosh.c32
boot/syslinux/menu.c32
boot/syslinux/ls.c32
boot/syslinux/vesamenu.c32
boot/syslinux/rear.help
boot/syslinux/message
boot/syslinux/host.c32
boot/syslinux/cpuid.c32
boot/syslinux/extlinux.conf
rear
rear/syslinux.cfg
rear/nas
rear/nas/20201108.1408
rear/nas/20201108.1408/initrd.cgz
rear/nas/20201108.1408/rear-nas.log
rear/nas/20201108.1408/backup.log
rear/nas/20201108.1408/syslinux.cfg
rear/nas/20201108.1408/kernel
rear/nas/20201108.1408/backup.tar.gz
nas
nas/rear-nas.log
nas/.lockfile
nas/VERSION
nas/README
</code>

Reboot and boot from the card by pressing F11:
{{:informatica:nas01.png|}}

Select the microSD, option 3:
{{:informatica:nas02.png|}}

Move up in the boot menu and select Recovery images, in our case nas:
{{:informatica:nas03.png|}}

If there are several backups, select the one we want:
{{:informatica:nas04.png|}}

When the login prompt appears, log in as root (no password will be asked for) and then run rear recover.

{{:informatica:nas05.png|}}

Always answer 1) (the default) to every question, unless something needs changing, such as the disk layouts when the target disk is smaller than the original.

It takes about a minute.

When it finishes, reboot and the system is recovered.

===== RAID recovery =====
We have 4 disks in RAID 5.

Check the current state of the array:
  cat /proc/mdstat

<code>
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdb1[2] sda1[0] sdc1[1] sdd1[3]
      23441679360 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      [>....................]  resync =  3.5% (277980108/7813893120) finish=43149.1min speed=2910K/sec
      bitmap: 58/59 pages [232KB], 65536KB chunk

unused devices: <none>
</code>
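
The initial resync above is estimated at more than 43,000 minutes because md throttles it in favour of normal I/O; the limits can be raised temporarily (a sketch, the value is only an example):
<code>
# current limits in KiB/s per device
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
# raise the guaranteed minimum so the resync is not starved
sysctl -w dev.raid.speed_limit_min=50000
</code>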

More detail:
  mdadm --detail /dev/md0
<code>
/dev/md0:
           Version : 1.2
     Creation Time : Tue Sep 15 00:16:25 2020
        Raid Level : raid5
        Array Size : 23441679360 (22355.73 GiB 24004.28 GB)
     Used Dev Size : 7813893120 (7451.91 GiB 8001.43 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Nov  8 17:16:25 2020
             State : active, resyncing
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

     Resync Status : 3% complete

              Name : nas:0  (local to host nas)
              UUID : ba93d654:1e004b85:b2f1f993:0af9cc43
            Events : 3722

    Number   Major   Minor   RaidDevice State
                    1        0      active sync   /dev/sda1
                   33        1      active sync   /dev/sdc1
                   17        2      active sync   /dev/sdb1
                   49        3      active sync   /dev/sdd1
</code>

Pull a disk out abruptly, as if it had failed:
  cat /proc/mdstat
<code>
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdb1[2] sda1[0] sdc1[1]
      23441679360 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      bitmap: 58/59 pages [232KB], 65536KB chunk

unused devices: <none>
</code>

  mdadm --detail /dev/md0
<code>
/dev/md0:
           Version : 1.2
     Creation Time : Tue Sep 15 00:16:25 2020
        Raid Level : raid5
        Array Size : 23441679360 (22355.73 GiB 24004.28 GB)
     Used Dev Size : 7813893120 (7451.91 GiB 8001.43 GB)
      Raid Devices : 4
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Nov  8 17:28:03 2020
             State : clean, degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : nas:0  (local to host nas)
              UUID : ba93d654:1e004b85:b2f1f993:0af9cc43
            Events : 4069

    Number   Major   Minor   RaidDevice State
                    1        0      active sync   /dev/sda1
                   33        1      active sync   /dev/sdc1
                   17        2      active sync   /dev/sdb1
                    0        3      removed
</code>
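
The same degraded state can also be produced in software, without physically pulling the disk (a sketch, assuming /dev/sdd1 is the member to drop):
<code>
mdadm /dev/md0 --fail /dev/sdd1      # mark the member as faulty
mdadm /dev/md0 --remove /dev/sdd1    # remove it from the array
</code>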

Shut down the server and insert the new disk. After booting, the array looks the same:
  cat /proc/mdstat
<code>
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdb1[2] sda1[0] sdc1[1]
      23441679360 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      bitmap: 58/59 pages [232KB], 65536KB chunk

unused devices: <none>
</code>

  mdadm --detail /dev/md0
<code>
/dev/md0:
           Version : 1.2
     Creation Time : Tue Sep 15 00:16:25 2020
        Raid Level : raid5
        Array Size : 23441679360 (22355.73 GiB 24004.28 GB)
     Used Dev Size : 7813893120 (7451.91 GiB 8001.43 GB)
      Raid Devices : 4
     Total Devices : 3
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Nov  8 17:43:17 2020
             State : active, degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : nas:0  (local to host nas)
              UUID : ba93d654:1e004b85:b2f1f993:0af9cc43
            Events : 4236

    Number   Major   Minor   RaidDevice State
                    1        0      active sync   /dev/sda1
                   33        1      active sync   /dev/sdc1
                   17        2      active sync   /dev/sdb1
                    0        3      removed
</code>

Add it:
  mdadm /dev/md0 -a /dev/sdd
  mdadm: added /dev/sdd
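
Note that the whole disk /dev/sdd was added here, while the original members are partitions. An alternative (not what was done above) is to copy the partition table from a surviving disk first and add the new partition instead:
<code>
sfdisk -d /dev/sda | sfdisk /dev/sdd   # replicate the GPT layout onto the new disk
mdadm /dev/md0 -a /dev/sdd1            # add the new partition to the array
</code>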

Check that it is rebuilding:
  mdadm --detail /dev/md0

<code>
/dev/md0:
           Version : 1.2
     Creation Time : Tue Sep 15 00:16:25 2020
        Raid Level : raid5
        Array Size : 23441679360 (22355.73 GiB 24004.28 GB)
     Used Dev Size : 7813893120 (7451.91 GiB 8001.43 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sun Nov  8 17:47:59 2020
             State : active, degraded, recovering
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

    Rebuild Status : 0% complete

              Name : nas:0  (local to host nas)
              UUID : ba93d654:1e004b85:b2f1f993:0af9cc43
            Events : 4454

    Number   Major   Minor   RaidDevice State
                    1        0      active sync   /dev/sda1
                   33        1      active sync   /dev/sdc1
                   17        2      active sync   /dev/sdb1
                   48        3      spare rebuilding   /dev/sdd
</code>

Looking at the status, it reports that the resync will take about 8,000 minutes (more than 5 days):
  cat /proc/mdstat
<code>
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid5 sdd[4] sdb1[2] sda1[0] sdc1[1]
      23441679360 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
      [>....................]  recovery =  0.0% (1023064/7813893120) finish=8019.4min speed=16236K/sec
      bitmap: 58/59 pages [232KB], 65536KB chunk

unused devices: <none>
</code>
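
To be warned when a member drops out instead of noticing by chance, mdadm's monitor can send mail (a sketch; the address is only an example and a working local MTA is assumed):
<code>
# add a destination address to /etc/mdadm/mdadm.conf
echo 'MAILADDR admin@example.com' >> /etc/mdadm/mdadm.conf
# send a test alert for every array and exit
mdadm --monitor --scan --test --oneshot
</code>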
  
  