====== Proxmox 4 ======

In summary, on each disk I create 3 partitions:
  * 32 GB for / (RAID)
  * 4 GB swap
  * Whatever is left over for RAID (data)
And we build the RAID for /, with the boot option, as sketched below.
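
The mdadm commands themselves are not on this page; as a minimal sketch, assuming the two 32 GB partitions are /dev/sda1 and /dev/sdb1 (hypothetical device names), the mirror for / would be created like this:
<code>
# RAID1 mirror for / across the two 32 GB partitions (device names are assumed)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# Watch the initial sync
cat /proc/mdstat
</code>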

====== Network configuration ======
We set up a bond and, on top of it, a bridge using the two network cards.
/etc/network/interfaces:
<code>
auto lo
iface lo inet loopback

iface eth0 inet manual

iface eth1 inet manual

auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond-mode 802.3ad
        bond-miimon 100

auto vmbr0
iface vmbr0 inet static
        address
        netmask
        gateway
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
</code>

On the switch we have to enable port trunking. In my case it is a TP-Link TL-SG1024DE, and I access its configuration at 192.168.0.1
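
Not in the original notes, but a quick way to confirm that the bond actually negotiated LACP once the switch ports are trunked is the kernel's bonding status file:
<code>
# Shows bonding mode, MII status and both slave interfaces
cat /proc/net/bonding/bond0
</code>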
===== Container network configuration =====
In the container's configuration file:
  net0: name=eth0,
  net1: name=eth1,
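
Both lines are cut short here; a complete entry in Proxmox's LXC network syntax would look roughly like this (the bridge, IP and gateway values are my assumptions):
<code>
net0: name=eth0,bridge=vmbr0,ip=192.168.2.10/24,gw=192.168.2.254,type=veth
</code>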

====== Proxmox installation ======
Source:
Make sure the name that ''hostname'' returns is present in /etc/hosts. For example:
  127.0.0.1 localhost
  192.168.1.100 proxmoxescorxador
We add the Proxmox repositories:
  echo "deb http://

From the first node, which will be the master:
<code>
root@proxmox1:~# pvecm create clusterproxmox
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
</code>

<code>
root@proxmox1:~# pvecm status
Quorum information
------------------
Date:             Mon Sep 12 22:37:19 2016
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:
Quorate:          Yes

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.2.1 (local)
</code>

From the second node, we add it using the IP of the first:
<code>
root@proxmox2:~# pvecm add 192.168.2.1
The authenticity of host '192.168.2.1 (192.168.2.1)' can't be established.
ECDSA key fingerprint is 3a:17:aa:ca:c4:1b:55:2a:12:bb:fe:b4:ed:af:1e:af.
Are you sure you want to continue connecting (yes/no)? yes
root@192.168.2.1's password:
copy corosync auth key
stopping pve-cluster service
merge known_hosts file
restart services
successfully added node 'proxmox2' to cluster.
</code>

Now we can see that there are two members:
<code>
root@proxmox1:~# pvecm status
Quorum information
------------------
Date:             Mon Sep 12 22:47:44 2016
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000001
Ring ID:          1/12
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           2
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.2.1 (local)
0x00000002          1 192.168.2.2
</code>

====== GlusterFS ======
https://
We install this version:
http://

We install it:
  wget -O - http://
  echo deb http://
  apt-get update
  apt-get install glusterfs-server
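
Both URLs are cut off above. At the time, GlusterFS published its Debian packages under download.gluster.org, so the truncated lines plausibly looked like this (the exact paths are an assumption):
<code>
# Assumed completion of the truncated key and repository lines
wget -O - http://download.gluster.org/pub/gluster/glusterfs/LATEST/rsa.pub | apt-key add -
echo deb http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/jessie/apt jessie main > /etc/apt/sources.list.d/gluster.list
</code>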

We want to set up the following:

In /etc/hosts we add both servers:
<code>
root@proxmox1:~# cat /etc/hosts
127.0.0.1 localhost
192.168.2.1 proxmox1

192.168.2.2 proxmox2
</code>

We connect the two servers. From server1:
  root@proxmox1:~# gluster peer probe proxmox2
We can see that they are connected:
<code>
root@proxmox1:~# gluster peer status
Number of Peers: 1

Hostname: proxmox2
Uuid: 62eecf86-2e71-4487-ac5b-9b5f16dc0382
State: Peer in Cluster (Connected)
</code>

And the same from server2:
<code>
root@proxmox2:~# gluster peer status
Number of Peers: 1

Hostname: proxmox1
Uuid: 061807e7-75a6-4636-adde-e9fef4cfa3ec
State: Peer in Cluster (Connected)
</code>

We create the partitions and format them as XFS.
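
The commands were lost from this page; given that blkid below reports sda3 and sdb3 as XFS, they were presumably along these lines:
<code>
mkfs.xfs /dev/sda3
mkfs.xfs /dev/sdb3
</code>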

We mount the partitions on /gluster/brick1 and /gluster/brick2:
  /dev/sda3: UUID="
  /dev/sdb3: UUID="

The /etc/fstab file:
<code>
#brick 1
UUID="

#brick 2
UUID="
</code>
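
The UUIDs and the rest of each entry are truncated; a typical pair of XFS entries for the two bricks would look like this (placeholder UUIDs):
<code>
#brick 1
UUID="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeee01" /gluster/brick1 xfs defaults 0 0

#brick 2
UUID="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeee02" /gluster/brick2 xfs defaults 0 0
</code>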

We create the volume. Better one large volume than two small ones:
  gluster volume create volumen_gluster replica 2 transport tcp proxmox1:/
  volume create: volumen_gluster: success
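
The brick list is cut off above. Judging by the four bricks that show up in the volume status further down, the full command was presumably shaped like this (brick paths are an assumption; with replica 2, each consecutive pair of bricks forms a mirror):
<code>
gluster volume create volumen_gluster replica 2 transport tcp \
    proxmox1:/gluster/brick1 proxmox2:/gluster/brick1 \
    proxmox1:/gluster/brick2 proxmox2:/gluster/brick2
</code>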

We start it:
<code>
root@proxmox1:~# gluster volume start volumen_gluster
volume start: volumen_gluster: success
</code>

We check the status:
<code>
Status of volume: volumen_gluster
Gluster process
------------------------------------------------------------------------------
Brick proxmox1:/bricks/disc1/
Brick proxmox2:/
Brick proxmox1:/
Brick proxmox2:/
Self-heal Daemon on localhost
Self-heal Daemon on proxmox2

Task Status of Volume volumen_gluster
------------------------------------------------------------------------------
There are no active volume tasks
</code>

===== Connecting as a client =====
  # mount -t glusterfs
/etc/fstab:
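
Both lines are truncated; a typical GlusterFS client mount of this volume, assuming /glusterfs as the mount point, would be:
<code>
# One-off mount
mount -t glusterfs proxmox1:/volumen_gluster /glusterfs

# Equivalent /etc/fstab entry
proxmox1:/volumen_gluster /glusterfs glusterfs defaults,_netdev 0 0
</code>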

====== Proxmox shared storage ======
For now, containers do not support GlusterFS directly from Proxmox, but VMs do.
We mount /glusterfs and add it as storage for containers (and for VMs as well):

====== NFS ======
We mount the share over NFS on the Proxmox server. Inside the containers we mount it with a bind mount:

In the folder /

**Note:** do not put a leading / in front of container/
  lxc.mount.entry:
**Source:** https://
Example:
  lxc.mount.entry:
  lxc.mount.entry:
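
The entries above are cut off; a complete bind-mount entry uses the LXC format "source target fstype options dump pass", with the target relative to the container root (hence the note about the missing leading slash). Paths here are hypothetical:
<code>
# Bind the host's NFS mount into the container (no leading / on the target path)
lxc.mount.entry: /mnt/nfs/media media none bind,create=dir 0 0
</code>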