We install the server on RAID:

http://matarosensefils.net/wiki/index.php?n=Proxmox.DebianJessieNetinstall

In short, on each disk I create 3 partitions: 32 GB for / (RAID), 4 GB for swap, and whatever is left over for another RAID (data).

I then create two RAID arrays; the one for / is marked as bootable.
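As a reference only, a minimal sketch of the equivalent mdadm commands, assuming the disks are /dev/sda and /dev/sdb, partition 1 is the 32 GB / partition and partition 3 the leftover data partition (the Debian installer creates these arrays for you during the guided setup):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3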

After installation, the second RAID array (the data one) is left in this state:

# cat /proc/mdstat
md1 : active (auto-read-only) raid1 sdb3[1] sda3[0]
  resync=PENDING
  

To force the resync:

# mdadm --readwrite /dev/md1
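You can then watch /proc/mdstat to see the resync progress:

# watch -n 5 cat /proc/mdstat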

Proxmox installation

Source: http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Jessie#Adapt_your_sources.list

Make sure that the address the hostname resolves to is present in /etc/hosts. For example:

127.0.0.1 localhost
192.168.1.100 proxmoxescorxador
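As a quick sanity check with the example values above, the hostname should resolve to the LAN address (192.168.1.100 here) and not to 127.0.0.1:

# hostname --ip-address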

We add the Proxmox repositories:

echo "deb http://download.proxmox.com/debian jessie pvetest" > /etc/apt/sources.list.d/pve-install-repo.list
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -
apt-get update && apt-get dist-upgrade
apt-get install proxmox-ve ntp ssh postfix ksm-control-daemon open-iscsi

We can see that the kernel changes after rebooting:

Linux proxmox02 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u3 (2015-08-04) x86_64 GNU/Linux
Linux proxmox02 4.1.3-1-pve #1 SMP Thu Jul 30 08:54:37 CEST 2015 x86_64 GNU/Linux

We configure the network like this:

auto vmbr0
iface vmbr0 inet static
	address  192.168.1.100
	netmask  255.255.255.0
	gateway  192.168.1.1
	bridge_ports eth0
	bridge_stp off
	bridge_fd 0
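To apply the bridge configuration, either reboot the node or restart networking (a sketch; if you are connected through eth0 the connection may drop briefly):

# systemctl restart networking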

Proxmox cluster

From the first node, which will be the master:

root@proxmox01:/gluster# pvecm create clusterproxmox
Corosync Cluster Engine Authentication key generator.
Gathering 1024 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
root@proxmox01:/gluster# pvecm status
Quorum information
------------------
Date:             Tue Aug 11 23:23:53 2015
Quorum provider:  corosync_votequorum
Nodes:            1
Node ID:          0x00000001
Ring ID:          4
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   1
Highest expected: 1
Total votes:      1
Quorum:           1  
Flags:            Quorate 

Membership information
----------------------
    Nodeid      Votes Name
0x00000001          1 192.168.1.4 (local)

From the second node:

root@proxmox02:/mnt# pvecm add 192.168.1.4
The authenticity of host '192.168.1.4 (192.168.1.4)' can't be established.
ECDSA key fingerprint is 8a:88:8a:2a:d2:8f:96:62:c1:85:ab:fc:c7:23:00:11.
Are you sure you want to continue connecting (yes/no)? yes
root@192.168.1.4's password: 
copy corosync auth key
stopping pve-cluster service
backup old database
waiting for quorum...OK
generating node certificates
merge known_hosts file
restart services
successfully added node 'proxmox02' to cluster.

Removing a cluster node

If removing a node gives an error, we tell the cluster to expect (e = expected) only one node:

root@proxmox01:/var/log# pvecm delnode proxmox02
cluster not ready - no quorum?
root@proxmox01:/var/log# pvecm e 1
root@proxmox01:/var/log# pvecm delnode proxmox02
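Afterwards it is worth checking that the node list and the quorum look right:

# pvecm nodes
# pvecm status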

Gluster:

https://www.howtoforge.com/high-availability-storage-with-glusterfs-3.2.x-on-debian-wheezy-automatic-file-replication-mirror-across-two-storage-servers

Gluster version 3.7.3:

http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.3/Debian/jessie/apt/
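A sketch of adding that repository and installing GlusterFS on both nodes (the suite/component names and the signing key step are assumptions, check them against what the repository actually publishes):

echo "deb http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.3/Debian/jessie/apt jessie main" > /etc/apt/sources.list.d/gluster.list
# import the signing key published in the repository, then:
apt-get update && apt-get install glusterfs-server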

From both machines:

# gluster peer probe gluster01
peer probe: success. 
#  gluster peer probe gluster02
peer probe: success. Probe on localhost not needed
root@proxmox01:/mnt# gluster peer status
Number of Peers: 1

Hostname: proxmox02
Uuid: e95baaf8-8029-4181-89f3-d9e3ebbb9648
State: Peer in Cluster (Connected)
root@proxmox02:/gluster# gluster peer status
Number of Peers: 1

Hostname: proxmox01
Uuid: 67d84d45-7551-4b45-b068-8230d1b05cb6
State: Peer in Cluster (Connected)

We create the volume:

# gluster volume create volumen_gluster replica 2 transport tcp gluster01:/mnt/gluster gluster02:/mnt/gluster force
volume create: volumen_gluster: success: please start the volume to access data

We start it:

root@proxmox01:~# gluster volume start volumen_gluster
volume start: volumen_gluster: success

We check the status:

# gluster volume status
Status of volume: volumen_gluster
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick proxmox01:/gluster/proxmox            49152     0          Y       16096
NFS Server on localhost                     2049      0          Y       16120
Self-heal Daemon on localhost               N/A       N/A        Y       16125

Connecting as a client
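The mount point has to exist before mounting:

# mkdir -p /mnt/glusterfs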

# mount -t glusterfs proxmox01:/volumen_gluster /mnt/glusterfs

In /etc/fstab:

proxmox01:/volumen_gluster /mnt/glusterfs glusterfs defaults,_netdev 0 2

Inside an LXC container this fails; the fuse device has to be created by hand:

 mknod /dev/fuse c 10 229
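Alternatively (an assumption on my part, depends on the LXC version in use), the device can be allowed from the container's config on the host so it survives restarts, e.g. in /etc/pve/lxc/<vmid>.conf:

lxc.cgroup.devices.allow: c 10:229 rwm
lxc.mount.entry: /dev/fuse dev/fuse none bind,create=file 0 0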

Proxmox shared storage
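A hedged sketch of registering the Gluster volume above as shared storage (the storage ID and content types are just examples; this can also be done from the web GUI under Datacenter → Storage → Add → GlusterFS):

# pvesm add glusterfs glusterstore --server proxmox01 --server2 proxmox02 --volume volumen_gluster --content images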
