Adding a Third Proxmox+Gluster Node

Finally

The system I was using for data recovery is finally free, so I'm adding the third node to my Proxmox+Gluster cluster.

Install Proxmox

Same as before: 8GB for swap, 10GB max for root, and 200GB set aside for extra LVM partitions.
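
For reference, those numbers correspond to the installer's advanced LVM settings (the Options button on the target-disk screen); the values below are my reconstruction of the layout above:

# Proxmox installer > target disk > Options (advanced LVM settings)
swapsize = 8    # GB of swap
maxroot  = 10   # GB cap on the root LV
minfree  = 200  # GB left unallocated in the pve VG for extra LVM partitions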

Create the gluster brick

This disk is a different model and has one fewer physical extent available than the others. That's not actually a problem, but it annoyed me.

Recreate the thin pool, only smaller

The default pve/data thin pool didn't leave quite enough extents for a matching gluster volume, so I remove it and recreate it slightly smaller:

root@bronze:~# lvremove /dev/pve/data
# Check PE availability
root@bronze:~# vgdisplay
# Create new volume with 12 PEs buffer, then convert
root@bronze:~# lvcreate pve /dev/sda3 -l 5146 -n data
root@bronze:~# lvconvert --type thin-pool pve/data
# Verify available PEs
root@bronze:~# vgdisplay
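
A quick sanity check that the conversion worked; the 12-PE buffer presumably covers the metadata LV that lvconvert carves out of the volume group:

# 'data' should now show the thin-pool attribute ('t') in lvs output
root@bronze:~# lvs -a pve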

Create gluster volume

root@bronze:~# lvcreate pve /dev/sda3 -l 51200 -n gluster
root@bronze:~# mkfs.xfs -i size=512 /dev/pve/gluster
root@bronze:~# mkdir /glusterfs
root@bronze:~# echo "/dev/pve/gluster /glusterfs xfs defaults 0 0" >> /etc/fstab
root@bronze:~# mount -a
root@bronze:~# mkdir /glusterfs/brick
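
Worth confirming the brick filesystem mounted where gluster will expect it:

root@bronze:~# df -h /glusterfs
root@bronze:~# xfs_info /glusterfs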

Add the gluster brick to the existing volume

This is pretty simple, with a little preliminary work.

Add everyone to /etc/hosts

To keep things stable, I want every node to know where every other node is, so each node's /etc/hosts gets entries like this:

root@bronze:~# grep 'lan' /etc/hosts
10.3.3.73 bronze.lan.nathancurry.com bronze pvelocalhost
10.3.3.71 gold.lan.nathancurry.com gold
10.3.3.72 silver.lan.nathancurry.com silver
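
A quick way to confirm resolution on each node:

root@bronze:~# getent hosts gold.lan.nathancurry.com silver.lan.nathancurry.com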

And update /etc/ntp.conf on each node so the nodes peer with each other:

pool 0.debian.pool.ntp.org iburst
pool 1.debian.pool.ntp.org iburst
pool 2.debian.pool.ntp.org iburst
pool 3.debian.pool.ntp.org iburst
peer gold.lan.nathancurry.com iburst
peer silver.lan.nathancurry.com iburst
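
Then restart ntpd and confirm the peers actually show up:

root@bronze:~# systemctl restart ntp
root@bronze:~# ntpq -p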

Add the brick

If bronze isn't already in the trusted pool, it needs glusterfs-server installed and running first, then a probe from one of the existing peers:
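
# on the new node, if it isn't already running glusterd
root@bronze:~# apt install glusterfs-server
# from an existing peer, pull bronze into the trusted pool
root@silver:~# gluster peer probe bronze.lan.nathancurry.com
root@silver:~# gluster peer status

With bronze in the pool, add the brick and bump the replica count to 3: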

root@silver:~# gluster volume add-brick gluster replica 3 bronze.lan.nathancurry.com:/glusterfs/brick/
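
Gluster still has to replicate the existing data onto the new brick; these standard commands (my addition, not from the original session) kick off a full heal and watch its progress:

root@silver:~# gluster volume info gluster
root@silver:~# gluster volume heal gluster full
root@silver:~# gluster volume heal gluster info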

Add the Proxmox node to the cluster

This was as simple as copying the info from Datacenter > Cluster > Join Information in the GUI on an existing cluster node, then pasting it into Datacenter > Cluster > Join Cluster on the new node.
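
For the record, the CLI equivalent is run on the new node and pointed at any existing cluster member:

root@bronze:~# pvecm add gold.lan.nathancurry.com
root@bronze:~# pvecm status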