Proxmox VE 5.2-1 with a GlusterFS Store

"Hey man, I thought you were doing oVirt?"

I had initially planned on migrating from plain KVM to oVirt. My reasoning was thus:

  1. oVirt is the upstream of Red Hat Enterprise Virtualization, and I plan on pursuing higher Red Hat certifications
  2. It integrates with OpenStack, another technology adopted by Red Hat
  3. It’s robust and flexible

However, upon installing it, I ran into some problems:

  * Documentation for RHEV is complete, but behind a version or two
  * Documentation for oVirt is poorly maintained, and some documents stop almost mid-sentence
  * Terraform plugins for oVirt are not ready for primetime
  * The network management is needlessly ornate

I’ve read emphatic praise for Proxmox, and so I figured I’d give it a try despite them being Debian-based savages.


Installation

Super easy. Download the ISO, dd it to one of those tiny free flash drives some guy with a lanyard standing in a field gave me, and click Next a lot.

One minor complaint: I couldn’t tab through the fields on the EULA page, so I had to walk all the way to the other side of the room to get a mouse.

Hard drive partitioning

There’s no manual partitioning, but they give you the option to set max root size, minimum free space, etc.

I set:

  * swap = 8GB
  * max root = 10GB
  * min free = 200GB

This rendered the following:

root@gold:/# fdisk -l /dev/sda
Device        Start       End   Sectors   Size Type
/dev/sda1      2048      4095      2048     1M BIOS boot
/dev/sda2      4096    528383    524288   256M EFI System
/dev/sda3  528384 488397134 487868751 232.6G Linux LVM
# Abbreviated LVM setup:
root@gold:/# lvdisplay
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV Size                8.00 GiB
  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV Size                10.00 GiB
  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Size                12.63 GiB

The thin pool is for provisioning containers. It’s a little light, but my containers will be doing next to nothing.
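As a sanity check on those fdisk numbers (plain arithmetic, nothing Proxmox-specific): /dev/sda3 spans 487868751 sectors of 512 bytes, which should come out to the 232.6G reported above.

```shell
# Convert the sector count fdisk reports for /dev/sda3 into GiB.
# Integer math only: compute tenths of a GiB, then split out the decimal.
sectors=487868751
bytes=$((sectors * 512))                      # 249788800512 bytes
tenths=$((bytes * 10 / 1024 / 1024 / 1024))   # GiB x 10
echo "$((tenths / 10)).$((tenths % 10)) GiB"  # prints "232.6 GiB"
```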

Email notification

I set up email notification to make use of the auto-filter feature in Gmail.

First boot

It takes a minute for everything to come online, but then you navigate to https://hostname:8006 and get rolling.

I also copied my public key to the server:

nc@tiny: ~ $ ssh-copy-id -i ~/.ssh/ root@gold

The web interface is fantastic. Not very modern looking, but fast and full-featured. Creating a cluster is very easy, and it even automatically shares authorized keys to all nodes. I’m literally crying right now. What lovely savages.
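Creating the cluster itself is just a couple of commands from the shell as well. Roughly this — the cluster name is arbitrary (I'm making up "homelab" here), and I'm assuming gold gets created first with silver joining it:

#On the first node
root@gold:~# pvecm create homelab

#On the second node, pointing at the first
root@silver:~# pvecm add gold

#Confirm both nodes are members
root@gold:~# pvecm status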

Add Storage:

Create Gluster

This is basically a repeat of the previous article on Gluster, but to summarize:

#On both hosts
root@gold:~# apt-get update
root@gold:~# apt-get install glusterfs-server glusterfs-client
root@gold:~# systemctl start glusterfs-server
root@gold:~# systemctl enable glusterfs-server
#Get free extents
root@gold:~# vgdisplay
  --- Volume group ---
  VG Name               pve
  Free  PE / Size       51200 / 200.00 GiB
#Create volume
root@gold:~# lvcreate pve /dev/sda3 -l 51200 -n gluster
#Format with inode size=512 for glusterfs performance
root@gold:~# mkfs.xfs -i size=512 /dev/pve/gluster
root@gold:~# echo "/dev/pve/gluster /glusterfs xfs defaults 0 0" >> /etc/fstab
root@gold:~# mkdir /glusterfs
root@gold:~# mount -a
root@gold:~# mkdir /glusterfs/brick

#On one host
root@gold:~# gluster peer probe silver
peer probe: success.
root@gold:~# gluster volume create gluster replica 2 gold:/glusterfs/brick silver:/glusterfs/brick
volume create: gluster: success: please start the volume to access data
root@gold:~# gluster volume start gluster
volume start: gluster: success
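Before handing the volume to Proxmox, it's worth confirming the replica actually formed. Peer status should show silver connected, and the volume info should report a Replicate type with both bricks listed:

#Sanity checks
root@gold:~# gluster peer status
root@gold:~# gluster volume info gluster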

Add through WebUI

Navigate to Datacenter > Storage, click Add, and follow the instructions:

ID: gluster #Proxmox label
Server: gold
Second server: silver
Volume name: gluster #Gluster volume name
Content: everything
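For the CLI-inclined, the same storage can be added with pvesm instead of the WebUI — a sketch from memory, so double-check the option names against pvesm help:

root@gold:~# pvesm add glusterfs gluster --server gold --server2 silver --volume gluster --content images,iso

Either way, the result is a stanza in /etc/pve/storage.cfg, roughly:

glusterfs: gluster
        server gold
        server2 silver
        volume gluster
        content images,iso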


So far I love Proxmox. I’m going to cover setting up a VM and a container in the next post.