Two-Node GlusterFS on oVirt Node 4.2.5.1

Generally speaking, you shouldn’t configure a two-node Gluster volume, since data divergence (split-brain) is resolved by a vote among the nodes, which needs at least three voters. However, my third bare-metal host is currently occupied recovering data from a dying hard drive, so I’m doing this for now and adding the third brick later.
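
When that host frees up, the plan is to grow the volume to replica 3 with something along these lines (the hostname and brick path here are placeholders, not part of this setup):

[root@gold ~]# gluster volume add-brick gv0 replica 3 bronze.lan.nathancurry.com:/glusterfs/bronzebrick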

Install oVirt

I booted the oVirt 4.2 installer ISO off a flash drive, and made only minor changes.

I took the suggested partition scheme, but reduced the root partition to 8 GiB and used the spare space for my GlusterFS partition, formatting it as XFS and mounting it at /glusterfs.

Pretty straightforward.
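
If you’d rather do the equivalent by hand after installation, it boils down to roughly this (the device name is just an example; yours will differ):

[root@gold ~]# mkfs.xfs /dev/sdb1
[root@gold ~]# mkdir /glusterfs
[root@gold ~]# echo '/dev/sdb1 /glusterfs xfs defaults 0 0' >> /etc/fstab
[root@gold ~]# mount /glusterfs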

Configure the Volume

Create a directory for the brick on all the machines. It should be a subdirectory of your mountpoint, so that a failed mount triggers a mayday (the brick path goes missing) instead of Gluster quietly writing into the root filesystem.

[root@gold data]# mkdir /glusterfs/goldbrick
# And on my second node
[root@silver data]# mkdir /glusterfs/silverbrick
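
A quick sanity check that the brick directory really lives on the XFS partition, and not the root filesystem, might look like this:

[root@gold ~]# df -h /glusterfs/goldbrick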

Then you need to set up connectivity between the nodes. Only one host is shown for brevity, but do this on both.

# Open firewall to the other glusterfs hosts:
[root@gold ~]# firewall-cmd --add-rich-rule 'rule family=ipv4 source address=192.168.7.72 accept' --add-rich-rule 'rule family=ipv4 source address=192.168.7.73 accept'
# Make it permanent
[root@gold ~]# firewall-cmd --add-rich-rule 'rule family=ipv4 source address=192.168.7.72 accept' --add-rich-rule 'rule family=ipv4 source address=192.168.7.73 accept' --permanent
# Open glusterfs ports to clients
# This should already be open by default, but make sure.
[root@gold /]# firewall-cmd --add-service glusterfs
[root@gold /]# firewall-cmd --add-service glusterfs --permanent
# Start the service
[root@gold ~]# systemctl start glusterd
[root@gold ~]# systemctl enable glusterd
Created symlink from /etc/systemd/system/multi-user.target.wants/glusterd.service to /usr/lib/systemd/system/glusterd.service.
# Set the SELinux boolean for glusterfs/virt
[root@gold ~]# setsebool -P virt_use_glusterd=1
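
To double-check that the rules and the boolean actually stuck, something like this should do:

[root@gold ~]# firewall-cmd --list-rich-rules
[root@gold ~]# firewall-cmd --list-services
[root@gold ~]# getsebool virt_use_glusterd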

Now check connectivity and create the volume:

[root@gold ~]# gluster peer probe silver
peer probe: success.
[root@gold data]# gluster volume create gv0 replica 2 gold.lan.nathancurry.com:/glusterfs/goldbrick silver.lan.nathancurry.com:/glusterfs/silverbrick
[root@gold data]# gluster volume start gv0
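
It’s worth confirming that both peers and bricks show up before moving on:

[root@gold ~]# gluster peer status
[root@gold ~]# gluster volume info gv0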

Save yourself several hours

Seriously. Grant the node’s virtualization manager (vdsm, uid 36 / kvm, gid 36) ownership of the new volume, or the Ansible script will throw a vague permission denied error. It’s not your firewall. It’s not SELinux. It’s this:

[root@gold ~]# gluster volume set gv0 storage.owner-uid 36
volume set: success
[root@gold ~]# gluster volume set gv0 storage.owner-gid 36
volume set: success
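
If your Gluster build has volume get, you can verify that the options took:

[root@gold ~]# gluster volume get gv0 storage.owner-uid
[root@gold ~]# gluster volume get gv0 storage.owner-gid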

GlusterFS is now live. Test from my laptop:

nc@tiny: /mnt $ sudo mount -t glusterfs gold:/gv0 /mnt/
nc@tiny: /mnt $ mount | grep gluster
gold:/gv0 on /mnt type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
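
A quick way to prove replication is working is to drop a file in from the client and look for it on both bricks (the filename is just an example):

nc@tiny: /mnt $ sudo touch /mnt/hello
# The file should now appear under both brick directories
[root@gold ~]# ls /glusterfs/goldbrick
[root@silver ~]# ls /glusterfs/silverbrick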

Install oVirt Engine

You have two options: the self-hosted engine, or installing the engine directly on a node.

Self Hosted Engine

As of this release (and as of 9/2/2018), the Self-Hosted Engine script seems broken (update: 4.2.6 failed as well). I got as far as the point where it transfers the engine image to the GlusterFS partition; it choked after setting up the redirects in the WebUI, but before the host was working. At that point the script would no longer recover, so I gave up.

Installing Directly on the Host/Node

This was super easy, and I just followed the oVirt documentation.
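
For reference, the documented procedure more or less comes down to this (double-check the docs for the exact release RPM for your version; this one is for 4.2):

[root@gold ~]# yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
[root@gold ~]# yum install ovirt-engine
[root@gold ~]# engine-setup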

Hot dang.
