Fedora setup as Xen host

History

I’ve been running Xen since something like 2008, originally on an AMD Athlon X2 dual-core, back when paravirtual installs were the only option because hardware virtualisation support wasn’t common yet.  Obviously I’m bloody-minded 🙂  These days, virtual machines are much easier to do, but I still like to push boundaries, and for the last 3 years my Xen hypervisor has been running a pair of VMs which are the backbone of my home network: pfSense and FreeNAS.  Both are FreeBSD derivatives, which 3 years ago made them very painful to virtualise, requiring me to recompile the base images to add Xen support to their kernels.

Now, thanks to FreeBSD 10’s baked-in Xen support, I’ve been running the stock pfSense image – but my FreeNAS VM has languished and I’m stuck on 9.2.1.8 with my home-compiled Xen support. Finally though, happy happy, joy joy, I’m going to rebuild my home server and in the process see how much the world has moved on… or not.

Requirements

  • Xen dom0 capable of supporting a range of VMs including:
    • pfSense or similar – firewall with pci-passthrough of a dual-port PCIe NIC as dedicated external and internal interfaces (so there is a decent router/firewall separating my home network from the interwebs)
    • FreeNAS – providing network storage for all the home machines (personal and shared network drives, music server etc.)
    • A mix of Win10, Win8, CentOS, Fedora, Docker etc. VMs with a variety of uses (backup server, mail server, and more)

Nothing especially challenging, except for the pci-passthrough of the LSI Host Bus Adapter (HBA) to the FreeNAS VM, and pci-passthrough of the network card for the router VM – which was a painful challenge 3.5 years ago, but hopefully much easier this time around thanks to FreeBSD 10’s native Xen support.

I weighed up XenServer 7.2 – but as much as I like the concept, their kernel build is just too power hungry for a home server (tested against CentOS and Fedora on the exact same hardware), which here in Oz means a fair difference in running cost.

This time, I’m pushing the envelope more on storage – I’ll be passing through a second disk controller (the onboard Intel chipset SATA controller with 6 ports) as well as the HBA, to make 3 separate ZFS pools managed by the fileserver VM:

$ lspci | grep controller | grep SA
00:1f.2 SATA controller: Intel Corporation C600/X79 series chipset 6-Port SATA AHCI Controller (rev 06)
01:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
07:00.0 Serial Attached SCSI controller: Intel Corporation C602 chipset 4-Port SATA Storage Control Unit (rev 06)

With 3 controllers, that’s one for dom0, and 2 for the ZFS domU:

  • dom0 (4-port controller) – 2x Intel S3500 SSD
  • 8-port LSI HBA:
    • 5x 500GB Samsung 840 Pro SSD – RAIDZ1 for approx. 2TB of fast storage to use as iSCSI for VMs
    • 2x 2TB drives – ZFS mirror for backup management (synced to cloud)
  • 6-port Intel C600 controller:
    • 6x 2TB WD Red drives – RAIDZ2 for 8TB of slower storage (still fast enough to saturate gigabit links) for VM data, MythTV recordings, family albums etc.
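
For reference, the pool layout I’m aiming for inside the storage VM looks roughly like this – a sketch only, where the pool names and FreeBSD device names are placeholders, and FreeNAS would normally build these through its own GUI anyway:

# fast pool: 5x 500GB 840 Pro SSDs in raidz1 (~2TB usable), served as iSCSI for VMs
zpool create fast raidz1 da0 da1 da2 da3 da4
# backup pool: 2x 2TB drives mirrored, synced to cloud
zpool create backup mirror da5 da6
# bulk pool: 6x 2TB WD Reds in raidz2 (~8TB usable)
zpool create tank raidz2 ada0 ada1 ada2 ada3 ada4 ada5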

I got the Samsung SSDs cheaply (<$90 each), and already have all the 2TB drives.  Overall this is way more storage than any family needs – we don’t watch many movies more than once… which according to the internet makes us freaks, as people on tech forums seem to feel their lives are incomplete without at least 10,000 hours of video archived that they’ll never actually watch.  Bottom line – no point spending big on all-new drives.  The RAIDZ arrays insulate us from drive failure, and an adequate backup policy means that even if we lose a whole array (or get burgled), we can recover with minimal emotional scarring.  I have a spare WD 2TB Red drive – but rather than leaving it powered up as a hot spare, I’ll just plug it in if a drive fails.

Configuration

Hardware

I’m lazy, so I’ll just link to my semi-recent post about Cheap home datacentre-quality gear: the Supermicro X9DRH-7F.  If I get around to it, I’ll include more info on the hardware overkill – it’s nothing special, really just a case with a motherboard, PSU, and a lot of SATA cables zip-tied together in an attempt to avoid spaghetti!

Fedora Install – Dom0 Config

Let’s start by fetching a Fedora spin and making a bootable USB.  I went to my local (and hence fast) Fedora mirror and fetched this:

http://mirror.aarnet.edu.au/pub/fedora/linux/releases/26/Spins/x86_64/iso/Fedora-Xfce-Live-x86_64-26-1.5.iso

Why not the server ISO? Good question… and it has a tricky answer (remember, per my opening paragraph, I’m bloody-minded).

This time, as Fedora includes native Xen support, I’m hoping for two outcomes my last CentOS-based dom0 build struggled with: firstly, the ability to run GUI tools for managing and creating VMs instead of just the command line, and secondly, the ability to run those dom0 graphical management tools remotely.  Of course, I’m going to do it a little weirdly with old-school X Windows magic, using Cygwin/X to run GUI tools from the dom0 on my Windows desktop, complete with clipboard integration, instead of just using VNC (I miss the days of wondering which darn machine an xterm window was actually running on… I used to work for Sun, so we had a lot of hardware around!).
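
In practice that’s just X forwarding over SSH from the Cygwin/X terminal – something like the line below, where the hostname is a placeholder and virt-manager stands in for whichever GUI tool ends up installed on the dom0:

# run a GUI tool on the dom0, displayed back on the desktop's X server
ssh -X root@dom0.home.lan virt-manager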

Next, as much as I’d like to just mount the ISO using IPMI and the iKVM, it bugs me just how slow that approach is, so I’m going to write the ISO to a USB stick and (gasp) plug it into the server!  If I were working remotely from a datacentre and had no choice I’d use the iKVM, but it’s painful, even over local gigabit ethernet.

ISO-to-USB tool of choice: Rufus – found here: https://rufus.akeo.ie/
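
If you’d rather skip Rufus and write the stick from an existing Linux box, something like this does the job (the /dev/sdX target is a placeholder – double-check it, as dd will happily destroy the wrong disk):

# sanity-check the download, then write it raw to the USB stick
sha256sum Fedora-Xfce-Live-x86_64-26-1.5.iso
sudo dd if=Fedora-Xfce-Live-x86_64-26-1.5.iso of=/dev/sdX bs=4M status=progress conv=fsync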

I’ll assume you don’t need to be told how to put the USB into the slot and boot your machine from it, as if you struggle with that, you’re going to find the rest of this page hard to follow.  On to the magic steps!

Step 1 – Install Fedora

Key notes:

  • Configure local storage as LVM on top of RAID1, so if one of the OS drives dies it can be replaced without much downtime (a rough command-line equivalent is sketched after this list)
  • Stick with ext4 as it’s well supported by tools, and OS drive performance isn’t an issue
  • /boot on a RAID1 ext4 partition
  • Use the server netinst ISO to do a network install, so you can pick a bare window-manager install
  • Install the LXDE window manager as it’s lightweight and works comfortably on the 8MB onboard video
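
For the curious, the rough command-line equivalent of what the installer builds is sketched below – illustrative only, since Anaconda does all of this through its GUI, and the device names and sizes are assumptions:

# mirror the two OS SSDs: a small md for /boot, a big md for LVM
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkfs.ext4 /dev/md0                      # becomes /boot
pvcreate /dev/md1
vgcreate vg_dom0 /dev/md1
lvcreate -L 16G -n swap vg_dom0
lvcreate -l 100%FREE -n root vg_dom0
mkfs.ext4 /dev/vg_dom0/root             # becomes /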

Step 2 – First boot

Once booted, the first step is to update everything:

sudo dnf -y upgrade

Then enable sshd so you can SSH in and cut and paste commands instead of typing at the console:

sudo systemctl start sshd
sudo systemctl enable sshd
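
A quick sanity check that it’s actually listening (optional):

sudo systemctl status sshd
sudo ss -tln | grep :22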

Install Xen (following https://wiki.xen.org/wiki/Fedora_Host_Installation):

# install xen
sudo dnf install -y xen
# configure grub to boot it
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo grep ^menuentry /boot/grub2/grub.cfg | cut -d "'" -f2
# after reading the above, pick this one :-)
sudo grub2-set-default "Fedora, with Xen hypervisor"

# sort out network
sudo chkconfig network on
sudo vi /etc/sysconfig/network-scripts/ifcfg-xenbr0

Then, in that config file, let’s put the config for the network bridge that the Xen guests will attach to. Add these contents:

DEVICE=xenbr0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
DELAY=0
NM_CONTROLLED=no

Edit the ifcfg file for your physical interface:

sudo vi /etc/sysconfig/network-scripts/ifcfg-enp5s0f0

to add these lines:

BRIDGE=xenbr0
# if this exists already as a "yes", make it no, else add it just in case
NM_CONTROLLED=no

and then test by restarting the network service:

sudo systemctl restart network.service

And check to see if the bridge is working and has an IP address:

$ ifconfig xenbr0
xenbr0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
 inet 192.168.0.80 netmask 255.255.0.0 broadcast 192.168.255.255
 inet6 fe80::ec4:7aff:fe12:d326 prefixlen 64 scopeid 0x20<link>
 ether 0c:c4:7a:12:d3:26 txqueuelen 1000 (Ethernet)
 RX packets 1760 bytes 520377 (508.1 KiB)
 RX errors 0 dropped 0 overruns 0 frame 0
 TX packets 167 bytes 20861 (20.3 KiB)
 TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
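
It’s also worth confirming the physical NIC actually joined the bridge – either of these will show it (brctl needs the bridge-utils package, which may or may not already be installed):

$ sudo brctl show xenbr0
$ ip link show master xenbr0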

Reboot time!

Step 3 – Xen

After the reboot, the first thing is to check whether Xen is up and running (as root):

# xl list
Name                          ID    Mem  VCPUs  State   Time(s)
Domain-0                       0 129235      8  r-----     45.9

Success – it can see the dom0 (and yes, I have 128GB of memory 😀 ).
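
A couple of other stock xl commands are handy at this point for a closer look at the hypervisor:

# hypervisor version, free/total memory, capabilities (e.g. hvm support)
xl info
# hypervisor boot log, useful if anything looks off
xl dmesg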

NAS / Router setup – DomU Config

OK – putting this heading in this post was ambitious… I’ll leave it for a new post, where I’ll sort out PCI passthrough of controller devices such as the LSI HBA (for ZFS on a NAS VM) and ethernet controllers (for dedicated interfaces on a pfSense router VM).
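
As a teaser, the dom0 side of that passthrough boils down to making each device assignable – roughly the steps below, using the LSI HBA’s address from the lspci output earlier (a sketch only; the guest-side config and the inevitable gotchas are for the follow-up post):

# load the pciback driver, then mark the HBA (01:00.0) as assignable to guests
sudo modprobe xen-pciback
sudo xl pci-assignable-add 01:00.0
sudo xl pci-assignable-list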

ooroo!
