Introduction
This page describes how Alexis Huxley installed and configured a replicated storage and virtualisation environment providing VM services and storage services; note that the storage services are provided by a VM with a large disk.
Hardware
- two systems are required
- RAID1 is not needed (redundancy will be provided by DRBD)
- optionally RAID0 over multiple disks in each server (RAID0 will help improve IO speeds)
- two physical NICs are required in each host
Local storage
Virtualisation servers will use DRBD-replicated storage for most VMs. However, occasionally, local space is useful (e.g. for a test VM).
- If the system was installed with PCMS then skip this section as it will already have been done by PCMS.
- Create an LV:
lvcreate --name=local --size=200g vg0
- Format it as XFS, which supports online resizing (see the example at the end of this section):
mkfs -t xfs -f /dev/vg0/local
- Add an fstab entry for it as below, create the mountpoint and mount it:
/dev/mapper/vg0-local /vol/local xfs auto,noatime,nodiratime 0 2
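Because XFS supports online growth, the local filesystem can later be enlarged without unmounting it. A minimal sketch, assuming vg0 still has free extents; the size shown is purely illustrative:

lvextend --size=+50g /dev/vg0/local
xfs_growfs /vol/local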
Replicated storage
- Install drbd-utils.
- As DRBD devices are created, the various LVM commands will start producing errors and unwanted output:
fiori# lvs
  /dev/drbd_pestaroli: open failed: Wrong medium type
  ...
fiori# pvs
  ...
  /dev/vg0/fettuce_pub    vg2 lvm2 a--    1.95t 0
  /dev/vg0/fettuce_small  vg1 lvm2 a--  149.99g 0
  /dev/vg0/gigli_p2p      vg1 lvm2 a--  149.99g 0
  ...
fiori#
The first occurs because LVM probes all block devices, including the DRBD devices; the second because some of the LVs have LVM within them (they are VM disks containing their own PVs), so their contents are scanned as well. To fix both, add the following filter to /etc/lvm/lvm.conf:
devices {
    ...
    filter = [ "r|/dev/drbd.*|", "r|/dev/vg.*|" ]
    ...
}
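For reference, DRBD resources such as the drbd_pestaroli device seen above are defined in files under /etc/drbd.d/. A minimal sketch only; the resource name, backing LV, port and torchio's cluster address are illustrative assumptions:

# /etc/drbd.d/pestaroli.res -- illustrative only
resource pestaroli {
    device      /dev/drbd_pestaroli minor 0;
    disk        /dev/vg0/pestaroli;
    meta-disk   internal;
    on fiori {
        address 192.168.3.6:7789;
    }
    on torchio {
        address 192.168.3.7:7789;    # assumed cluster address for torchio
    }
}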
Persistent NIC naming
- Add suitable entries to /etc/udev/rules.d/70-persistent-net.rules, e.g.:
fiori# cat /etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:1b:21:24:ea:32", ATTR{dev_id}=="0x0", ATTR{type}=="1", NAME="eth1"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="10:fe:ed:05:93:2a", ATTR{dev_id}=="0x0", ATTR{type}=="1", NAME="eth2"
fiori#
torchio# cat /etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0e:0c:c5:f0:6d", ATTR{dev_id}=="0x0", ATTR{type}=="1", NAME="eth1"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="10:fe:ed:05:92:6d", ATTR{dev_id}=="0x0", ATTR{type}=="1", NAME="eth2"
torchio#
(Note that eth0 is not specified as it is on the system board and is always named eth0.)
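The MAC addresses used in these rules can be read from the running system, e.g.:

ip -o link show
cat /sys/class/net/eth1/address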
Shared public network interface
The VMs that run on each node will need access to the public network interface.
- If the system was installed with PCMS then skip this section as it will already have been done by PCMS.
- Reconfigure the stanza for eth0 in /etc/network/interfaces accordingly. E.g.:
iface eth0 inet manual

auto br0
iface br0 inet static
    address 192.168.1.6
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports eth0
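Note that on Debian the bridge_* options in /etc/network/interfaces are handled by the bridge-utils package, so if it is not already present:

apt-get install bridge-utils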
Dedicated network interface for cluster communications
It is essential to use a dedicated network card for cluster communications in order to ensure that public traffic does not impact replication.
- If the system was installed with PCMS then skip this section as it will already have been done by PCMS.
- Add a suitable entry to /etc/network/interfaces for the NIC that will be used for cluster communications and add an entry for it to /etc/hosts (a sketch of the hosts entry follows this list). E.g.:
auto eth1
iface eth1 inet static
    address 192.168.3.6
    netmask 255.255.255.0
- Reboot.
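The /etc/hosts entry mentioned above might look like the following; the hostnames and torchio's address are purely illustrative:

192.168.3.6    fiori-sync
192.168.3.7    torchio-sync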
Hypervisors
This procedure is to be run on both nodes, whether or not both are being configured at the same time, unless explicitly stated otherwise.
- If the system was installed with PCMS then skip this section as it will already have been done by PCMS.
- Run:
apt-get install qemu-kvm libvirt-clients libvirt-daemon qemu-utils virt-top
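It may be worth verifying that the libvirt daemon is running and reachable before continuing, e.g.:

systemctl status libvirtd
virsh --connect qemu:///system version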
Virtual resources
- Remove pre-defined but unwanted storage pools:
# nothing to do as there are none
- Create and define the local storage pool:
mkdir /vol/local/vmpool0
virsh pool-define-as --name=vmpool0 --type=dir \
    --target=/vol/local/vmpool0
virsh pool-start vmpool0
- Create and define a pool for ISO images:
mkdir /vol/local/isoimages
virsh pool-define-as --name=isoimages --type=dir \
    --target=/vol/local/isoimages
virsh pool-start isoimages
and copy in any ISO images you need.
- Remove pre-defined but unwanted networks:
virsh net-destroy default
virsh net-undefine default
- Since we plumb the VMs' NICs into the shared br0, and br0 is not managed by libvirt, there is nothing to do at this time to configure access to the public network.
- Define a network to allow co-hosted VMs to communicate with each other directly, e.g.:
virsh net-define <(cat <<EOF
<network>
  <name>192.168.10.0</name>
  <uuid>$(uuidgen)</uuid>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='52:54:00:81:cd:08'/>
  <ip address='192.168.10.1' netmask='255.255.255.0'>
  </ip>
</network>
EOF
)
virsh net-autostart 192.168.10.0
virsh net-start 192.168.10.0
- Set up SSH keys to allow virt-manager to be run from a remote system, as sketched below.
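A minimal sketch of that key setup, run on the remote workstation; the hostname fiori stands for whichever node is being managed:

ssh-keygen -t ed25519
ssh-copy-id root@fiori
virt-manager -c qemu+ssh://root@fiori/system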
Creating replicated volumes for VMs
Storage for VMs is created on a VM-by-VM basis, so there is nothing to do now.