Friday, 2 September 2011

Install XenServer 5.6 on software RAID 1

A problem I have encountered is that the XenServer 5.6 boot install media
does not recognise a two SATA disk RAID 1 set configured using the Intel
SW RAID BIOS - it sees the two drives separately. The SR RAID set is on a
separate Intel RAID adapter with separate SAS drives.

The first step is to disable the BIOS RAID and configure the two SATA
drives as independent drives (AHCI mode).

Then install XenServer onto the first disk /dev/sda and don't configure any
SRs at this point. Once the install completes and the server restarts, go to
the console screen (ALT-F3) and follow these instructions:

The original source for these instructions is this blog - thanks Todd!

I copied and updated his instructions and numbered them for easier reference.

1. To copy the partition table from /dev/sda to /dev/sdb you can use dd

dd if=/dev/sda of=/dev/sdb bs=512 count=1

2. Now set the partition table up on /dev/sdb the way it should be for
Linux RAID. This means setting the partition types to 0xfd.

I used the following command:

echo -e "\nt\n1\nfd\nt\n3\nfd\nw\nx" | fdisk /dev/sdb

That tells fdisk to tag partition 1 as type 0xfd and partition 3 as type
0xfd, then write the changes.

3. Check to make sure the /dev/md? devices are present

[ -e /dev/md0 ] || mknod /dev/md0 b 9 0
[ -e /dev/md1 ] || mknod /dev/md1 b 9 1

4. Start up the degraded RAID devices

mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb3

pvcreate /dev/md1
volume_group=`vgscan | grep VG | awk -F \" '{print $2}'`
vgextend $volume_group /dev/md1

>> I have had to replace the next line
pvmove /dev/sda3 /dev/md1
>> with
dd if=/dev/sda3 of=/dev/md1 bs=8M
>> this is because in XenServer 5.6 the /dev/sda3 partition is of type 8e (LVM) rather than 83.

vgreduce $volume_group /dev/sda3
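Why the dd workaround behaves like the pvmove here: dd clones the physical volume block-for-block, so /dev/md1 ends up carrying the same LVM metadata and data as /dev/sda3. A toy demonstration of the same copy with two scratch files standing in for the partitions (the file names are made up for illustration):

```shell
# Create a 4 MB "source partition" of random data (stand-in for /dev/sda3)
dd if=/dev/urandom of=/tmp/src.img bs=1M count=4 2>/dev/null

# Block-copy it exactly as the instructions do for the real devices
dd if=/tmp/src.img of=/tmp/dst.img bs=8M 2>/dev/null

# The copy is byte-identical
cmp /tmp/src.img /tmp/dst.img && echo identical
```

On the real devices, note that /dev/md1 is slightly smaller than /dev/sda3 because of the RAID superblock at the end, so the very last partial block may not copy; the PV data itself comes across intact.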

5. Now we’re ready to copy the filesystem over to the RAID device /dev/md0

mkfs.ext3 /dev/md0
cd / && mount /dev/md0 /mnt && rsync -a --progress --exclude=/sys \
--exclude=/proc --exclude=/dev/shm --exclude=/dev/pts / /mnt
mkdir /mnt/sys
mkdir /mnt/proc

6. Now let’s set up the initrd

mkdir /root/initrd && cd /root/initrd
zcat /boot/initrd-`uname -r`.img | cpio -i && \
cp /lib/modules/`uname -r`/kernel/drivers/md/raid1.ko lib

7. Now we have to edit the init file

q="echo Waiting for driver initialization."
sed -r -i "s,^${q}$,\n\necho Loading raid1.ko module\ninsmod /lib/raid1.ko\n${q}\n,g" init

q="resume /var/swap/swap.001"
sed -r -i "s,^${q}$,${q}\necho Running raidautorun\nraidautorun /dev/md0\nraidautorun /dev/md1,g" init

r=`grep mkroot /root/initrd/init`
sed -r -i "s|^${r}$|${r/sda1/md0}|g" init

8. Now we’ll copy the initial ramdisk to /boot on the new RAID

>> This is a key step - replace
find . -print | cpio -o -c | gzip -c > /boot/initrd-`uname -r`.img
>> with
find . -print | cpio -o -c | gzip -c > /mnt/boot/initrd-`uname -r`.img

sed -r -i 's,LABEL=root-\w+ ,/dev/md0 ,g' /mnt/etc/fstab
sed -r -i 's,LABEL=root-\w+ ,/dev/md0 ,g' /etc/fstab

9. And set up the boot loader

sed -r -i 's,root=LABEL=root-\w+ ,root=/dev/md0 ,g' /mnt/boot/extlinux.conf
sed -r -i 's,root=LABEL=root-\w+ ,root=/dev/md0 ,g' /boot/extlinux.conf
cat /usr/lib/syslinux/mbr.bin > /dev/sdb
cd /mnt
extlinux -i boot/

cp /mnt/boot/extlinux.conf /boot/
cp /mnt/boot/initrd-`uname -r`.img /boot

10. Unmount /dev/md0, sync, and reboot

cd ; umount /mnt || umount /dev/md0
sync
reboot

11. After the server boots up again we tag the /dev/sda partitions as type
Linux RAID, then add them to the arrays.

echo -e "\nt\n1\nfd\nt\n3\nfd\nw\nx" | fdisk /dev/sda
mdadm -a /dev/md0 /dev/sda1
mdadm -a /dev/md1 /dev/sda3


12. Allow the server to boot up and wait for the RAID to rebuild.
To check whether the rebuild has completed, run the following:

mdadm --monitor /dev/md0 -1

If you see "DegradedArray" it's still rebuilding. Once it completes, reboot a
couple of times to make sure all is well.
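Rebuild progress can also be watched via /proc/mdstat (a sketch; the exact device names depend on your layout):

```shell
# A rebuilding mirror shows a "recovery" line in /proc/mdstat, e.g.
#   md1 : active raid1 sda3[2] sdb3[1]
#         [=>..................]  recovery =  7.5% finish=12.3min
# Both arrays show [UU] once the mirror is complete. This loop waits
# until no array is still resyncing:
while grep -Eq 'recovery|resync' /proc/mdstat; do
    sleep 30
done
```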

That's it!

You can also configure mdadm's RAID monitoring to send an email when an array
degrades.
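The post doesn't show the monitoring configuration itself; a minimal sketch for the CentOS-based dom0 (assumptions: a working local mail setup, the stock mdmonitor init service, and admin@example.com as a placeholder address):

```shell
# Tell mdadm where to send alerts (MAILADDR is a standard mdadm.conf keyword)
echo "MAILADDR admin@example.com" >> /etc/mdadm.conf

# Send a test alert for every array to confirm mail delivery works
mdadm --monitor --scan --test --oneshot

# Make sure the monitor daemon runs at boot and pick up the new config
chkconfig mdmonitor on && service mdmonitor restart
```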

There are a couple of things to beware of with this config.
1. Don't apply any patches as this will probably break the LVM setup.
2. Citrix Licensing policies may require you to install some patches in the
future to renew licenses.
