I had been using XenServer as my hypervisor for the past 6 months and ran a number of VMs on it. However, there were some deal breakers that led me to migrate back to running Ubuntu as the base OS, with the VMs running in VirtualBox on top of Ubuntu.
- Lack of USB passthrough support - only USB mass storage class (MSC) devices can be passed through, so no USB WiFi card for guest OSes.
- Unnecessarily complicated - lots of new `xe` commands to learn, plus concepts such as storage repositories and LVM volumes. It is probably better suited to larger setups, e.g. multiple physical servers forming clusters, or combining multiple disks into a single large storage repository. Overkill for my single server with 1 SSD and 1 HDD.
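To give a flavour of the extra tooling, here is roughly what managing storage looks like with the `xe` CLI, from memory (the labels, device path, and uuid placeholder are illustrative):

```bash
# List VMs and storage repositories on the host.
xe vm-list
xe sr-list

# Even adding a second local disk means creating a storage repository
# (name-label, device path, and <host-uuid> are illustrative placeholders).
xe sr-create name-label="Local HDD" type=lvm shared=false \
    content-type=user device-config:device=/dev/sdb host-uuid=<host-uuid>
```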
That said, I have to acknowledge that XenServer was very quick to set up and very stable: it did not crash even once, and I had an uptime of a few months.
Migration Steps
Since I couldn't find a guide on the net, I experimented with a number of techniques and finally succeeded. The numbered steps below are what worked for me; rough sketches of what the trickier steps can look like in practice follow the list.
1. Copy out all the VHDs in `/var/run/sr-mount/`.
2. Back up all data in the storage repositories.
3. `qemu-img convert -f vpc /path/to/vhd/of/base/os.vhd -O raw /image.img`
4. `dd if=/image.img of=/dev/sda`, where `sda` is the SSD that will hold the base OS.
5. Boot into a Linux live CD and view the partitions. `sda2` should be an LVM partition containing multiple ext2/3/4 logical volumes.
6. `dd if=/dev/mapper/ext234partition-vg-root of=/dev/sdb`, where `sdb` is a scratch disk.
7. Format `sda2` into an ext2/3/4 partition as desired.
8. `dd if=/dev/sdb of=/dev/sda2`
9. Expand `sda2` if necessary. Create a `linux-swap` partition if necessary.
10. Mount `sda1` and find all references to `vg-root` in `grub.cfg`, changing them to point at `sda2`.
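For steps 1 and 2, a minimal sketch of pulling everything off the XenServer host over SSH (the hostname and destination path are illustrative; each storage repository lives under its UUID inside `/var/run/sr-mount/`):

```bash
# Run on the machine receiving the backup; assumes root SSH access to the host.
mkdir -p ~/xenserver-backup
rsync -avP root@xenserver-host:/var/run/sr-mount/ ~/xenserver-backup/
```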
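For steps 3 and 4, `qemu-img info` is a useful sanity check before and after the conversion, and it is worth triple-checking the `dd` target device, since everything on it will be overwritten:

```bash
qemu-img info /path/to/vhd/of/base/os.vhd   # should report file format: vpc
qemu-img convert -f vpc /path/to/vhd/of/base/os.vhd -O raw /image.img
qemu-img info /image.img                    # virtual size should match the VM disk

# Write the raw image to the SSD (destroys everything on /dev/sda).
sudo dd if=/image.img of=/dev/sda bs=4M
sync
```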
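For steps 5 and 6, the root logical volume will not appear under `/dev/mapper` until the volume group is activated from the live CD. A sketch, assuming the live environment ships the LVM tools (the volume group name on your system will differ from my placeholder):

```bash
sudo lsblk -f               # sda2 should show FSTYPE LVM2_member
sudo vgscan                 # discover volume groups on the newly written disk
sudo vgchange -ay           # activate them so the /dev/mapper/* nodes appear
sudo lvs                    # identify the root logical volume

# Step 6: copy the root LV to the scratch disk.
sudo dd if=/dev/mapper/ext234partition-vg-root of=/dev/sdb bs=4M
```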
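For steps 7 to 9, a sketch using ext4. After `dd`-ing the saved filesystem back onto `sda2`, the filesystem still believes it is the size of the old logical volume, so an `e2fsck` followed by `resize2fs` grows it to fill the partition (the `sda3` swap partition is illustrative; I used a partitioning tool such as GParted for resizing the partition itself):

```bash
sudo mkfs.ext4 /dev/sda2                 # step 7: clears the LVM metadata on sda2
sudo dd if=/dev/sdb of=/dev/sda2 bs=4M   # step 8: restore the saved root filesystem
sudo e2fsck -f /dev/sda2                 # required before resize2fs will run
sudo resize2fs /dev/sda2                 # step 9: grow the FS to fill the partition

# Optional swap, if you carved out a partition for it.
sudo mkswap /dev/sda3
```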
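For step 10, a sketch of rewriting the boot configuration (the device-mapper path matches my placeholder above, and `sed` is just a shortcut for the manual edit). It is also worth checking `/etc/fstab` on the root partition for the same reference:

```bash
sudo mount /dev/sda1 /mnt                 # sda1 is the /boot partition here
grep -n vg-root /mnt/grub/grub.cfg        # locate every reference to the old LV
sudo sed -i 's|/dev/mapper/ext234partition-vg-root|/dev/sda2|g' /mnt/grub/grub.cfg
sudo umount /mnt

# Check the root filesystem's fstab as well (assuming it references the old LV).
sudo mount /dev/sda2 /mnt
grep -n vg-root /mnt/etc/fstab
```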
Your new SSD should now be bootable, and the root partition should now be in ext2/3/4 format rather than LVM. I have found LVM harder to work with and prefer ext4 for its simplicity; if you would rather keep LVM, stopping at step 5 will still produce a bootable LVM disk.