XenServer V2P Migration

I have been using XenServer as my hypervisor for the past 6 months and have run a number of VMs on it. However, there were some deal breakers that led me to migrate back to running Ubuntu as the base OS, with the VMs running in VirtualBox on top of it.

  • Lack of USB passthrough support - Only USB mass storage class (MSC) devices can be passed through, so there is no way to give a guest OS a USB WiFi card.
  • Unnecessarily complicated - There are lots of new xe commands to learn (see the short example after this list), plus concepts such as storage repositories and LVM volumes. It is probably better suited to larger setups, e.g. multiple physical servers forming clusters or combining multiple disks into a single large storage repository, but it is overkill for my single server with 1 SSD and 1 HDD.
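
For a taste of the CLI, here are a couple of illustrative xe commands; the parameter selections are just examples and are not part of the migration:

    # List VMs and their power state
    xe vm-list params=name-label,power-state
    # List storage repositories with their type and size
    xe sr-list params=name-label,type,physical-size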

That said, I have to acknowledge that XenServer was very quick to set up and very stable: it did not crash even once, and I had uptimes of a few months.

Migration Steps

Since I couldn't find a guide online, I experimented with a number of techniques and finally succeeded with the steps below; a consolidated command sketch follows the list.

  1. Copy out all the VHDs in /var/run/sr-mount/.
  2. Back up all data in the storage repositories.
  3. qemu-img convert -f vpc /path/to/vhd/of/base/os.vhd -O raw /image.img
  4. dd if=/image.img of=/dev/sda, where sda is the SSD that will hold the base OS.
  5. Boot into a Linux live CD and view the partitions. sda2 should be an LVM physical volume containing multiple ext2/3/4 logical volumes.
  6. dd if=/dev/mapper/ext234partition-vg-root of=/dev/sdb, where the mapper device is the root logical volume and sdb is a scratch disk.
  7. Format sda2 into an ext2/3/4 partition as desired.
  8. dd if=/dev/sdb of=/dev/sda2
  9. Expand sda2 if necessary. Create a linux-swap partition if necessary.
  10. Mount sda1, find all references to vg-root in grub.cfg, and change them to sda2.
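
For reference, here is a consolidated sketch of steps 3-10 as shell commands. It is illustrative only: the device names (/dev/sda, /dev/sdb), the VHD path, the LVM mapper name and the grub.cfg location are taken from the steps above and will differ on your system, so substitute your own values before running anything.

    # 3. Convert the base OS VHD to a raw image ("vpc" is qemu-img's name for the VHD format)
    qemu-img convert -f vpc /path/to/vhd/of/base/os.vhd -O raw /image.img

    # 4. Write the raw image onto the SSD that will hold the base OS
    dd if=/image.img of=/dev/sda bs=4M status=progress

    # 6. From the live CD: copy the root logical volume onto a scratch disk
    dd if=/dev/mapper/ext234partition-vg-root of=/dev/sdb bs=4M status=progress

    # 7. Reformat sda2 as a plain ext4 partition (this wipes the LVM signatures)
    mkfs.ext4 /dev/sda2

    # 8. Copy the root filesystem back onto the plain partition
    dd if=/dev/sdb of=/dev/sda2 bs=4M status=progress

    # 9. If the partition was enlarged, check the filesystem and grow it to match
    e2fsck -f /dev/sda2
    resize2fs /dev/sda2

    # 10. Point grub at the new root partition instead of the LVM volume
    #     (the exact path to grub.cfg depends on how sda1 is laid out)
    mount /dev/sda1 /mnt
    grep -n 'vg-root' /mnt/grub/grub.cfg
    sed -i 's|/dev/mapper/ext234partition-vg-root|/dev/sda2|g' /mnt/grub/grub.cfg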

Your new SSD should now be bootable, with the root partition in ext2/3/4 format rather than LVM. I have found LVM harder to work with and prefer ext4 for its simplicity; if you are happy to keep LVM, stopping at step 5 will produce a bootable LVM disk.
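
As a quick sanity check (again assuming the /dev/sda layout used above), the root partition should now report a plain ext2/3/4 filesystem rather than an LVM member:

    lsblk -f /dev/sda
    blkid /dev/sda2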