Oct 25 2009

How I migrated my CentOS stuff from Xen to KVM

A quick recap of migrating my CentOS 5.4 Xen guests to KVM. It was surprisingly easy.

Finally! CentOS 5.4 with KVM support was released. I was really looking forward to upgrading my Xen guests to KVM guests, for several reasons: mostly curiosity, and KVM's better integration with the Linux kernel.

The upgrade from CentOS 5.3 to 5.4 went smoothly; after reading the Release Notes, I upgraded all Xen guests and the host. It's important to mention that I was running Xen 3.4.1 from the Gitco repo, which complicated the migration to KVM a little bit.

OK, so how did I do it? First of all, I installed the non-Xen kernel on all Xen guests and on the host.

yum install kernel
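
Before going further, it's worth checking that both kernels are actually installed (kernel-xen is the CentOS 5 Xen kernel package):

rpm -q kernel kernel-xen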

At this step I could have set GRUB's menu.lst to boot the non-Xen kernel by default, but I didn't want to; I wanted to be able to return to Xen quickly if the migration went badly.

Next, I dumped the XML configuration of every Xen guest.

virsh dumpxml <domain> > /root/<domain>.xml
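
For example, for the webdevel guest shown below, that was:

virsh dumpxml webdevel > /root/webdevel.xml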

We will convert those configurations to KVM later.

Now I installed KVM support on the host system. Here I hit a problem: binaries from the Base repo conflicted with Gitco's binaries. I solved this by uninstalling Gitco's packages.

yum install kvm kmod-kvm
yum install qemu
modprobe kvm-intel
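
If the host has an AMD CPU, the module is kvm-amd instead. To verify that the module loaded:

lsmod | grep kvm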

If all went well, we are ready to reboot the host into the non-Xen kernel.

And now came the most interesting part. I copied one of the XML configurations dumped earlier to /etc/libvirt/qemu/ and converted it from Xen to KVM. Here is the before and after:

Xen:

<domain type='xen' id='10'>
  <name>webdevel</name>
  <uuid>13d5035a-6c13-4f90-a1f3-68f4d0f90988</uuid>
  <memory>524288</memory>
  <currentMemory>524288</currentMemory>
  <vcpu>1</vcpu>
  <bootloader>/usr/bin/pygrub</bootloader>
  <bootloader_args>-q</bootloader_args>
  <os>
    <type>linux</type>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <disk type='block' device='disk'>
      <driver name='phy'/>
      <source dev='/dev/vg01/webdevel-disk'/>
      <target dev='xvda' bus='xen'/>
    </disk>
    <interface type='bridge'>
      <mac address='00:16:3e:33:07:06'/>
      <source bridge='xenbr0'/>
      <script path='/etc/xen/scripts/vif-bridge'/>
      <target dev='vif10.0'/>
    </interface>
    <console type='pty' tty='/dev/pts/3'>
      <source path='/dev/pts/3'/>
      <target port='0'/>
    </console>
  </devices>
</domain>

KVM:

<domain type='kvm'>
  <name>webdevel</name>
  <uuid>13d5035a-6c13-4f90-a1f3-68f4d0f90988</uuid>
  <memory>524288</memory>
  <currentMemory>524288</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='hd'/>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <disk type='block' device='disk'>
      <source dev='/dev/vg01/webdevel-disk'/>
      <target dev='hda' bus='ide'/>
    </disk>
    <interface type='bridge'>
      <mac address='00:16:3e:33:07:06'/>
      <source bridge='br0'/>
    </interface>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'/>
  </devices>
</domain>

Main changes:

  • domain type changed from xen to kvm
  • the <bootloader> lines were removed
  • the <os> type changed to hvm and a <boot> device was added
  • the <disk> section changed; KVM emulates an IDE bus with disks hda, hdb, hdc and hdd
  • the network interface changed from xenbr0 to br0 – I had to create br0, as it didn't exist (see the sketch after this list)
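
On CentOS 5 the bridge can be created with two files under /etc/sysconfig/network-scripts; a minimal sketch, assuming eth0 is the host's physical interface and using placeholder addresses:

# /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
# placeholders – use the host's real address and netmask
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BRIDGE=br0
ONBOOT=yes

A "service network restart" then brings br0 up.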

And now I could define the configuration in the KVM hypervisor and start it.

virsh define <domain>.xml
virsh start <domain>
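
For example, for the webdevel guest:

virsh define /etc/libvirt/qemu/webdevel.xml
virsh start webdevel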

I made some typos at first, so I had to fix the configuration and run

virsh define <domain>.xml

again to load it.

Now I connected to the console using VNC and could see the guest's GRUB menu waiting for a kernel choice. I chose the non-Xen kernel, and after a while I could log into the guest.
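
To find out which display the guest's VNC server was autoassigned, ask virsh; "kvmhost" is a placeholder for the host's name:

virsh vncdisplay webdevel
vncviewer kvmhost:0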

I checked that networking was working, and modified GRUB's menu.lst to boot the non-Xen kernel by default.
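
A sketch of the relevant menu.lst line, assuming the non-Xen kernel is the first title entry in the guest's /boot/grub/menu.lst:

# /boot/grub/menu.lst – boot the first (non-Xen) entry by default
default=0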

Because I wanted this guest to start whenever the machine boots, I ran

virsh autostart <domain>
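
Under the hood this just creates a symlink in libvirt's autostart directory, which is easy to verify:

ls -l /etc/libvirt/qemu/autostart/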

Some experiences I gained during the migration:

  • virsh shutdown doesn't shut guests down gracefully; it just kills them
  • qemu-0.9.0-4 in CentOS 5.4 seems to support only the IDE bus, so no more than 4 disk drives can be connected. This is a problem for one of my guests, which has 6 disks attached; I got "qemu: too many IDE bus" during start. I have detached 2 drives until I find a solution. The SCSI bus seems to be unsupported in this qemu version.
