Hi all,

There is a problem when I use Ceph RBD for QEMU storage. I launch 4 virtual 
machines and start a 5G random-write test in all of them at the same time. 
Under such heavy I/O, the network to the virtual machines becomes almost 
unusable; the network latency is extremely high.


I also tested another setup: when I use the 'virsh attach-device' command to 
attach an RBD image that is mapped on my host machine (the one running the 
virtual machines), the problem does not show up.
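For comparison, the attach-device variant uses a plain block disk pointing at the kernel-mapped device. This is just a sketch of what I mean, assuming the image was mapped on the host with `rbd map qemu/rbd-vm4 --id libvirt`; the /dev/rbd path comes from the udev symlink and may differ on your system:

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <!-- kernel rbd device mapped on the host; path is an example -->
  <source dev='/dev/rbd/qemu/rbd-vm4'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```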


So I think this must be a problem in QEMU's rbd (librbd) driver.


Here is my testing environment:


# virsh version
Compiled against library: libvirt 1.2.0
Using library: libvirt 1.2.0
Using API: QEMU 1.2.0
Running hypervisor: QEMU 1.7.0


In the VM's XML, I define the rbd disk like this:
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='rbd' name='qemu/rbd-vm4'>
        <host name='10.120.111.111' port='6789'/>
      </source>
      <auth username='libvirt'>
        <secret type='ceph' uuid='38b66185-4117-47a6-90bd-64111c3fc5d2'/>
      </auth>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' 
function='0x0'/>
    </disk>




testing tool: fio
io depth: 32
io engine: libaio
direct io: enabled
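The settings above correspond roughly to this fio job file (a sketch; the job name, block size, and file path are examples, not the exact values I used):

```
[randwrite]
rw=randwrite
size=5g
ioengine=libaio
iodepth=32
direct=1
bs=4k                    ; block size is an assumption, not stated above
filename=/root/fio-test  ; example path on the rbd-backed disk
```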




Has anyone else met such a problem?




regards


Alan Ye 

