2014-06-13 11:09 GMT+08:00 Ke-fei Lin <k...@kfei.net>:
> Hi list,
>
> I deployed a Windows 7 VM with a qemu-rbd disk and observed unexpectedly
> poor performance during the boot phase.
>
> I noticed that while the Windows VM is booting, for roughly 2 consecutive
> minutes `ceph -w` prints log lines like "... 567 KB/s rd, 567 op/s",
> "... 789 KB/s rd, 789 op/s" and so on.
>
> e.g.
> 2014-06-05 15:47:43.125441 mon.0 [INF] pgmap v18095: 320 pgs: 320
> active+clean; 86954 MB data, 190 GB used, 2603 GB / 2793 GB avail; 765 kB/s
> rd, 765 op/s
> 2014-06-05 15:47:44.240662 mon.0 [INF] pgmap v18096: 320 pgs: 320
> active+clean; 86954 MB data, 190 GB used, 2603 GB / 2793 GB avail; 568 kB/s
> rd, 568 op/s
> ... (skipped)
> 2014-06-05 15:50:02.441523 mon.0 [INF] pgmap v18186: 320 pgs: 320
> active+clean; 86954 MB data, 190 GB used, 2603 GB / 2793 GB avail; 412 kB/s
> rd, 412 op/s
>
> This shows that the read rate in KB/s is always equal to the number of ops
> per second, i.e. every operation is roughly 1 KB, and I think this leads to
> a very long boot time (it takes about 2 minutes to reach the desktop). I
> can't understand why. Is it an issue with my Ceph cluster, or just a special
> I/O pattern of the Windows boot process?
>
> In addition, as far as I know there is no qemu-rbd caching benefit during
> the boot phase since the cache is not persistent (please correct me if I'm
> wrong). Is it possible to enlarge the read-ahead size in the qemu-rbd
> driver, and would that make sense?
>
> And finally, how can I tune my Ceph cluster for this workload (booting
> Windows VMs)?
>
> Any advice and suggestions will be greatly appreciated.
>
>
> Context:
>
> 4 OSDs (7200rpm/750GB/SATA) with replication factor 2.
>
> The system disk in Windows VM is NTFS formatted with default 4K block size.
>
> $ uname -a
>     Linux ceph-consumer 3.11.0-22-generic #38~precise1-Ubuntu SMP Fri May 16
> 20:47:57 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
>
> $ ceph --version
>     ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)
>
> $ dpkg -l | grep rbd
>     ii  librbd-dev                       0.80.1-1precise
> RADOS block device client library (development files)
>     ii  librbd1                          0.80.1-1precise
> RADOS block device client library
>
> $ virsh version
>     Compiled against library: libvir 0.9.8
>     Using library: libvir 0.9.8
>     Using API: QEMU 0.9.8
>     Running hypervisor: QEMU 1.7.1 ()
Answering my own question.

The problem was that the disk used `bus=ide`; after changing the bus type to
virtio, everything works like a charm!
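
For anyone hitting the same thing, this is roughly what the disk section of
the libvirt domain XML looks like with virtio (a sketch only; the cephx user,
pool/image name, monitor address and secret UUID below are placeholders, not
my real values):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <auth username='libvirt'>
        <secret type='ceph' uuid='REPLACE-WITH-YOUR-SECRET-UUID'/>
      </auth>
      <source protocol='rbd' name='rbd/win7-disk'>
        <host name='192.168.0.1' port='6789'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>

Note that Windows does not ship a virtio storage driver, so the virtio-win
driver has to be installed inside the guest before switching the system disk
from IDE to virtio, otherwise the VM will not boot.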
`virsh domblkstat` also no longer shows reads of ~512 bytes; they are about
40 KB now.
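
(In case it's useful to anyone: one way to get an average read size out of
`virsh domblkstat` is to divide the rd_bytes counter by rd_req. The domain
name and counter values below are made up, just to show the arithmetic:

    $ virsh domblkstat win7 vda
        vda rd_req 25000
        vda rd_bytes 1048576000
        vda wr_req 3000
        vda wr_bytes 67108864
        ...

    # average read size = rd_bytes / rd_req = 1048576000 / 25000 ≈ 41 KB
)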
Also, `ceph -w` now shows a much larger amount of data read per second.
Finally, the boot time dropped to 18 seconds.
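
Regarding the read-ahead question in my original mail (not what fixed my
problem, but to answer that part): librbd does have cache and read-ahead
tunables that can be set in the [client] section of ceph.conf. I believe the
read-ahead options only appeared in releases newer than firefly (0.80.x), so
treat this as an untested sketch with example values:

    [client]
        rbd cache = true
        rbd cache size = 67108864                     # 64 MB cache, example value
        rbd readahead trigger requests = 10           # sequential reads before read-ahead kicks in
        rbd readahead max bytes = 4194304             # 4 MB read-ahead window, example value
        rbd readahead disable after bytes = 52428800  # stop read-ahead after 50 MB read (i.e. once booted)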

Thanks Andrey, Sage.

kfei