2014-06-13 21:23 GMT+08:00 Andrey Korolyov <and...@xdel.ru>:

> On Fri, Jun 13, 2014 at 7:09 AM, Ke-fei Lin <k...@kfei.net> wrote:
> > Hi list,
> >
> > I deployed a Windows 7 VM with a qemu-rbd disk, and got unexpectedly slow
> > performance during the boot phase.
> >
> > I noticed that while the Windows VM is booting, for roughly 2 consecutive
> > minutes `ceph -w` gives me interesting output like "... 567 KB/s rd,
> > 567 op/s", "... 789 KB/s rd, 789 op/s" and so on.
> >
> > e.g.
> > 2014-06-05 15:47:43.125441 mon.0 [INF] pgmap v18095: 320 pgs: 320
> > active+clean; 86954 MB data, 190 GB used, 2603 GB / 2793 GB avail;
> > 765 kB/s rd, 765 op/s
> > 2014-06-05 15:47:44.240662 mon.0 [INF] pgmap v18096: 320 pgs: 320
> > active+clean; 86954 MB data, 190 GB used, 2603 GB / 2793 GB avail;
> > 568 kB/s rd, 568 op/s
> > ... (skipped)
> > 2014-06-05 15:50:02.441523 mon.0 [INF] pgmap v18186: 320 pgs: 320
> > active+clean; 86954 MB data, 190 GB used, 2603 GB / 2793 GB avail;
> > 412 kB/s rd, 412 op/s
> >
> > This shows that the read throughput in kB/s is always numerically equal to
> > the number of ops, i.e. every operation reads roughly 1 KB, and I think
> > this leads to a very long boot time (it takes 2 minutes to reach the
> > desktop). But I can't understand why. Is it an issue with my Ceph cluster,
> > or just a special I/O pattern of the Windows VM boot process?
> >
> > In addition, I know there is no qemu-rbd caching benefit during the boot
> > phase since the cache is not persistent (please correct me if I'm wrong),
> > so is it possible to enlarge the read-ahead size in the qemu-rbd driver?
> > And would that make any sense?
> >
> > And finally, how can I tune my Ceph cluster for this workload (booting
> > Windows VMs)?
> >
> > Any advice and suggestions will be greatly appreciated.
> >
> >
> > Context:
> >
> > 4 OSDs (7200rpm/750GB/SATA) with replication factor 2.
> >
> > The system disk in the Windows VM is NTFS-formatted with the default 4K
> > block size.
> >
> > $ uname -a
> >     Linux ceph-consumer 3.11.0-22-generic #38~precise1-Ubuntu SMP
> >     Fri May 16 20:47:57 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
> >
> > $ ceph --version
> >     ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)
> >
> > $ dpkg -l | grep rbd
> >     ii  librbd-dev    0.80.1-1precise    RADOS block device client library (development files)
> >     ii  librbd1       0.80.1-1precise    RADOS block device client library
> >
> > $ virsh version
> >     Compiled against library: libvir 0.9.8
> >     Using library: libvir 0.9.8
> >     Using API: QEMU 0.9.8
> >     Running hypervisor: QEMU 1.7.1 ()
> >
>
> Hi,
>
> If you are able to leave only this VM running in the cluster for this test,
> you could use the accumulated values from virsh domblkstat to compare
> against the real number of operations.
>

Thanks, Andrey.

I tried `virsh domblkstat <vm> hda` (with only this VM running in the whole
cluster) and got these values:

hda rd_req 70682
hda rd_bytes 229894656
hda wr_req 1067
hda wr_bytes 12645888
hda flush_operations 0

(These values became stable after ~2 mins)
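
As a sanity check (this is just my own back-of-the-envelope math, so please
correct me if the reasoning is off), dividing rd_bytes by rd_req gives the
average read size as seen by QEMU:

$ echo $((229894656 / 70682))
    3252

So the guest issues reads of roughly 3.2 KB on average, and ~70k read requests
over the ~2 minute boot is roughly 500-600 requests per second, which is in
the same range as the 400-800 op/s that `ceph -w` reports.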

The corresponding `ceph -w` output is available at: http://pastebin.com/Uhdj9drV
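
Also, regarding the read-ahead question in my original mail: I'm wondering
whether something like the following client-side settings in ceph.conf would
make sense here. This is only a sketch with example values; the rbd cache
options should be available in 0.80.x, but I'm not sure the readahead ones
are -- they may only exist in newer librbd releases.

    [client]
        rbd cache = true
        rbd cache size = 67108864                    # 64 MB, example value
        rbd cache writethrough until flush = true
        # readahead knobs -- possibly newer librbd only:
        rbd readahead trigger requests = 10
        rbd readahead max bytes = 4194304            # 4 MB, example value
        rbd readahead disable after bytes = 52428800

Would enlarging the readahead window help at all with the many small
sequential-ish reads during Windows boot, or is this the wrong direction
entirely?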

Any advice?
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
