[ceph-users] Does SSD Journal improve the performance?

2015-10-14 Thread hzwuli...@gmail.com
ut 5k; the volume in pool2, about 12k. It's a big gap; can anyone give me some suggestions here? ceph version: hammer (0.94.3), kernel: 3.10. hzwuli...@gmail.com
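
For reference, one way to compare the two pools directly is a small fio run with the librbd engine; this is only a sketch, and the pool and image names (pool1, pool2, testimg) and the admin client name are placeholders to adapt to the actual setup:

    # 4k random write against an image in pool1; repeat with --pool=pool2
    fio --name=rbd-randwrite --ioengine=rbd --clientname=admin \
        --pool=pool1 --rbdname=testimg --rw=randwrite --bs=4k \
        --iodepth=32 --numjobs=1 --direct=1 --runtime=60 --time_based

Running the identical job against both pools keeps the client side constant, so a remaining IOPS gap points at the pools themselves (journal placement, OSD media) rather than the benchmark.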

[ceph-users] [performance] rbd kernel module versus qemu librbd

2015-10-20 Thread hzwuli...@gmail.com
lume got about 14k IOPS. We can see the performance of volume2 is not good compared to volume1, so is this normal behavior for the guest host? If not, what might be the problem? Thanks! hzwuli...@gmail.com
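
One way to separate the guest from the IO path is to run the same workload against the same image through both the kernel client and librbd on the host; a rough sketch, with pool and image names as placeholders:

    # kernel rbd: map the image and benchmark the block device
    rbd map pool1/testimg            # prints a device such as /dev/rbd0
    fio --name=krbd --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
        --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based

    # librbd: same job through fio's rbd engine
    fio --name=librbd --ioengine=rbd --clientname=admin --pool=pool1 \
        --rbdname=testimg --rw=randwrite --bs=4k --iodepth=32 \
        --runtime=60 --time_based

If both host-side runs look similar, a difference seen only inside the guest is more likely qemu/virtio configuration than librbd itself.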

Re: [ceph-users] [performance] rbd kernel module versus qemu librbd

2015-10-20 Thread hzwuli...@gmail.com
0 and 71 to 140 bytes. That's different from the real machine. But maybe iptraf on the VM can't prove anything; I checked the real machine the VM is located on, and nothing seems abnormal. BTW, my VM is located on the ceph storage node. Can anyone give me more suggestions? Thanks! hzwuli...@g
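
The host-side view can also be captured with tcpdump instead of iptraf, filtering on the OSD port range (the interface name and the default 6800-7300 range here are assumptions):

    # capture 200 packets towards the OSDs; the trailing "length N" field
    # on each line is the TCP payload size
    tcpdump -i eth0 -nn -c 200 'tcp and portrange 6800-7300'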

Re: [ceph-users] [performance] rbd kernel module versus qemu librbd

2015-10-21 Thread hzwuli...@gmail.com
: *** Thanks! hzwuli...@gmail.com From: Alexandre DERUMIER Date: 2015-10-21 14:01 To: hzwulibin CC: ceph-users Subject: Re: [ceph-users] [performance] rbd kernel module versus qemu librbd Damn, that's a huge difference. What is your host os, gue

Re: [ceph-users] [performance] rbd kernel module versus qemu librbd

2015-10-21 Thread hzwuli...@gmail.com
r_size_other": "512", "filestore_max_inline_xattrs": "6", "filestore_max_inline_xattrs_xfs": "10", "filestore_max_inline_xattrs_btrfs": "10", "filestore_max_inline_xattrs_other": "2", "filestore_max_al
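
A dump like the one above can be pulled from a running OSD's admin socket; a sketch, with osd.0 and the default socket path as placeholders:

    ceph daemon osd.0 config show | grep filestore
    # or, addressing the socket directly:
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep filestore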

Re: [ceph-users] [performance] rbd kernel module versus qemu librbd

2015-10-21 Thread hzwuli...@gmail.com
virt 1.2.9 Using API: QEMU 1.2.9 Running hypervisor: QEMU 2.1.2 Are there any already known bugs in those versions? Thanks! hzwuli...@gmail.com From: Alexandre DERUMIER Date: 2015-10-21 18:38 To: hzwulibin CC: ceph-users Subject: Re: [ceph-users] [performance] rbd kernel module versus qemu l
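
Those lines match the output of virsh version; for reference, the host-side versions can be rechecked with the commands below (assuming libvirt and qemu are installed in their default locations):

    virsh version
    qemu-system-x86_64 --version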

Re: [ceph-users] [performance] rbd kernel module versus qemu librbd

2015-10-22 Thread hzwuli...@gmail.com
problem also. hzwuli...@gmail.com From: hzwuli...@gmail.com Date: 2015-10-22 10:15 To: Alexandre DERUMIER CC: ceph-users Subject: Re: Re: [ceph-users] [performance] rbd kernel module versus qemu librbd Hi, Sure, all those could help, but not that much :-) Now we find it's a VM problem. CPU o

Re: [ceph-users] [performance] rbd kernel module versus qemu librbd

2015-10-22 Thread hzwuli...@gmail.com
c79fc769d9 + 45.31% 0x7fc79fc769ab So, maybe it's the kvm problem? hzwuli...@gmail.com From: hzwuli...@gmail.com Date: 2015-10-23 11:5
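
Stacks full of raw addresses like these usually just mean the qemu symbols were not resolved; a sketch of how the profile could be gathered against the VM's qemu process (the PID lookup and the 30-second window are arbitrary, and qemu debug symbols are needed for readable frames):

    pid=$(pidof qemu-system-x86_64)
    perf record -g -p "$pid" -- sleep 30
    perf report --stdio | head -50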

Re: [ceph-users] [performance] rbd kernel module versus qemu librbd

2015-10-22 Thread hzwuli...@gmail.com
Oh, no, judging from the symptoms, IO in the VM is waiting for the host to complete it; the CPU wait in the VM is very high. Anyway, I could try to collect something, maybe there are some clues. hzwuli...@gmail.com From: Alexandre DERUMIER Date: 2015-10-23 12:39 To: hzwulibin CC: ceph-users Subject: Re: [ceph
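
The wait can be confirmed from inside the guest with standard tools; a minimal check (device names depend on the virtio setup):

    # %iowait plus per-device await/%util, refreshed every second
    iostat -x 1
    # the 'wa' column is CPU time spent waiting on IO
    vmstat 1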

Re: [ceph-users] [performance] rbd kernel module versus qemu librbd

2015-10-22 Thread hzwuli...@gmail.com
Yeah, you are right. Testing the rbd volume from the host is fine. Now at least we can confirm it's a qemu or kvm problem, not ceph. hzwuli...@gmail.com From: Alexandre DERUMIER Date: 2015-10-23 12:51 To: hzwulibin CC: ceph-users Subject: Re: [ceph-users] [performance] rbd kernel module v
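
For a quick librbd-only check from the host that bypasses qemu entirely, rbd bench-write can be pointed at a scratch image (the pool/image name is a placeholder, and the run writes to the image, so do not use a volume holding data):

    rbd bench-write pool1/scratch --io-size 4096 --io-threads 16 \
        --io-total 1073741824 --io-pattern rand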

Re: [ceph-users] [performance] rbd kernel module versus qemu librbd

2015-10-25 Thread hzwuli...@gmail.com
Now I guess the performance drop is due to competition between the threads; as you can see, I pasted the perf record before. The problem has really got us stuck. So, does anyone know why the number of qemu-system-x86 threads keeps increasing? And is there any way we could control it? Thanks! hzwuli...@gma
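
The thread growth itself can be watched from the host; a simple sketch (the PID lookup assumes a single qemu VM on the host):

    pid=$(pidof qemu-system-x86_64)
    watch -n 5 "ps -o nlwp= -p $pid"     # nlwp = number of threads
    # equivalently: ls /proc/$pid/task | wc -l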

Re: [ceph-users] Understanding the number of TCP connections between clients and OSDs

2015-10-26 Thread hzwuli...@gmail.com
randwrite will drop from 15k to 4k. That's really unacceptable! My environment: 1. nine OSD storage servers with two Intel DC 3500 SSDs on each 2. hammer 0.94.3 3. QEMU emulator version 2.1.2 (Debian 1:2.1+dfsg-12+deb8u4~bpo70+1) Thanks! hzwuli...@gmail.com From: Jan Schermer Date: 2015-
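
The number of sockets a client really holds open to the cluster can be counted on the hypervisor; a sketch that assumes the default OSD port range of 6800-7300:

    # established connections from this host into the OSD port range
    ss -tn state established '( dport >= :6800 and dport <= :7300 )' | tail -n +2 | wc -l

    # the same, broken down by owning process (look for qemu-system-x86_64)
    ss -tnp state established '( dport >= :6800 and dport <= :7300 )'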

Re: [ceph-users] [performance] why rbd_aio_write latency increase from 4ms to 7.3ms after the same test

2015-11-02 Thread hzwuli...@gmail.com
Hi, Thank you, that makes sense for testing, but I'm afraid not in my case. Even if I test on a volume that has already been tested many times, the IOPS will not go back up. Yeah, I mean, this VM is broken; the IOPS of the VM will never recover. Thanks! hzwuli...@gmail.com From: Chen, X
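
Whether write latency really degrades across repeated runs can also be tracked from outside the application by logging per-IO completion latency with fio's librbd engine; a sketch with placeholder pool/image names:

    fio --name=latcheck --ioengine=rbd --clientname=admin --pool=pool1 \
        --rbdname=testimg --rw=randwrite --bs=4k --iodepth=1 \
        --runtime=60 --time_based \
        --write_lat_log=latcheck     # dumps per-IO latency logs (latcheck_*lat*.log)

Comparing the logged completion latencies of a first run and a repeat run on the same volume shows whether the slowdown is in the data path or in the measuring application.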