IOPS of the volume in pool1 is about 5k; the volume in pool2 gets about 12k.
That's a big gap. Can anyone give me some suggestions here?
ceph version: hammer(0.94.3)
kernel: 3.10
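For anyone trying to reproduce the comparison outside a guest, here is a minimal fio sketch run against each pool directly (the pool, image and client names are hypothetical, and fio must be built with rbd support):

  # 4k random writes against an image in pool1; repeat with --pool=pool2
  fio --name=pool1-randwrite --ioengine=rbd --clientname=admin \
      --pool=pool1 --rbdname=vol1 --rw=randwrite --bs=4k \
      --iodepth=32 --direct=1 --runtime=60 --time_based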
hzwuli...@gmail.com
The volume got about 14k IOPS.
We can see that the performance of volume2 is not good compared to volume1. Is this
normal behavior for the guest?
If not, what might be the problem?
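For reference, this is the kind of in-guest test I mean, assuming the two volumes show up as /dev/vdb and /dev/vdc inside the VM (the device names are hypothetical):

  # run the same 4k randwrite job against each attached volume
  for dev in /dev/vdb /dev/vdc; do
      fio --name=guest-randwrite --filename=$dev --direct=1 \
          --rw=randwrite --bs=4k --iodepth=32 --runtime=60 --time_based
  done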
Thanks!
hzwuli...@gmail.com
0 and 71 to 140 bytes. That's different from the real machine.
But maybe iptraf on the VM can't prove anything, so I checked the physical
machine the VM is located on.
It seems there's nothing abnormal.
BTW, my VM is located on the ceph storage node.
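For what it's worth, a quick way to watch traffic and latency from the host side (the OSD host name is hypothetical; sar needs the sysstat package):

  # per-interface throughput, packets and errors, five one-second samples
  sar -n DEV 1 5
  # round-trip latency from the hypervisor to one of the OSD nodes
  ping -c 20 -i 0.2 osd-node-1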
Can anyone give me more suggestions?
Thanks!
hzwuli...@gmail.com
Thanks!
hzwuli...@gmail.com
From: Alexandre DERUMIER
Date: 2015-10-21 14:01
To: hzwulibin
CC: ceph-users
Subject: Re: [ceph-users] [performance] rbd kernel module versus qemu librbd
Damn, that's a huge difference.
What is your host os, guest os?
"filestore_max_inline_xattr_size_other": "512",
"filestore_max_inline_xattrs": "6",
"filestore_max_inline_xattrs_xfs": "10",
"filestore_max_inline_xattrs_btrfs": "10",
"filestore_max_inline_xattrs_other": "2",
"filestore_max_al
Using library: libvirt 1.2.9
Using API: QEMU 1.2.9
Running hypervisor: QEMU 2.1.2
Are there any already known bugs in those versions?
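The lines above look like output from "virsh version"; the versions can be re-checked with:

  # libvirt library/API and running hypervisor versions
  virsh version
  # version of the qemu binary itself
  qemu-system-x86_64 --version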
Thanks!
hzwuli...@gmail.com
From: Alexandre DERUMIER
Date: 2015-10-21 18:38
To: hzwulibin
CC: ceph-users
Subject: Re: [ceph-users] [performance] rbd kernel module versus qemu librbd
problem also.
hzwuli...@gmail.com
From: hzwuli...@gmail.com
Date: 2015-10-22 10:15
To: Alexandre DERUMIER
CC: ceph-users
Subject: Re: Re: [ceph-users] [performance] rbd kernel module versus qemu librbd
Hi,
Sure, all of those could help, but not by much -:)
Now we find it's a problem in the VM. CPU o
c79fc769d9
+ 45.31% 0x7fc79fc769ab
So maybe it's a kvm problem?
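In case anyone wants to reproduce the profile, a minimal perf sketch against the qemu process (this assumes a single qemu-system-x86_64 process; otherwise pick the pid by hand):

  # sample call graphs of the qemu process for 30 seconds
  perf record -g -p $(pidof qemu-system-x86_64) -- sleep 30
  # inspect the hottest call sites
  perf report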
hzwuli...@gmail.com
From: hzwuli...@gmail.com
Date: 2015-10-23 11:5
Oh no, judging from the symptoms, IO in the VM is waiting on the host to complete.
The CPU iowait in the VM is very high.
Anyway, I could try to collect something; maybe there are some clues.
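One simple thing to collect from inside the guest (the device name is hypothetical; iostat comes with sysstat):

  # per-device latency (await) and utilization, refreshed every second
  iostat -x 1 /dev/vdb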
hzwuli...@gmail.com
From: Alexandre DERUMIER
Date: 2015-10-23 12:39
To: hzwulibin
CC: ceph-users
Subject: Re: [ceph-users] [performance] rbd kernel module versus qemu librbd
Yeah, you are right. Testing the rbd volume from the host is fine.
Now at least we can confirm it's a qemu or kvm problem, not ceph.
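For the record, the host-side test can be done with the kernel client like this (pool and image names are hypothetical):

  # map the image with krbd and benchmark the block device directly
  rbd map pool1/vol1
  fio --name=host-krbd --filename=/dev/rbd0 --direct=1 --rw=randwrite \
      --bs=4k --iodepth=32 --runtime=60 --time_based
  rbd unmap /dev/rbd0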
hzwuli...@gmail.com
From: Alexandre DERUMIER
Date: 2015-10-23 12:51
To: hzwulibin
CC: ceph-users
Subject: Re: [ceph-users] [performance] rbd kernel module versus qemu librbd
Now I guess the performance drop is due to competition between the threads.
As you can see, I pasted the perf record output before.
This problem really has us stuck.
So, does anyone know why the number of threads of qemu-system-x86 keeps increasing?
And is there any way we can control it?
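One way to watch the thread count grow (adjust the process name if your binary is qemu-system-x86_64):

  # print the number of threads of the qemu process once per second
  watch -n 1 'ps -o nlwp= -p $(pidof qemu-system-x86_64)'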
Thanks!
hzwuli...@gmail.com
randwrite will drop from 15k to 4k. That's really unacceptable!
My environment:
1. nine OSD storage servers with two intel DC 3500 SSD on each
2. hammer 0.94.3
3. QEMU emulator version 2.1.2 (Debian 1:2.1+dfsg-12+deb8u4~bpo70+1)
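As a server-side sanity check that takes qemu out of the picture, hammer's rbd bench-write can drive the same 4k random pattern (the image name is hypothetical):

  # 4k random writes straight through librbd, no VM involved
  rbd bench-write vol1 --pool pool1 --io-size 4096 --io-threads 16 --io-pattern rand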
Thanks!
hzwuli...@gmail.com
From: Jan Schermer
Date: 2015-
Hi,
Thank you, that makes sense for testing, but I'm afraid not in my case.
Even if I test on a volume that has already been tested many times, the IOPS
will not go up again. Yeah, I mean, this VM is broken; the IOPS of the VM
will never go back up.
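If it helps to see where requests pile up, libvirt can report per-disk counters for the broken guest (the domain and disk names are hypothetical):

  # cumulative read/write requests and bytes for the disk as qemu sees it
  virsh domblkstat myvm vdb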
Thanks!
hzwuli...@gmail.com
From: Chen, X