Average latency: 0.00183069
Max latency: 0.004598
Min latency: 0.001224
Re: [ceph-users] RBD read-ahead didn't improve 4K read performance
Alexandre DERUMIER
To:
duan xufeng
2014/11/21 14:51
Cc:
si dawei, ceph-users
Hi,
I haven't tested rbd readahead yet, but mayb
Hi,
I upgraded Ceph to 0.87 for rbd readahead, but I can't see any performance
improvement for 4K sequential reads in the VM.
How can I tell whether readahead is taking effect?
Thanks.
ceph.conf
[client]
rbd_cache = true
rbd_cache_size = 335544320
rbd_cache_max_dirty = 251658240
rbd_cache_target_dirty = 16
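For reference, rbd readahead in 0.87 is governed by its own options, none of which appear in the [client] section above; what the client actually loaded can be checked with `ceph --admin-daemon <client asok> config show | grep rbd_readahead`. A sketch of the relevant settings, using the Giant defaults (values are illustrative, tune to your workload):

```ini
[client]
; Sequential read requests needed before readahead triggers.
rbd_readahead_trigger_requests = 10
; Maximum readahead size in bytes; 0 disables readahead entirely.
rbd_readahead_max_bytes = 524288
; Readahead turns itself off after this many bytes have been read
; (it is meant to speed up boot, not steady-state workloads).
rbd_readahead_disable_after_bytes = 52428800
```

Note the last option: after roughly 50 MB read from a device, readahead stops, so a long-running 4K sequential benchmark would mostly run with readahead disabled.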
ZTE Information Security Notice: The information contained in this mail (and
any attachment transmitted herewith) is privileged and confidential and is
intended for the exclusive use of the addressee(s). If you are not an intended
recipie
root@CONTROLLER-4F:~# rbd -p volumes flatten
f3e81ea3-1d5b-487a-a55e-53efff604d54_disk
*** Caught signal (Segmentation fault) **
in thread 7fe99984f700
ceph version 0.79 (4c2d73a5095f527c3a2168deb5fa54b3c8991a6e)
1: (()+0x22a4f) [0x7fe9a1745a4f]
2: (()+0x10340) [0x7fe9a00f2340]
3: (librbd::ai
Hi, all,
When I use Ceph as a virtual machine backend and run an imp operation, IO
performance is about 1/10 of the physical machine's, roughly 600 KB/s.
But a dd IO performance test such as
dd if=/dev/zero bs=64k count=100 of=/1.file reports an average speed of about
50 MB/s.
Here is the physical machine result:
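As an aside, a 6.4 MB buffered dd run (64k x 100) mostly measures the guest page cache rather than the Ceph backend. A sketch of a slightly more representative run (the file path is arbitrary):

```shell
# Write ~64 MB and include the final flush in the timing: conv=fdatasync
# makes dd sync the file before reporting, so the speed reflects actual
# backend writes, not just pages parked in the guest cache.
dd if=/dev/zero of=/tmp/ddtest.bin bs=64k count=1000 conv=fdatasync
rm -f /tmp/ddtest.bin
```

Even this is a best-case sequential pattern; an imp-style workload issues small synchronous writes, which is why the two numbers can differ by an order of magnitude on the same cluster.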
ZTE Information Security Notice: The information contained in this mail (and
any attachment transmitted herewith) is privileged and confidential and is
intended for the exclusive use of the addressee(s). If you are not an intended
recipie
Hi,
I just added a new mon to a healthy cluster by following the
"ADDING MONITORS" section of
http://ceph.com/docs/master/rados/operations/add-or-rm-mons/ step by step,
but when I execute step 6:
ceph mon add <mon-id> <ip>[:<port>]
the command doesn't return; then I execute "ceph -s" on a healthy mon node,
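When `ceph mon add` hangs, the usual cause is that the monmap now requires the new mon for quorum before its daemon is actually up and reachable. A sketch of how to spot this from `ceph quorum_status --format json` output (the JSON below is illustrative, not captured from this cluster; mon names are assumptions):

```shell
# Sample quorum_status output from a surviving mon; in a real cluster,
# obtain it with: ceph quorum_status --format json
cat > /tmp/quorum.json <<'EOF'
{"quorum_names":["a","b"],"monmap":{"mons":[{"name":"a"},{"name":"b"},{"name":"c"}]}}
EOF
# A mon listed in the monmap but absent from quorum_names (here: "c")
# has been added but never joined quorum, the symptom of a hung `mon add`.
grep -o '"quorum_names":\[[^]]*\]' /tmp/quorum.json
# -> "quorum_names":["a","b"]
```

If the new mon's daemon isn't running yet, starting it (or removing the entry with `ceph mon remove`) should unblock the cluster.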
Hi Gregory,
The latency is from my Ceph test environment. It has only one host,
with 16 2 TB SATA disks (16 OSDs) and a 10 Gb/s NIC;
journal and data are on the same disk, whose fs type is ext4.
My cluster config is like this. Is it normal under this
configuration? Or how can I im
[root@storage1 ~]# ceph osd perf
osdid  fs_commit_latency(ms)  fs_apply_latency(ms)
    0                    149                    52
    1                    201                    61
    2                    176                   166
    3                    240                    57
    4
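Those numbers can be summarized quickly; a sketch that averages the two latency columns from the `ceph osd perf` output (the sample file below reproduces the four complete rows above):

```shell
# The four complete rows from `ceph osd perf` above:
# osdid  fs_commit_latency(ms)  fs_apply_latency(ms)
cat > /tmp/osd_perf.txt <<'EOF'
0 149 52
1 201 61
2 176 166
3 240 57
EOF
# Mean commit and apply latency across the OSDs, in ms.
awk '{c+=$2; a+=$3} END {printf "commit avg %.1f ms, apply avg %.1f ms\n", c/NR, a/NR}' /tmp/osd_perf.txt
# -> commit avg 191.5 ms, apply avg 84.0 ms
```

Commit latencies of this size are commonly reported when the journal is colocated on the SATA data disk: every journal write competes with data and ext4 metadata traffic for the same spindle.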
Hi,
I attached a 500 GB block device to the VM, and tested it in the VM with
"dd if=/dev/zero of=myfile bs=1M count=1024".
I got an average IO speed of about 31 MB/s. I thought I
should have gotten 100 MB/s,
since my VM hypervisor has a 1 Gb NIC and the OSD host has a 10 Gb NIC.
Di
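A quick sanity check on that expectation (a sketch; decimal units assumed): the 1 Gb/s hypervisor NIC caps throughput near 125 MB/s before protocol overhead, so the link itself does not explain 31 MB/s; replication fan-out, sync behavior, and the RBD cache settings on the write path are more likely differences.

```shell
# Line-rate ceiling of a 1 Gb/s link, in decimal MB/s:
# 1e9 bits/s divided by 8 bits/byte divided by 1e6 bytes/MB.
awk 'BEGIN { printf "%.1f MB/s\n", 1e9 / 8 / 1e6 }'
# -> 125.0 MB/s
```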
Hi,
Problem solved by editing the crushmap to change the default rule from
"step chooseleaf firstn 0 type host" to "step chooseleaf firstn 0 type osd".
Thanks Ashish and Robert, your replies really helped me a lot.
Thanks again.
# rules
rule data {
ruleset 0
type replicated
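The round trip for that change is: dump the crushmap, decompile it with crushtool, edit, recompile, and inject it. The substitution itself can be scripted; note the rule body below is the stock default rule (an assumption, since the message only quotes its first lines):

```shell
# Assumed stock default rule; only its first lines are quoted above.
cat > /tmp/crush-rule.txt <<'EOF'
rule data {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}
EOF
# Change the replica failure domain from host to osd, which lets a
# single-host cluster place all replicas and reach active+clean.
sed -i 's/chooseleaf firstn 0 type host/chooseleaf firstn 0 type osd/' /tmp/crush-rule.txt
grep 'chooseleaf' /tmp/crush-rule.txt
```

In practice the rule lives inside the full decompiled map: `ceph osd getcrushmap -o crush.bin; crushtool -d crush.bin -o crush.txt`, make the edit there, then `crushtool -c crush.txt -o crush.new; ceph osd setcrushmap -i crush.new`.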
Dear Sir or Madam,
I have come across a problem. Please help me handle it if you are
available. Thanks in advance.
The question is that I cannot understand why the status of the PGs is
always STUCK UNCLEAN. As I see it, the status should be ACTIVE+CLEAN.
The following is some information abou