[ceph-users] Reply: Re: RBD read-ahead didn't improve 4K read performance

2014-11-21 Thread duan . xufeng
… Average latency: 0.00183069, Max latency: 0.004598, Min latency: 0.001224. Re: [ceph-users] RBD read-ahead didn't improve 4K read performance. Alexandre DERUMIER, To: duan xufeng, 2014/11/21 14:51, Cc: si dawei, ceph-users. Hi, I haven't tested rbd readahead yet, but maybe …
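The latency figures quoted above look like rados bench output. A minimal sketch of how such numbers are typically produced; the pool name "rbd", the 4K block size, and the single thread are assumptions, not details from the thread:

    # Hypothetical single-threaded 4K write benchmark against a pool named "rbd":
    rados bench -p rbd 60 write -b 4096 -t 1 --no-cleanup
    # Sequential reads back over the same objects:
    rados bench -p rbd 60 seq -t 1

rados bench reports Average/Max/Min latency lines in the same format quoted above.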

[ceph-users] RBD read-ahead didn't improve 4K read performance

2014-11-20 Thread duan . xufeng
hi, I upgraded Ceph to 0.87 for rbd readahead, but can't see any performance improvement for 4K sequential reads in the VM. How can I tell whether readahead is taking effect? thanks.
ceph.conf:
[client]
rbd_cache = true
rbd_cache_size = 335544320
rbd_cache_max_dirty = 251658240
rbd_cache_target_dirty = 16…
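One way to confirm the settings actually in effect is to query the librbd client's admin socket; a sketch, assuming an admin socket has been enabled for the qemu/librbd client (the socket path is an assumption):

    # Show the effective readahead options of the running client:
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.asok config show | grep readahead
    # The knobs introduced with readahead in 0.87 (Giant):
    #   rbd readahead trigger requests    - sequential reads needed before readahead kicks in
    #   rbd readahead max bytes           - upper bound of a single readahead
    #   rbd readahead disable after bytes - readahead stops after this many bytes read

Note that readahead disables itself after "rbd readahead disable after bytes" (50 MB by default), since it is mainly intended to speed up boot before the guest OS readahead takes over.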

[ceph-users] does anyone know what xfsaild and kworker are? they make the osd disk busy, producing 100-200 IOPS per osd disk

2014-11-10 Thread duan . xufeng
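For context: xfsaild is the XFS AIL (active item list) daemon that writes back logged metadata, and kworker threads are generic kernel workqueue workers. A hedged way to attribute the observed per-disk IOPS (assumes the iotop and sysstat tools are installed):

    # Batch-sample active threads' disk IO and pick out the suspects:
    iotop -o -b -n 3 | grep -E 'xfsaild|kworker'
    # Cross-check per-device IOPS on the OSD disks:
    iostat -x 1 5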

[ceph-users] ceph version 0.79, rbd flatten report Segmentation fault (core dumped)

2014-11-02 Thread duan . xufeng
root@CONTROLLER-4F:~# rbd -p volumes flatten f3e81ea3-1d5b-487a-a55e-53efff604d54_disk
*** Caught signal (Segmentation fault) ** in thread 7fe99984f700
ceph version 0.79 (4c2d73a5095f527c3a2168deb5fa54b3c8991a6e)
 1: (()+0x22a4f) [0x7fe9a1745a4f]
 2: (()+0x10340) [0x7fe9a00f2340]
 3: (librbd::ai…
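0.79 was a pre-Firefly development release, so reproducing on a stable release is worth ruling out first. It can also help to confirm the image really is a clone with a parent, since flatten only applies to clones; a sketch using the image name from the report:

    # A flattenable image must list a parent snapshot:
    rbd -p volumes info f3e81ea3-1d5b-487a-a55e-53efff604d54_disk | grep -i parent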

[ceph-users] Does ceph have an impact on imp IO performance?

2014-05-08 Thread duan . xufeng
Hi, all. When I use Ceph as the virtual machine backend and execute an imp operation, IO performance is about 1/10 of the physical machine, about 600 KB/s. But when I execute dd for an IO performance test, such as dd if=/dev/zero bs=64k count=100 of=/1.file, the average IO speed is about 50 MB/s. Here is the physical machine res…
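imp (Oracle's import utility) issues many small synchronous writes, which a buffered 64 KB dd does not model, so the two numbers are not directly comparable. A sketch of a closer approximation (the 8 KB block size is an assumption):

    # Force small synchronous writes to better approximate a sync-heavy import:
    dd if=/dev/zero of=/1.file bs=8k count=1000 oflag=dsync

Each dsync write must travel the full librbd -> network -> journal path before the next one starts, which is likely where the 600 KB/s figure comes from.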

[ceph-users] Could anyone tell me how to remove an MDS from the cluster? Thanks

2014-04-01 Thread duan . xufeng
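For the Ceph releases of that era, the commonly cited removal procedure was roughly the following; exact syntax varied between releases, so treat this as a sketch rather than the documented method:

    # Stop the MDS daemon on its host (init invocation differs per distro):
    service ceph stop mds
    # Tell the cluster to stop waiting for rank 0:
    ceph mds fail 0
    # Confirm the MDS map no longer shows an active or laggy daemon:
    ceph mds stat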

[ceph-users] help, add mon failed lead to cluster failure

2014-03-26 Thread duan . xufeng
Hi, I just added a new mon to a healthy cluster by following the website manual "http://ceph.com/docs/master/rados/operations/add-or-rm-mons/" ("ADDING MONITORS") step by step, but when I executed step 6, ceph mon add [:], the command didn't return. Then I executed "ceph -s" on the healthy mon node, …
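When a half-added monitor blocks quorum like this, the usual recovery is to remove it from the monmap offline. A sketch, assuming the surviving monitor's id is "a" and the stuck one is "b" (both ids are assumptions):

    # With mon.a stopped, extract its current monmap:
    ceph-mon -i a --extract-monmap /tmp/monmap
    monmaptool --print /tmp/monmap        # inspect the entries
    monmaptool --rm b /tmp/monmap         # drop the half-added monitor
    ceph-mon -i a --inject-monmap /tmp/monmap
    # Restart mon.a and re-check quorum with "ceph -s".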

[ceph-users] Reply: Re: why is my "fs_commit_latency" so high? Is it normal?

2014-03-16 Thread duan . xufeng
Hi Gregory, the latency is from my Ceph test environment. It has only 1 host, with 16 2 TB SATA disks (16 OSDs) and a 10 Gb/s NIC; journal and data are on the same disk, and the filesystem is ext4. My cluster config is like this. Is it normal under this configuration, or how can I im…
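With journal and data colocated on SATA disks, fs_commit_latency is dominated by journal sync writes contending with data writeback. One hedged way to measure the raw sync-write latency of an OSD disk (the path is an assumption; run it on an otherwise idle disk and remove the file afterwards):

    # Synchronous 4 KB writes approximate the journal commit pattern:
    dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/latency-test bs=4k count=1000 oflag=dsync
    rm /var/lib/ceph/osd/ceph-0/latency-test

If the raw dsync latency is already high under load, moving journals to separate disks or SSDs is the usual fix.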

[ceph-users] why is my "fs_commit_latency" so high? Is it normal?

2014-03-16 Thread duan . xufeng
[root@storage1 ~]# ceph osd perf
osdid  fs_commit_latency(ms)  fs_apply_latency(ms)
    0                    149                    52
    1                    201                    61
    2                    176                   166
    3                    240                    57
    4                      …

[ceph-users] ceph block device IO seems slow? Did I get something wrong?

2014-03-16 Thread duan . xufeng
Hi, I attached one 500 GB block device to the VM and tested it in the VM using "dd if=/dev/zero of=myfile bs=1M count=1024". I got an average IO speed of about 31 MB/s. I thought I should have gotten 100 MB/s, since my VM hypervisor has a 1G NIC and the osd host has a 10G NIC. Di…
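A plain buffered dd mostly measures how fast the guest page cache absorbs writes rather than the storage path, and a 1 Gb/s hypervisor NIC tops out around 110-120 MB/s before replication traffic is counted. A sketch of fairer variants (file name reused from the post):

    # Time the actual writeback, not just the page-cache fill:
    dd if=/dev/zero of=myfile bs=1M count=1024 conv=fdatasync
    # Or bypass the guest page cache entirely:
    dd if=/dev/zero of=myfile bs=1M count=1024 oflag=direct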

[ceph-users] Reply: Re: help -- Why are the PGs STUCK UNCLEAN?

2014-03-13 Thread duan . xufeng
Hi, problem solved by editing the crushmap to change the default rule from "step chooseleaf firstn 0 type host" to "step chooseleaf firstn 0 type osd". Thanks Ashish and Robert, your replies really helped me a lot. Thanks again.
# rules
rule data {
    ruleset 0
    type replicated
    …
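For reference, the usual round-trip for this kind of crushmap edit (file names are assumptions):

    # Export, decompile, edit, recompile, and re-inject the CRUSH map:
    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    #   ...edit crushmap.txt: "chooseleaf firstn 0 type host" -> "type osd"...
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new

Note that "type osd" allows replicas to land on the same host, which is only appropriate for single-host test clusters like the one described earlier in this thread.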

[ceph-users] help -- Why are the PGs STUCK UNCLEAN?

2014-03-13 Thread duan . xufeng
Dear Sir or Madam, I have come across a problem. Please help me handle it if you are available. Thanks in advance. The question is that I cannot understand why the status of the PGs is always STUCK UNCLEAN. As I see it, the status should be ACTIVE+CLEAN. The following is some information abou…
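Two standard commands for narrowing this kind of problem down:

    # List the PGs that are stuck unclean and see what health says about them:
    ceph pg dump_stuck unclean
    ceph health detail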