[ceph-users] ceph block device IO seems slow? Did I get something wrong?

2014-03-16 Thread duan . xufeng
Hi, I attached one 500G block device to the VM and tested it in the VM with "dd if=/dev/zero of=myfile bs=1M count=1024". I got an average IO speed of about 31MB/s. I thought I should have got 100MB/s, because my VM hypervisor has a 1G NIC and the OSD host has a 10G NIC.
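Writing zeros into a file with no sync flag mostly measures the guest page cache rather than the RBD device itself. A minimal sketch of a more representative run, assuming the volume is mounted inside the VM at the hypothetical path /mnt/rbd:

    # Flush the data before dd reports throughput:
    dd if=/dev/zero of=/mnt/rbd/testfile bs=1M count=1024 conv=fdatasync
    # Or bypass the page cache entirely with direct IO:
    dd if=/dev/zero of=/mnt/rbd/testfile bs=1M count=1024 oflag=direct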

[ceph-users] erasure coding testing

2014-03-16 Thread Gruher, Joseph R
Hey all- Can anyone tell me, if I install the latest development release (looks like it is 0.77) can I enable and test erasure coding? Or do I have to wait for the actual Firefly release? I don't want to deploy anything for production, basically I just want to do some lab testing to see what

Re: [ceph-users] erasure coding testing

2014-03-16 Thread Ian Colle
Joe, We're pushing to get 0.78 out this week, which will allow you to play with EC. Ian R. Colle Director of Engineering Inktank Delivering the Future of Storage http://www.linkedin.com/in/ircolle http://www.twitter.com/ircolle Cell: +1.303.601.7713 Email: i...@inktank.com On 3/16/14, 8:11 PM, "
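For anyone wanting to try it once 0.78 lands, a minimal sketch of creating an erasure-coded pool (the profile name, k/m values, and placement-group counts are illustrative, and exact option names varied across the early releases):

    ceph osd erasure-code-profile set testprofile k=2 m=1
    ceph osd pool create ecpool 128 128 erasure testprofile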

Re: [ceph-users] erasure coding testing

2014-03-16 Thread Gruher, Joseph R
Great, thanks! I'll watch (hope) for an update later this week. Appreciate the rapid response. -Joe From: Ian Colle [mailto:ian.co...@inktank.com] Sent: Sunday, March 16, 2014 7:22 PM To: Gruher, Joseph R; ceph-users@lists.ceph.com Subject: Re: [ceph-users] erasure coding testing Joe, We're

[ceph-users] How to stop recovery process when one OSD is down

2014-03-16 Thread Ta Ba Tuan
Hi everyone, I am using replica counts of 2 and 3 for my data pools. I want to stop the recovery process when a data node goes down. I think I can mark the downed OSDs so they are not "out", or set the crushmap weight of the down OSDs to 0, right? ceph-data-node-01: 70 1 osd.70 down 0
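A minimal sketch of the usual way to pause re-replication while a node is down, assuming the intent is simply to stop downed OSDs from being marked out (which is what triggers recovery):

    # Prevent down OSDs from being marked out, so no backfill/recovery starts:
    ceph osd set noout
    # After the node is back and its OSDs rejoin, restore normal behaviour:
    ceph osd unset noout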

[ceph-users] why my "fs_commit_latency" is so high ? is it normal ?

2014-03-16 Thread duan . xufeng
[root@storage1 ~]# ceph osd perf
osdid  fs_commit_latency(ms)  fs_apply_latency(ms)
    0                    149                    52
    1                    201                    61
    2                    176                   166
    3                    240                    57
    4

Re: [ceph-users] why my "fs_commit_latency" is so high ? is it normal ?

2014-03-16 Thread Gregory Farnum
That seems a little high; how do you have your system configured? That latency is how long it takes for the hard drive to durably write out something to the journal. -Greg Software Engineer #42 @ http://inktank.com | http://ceph.com On Sun, Mar 16, 2014 at 9:59 PM, wrote: > > [root@storage1 ~]#
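A rough way to see what the disk itself can do for synchronous journal-style writes (the path is illustrative; point it at the filesystem that holds the journal):

    # Each 4k write is synced individually, so the reported speed reflects per-write latency:
    dd if=/dev/zero of=/var/lib/ceph/osd/ceph-0/latency-test bs=4k count=1000 oflag=dsync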

[ceph-users] Reply: Re: why my "fs_commit_latency" is so high ? is it normal ?

2014-03-16 Thread duan . xufeng
Hi Gregory, The latency is from my ceph test environment. It has only 1 host with 16 2TB SATA disks (16 OSDs) and a 10Gb/s NIC; the journal and data are on the same disk, formatted as ext4. My cluster config is like this. Is it normal under this configuration, or how can I improve it?

Re: [ceph-users] Reply: Re: why my "fs_commit_latency" is so high ? is it normal ?

2014-03-16 Thread Gregory Farnum
I'm not sure what's normal when you're sharing the journal and the data on the same disk drive. You might do better if you partition the drive and put the journals on an unformatted partition; obviously providing each journal its own spindle/ssd/whatever would be progressively better. There was a c
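A minimal ceph.conf sketch of what is described above, assuming a raw partition /dev/sdb1 has been set aside for one OSD's journal (the device name and OSD id are illustrative):

    [osd.0]
    # Journal on an unformatted partition instead of a file on the ext4 data disk:
    osd journal = /dev/sdb1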

[ceph-users] full ssd cluster, how many iops can we expect ?

2014-03-16 Thread Alexandre DERUMIER
Hello, I'm looking to build a full SSD cluster (I have a mainly random IO workload), and I would like to know how many IOPS we can expect per OSD. I have read in the Inktank docs about 4000 IOPS per OSD (that's enough for me). Is this true? What is the bottleneck? Is the OSD CPU-limited? Also, can we have othe
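One first-order way to measure small random IO against an existing pool, assuming a pool named "rbd" (the numbers include network and replication overhead, not just a single OSD):

    # 60s of 4k writes with 16 concurrent ops; keep the objects for the read pass:
    rados bench -p rbd 60 write -b 4096 -t 16 --no-cleanup
    # 60s of random reads against the objects written above:
    rados bench -p rbd 60 rand -t 16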

[ceph-users] why objects are still in .rgw.buckets after being deleted

2014-03-16 Thread ljm李嘉敏
Hi all, I have a question about the pool .rgw.buckets. When I upload a file (which has been striped because it is bigger than 4M) through the Swift API, it is stored in .rgw.buckets. If I upload it again, why are the objects in .rgw.buckets not overwritten? The file is stored again under different names. and wh
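One way to watch this behaviour, assuming default pool names; RGW stores each upload's tail objects under a new prefix and reclaims the old ones later via garbage collection rather than overwriting them in place:

    # List the rados objects backing the buckets; a re-upload shows up under a new prefix:
    rados -p .rgw.buckets ls | head
    # Objects queued for later removal by the radosgw garbage collector:
    radosgw-admin gc list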

Re: [ceph-users] full ssd cluster, how many iops can we expect ?

2014-03-16 Thread Stefan Priebe - Profihost AG
Hi Alexandre, On 17.03.2014 07:03, Alexandre DERUMIER wrote: > Hello, > > I'm looking to build a full ssd cluster (I have mainly random io workload), > > I would like to known how many iops can we expect, by osd ? > > I have read on inktank doc, about 4000 iops by osd(that enough for me). Is