On 08/29/2013 01:26 PM, raj kumar wrote:
I've not changed any of ceph-deploy's default configuration. I just added the public/cluster network to the config. The replication size is 2.
It's a SATA disk.
Yes, 191 MB/s is from the physical local disk.
That's pretty fast for most SATA disks!
As you said, running concurrent dd gives 94.1 MB/s and 135 MB/s. So is that
normal? I'm using version 0.67.1.
Since Ceph distributes objects across lots of OSDs, higher levels of
concurrency will help keep all of those OSDs fed with data. There are
tricks you can do with caching to try to hide this (if you don't have RBD
writeback cache enabled, you may want to give that a try). For reads,
increasing read_ahead_kb on the OSDs or on the client block device might
help too.
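A minimal sketch of what enabling RBD writeback cache might look like in ceph.conf on the client side (option names per the Ceph documentation; whether you want it on depends on your workload and version):

```
[client]
rbd cache = true
```

For the readahead side, the client block device's setting lives in sysfs, e.g. `/sys/block/rbd0/queue/read_ahead_kb` (device name assumed here); echoing a larger value such as 4096 into it raises readahead for that device.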
On Thu, Aug 29, 2013 at 11:14 PM, raj kumar <rajkumar600...@gmail.com> wrote:
I've set up a ceph cluster and am using rbd to check the performance. I used
ceph-deploy for the deployment.
/dev/sdb is the data disk, /dev/sda4 is the journal. I have 2 servers
running 2 OSDs, and 3 monitors running on VMware virtual
machines, which have sufficient RAM/CPU.
The servers are dual-core and I have 2 networks of 1 Gbps (with a dedicated
1 Gb network for the OSDs).
There is a huge difference when running
"dd if=/dev/zero of=a1.temp bs=1k count=1023000". It gives only
22.5 MB/s, whereas a VMware guest gives 92 MB/s and physical gives 191
MB/s.
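To see the effect of concurrency that the reply above describes, one illustrative approach is to run several dd streams in parallel against the mounted RBD device (file names and sizes here are made up for the example; `conv=fdatasync` makes each stream flush before reporting its rate):

```shell
# Two concurrent write streams; on an RBD-backed filesystem this
# spreads writes across more OSDs at once than a single stream would.
dd if=/dev/zero of=/tmp/a1.tmp bs=1M count=100 conv=fdatasync &
dd if=/dev/zero of=/tmp/a2.tmp bs=1M count=100 conv=fdatasync &
wait
```

Summing the per-stream rates gives the aggregate throughput; a larger block size than the 1k used above also tends to report more realistic numbers.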
I'm planning to use ceph for the cloud and also for storing large image
files. Please let me know how to tune ceph to get optimal
performance.
-Raj
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com