Re: [ceph-users] Performance problems

2013-04-11 Thread Ziemowit Pierzycki
> …see if operations are backing up on any specific OSDs.
>
> Mark
>
> On 04/09/2013 12:54 PM, Ziemowit Pierzycki wrote:
>> Neither made a difference. I also have a glusterFS cluster with two
>> nodes in replicating mode residing on 1TB drives:
>>
>> [root@t…
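For anyone following along, one way to check whether individual OSDs are backing up is to read the performance counters from each OSD's admin socket. A minimal sketch, assuming the default socket path and OSD id 0 (adjust both for your cluster; the cluster-wide summary command is only available in newer releases):

  # Latency and op counters straight from one OSD's admin socket
  ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump

  # In newer Ceph releases, a per-OSD latency summary from the monitors:
  ceph osd perf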

Re: [ceph-users] Performance problems

2013-04-10 Thread Ziemowit Pierzycki
…So what could be causing this?

On Tue, Apr 9, 2013 at 12:54 PM, Ziemowit Pierzycki wrote:
> Neither made a difference. I also have a glusterFS cluster with two nodes
> in replicating mode residing on 1TB drives:
>
> [root@triton speed]# dd conv=fdatasync if=/dev/zero of=/mnt…
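The conv=fdatasync flag in the quoted glusterFS test matters: it makes dd call fdatasync() before reporting, so the number reflects data actually flushed to storage rather than what merely landed in the page cache. A comparable run against the CephFS mount would look something like this (a sketch only; the path and sizes mirror the tests earlier in the thread):

  # Write 500 MB and flush before reporting, so the rate is not inflated by RAM
  dd if=/dev/zero of=/mnt/temp/test.out bs=512k count=1000 conv=fdatasync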

Re: [ceph-users] Performance problems

2013-04-09 Thread Ziemowit Pierzycki
…PM, Ziemowit Pierzycki wrote:
>> There is one SSD in each node. IPoIB performance is about 7 Gbps
>> between each host. CephFS is mounted via kernel client. Ceph version
>> is ceph-0.56.3-1. I have a 1GB journal on the same drive as the OSD but
>> on a separate file syste…
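For context, a 1 GB journal co-located on the OSD drive is typically declared in ceph.conf along these lines. This is a sketch only; the path follows Ceph's default layout and is not taken from the poster's actual configuration:

  [osd]
      # journal lives as a file on the same SSD as the OSD data
      osd journal = /var/lib/ceph/osd/$cluster-$id/journal
      osd journal size = 1024    # size in MB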

Re: [ceph-users] Performance problems

2013-04-09 Thread Ziemowit Pierzycki
I'm running DDR in this setup, but I also have a QDR setup.

On Tue, Apr 9, 2013 at 2:31 AM, Gandalf Corvotempesta <gandalf.corvotempe...@gmail.com> wrote:
> 2013/4/8 Ziemowit Pierzycki
>> Hi,
>>
>> I have a 3 node SSD-backed cluster connected over infi…
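Since DDR vs. QDR only matters if IPoIB itself is the bottleneck, a quick end-to-end throughput check between two cluster nodes is worth running. A sketch, assuming iperf is installed on both hosts; the hostname and flags are placeholders:

  # On one node:
  iperf -s
  # On the other node (replace triton with the peer's hostname):
  iperf -c triton -t 30 -P 4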

Re: [ceph-users] Performance problems

2013-04-08 Thread Ziemowit Pierzycki
> …Thanks,
> Mark
>
> On 04/08/2013 03:00 PM, Ziemowit Pierzycki wrote:
>> Hi,
>>
>> The first test was writing a 500 MB file and was clocked at 1.2 GB/s. The
>> second test was writing a 5000 MB file at 17 MB/s. The third test was
>> reading the file a…
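The gap between the two write figures is consistent with caching: a 500 MB write can fit entirely in the client's page cache, so dd reports memory speed, while the 5000 MB run is forced out to the cluster. Repeating the short test with direct I/O is one way to confirm that (a sketch; the block size and count are chosen only to match the 500 MB test, and O_DIRECT behaviour depends on the client and kernel in use):

  # Bypass the page cache so the 500 MB test reports storage speed, not RAM speed
  dd if=/dev/zero of=/mnt/temp/test.out bs=512k count=1000 oflag=direct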

Re: [ceph-users] Performance problems

2013-04-08 Thread Ziemowit Pierzycki
> …and performance went
> up from 17.5 MB/s to 394 MB/s? How many drives in each node, and of what
> kind?
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
> On Mon, Apr 8, 2013 at 12:38 PM, Ziemowit Pierzycki wrote:
>> Hi,
>>
>> I ha…
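To separate raw OSD throughput from CephFS client effects when chasing numbers like these, benchmarking the RADOS layer directly can help. A sketch; the pool name testpool and the 16 concurrent ops are placeholders:

  # 30-second write benchmark against a pool, 16 ops in flight
  rados bench -p testpool 30 write -t 16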

[ceph-users] Performance problems

2013-04-08 Thread Ziemowit Pierzycki
Hi,

I have a 3 node SSD-backed cluster connected over infiniband (16K MTU) and here is the performance I am seeing:

[root@triton temp]# !dd
dd if=/dev/zero of=/mnt/temp/test.out bs=512k count=1000
1000+0 records in
1000+0 records out
524288000 bytes (524 MB) copied, 0.436249 s, 1.2 GB/s
[root@tri…
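A quick sanity check on that figure: 524288000 bytes in 0.436 s works out to roughly 1.2 GB/s, which for a single buffered dd write is typically page-cache speed rather than disk speed, hence the flushed tests later in the thread. Separately, the 16K MTU claim can be verified on the IPoIB interface itself (a sketch; ib0 is an assumed interface name):

  # Confirm IPoIB mode and MTU (connected mode is what allows large MTUs)
  cat /sys/class/net/ib0/mode
  ip link show ib0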