…see if operations are backing up on any specific OSDs.
>
> Mark
>
>
> On 04/09/2013 12:54 PM, Ziemowit Pierzycki wrote:
>
>> Neither made a difference. I also have a GlusterFS cluster with two
>> nodes in replicating mode residing on 1TB drives:
>>
[root@t …]
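One way to check Mark's suggestion above, i.e. whether operations are backing up on any specific OSD, is through the OSD admin sockets. A minimal sketch, assuming default socket paths and that this bobtail (0.56.x) build exposes dump_ops_in_flight:

    # list OSDs and which host each one lives on
    ceph osd tree
    # dump in-flight operations on one OSD (default admin socket path assumed)
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok dump_ops_in_flight
    # overall cluster status, including any health warnings
    ceph -s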
So what could be causing this?
On Tue, Apr 9, 2013 at 12:54 PM, Ziemowit Pierzycki
wrote:
> Neither made a difference. I also have a GlusterFS cluster with two nodes
> in replicating mode residing on 1TB drives:
>
> [root@triton speed]# dd conv=fdatasync if=/dev/zero
> of=/mnt…
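(The command above is cut off; a typical complete form of this fdatasync write test, with the GlusterFS mount point, file name, and size here being hypothetical, would be:)

    dd conv=fdatasync if=/dev/zero of=/mnt/gluster/test.out bs=512k count=10000   # ~5 GB, flushed to disk before dd reports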
… PM, Ziemowit Pierzycki wrote:
>
>> There is one SSD in each node. IPoIB performance is about 7 Gbps
>> between each host. CephFS is mounted via the kernel client. The Ceph version
>> is ceph-0.56.3-1. I have a 1 GB journal on the same drive as the OSD but
>> on a separate file system.
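For context, a kernel-client mount and a journal kept on its own file system are usually expressed along these lines; a rough sketch only, with the monitor address, mount point, and journal path being hypothetical:

    # CephFS mounted via the kernel client (monitor address is hypothetical)
    mount -t ceph 192.168.1.10:6789:/ /mnt/temp -o name=admin,secretfile=/etc/ceph/admin.secret

    # in ceph.conf, a per-OSD journal on a separate file system (path is hypothetical)
    [osd]
        osd journal = /srv/ceph/journal/osd.$id/journal
        osd journal size = 1000    ; in MB, i.e. 1 GB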
I'm running DDR in this setup but I also have QDR setup.
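To double-check the negotiated link rate and the ~7 Gbps IPoIB figure mentioned earlier, something like the following is commonly used (ibstat comes from infiniband-diags; the iperf target name is hypothetical and should be the other node's IPoIB address):

    # show the HCA's active rate: 20 Gb/s for 4x DDR, 40 Gb/s for 4x QDR
    ibstat
    # raw TCP throughput over IPoIB: server on one node, client on the other
    iperf -s                    # on node A
    iperf -c nodeA-ib0 -t 30    # on node B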
On Tue, Apr 9, 2013 at 2:31 AM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:
> 2013/4/8 Ziemowit Pierzycki
>
>> Hi,
>>
>> I have a 3 node SSD-backed cluster connected over infiniband …
Thanks,
> Mark
>
>
> On 04/08/2013 03:00 PM, Ziemowit Pierzycki wrote:
>
>> Hi,
>>
>> The first test was writing a 500 MB file and was clocked at 1.2 GB/s. The
>> second test was writing a 5000 MB file at 17 MB/s. The third test was
>> reading the file a…
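Since the third test reads back a file that was just written, the client page cache can mask the real disk and network path. A small sketch of a read test that drops caches first (the file name is taken from the transcript further down; the cache drop needs root):

    sync
    echo 3 > /proc/sys/vm/drop_caches               # drop page/dentry/inode caches on the client
    dd if=/mnt/temp/test.out of=/dev/null bs=512k   # re-read the file from the cluster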
> …and performance went up from 17.5 MB/s to 394 MB/s? How many drives in
> each node, and of what kind?
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Mon, Apr 8, 2013 at 12:38 PM, Ziemowit Pierzycki
> wrote:
> > Hi,
> >
> > I have a 3 node SSD-backed cluster …
Hi,
I have a 3 node SSD-backed cluster connected over infiniband (16K MTU) and
here is the performance I am seeing:
[root@triton temp]# !dd
dd if=/dev/zero of=/mnt/temp/test.out bs=512k count=1000
1000+0 records in
1000+0 records out
524288000 bytes (524 MB) copied, 0.436249 s, 1.2 GB/s
[root@tri …
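Worth noting about the 1.2 GB/s run above: without conv=fdatasync or oflag=direct, dd largely measures the client's page cache rather than what actually reaches the OSDs. A sketch of the same write forced to disk, using the same file and sizes as the run shown:

    dd conv=fdatasync if=/dev/zero of=/mnt/temp/test.out bs=512k count=1000
    # or, if the file system supports O_DIRECT, bypass the page cache entirely:
    dd oflag=direct if=/dev/zero of=/mnt/temp/test.out bs=512k count=1000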