The same HDD is also used for the journal, on a separate 10 GB partition.
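
For reference, the journal placement on each OSD node can be confirmed with something like the following (device name is illustrative):

# each FileStore OSD has a journal symlink pointing at its journal partition
ls -l /var/lib/ceph/osd/ceph-*/journal
# partition layout of the data disk
lsblk /dev/sda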

Thanks.

Daleep Singh Bais

On Wed, Sep 9, 2015 at 2:37 PM, Shinobu Kinjo <ski...@redhat.com> wrote:

> Are you also using that HDD for storing journal data?
> Or are you using an SSD for that purpose?
>
> Shinobu
>
> ----- Original Message -----
> From: "Daleep Bais" <daleepb...@gmail.com>
> To: "Shinobu Kinjo" <ski...@redhat.com>
> Cc: "Ceph-User" <ceph-us...@ceph.com>
> Sent: Wednesday, September 9, 2015 5:59:33 PM
> Subject: Re: [ceph-users] Poor IOPS performance with Ceph
>
> Hi Shinobu,
>
> I have 1 x 1 TB HDD on each node. The network bandwidth between nodes is
> 1 Gbps.
>
> Thanks for the info. I will also try to go through discussion mails related
> to performance.
>
> Thanks.
>
> Daleep Singh Bais
>
>
> On Wed, Sep 9, 2015 at 2:09 PM, Shinobu Kinjo <ski...@redhat.com> wrote:
>
> > How many disks does each OSD node have?
> > What about the networking layer?
> > There are several factors that can make your cluster much stronger.
> >
> > You may want to take a look at other discussions on this mailing list.
> > There has been a lot of discussion about performance.
> >
> > Shinobu
> >
> > ----- Original Message -----
> > From: "Daleep Bais" <daleepb...@gmail.com>
> > To: "Ceph-User" <ceph-us...@ceph.com>
> > Sent: Wednesday, September 9, 2015 5:17:48 PM
> > Subject: [ceph-users] Poor IOPS performance with Ceph
> >
> > Hi,
> >
> > I have set up a test Ceph cluster with 6 OSDs and 3 MONs. I am testing the
> > read/write performance of the cluster, and the read IOPS is poor. When I
> > test each HDD individually, I get good performance, whereas when I test
> > through the Ceph cluster, it is poor.
> >
> > Between nodes, using iperf, I get good bandwidth.
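> > For reference, the check was done with plain iperf between two nodes
> > (hostname is illustrative):
> >
> > # on the receiving node
> > iperf -s
> > # on the sending node
> > iperf -c ceph-node3 -t 30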
> >
> > My cluster info:
> >
> > root@ceph-node3:~# ceph --version
> > ceph version 9.0.2-752-g64d37b7 (64d37b70a687eb63edf69a91196bb124651da210)
> > root@ceph-node3:~# ceph -s
> >     cluster 9654468b-5c78-44b9-9711-4a7c4455c480
> >      health HEALTH_OK
> >      monmap e9: 3 mons at {ceph-node10=192.168.1.210:6789/0,ceph-node17=192.168.1.217:6789/0,ceph-node3=192.168.1.203:6789/0}
> >             election epoch 442, quorum 0,1,2 ceph-node3,ceph-node10,ceph-node17
> >      osdmap e1850: 6 osds: 6 up, 6 in
> >       pgmap v17400: 256 pgs, 2 pools, 9274 MB data, 2330 objects
> >             9624 MB used, 5384 GB / 5394 GB avail
> >                  256 active+clean
> >
> >
> > I have mapped an RBD block device to a client machine (Ubuntu 14) and, from
> > there, when I run tests using fio, I get good write IOPS; however, read IOPS
> > is comparatively poor.
> >
> > Write IOPS: ~44618
> > Read IOPS: ~7356
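> >
> > For reference, the numbers above come from fio runs along these lines
> > (device path and parameters are illustrative; the actual job may differ):
> >
> > # 4k random reads against the mapped RBD device; the write test uses --rw=randwrite
> > fio --name=rbd-randread --filename=/dev/rbd0 --ioengine=libaio --direct=1 \
> >     --rw=randread --bs=4k --iodepth=32 --numjobs=1 --runtime=60 --group_reporting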
> >
> > The pool uses a single replica:
> > pool 1 'test1' replicated size 1 min_size 1
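> >
> > For reference, the replica settings can be checked with commands like:
> >
> > ceph osd dump | grep "^pool"
> > ceph osd pool get test1 size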
> >
> > I have also set the rbd_readahead options in my ceph.conf (roughly as
> > sketched below). Any suggestions in this regard will help me.
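> >
> > For reference, the readahead section of ceph.conf is roughly as follows
> > (values are illustrative, not necessarily what I have tuned):
> >
> > [client]
> >     # librbd readahead: kick in after this many sequential requests
> >     rbd readahead trigger requests = 10
> >     # maximum readahead window size in bytes
> >     rbd readahead max bytes = 4194304
> >     # stop readahead after this many bytes read from an image (0 = no limit)
> >     rbd readahead disable after bytes = 0
> >
> > (Note that these options apply to librbd clients; a kernel-mapped RBD device
> > takes its readahead from the block layer, e.g. /sys/block/rbd0/queue/read_ahead_kb.)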
> >
> > Thanks.
> >
> > Daleep Singh Bais
> >
> >
>