Hi,

I'd probably start by looking at your nodes and checking whether the SSDs
are saturated or showing high write latencies. If either is true, does that
apply to all of the SSDs or just some of them? Some of the disks may need a
TRIM. It may also be worth testing them individually, directly on the
cluster nodes.
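
Something like this on each OSD node could be a first pass (device names
and the OSD mount point below are just examples, adjust to your layout):

  # watch per-device utilisation (%util) and write latency (w_await)
  iostat -x 1 /dev/sdb /dev/sdc

  # per-OSD commit/apply latency as reported by Ceph itself
  ceph osd perf

  # discard unused blocks if the filesystem and SSD support TRIM
  fstrim -v /var/lib/ceph/osd/ceph-0

  # raw sequential write test of a single disk -- ONLY on a disk holding
  # no OSD data, this overwrites the device!
  fio --name=ssdtest --filename=/dev/sdX --rw=write --bs=4M \
      --direct=1 --runtime=30 --time_based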

If you can't find anything with the disks, look further up the stack:
network, interrupts, etc. At one point we accidentally had a node
reinstalled with a non-LTS image (13.04, I think), and its kernel
(3.5.something) had a bug/'feature' that caused a lot of TCP segments to be
retransmitted (roughly 1 in 100). That one node slowed down our entire
cluster and caused high access times across the board. 'Upgrading' back to
an LTS release fixed it.
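
A cheap way to spot that kind of problem is to compare the kernel's
retransmit counters (and kernel versions) across your nodes, e.g.:

  # TCP segments retransmitted since boot -- an outlier node is suspect
  netstat -s | grep -i retrans

  # verify every node runs the kernel you expect
  uname -r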

As you say, it may simply be that the increased utilization of the cluster
is the cause and that you'll 'just' have to add more nodes.
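
To tell load apart from a hardware problem, it can also help to watch the
current client op rate and re-run a baseline benchmark now and then (the
pool name here is just a placeholder):

  # cluster health plus current client throughput/ops
  ceph -s

  # 60-second sequential write benchmark against a test pool
  rados bench -p testpool 60 write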

Cheers,
Martin


On Fri, Mar 7, 2014 at 10:50 AM, Indra Pramana <in...@sg.or.id> wrote:

> Hi,
>
> I have a Ceph cluster, currently with 5 OSD servers and around 22 OSDs
> with SSD drives, and I have noticed that the I/O speed, especially write
> access to the cluster, is degrading over time. When we first started the
> cluster, we could get up to 250-300 MB/s write speed to the SSD cluster,
> but now we can only get about half that. Furthermore, it now fluctuates,
> so sometimes I get slightly better speed but at other times very bad
> results.
>
> We started with 3 OSD servers and 12 OSDs and gradually added more
> servers. We are using KVM hypervisors as the Ceph clients, and the
> connections between clients and servers, and between the servers
> themselves, go through a 10 Gbps switch with jumbo frames enabled on all
> interfaces.
>
> Any advice on how I can start to troubleshoot what might have caused the
> degradation of the I/O speed? Does utilisation contribute to it (since we
> now have more users compared to when we started)? Is there any
> optimisation we can do to improve the I/O performance?
>
> Appreciate any advice, thank you.
>
> Cheers.
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
