Hi Gilles,

Good day to you, and thank you for your reply! :)

>You should test the performance of one (each ?) OSD, directly on the OSD
>server (with fio on /var/lib/ceph/osd/...)

Do you have any further details on how I can do this? Any URLs to specific
documentation (e.g. on fio, or any other benchmarking tools which can be
used specifically for OSDs)?
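
For example, would something like the following fio run be a sensible
starting point? This is just my guess; the OSD path and the test
parameters below are my own assumptions, not something I found in the
documentation:

  # 4 KB random writes with direct I/O against a 1 GB test file in the OSD dir
  fio --name=osd-bench --directory=/var/lib/ceph/osd/ceph-4 \
      --rw=randwrite --bs=4k --size=1G --numjobs=1 \
      --ioengine=libaio --direct=1

My understanding is that --direct=1 bypasses the page cache, so the result
should reflect the SSD itself rather than memory.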

Is it safe to run dd within the /var/lib/ceph/osd/ceph-4 folder to test
the device's I/O speed? Will it impact Ceph performance?
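
If dd is acceptable, I was thinking of something along these lines (the
test file name is arbitrary, and I would remove it afterwards):

  # 1 GB sequential write with direct I/O, then clean up the test file
  dd if=/dev/zero of=/var/lib/ceph/osd/ceph-4/dd-test.tmp bs=1M count=1024 oflag=direct
  rm /var/lib/ceph/osd/ceph-4/dd-test.tmp

I used oflag=direct here so the page cache doesn't inflate the numbers,
but please correct me if that's not a fair test on an OSD filesystem.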

>If you see the same problem, perhaps you have a SSD which has no more
>empty cells, and has to do many deletes.
>You must trim it (fstrim if it's supported by your filesystem).

What kind of problem would I see that indicates the need to trim/fstrim?
I would appreciate it if you could provide me with some URLs to
documentation about this.
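
Also, just to check my understanding, is this the kind of command you
mean? (Assuming the OSD filesystem is mounted at the path below and the
filesystem/device supports discard.)

  # -v reports how many bytes were discarded
  fstrim -v /var/lib/ceph/osd/ceph-4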

Greatly appreciate your assistance, thank you.



On Fri, Mar 7, 2014 at 6:04 PM, Gilles Mocellin <
gilles.mocel...@nuagelibre.org> wrote:

> On 07/03/2014 10:50, Indra Pramana wrote:
>
>> Hi,
>>
>> I have a Ceph cluster, currently with 5 OSD servers and around 22 OSDs
>> with SSD drives, and I noted that the I/O speed, especially write access
>> to the cluster, is degrading over time. When we first started the
>> cluster, we could get up to 250-300 MB/s write speed to the SSD cluster,
>> but now we can only get up to half that. Furthermore, it now fluctuates,
>> so sometimes I get slightly better speed but at other times I get very
>> bad results.
>>
>> We started with 3 OSD servers and 12 OSDs and gradually added more
>> servers. We are using KVM hypervisors as the Ceph clients, and the
>> connections between clients and servers, and between the servers
>> themselves, go through a 10 Gbps switch with jumbo frames enabled on all
>> interfaces.
>>
>> Any advice on how I can start to troubleshoot what might have caused the
>> degradation of the I/O speed? Does utilisation contribute to it (since we
>> now have more users compared to when we started)? Any optimisation we can
>> do to improve the I/O performance?
>>
>> Appreciate any advice, thank you.
>>
>> Cheers.
>>
>
> You should test the performance of one (each ?) OSD, directly on the OSD
> server (with fio on /var/lib/ceph/osd/...)
>
> If you see the same problem, perhaps you have a SSD which has no more
> empty cells, and has to do many deletes.
> You must trim it (fstrim if it's supported by your filesystem).
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
