Hi,

On 25/06/14 00:27, Mark Kirkwood wrote:
> On 24/06/14 23:39, Mark Nelson wrote:
>> On 06/24/2014 03:45 AM, Mark Kirkwood wrote:
>>> On 24/06/14 18:15, Robert van Leeuwen wrote:
>>>>> All of which means that MySQL performance (looking at you, binlog) may
>>>>> still suffer due to lots of small-block sync writes.
>>>>
>>>> Which begs the question:
>>>> Is anyone running a reasonably busy MySQL server on Ceph-backed storage?
>>>>
>>>> We tried, and it did not perform well enough.
>>>> We have a small ceph cluster: 3 machines with 2 SSD journals and 10
>>>> spinning disks each.
>>>> Using ceph through kvm rbd we were seeing performance equal to about
>>>> 1-2 spinning disks.
>>>>
>>>> Reading this thread, it now looks a bit as if there are inherent
>>>> architecture and latency issues that would prevent it from performing
>>>> well as a MySQL database store.
>>>> I'd be interested in example setups where people are running busy
>>>> databases on Ceph-backed volumes.
>>>
>>> Yes indeed,
>>>
>>> We have looked extensively at Postgres performance on rbd - and while
>>> it is not MySQL, the underlying mechanism for durable writes (i.e.
>>> commit) is essentially very similar (fsync, fdatasync and friends). We
>>> achieved quite reasonable performance (by that I mean sufficiently
>>> encouraging that we are happy to host real datastores for our
>>> moderately busy systems - and we are continuing to investigate using it
>>> for our really busy ones).
>>>
>>> I have not experimented extensively with the various choices of flush
>>> method (called sync method in Postgres, but the same idea), as we found
>>> quite good performance with the default (fdatasync). However, this is
>>> clearly an area worth investigating.
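
For anyone who wants a feel for the commit-path cost being discussed here
(and for the binlog remark at the top of the thread) without setting up a
full database, a synthetic test of small synchronous appends gets quite
close. A rough Python sketch - the path, record size and iteration count
are just placeholders, point it at whatever filesystem sits on your rbd
volume:

import os
import time

# Append small "commit records" and force each one to stable storage,
# roughly what a database WAL / binlog does on every commit.
PATH = "/mnt/rbdtest/synctest.dat"   # placeholder: a file on the rbd-backed fs
RECORD = b"x" * 4096                 # 4 KB commit record
ITERATIONS = 1000

def run(label, sync):
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    start = time.time()
    for _ in range(ITERATIONS):
        os.write(fd, RECORD)
        sync(fd)                     # fsync or fdatasync after every record
    elapsed = time.time() - start
    os.close(fd)
    print("%-9s %6.2f ms per commit" % (label, elapsed * 1000.0 / ITERATIONS))

run("fsync", os.fsync)
run("fdatasync", os.fdatasync)
os.unlink(PATH)

The gap between the two calls is usually small; what the numbers mostly
show is the per-commit round trip to the OSDs, which is the latency this
thread keeps circling around.
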
>>
>> FWIW, I ran through the DBT-3 benchmark suite on MariaDB on top of
>> qemu/kvm RBD, using a pool with 3x replication on 30 OSDs.  I kept
>> buffer sizes small to try to force disk IO and benchmarked against a
>> local disk passed through to the VM.  We typically did about 3-4x faster
>> on queries than the local disk, but there were a couple of queries where
>> we were slower.  I didn't look at how multiple databases scale, though.
>> That may have its own benefits and challenges.
>>
>> I'm encouraged overall, though.  From your comments and from my own
>> testing, it looks like it's possible to get at least passable
>> performance with a single database, and potentially better still as we
>> reduce latency in Ceph.  With multiple databases, it's entirely possible
>> that we can do pretty well even now.
>>
>
> Yes - same kind of findings, specifically:
>
> - random read and write (e.g. index access) faster than local disk
> - sequential write (e.g. batch inserts) similar to or faster than local disk
> - sequential read (e.g. table scan) slower than local disk
>
Regarding sequential read, I think it was
https://software.intel.com/en-us/blogs/2013/11/20/measure-ceph-rbd-performance-in-a-quantitative-way-part-ii
that did some tuning around that.
Has anyone tried optimizing it the way they did in the article?
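
I don't remember exactly which knobs they turned, but the cheapest thing
to try first is the guest's own readahead, since a table scan issues
fairly small reads and relies on the kernel to turn them into bigger
requests. A rough sketch of how I'd measure it - device name, sizes and
readahead values are placeholders, and it needs root:

import os
import subprocess
import time

DEV = "/dev/vdb"             # placeholder: the rbd-backed virtio disk in the guest
TOTAL = 1024 * 1024 * 1024   # stream 1 GB per run
CHUNK = 1024 * 1024          # 1 MB application reads, readahead does the rest

def drop_caches():
    # Measure the storage, not the page cache.
    subprocess.call(["sync"])
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")

def stream_read():
    drop_caches()
    fd = os.open(DEV, os.O_RDONLY)
    start = time.time()
    done = 0
    while done < TOTAL:
        buf = os.read(fd, CHUNK)
        if not buf:
            break
        done += len(buf)
    os.close(fd)
    return done / (time.time() - start) / (1024.0 * 1024.0)

for ra_kb in (128, 1024, 4096):
    ra_path = "/sys/block/%s/queue/read_ahead_kb" % os.path.basename(DEV)
    with open(ra_path, "w") as f:
        f.write("%d\n" % ra_kb)
    print("read_ahead_kb=%-5d %8.1f MB/s" % (ra_kb, stream_read()))

If the sequential numbers scale with the readahead value, the table-scan
case above is probably limited by request size rather than by the OSDs,
which would fit the "slower than local disk" observation.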

Cheers,
Josef
> Regards
>
> Mark

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
