There is one more thing I noticed when using CephFS instead of RBD for
MySQL, and that is CPU usage on the client.
With RBD, my tests were using 99% of the CPUs. When I switched to CephFS, the
same tests used 60% of the CPUs, with roughly equal performance. This test
was an OLTP sysbench run using 8 tables with 8 concurrent jobs, all using the
same pool.
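For reference, a run along those lines can be reproduced with sysbench 1.x; the host, credentials, and database name below are placeholders, not values from my setup:

```shell
# Sketch of the OLTP test described above (sysbench 1.x syntax).
# Host, user, password, and database are placeholders -- substitute your own.
sysbench oltp_read_write \
    --db-driver=mysql \
    --mysql-host=mysql.example.com \
    --mysql-user=sbtest --mysql-password=secret --mysql-db=sbtest \
    --tables=8 --table-size=1000000 \
    --threads=8 --time=300 \
    prepare   # then repeat with `run`, and `cleanup` when done
```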
> On May 1, 2017, at 12:26 PM, Gregory Farnum <gfar...@redhat.com> wrote:
> 
> On Mon, May 1, 2017 at 9:17 AM, Babu Shanmugam <b...@aalam.io> wrote:
>> 
>> My intention is just to identify the root cause of the excessive time spent
>> on a "table create" operation on CephFS. I am *not* trying to benchmark with
>> my testing. Sorry if that wasn't clear in my mail.
>> 
>> I am sure the time spent would be less if I had a proper Ceph setup. But I
>> believe that even then, the "table create" operation takes more time on CephFS
>> than on RBD. Please correct me if I am wrong.
> 
> As your data shows, fsync is just much slower on CephFS. I think you'd
> find the difference was less on a real cluster, but it's always going
> to be slower than RBD is — an fsync requires committing metadata
> updates, and that requires the client sending a message to the MDS
> which then commits to RADOS, PLUS the client sending out its own data
> directly to RADOS. In comparison, RBD is just talking to OSDs
> directly, and often enough an fsync will only require one message (or
> more commonly two in parallel?) so it's going to perform better no
> matter how much stuff gets optimized. *shrug*
> -Greg
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
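Greg's point about fsync cost can be checked directly with a tiny probe. A minimal Python sketch, assuming you run it once against a file on the CephFS mount and once against a filesystem on top of an RBD image (the paths are placeholders):

```python
# Minimal fsync latency probe. Run it on a CephFS mount and on a filesystem
# backed by an RBD image to compare per-fsync cost.
import os
import time

def fsync_latency(path, writes=100, size=4096):
    """Append `writes` blocks of `size` bytes, fsyncing after each block,
    and return the mean seconds per write+fsync."""
    buf = b"\0" * size
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        start = time.perf_counter()
        for _ in range(writes):
            os.write(fd, buf)
            os.fsync(fd)  # on CephFS this also forces metadata through the MDS
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.unlink(path)  # clean up the probe file
    return elapsed / writes

if __name__ == "__main__":
    # e.g. "/mnt/cephfs/fsync_probe" vs "/mnt/rbd/fsync_probe" (placeholder paths)
    print(f"{fsync_latency('fsync_probe'):.6f} s per write+fsync")
```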

Rick Stehno


