How many rsyncs are running at a time? If it is only a couple, you will not
be able to take advantage of the full number of OSDs, as each block of data
is located on only one OSD (not counting replicas). When you look at disk
statistics you are seeing an average over time, so the OSDs will look fairly
idle, when in fact each one is busy for a very brief period.
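
If you want to see this directly, sample the disks at one-second intervals
instead of relying on long-run averages, and compare how write throughput
scales with client-side concurrency. A rough sketch (pool and device names
are placeholders):

    # On an OSD host: per-disk utilisation, sampled every second
    iostat -x 1 /dev/sdb /dev/sdc

    # From a client: 60s of writes at 2, then 16, concurrent ops
    rados bench -p testpool 60 write -t 2
    rados bench -p testpool 60 write -t 16

If aggregate throughput is much higher at -t 16, the cluster has spare
parallelism that a couple of serial rsyncs are simply not using.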

 

SSD journals will help your write latency, likely bringing it down from
around 15-30ms to under 5ms.
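
If you do buy SSDs, it is worth benchmarking a candidate drive with small
synchronous writes first, since the journal writes with O_DSYNC and many
consumer SSDs do badly under that pattern. A minimal sketch (the device path
is a placeholder, and this test will overwrite data on it):

    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=journal-test

Drives that collapse to a few hundred sync IOPS here will make poor journal
devices.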

 

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Piotr Wachowicz
Sent: 01 May 2015 09:31
To: ceph-users@lists.ceph.com
Subject: [ceph-users] How to estimate whether putting a journal on SSD will
help with performance?

 

Is there any way to confirm (beforehand) that using SSDs for journals will
help?

We're seeing very disappointing Ceph performance. We have a 10GigE
interconnect (shared as both the public and cluster network).

 

We're wondering whether it makes sense to buy SSDs and put journals on them.
But we're looking for a way to verify that this will actually help BEFORE we
splash cash on SSDs.

 

The problem is that the way we have things configured now, with journals on
spinning HDDs (shared with the OSD backend storage), apart from the slow
read/write performance to Ceph I already mentioned, we're also seeing fairly
low disk utilization on the OSDs.

 

This low disk utilization suggests that the journals are not really being
used to their maximum, which raises the question of whether buying SSDs for
journals will help.
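
One thing we could presumably look at, if I'm reading the counters right, is
per-OSD latency:

    ceph osd perf    # fs_commit_latency should reflect journal writes,
                     # fs_apply_latency the backing filestore

but I'm not sure how to translate those numbers into a yes/no on SSDs.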

 

This kind of suggests that the bottleneck is NOT the disk. But, yeah, we
cannot really confirm that.
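
One thing we could do is rule out raw network bandwidth with a quick iperf
between two of the hosts (hostnames are placeholders):

    iperf -s                # on one host
    iperf -c osd-host-1     # from another host

though I suspect that for lots of small synchronous writes it's latency, not
bandwidth, that matters.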

 

Our typical data access use case is lots of small random reads/writes. We're
doing a lot of rsyncing (of entire regular Linux filesystems) from one VM to
another.

 

We're using Ceph for OpenStack storage (KVM). Enabling the RBD cache didn't
really help all that much.
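
For context, the RBD cache was turned on with the usual client-side
settings, roughly along these lines (paraphrasing our config):

    [client]
    rbd cache = true
    rbd cache writethrough until flush = true

plus cache=writeback on the libvirt disks.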

 

So, is there any way to confirm beforehand that using SSDs for journals will
help in our case?

 

Kind Regards,
Piotr




_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
