>Sounds like one of the following could be happening:
> 1) RBD write caching doing the 37K IOPS, which will need to flush at some
> point, which causes the drop.

I am not sure whether this will help, Shantur, but you could try running
'watch cat /proc/meminfo' during a benchmark run; you might be able to spot
caches being flushed as the IOPS drop. iostat is probably a better tool for
this, though.
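A rough sketch of what that monitoring could look like (the 2-second
interval and the focus on the Dirty/Writeback fields are my assumptions,
not something prescribed by Ceph; a Dirty value that climbs and then drops
sharply at the same moment the benchmark stalls would point at a large
cache flush):

```shell
# Single sample of the page-cache fields most relevant to flush activity:
grep -E '^(Dirty|Writeback):' /proc/meminfo

# Interactive version, refreshing every 2 seconds during the benchmark:
#   watch -n 2 "grep -E '^(Dirty|Writeback):' /proc/meminfo"

# iostat (from the sysstat package) shows per-device write rates and
# utilization; a flush shows up as a burst of w/s and wkB/s on the
# OSD backing disks:
#   iostat -xmt 2
```

The `watch` and `iostat` invocations are commented out above since they run
until interrupted; the single `grep` gives a one-shot sample you can script.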

On 1 May 2018 at 13:13, Van Leeuwen, Robert <rovanleeu...@ebay.com> wrote:

> > On 5/1/18, 12:02 PM, "ceph-users on behalf of Shantur Rathore" <
> ceph-users-boun...@lists.ceph.com on behalf of shantur.rath...@gmail.com>
> wrote:
> >    I am not sure if the benchmark is overloading the cluster, as 3 out of
> >    5 runs the benchmark goes around 37K IOPS and suddenly, for the
> >    problematic runs, it drops to 0 IOPS for a couple of minutes and then
> >    resumes. This is a test cluster so nothing else is running off it.
>
> Sounds like one of the following could be happening:
> 1) RBD write caching doing the 37K IOPS, which will need to flush at some
> point which causes the drop.
>
> 2) Hardware performance drops over time.
> You could be hitting hardware write cache on RAID or disk controllers.
> Especially SSDs can have a performance drop after writing to them for a
> while due to either SSD housekeeping or caches filling up.
> So always run benchmarks over longer periods to make sure you get the
> actual sustainable performance of your cluster.
>
> Cheers,
> Robert van Leeuwen
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
