On Wed, Jun 9, 2021 at 1:38 PM Wido den Hollander <w...@42on.com> wrote:
>
> Hi,
>
> While doing some benchmarks I have two identical Ceph clusters:
>
> 3x SuperMicro 1U
> AMD Epyc 7302P 16C
> 256GB DDR
> 4x Samsung PM983 1.92TB
> 100Gbit networking
>
> I tested on such a setup with v16.2.4 with fio:
>
> bs=4k
> qd=1
>
> IOps: 695
>
> That was very low, as I was expecting at least 1000 IOps.
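(Assuming these were fio runs through the rbd ioengine, the settings above correspond to a job file roughly like the following; the pool and image names here are placeholders, not from the original test:)

```ini
; Sketch of a fio job matching bs=4k, qd=1 random writes over rbd.
; pool/rbdname are hypothetical -- substitute your own test image.
[global]
ioengine=rbd
pool=rbd
rbdname=bench-image
direct=1
bs=4k
iodepth=1
rw=randwrite
runtime=60
time_based=1

[qd1-4k]
```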
>
> I checked with the second Ceph cluster which was still running v15.2.8,
> the result: 1364 IOps.
>
> I then upgraded from 15.2.8 to 15.2.13: 725 IOps
>
> Looking at the differences between v15.2.8 and v15.2.13 in options.cc I
> saw these options:
>
> bluefs_buffered_io: false -> true
> bluestore_cache_trim_max_skip_pinned: 1000 -> 64
>
> The main difference seems to be 'bluefs_buffered_io', but in both cases
> this was already explicitly set to 'true'.
>
> So anything beyond 15.2.8 is right now giving me a much lower I/O
> performance with Queue Depth = 1 and Block Size = 4k.
>
> 15.2.8: 1364 IOps
> 15.2.13: 725 IOps
> 16.2.4: 695 IOps
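At queue depth 1 there is only ever one I/O in flight, so IOPS is simply the inverse of average per-op latency. Converting the numbers above makes the size of the regression easier to see (quick sketch):

```python
# At qd=1, average per-op latency (ms) = 1000 / IOPS.
results = {"15.2.8": 1364, "15.2.13": 725, "16.2.4": 695}

for version, iops in results.items():
    latency_ms = 1000.0 / iops
    print(f"{version}: {iops} IOps -> {latency_ms:.2f} ms avg latency")
```

That is roughly 0.73 ms per op on 15.2.8 versus about 1.44 ms on 16.2.4, i.e. close to double the single-op latency.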
>
> Has anybody else seen this as well? I'm trying to figure out where this
> is going wrong.

Hi Wido,

Going by the subject, I assume these are rbd numbers?  If so, did you
run any RADOS-level benchmarks?
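A RADOS-level run would tell you whether the regression is below librbd or in it. Something along these lines would give a baseline (the pool name is a placeholder, and this is a sketch rather than a full methodology):

```shell
# Throwaway pool for the test (name and PG count are placeholders).
ceph osd pool create bench 64 64

# 60s of 4k writes at concurrency 1, keeping objects for the read pass.
rados bench -p bench 60 write -b 4096 -t 1 --no-cleanup

# 60s of random reads at concurrency 1, then clean up.
rados bench -p bench 60 rand -t 1
rados -p bench cleanup
```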

Thanks,

                Ilya
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
