From my point of view, it's better to have no more than 6 OSD WAL/DBs on
one NVMe. I suspect that may be the root cause of the slow requests.
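
One quick way to sanity-check that on the new node is to watch the shared
db/wal NVMe while the slow requests show up, and confirm how many OSDs
actually sit on it. A rough sketch (nvme0n1 is just a placeholder for your
device, and ceph-volume assumes the OSDs were deployed with it rather than
ceph-disk):

    # utilisation of the shared db/wal device (sysstat package)
    iostat -x nvme0n1 5
    # which OSDs have their block.db/block.wal on that NVMe
    ceph-volume lvm list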

On Fri, 26 Jun 2020 at 07:47, Mark Kirkwood <mark.kirkw...@catalyst.net.nz> wrote:

> Progress update:
>
> - tweaked debug_rocksdb to 1/5. *possibly* helped, fewer slow requests
>
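For reference, that can be applied on the fly and then persisted, something
like:

    ceph tell osd.* injectargs '--debug_rocksdb 1/5'

    # ceph.conf, [osd] section, to keep it across restarts
    debug rocksdb = 1/5
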
> - will increase osd_memory_target from 4 to 16G, and observe
>
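A minimal sketch of that change, assuming you set it via ceph.conf on the new
node and restart those OSDs (16 GiB = 17179869184 bytes; worth checking the
node has RAM for 12 x 16G plus headroom first):

    # ceph.conf, [osd] section
    osd memory target = 17179869184
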
> On 24/06/20 1:30 pm, Mark Kirkwood wrote:
> > Hi,
> >
> > We have recently added a new storage node to our Luminous (12.2.13)
> > cluster. The previous nodes are all set up as Filestore: 12 OSDs on HDD
> > (Seagate Constellations) with one NVMe (Intel P4600) journal. With the
> > new node we decided to introduce Bluestore, so it is configured (same
> > HW) as 12 OSDs with data on HDD and db + wal on one NVMe.
> >
> > We noticed there are periodic slow requests logged, and the implicated
> > OSDs are the Bluestore ones 98% of the time! This suggests that we
> > need to tweak our Bluestore settings in some way. Investigating, I'm
> > seeing:
> >
> > - A great deal of rocksdb debug info in the logs - perhaps we should
> > tone that down? (debug_rocksdb 4/5 -> 1/5)
> >
> > - We look to have the default cache settings
> > (bluestore_cache_size_hdd|ssd etc.); we have memory to increase these
> >
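Before changing anything it may be worth dumping what one of the new OSDs is
actually running with via its admin socket (<id> is a placeholder). Since the
data device is an HDD, I believe the _hdd value is the one in play; if you do
raise it, the ceph.conf form would be something like:

    # on the node hosting the OSD
    ceph daemon osd.<id> config show | grep bluestore_cache

    # ceph.conf, [osd] section, e.g. 4 GiB
    bluestore cache size hdd = 4294967296
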
> > - There are some buffered io settings (bluefs_buffered_io,
> > bluestore_default_buffered_write), set to (default) false. Are these
> > safe (or useful) to change?
> >
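Their current values can be confirmed the same way before experimenting
(again, <id> is a placeholder):

    ceph daemon osd.<id> config get bluefs_buffered_io
    ceph daemon osd.<id> config get bluestore_default_buffered_write
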
> > - We have default rocksdb options; should some of these be changed?
> > (bluestore_rocksdb_options, in particular max_background_compactions=2
> > - should we have fewer, or more?)
> >
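One caveat if you do override these: bluestore_rocksdb_options is a single
comma-separated string, so setting it in ceph.conf replaces the whole default
set, not just the one field you change. Dump the current string first and
edit from that (<id> is a placeholder):

    ceph daemon osd.<id> config get bluestore_rocksdb_options
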
> > Also, anything else we should be looking at?
> >
> > regards
> >
> > Mark
> >
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
