Hi, Toby

On 19 May 2021, at 15:24, Toby Darling <t...@mrc-lmb.cam.ac.uk> wrote:
> 
> In the last couple of weeks we've been getting BlueFS spillover warnings on 
> multiple (>10) osds, eg
> 
> BLUEFS_SPILLOVER BlueFS spillover detected on 1 OSD(s)
>     osd.327 spilled over 58 MiB metadata from 'db' device (30 GiB used of 66 
> GiB) to slow device
> 
> I know this can be corrected with a 'ceph tell osd.$osd compact' or ignored 
> with "bluestore_warn_on_bluefs_spillover=false", but my concern is that these 
> warnings have only recently started.
> 
> Could this be a sign of something nasty heading our way that I'm not aware 
> of? Is there a performance penalty to just ignoring it, rather than compacting?
> 
> Many thanks for any pointers.

It should be enough to upgrade to Nautilus 14.2.19 or later, where Igor added a 
new BlueStore level selection policy (bluestore_volume_selection_policy) with 
the value 'use_some_extra'. With that policy in place, any BlueFS spillover 
should be mitigated.
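
For reference, a rough sketch of how you might apply that after upgrading. The 
option and value names are the ones above; the exact commands are my assumed 
workflow (mon-managed config database, systemd-managed OSDs), so adjust OSD ids 
and unit names to your cluster:

    # switch the volume selection policy for all OSDs
    # (assumption: options are managed via the mon config database)
    ceph config set osd bluestore_volume_selection_policy use_some_extra

    # the new policy only takes effect when an OSD restarts, e.g. for osd.327
    systemctl restart ceph-osd@327

    # until an OSD has been restarted, a one-off compaction still clears the warning
    ceph tell osd.327 compact

    # check whether the spillover warning has gone
    ceph health detail | grep -i spillover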


Cheers,
k
