[ceph-users] Re: BLUEFS_SPILLOVER BlueFS spillover detected

2020-11-19 Thread Igor Fedotov
This is a known issue with RocksDB/BlueFS, discussed multiple times on this mailing list. It should improve starting with Nautilus v14.2.12, thanks to the following PRs: https://github.com/ceph/ceph/pull/33889 https://github.com/ceph/ceph/pull/37091 Please note these PRs don't fix existing spillover
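
To check whether a cluster is already running a release with those fixes, and to compact an affected OSD by hand in the meantime, something along these lines should work (a sketch; osd.63 is the OSD id from the report further down this thread, substitute your own):

    ceph versions              # shows which release each daemon is running
    ceph tell osd.63 compact   # manually compact that OSD's RocksDB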

[ceph-users] Re: BLUEFS_SPILLOVER BlueFS spillover detected

2020-11-16 Thread Dave Hall
Zhenshi, I've been doing the same periodically over the past couple of weeks. I haven't had to do it a second time on any of my OSDs, but I'm told I can expect to in the future. I believe the conclusion on this list was that for a workload with many small files it might be necessary
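
A minimal sketch of how that periodic compaction might be scripted, assuming admin access to the cluster and that compacting every OSD in sequence is acceptable for the workload:

    # Compact RocksDB on every OSD, one at a time. Compaction can cause
    # brief latency spikes, so off-peak hours are safer.
    for osd in $(ceph osd ls); do
        ceph tell "osd.${osd}" compact
    done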

[ceph-users] Re: BLUEFS_SPILLOVER BlueFS spillover detected

2020-11-15 Thread Zhenshi Zhou
Well, the warning message disappeared after I executed "ceph tell osd.63 compact". Zhenshi Zhou wrote on Mon, Nov 16, 2020 at 10:04 AM: > Has anyone encountered this issue yet? > > Zhenshi Zhou wrote on Sat, Nov 14, 2020 at 12:36 PM: > >> Hi, >> >> I have a cluster running 14.2.8. >> I created the OSDs with a dedicated PCIe device for wal/db when
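
To confirm the spillover is actually gone after the compaction, the BlueFS counters can be checked on the host running the OSD (a sketch; slow_used_bytes should drop back to 0 once the DB no longer spills onto the slow device):

    # Needs access to the OSD's admin socket, so run it on the host where osd.63 lives.
    ceph daemon osd.63 perf dump bluefs | grep -E '"(db|slow)_used_bytes"'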

[ceph-users] Re: BLUEFS_SPILLOVER BlueFS spillover detected

2020-11-15 Thread Zhenshi Zhou
Has anyone encountered this issue yet? Zhenshi Zhou wrote on Sat, Nov 14, 2020 at 12:36 PM: > Hi, > > I have a cluster running 14.2.8. > I created the OSDs with a dedicated PCIe device for wal/db when I deployed the cluster. > I set 72G for db and 3G for wal on each OSD. > > And now my cluster has been stuck in a WARN state for a long time
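
To see which OSDs are affected and to confirm the db/wal layout of a given OSD (using osd.63 from the thread above as an example), something like this should help (a sketch; the metadata key names may vary slightly between releases):

    ceph health detail                   # lists the OSDs reporting BLUEFS_SPILLOVER
    ceph osd metadata 63 | grep bluefs   # shows the BlueFS db/wal device details for one OSD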