[ceph-users] Re: Ceph Pacific bluefs enospc bug with newly created OSDs

2023-06-22 Thread Igor Fedotov
Hi Carsten, please also note a workaround to bring the osds back for e.g. data recovery - set bluefs_shared_alloc_size to 32768

[ceph-users] Re: Ceph Pacific bluefs enospc bug with newly created OSDs

2023-06-21 Thread Fox, Kevin M
Hi Carsten, please also note a workaround to bring the osds back for e.g. data recovery - set bluefs_shared_alloc_size to 32768. This will hopefully allow OSD to startup and pull

[ceph-users] Re: Ceph Pacific bluefs enospc bug with newly created OSDs

2023-06-21 Thread Igor Fedotov
Hi Carsten, please also note a workaround to bring the osds back for e.g. data recovery - set bluefs_shared_alloc_size to 32768. This will hopefully allow OSD to startup and pull data out of it. But I would discourage you from using such OSDs long term as fragmentation might evolve and th
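The workaround described above amounts to lowering the BlueFS shared allocation unit so the OSD can find usable free extents at startup. A minimal sketch of how that setting could be applied, assuming a cephadm-managed cluster and a hypothetical OSD id of 12 (the exact commands and restart mechanism depend on your deployment; this is a config fragment, not a tested recipe):

```shell
# Set the workaround value for the affected OSD only
# (32768 = 32 KiB, down from the usual 64 KiB shared alloc size)
ceph config set osd.12 bluefs_shared_alloc_size 32768

# Restart the OSD so the new allocation size takes effect
ceph orch daemon restart osd.12

# Once data has been evacuated, remove the override rather than
# keeping the OSD in service with the reduced alloc size
ceph config rm osd.12 bluefs_shared_alloc_size
```

As the message notes, this is intended only to bring the OSD up long enough to drain its data; the override should not be left in place permanently.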

[ceph-users] Re: Ceph Pacific bluefs enospc bug with newly created OSDs

2023-06-21 Thread Carsten Grommel
Hi Igor, thank you for your answer! >first of all Quincy does have a fix for the issue, see >https://tracker.ceph.com/issues/53466 (and its Quincy counterpart >https://tracker.ceph.com/issues/58588) Thank you, I somehow missed that release, good to know! >SSD or HDD? Standalone or shared DB volu

[ceph-users] Re: Ceph Pacific bluefs enospc bug with newly created OSDs

2023-06-20 Thread Igor Fedotov
Hi Carsten, first of all Quincy does have a fix for the issue, see https://tracker.ceph.com/issues/53466 (and its Quincy counterpart https://tracker.ceph.com/issues/58588) Could you please share a bit more info on OSD disk layout? SSD or HDD? Standalone or shared DB volume? I presume the lat