ceph-users@ceph.io
Subject: [ceph-users] Re: Ceph Pacific bluefs enospc bug with newly created OSDs
Hi Carsten,
please also note a workaround to bring the OSDs back for e.g. data
recovery - set bluefs_shared_alloc_size to 32768.
This will hopefully allow the OSD to start up and pull data out of it.
But I wouldn't encourage you to use such OSDs long term, as
fragmentation might evolve and the issue may reappear.
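E.g. with a cephadm-style deployment, something along these lines should
work (osd.12 is just an example id - adjust to the affected OSD):

    # Apply the workaround value for the affected OSD, then restart it
    # so BlueFS picks up the new allocation size at startup:
    ceph config set osd.12 bluefs_shared_alloc_size 32768
    ceph orch daemon restart osd.12   # or: systemctl restart ceph-osd@12 on the host

    # Verify the value the OSD will use:
    ceph config get osd.12 bluefs_shared_alloc_size

The override is only meant to get the OSD up long enough to drain it;
as said above, the OSD should be redeployed afterwards rather than kept
running with the non-default value.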
Hi Igor,
thank you for your answer!
>first of all Quincy does have a fix for the issue, see
>https://tracker.ceph.com/issues/53466 (and its Quincy counterpart
>https://tracker.ceph.com/issues/58588)
Thank you, I somehow missed that release - good to know!
>SSD or HDD? Standalone or shared DB volume?
Hi Carsten,
first of all, Quincy does have a fix for the issue, see
https://tracker.ceph.com/issues/53466 (and its Quincy counterpart
https://tracker.ceph.com/issues/58588).
Could you please share a bit more info on the OSD disk layout?
SSD or HDD? Standalone or shared DB volume? I presume the latter.
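For instance, something like the following should show the relevant
details (osd.12 is just an example id; ceph-volume needs to run on the
OSD host):

    # Device type, device names and BlueFS/DB details as the OSD reports them:
    ceph osd metadata 12 | grep -E 'rotational|devices|bluefs'

    # LVM layout of all OSDs on this host, including any separate DB volume:
    ceph-volume lvm list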