Paul,
unfortunately there is no way to recover an OSD which has lost its DB
volume. All the metadata is kept on this volume, so it's impossible to
recover the object structure after such a failure. Some pieces of user
data could be found on the main device, but they're completely
unstructured. Not to mention that some Ceph applications (e.g. RGW or
CephFS) depend heavily on OMAP metadata, which is kept solely in the DB.
And yes - adding a new DB volume presumes that the legacy DB data is
still accessible, i.e. the original DB data is collocated with the user
data on the main device. But once you have had a dedicated DB volume
and it's lost, that's no longer the case.
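For that supported scenario (DB still collocated on the main device,
being moved out to a new dedicated volume), the operation looks roughly
like the sketch below; the OSD id, data path and target LV are
placeholders, and a cephadm container would need the equivalent
stop/start of the OSD daemon:

```shell
# Sketch only: valid when the OSD's DB currently lives on the main
# device. OSD id 12 and the target LV path are placeholders.
systemctl stop ceph-osd@12                  # OSD must be down first
ceph-bluestore-tool bluefs-bdev-new-db \
    --path /var/lib/ceph/osd/ceph-12 \
    --dev-target /dev/vg_db/lv_osd12_db     # new, empty DB volume
systemctl start ceph-osd@12
```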
Thanks,
Igor
On 23.04.2025 14:29, Paul Browne wrote:
Hi Igor,
Ah, OK. So in the failure mode of, for instance, one failed dedicated
SSD/NVMe DB device serving one OSD, is there no real way to replace
that failed DB device alone, without also replacing/wiping the OSD and
then having the OSD backfill objects again from cluster replicas?
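That wipe-and-backfill path can be sketched with cephadm-native
commands; the OSD id is a placeholder, and this assumes an active
drivegroup/OSD spec that will re-deploy onto the zapped disk:

```shell
# Sketch: remove the OSD (keeping its id for reuse), zap the HDD, and
# let the orchestrator recreate the OSD so it backfills from replicas.
ceph orch osd rm 12 --replace --zap
ceph orch ps --daemon-type osd    # watch for the replacement to deploy
```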
I had been looking into e.g. ceph-bluestore-tool bluefs-bdev-new-db
<https://docs.ceph.com/en/latest/man/8/ceph-bluestore-tool/>, which
seemed to offer some way forward around this.
But it may be the case that this only offers a way forward for
replacing an older/smaller/slower but still-working DB device with a
newer/larger/faster one. And if the old DB device has actually failed
and is unusable, then the whole DB+OSD needs replacing?
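For that still-working case, the relevant operation appears to be
bluefs-bdev-migrate rather than bluefs-bdev-new-db; a rough sketch,
with the OSD id and device paths as placeholders:

```shell
# Sketch only: moves BlueFS data off a still-readable DB volume onto a
# new device. This fails if the source DB is unreadable, so it cannot
# help once the old SSD is dead. Paths and ids are placeholders.
systemctl stop ceph-osd@12
ceph-bluestore-tool bluefs-bdev-migrate \
    --path /var/lib/ceph/osd/ceph-12 \
    --devs-source /var/lib/ceph/osd/ceph-12/block.db \
    --dev-target /dev/vg_new/lv_osd12_db
systemctl start ceph-osd@12
```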
Here we've lost too many DB devices and by extension too many blocked
OSDs to ever recover this Ceph cluster, I fear...
Thanks,
Paul
*******************
Paul Browne
Research Computing Platforms
University Information Services
Roger Needham Building
JJ Thompson Avenue
University of Cambridge
Cambridge
United Kingdom
E-Mail: pf...@cam.ac.uk
Tel: 0044-1223-746548
*******************
------------------------------------------------------------------------
*From:* Igor Fedotov <igor.fedo...@croit.io>
*Sent:* 23 April 2025 09:35
*To:* Paul Browne <pf...@cam.ac.uk>; ceph-users@ceph.io
<ceph-users@ceph.io>
*Subject:* Re: [ceph-users] Cluster recovery: DC power failure killed
OSD node BlueStore block.DB devices
Hi Paul,
have I got your idea right - you're trying to attach new, empty DB
volumes to existing OSDs in an attempt to recover those OSDs? And the
original SSD drives which held the DB volumes are physically dead?
If so, then IMO it's a road to nowhere; the "recovered" OSDs won't run
without the original metadata. This is rather a waste of time...
Thanks,
Igor
On 23.04.2025 0:11, Paul Browne wrote:
> Hi ceph-users,
>
> We recently suffered a total power failure at our main DC;
fortunately, our production Ceph cluster emerged unscathed but a
smaller Ceph cluster came back with the majority of its dedicated SSD
BlueStore block.DB devices dead (but its HDD OSD devices unharmed).
This cluster underpinned a small OpenStack cloud, so it would be
preferable to recover it rather than writing it off.
>
> In terms of deployment tooling, this ailing Ceph cluster is a fairly
standard Red Hat Ceph Storage 7 (so Quincy) cephadm deployed cluster,
with the main wrinkle about it being that both the DB and HDD OSD
devices make use of the cephadm supported LVM->LUKS layering above the
BlueStore devices.
>
> The dead BlueStore block.DB devices are of course blocking the
surviving HDD OSD daemons (in cephadm deployed containers) from coming
up cleanly and so the Ceph cluster status is currently very degraded
(attached status for the ugly picture)
>
> I've kicked around some ideas for recovering the dead DB devices and
restarting the down+out OSDs by:
>
> * Manually partitioning replacement SSDs into new DB device
partitions+LVs
> * Installing the same LUKS keys on them, retrieved from the Ceph
config DB, matching up against which OSD is on which OSD host.
> * Manually changing over the device links for OSDs to their DB device
with "ceph-bluestore-tool bluefs-bdev-new-db" or similar
> * Trying to restart the OSDs with updated links to the new
LVM->LUKS->block.DB devices
>
> This approach seems highly messy and subject to needing to extract a
lot of information, error-free, from dumps of "ceph-volume lvm list"
in order to exactly match extant OSD UUIDs to the newly created DB
devicemapper devices.
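That matching step could be sketched like this; the jq filter and field
names are illustrative, assuming the usual JSON layout that ceph-volume
emits:

```shell
# Sketch: dump the OSD-to-LV mapping needed to pair each OSD's fsid
# with its devices. Selects the surviving "block" LVs, whose tags
# carry the OSD fsid; field names assume the standard
# "ceph-volume lvm list" JSON output.
ceph-volume lvm list --format json > lvm.json
jq -r 'to_entries[] | .key as $osd | .value[]
       | select(.type == "block")
       | "osd.\($osd)  \(.tags["ceph.osd_fsid"])  \(.lv_path)"' lvm.json
```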
>
> Is there going to be some smarter/better/faster way to
non-destructively recover these intact HDD OSDs which have links to
dead block.DB devices, using native cephadm tooling rather than
getting so low-level as all the above?
>
> Many thanks for any advice,
>
> *******************
> Paul Browne
> Research Computing Platforms
> University Information Services
> Roger Needham Building
> JJ Thompson Avenue
> University of Cambridge
> Cambridge
> United Kingdom
> E-Mail: pf...@cam.ac.uk
> Tel: 0044-1223-746548
> *******************
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io