Hi,

I can't recall anybody ever managing to recover a corrupted RocksDB. The usual response is: if the DB is corrupted, the OSD is dead. But maybe someone has pulled it off, who knows. I'm also not sure how many responses you can expect right now due to Cephalocon. ;-)


Quoting Malcolm Haak <[email protected]>:

Hi,

I have the third error message:
rocksdb: verify_sharding unable to list column families: Corruption:
CURRENT file does not end with newline

Hopefully that can help with a resolution
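
In case it's useful, the offending CURRENT file can be inspected
directly by exporting BlueFS first; something roughly like this should
work (the OSD id 12 and the output directory below are placeholders,
not my actual paths):

  ceph-bluestore-tool bluefs-export --path /var/lib/ceph/osd/ceph-12 --out-dir /tmp/osd-12-db
  # CURRENT should be a single line naming the active MANIFEST, terminated by a newline
  tail -c 64 /tmp/osd-12-db/db/CURRENT | xxd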

Thanks

Mal

On Tue, Oct 28, 2025 at 1:25 PM Malcolm Haak <[email protected]> wrote:

Hello,

I have had a very bad thing happen. 10 of my OSDs were attached to
two hosts at the same time. They were automatically mounted at system
boot.

When I realised, I stopped them one node at a time. However, even
with the SAS pathing fixed, they now won't mount.

I'm getting RocksDB corruption errors (unsurprisingly) that cannot be
resolved by ceph-bluestore-tool or ceph-kvstore-tool, as neither tool
can even open the databases.
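
For reference, the sort of invocations I mean are roughly the
following (the OSD id 12 is just an example, not my real layout):

  # consistency check and attempted repair of the BlueStore metadata
  ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-12
  ceph-bluestore-tool repair --path /var/lib/ceph/osd/ceph-12
  # even a plain key listing runs into the RocksDB open errors
  ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-12 list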

I've lost enough disks that entire PGs are wiped out.

I'm hoping there is a way to recover some of the data. I have another
six blank 4 TB disks that can be used if it will help.

Some of the errors I'm seeing are:
rocksdb: Corruption: Mismatch in unique ID on table file 41122.
Expected: {4769373877066773223,17385751700056157561} Actual:
{17358529186729819771,7012539495376980102} in file db/MANIFEST-0411055

rocksdb: verify_sharding unable to list column families: Corruption:
checksum mismatch in file db/MANIFEST-062921

And there's a different one I can't find right now, but it says
something about a missing newline...
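
If the exact wording matters, the full messages should still be in
the OSD startup logs; something like this should pull them out
(assuming plain systemd OSD units, with OSD id 12 again just as an
example):

  journalctl -u ceph-osd@12 | grep -i rocksdb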

Any advice or help would be appreciated,

Thanks in advance,

Mal
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]


