I managed to export the RocksDB and compact it. I just don't know how to put
it back in - I guess "ceph-bluestore-tool prime-osd-dir" is the closest thing
I can find, but I can't specify what gets primed. :(
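For reference, prime-osd-dir only regenerates the OSD directory metadata
(fsid, type, keyring, etc.) from the BlueStore label on the device - it
doesn't import any data back into BlueFS, which is why it doesn't help here.
A dry-run sketch of its usage (the device and OSD id below are placeholders,
not from this thread; the script just prints the command):

```shell
#!/bin/sh
DEV=/dev/sdb1                       # hypothetical BlueStore data device
OSD_DIR=/var/lib/ceph/osd/ceph-0    # standard OSD mount point

# Rebuilds the files in $OSD_DIR from the label on $DEV; no RocksDB data
# is written back into BlueFS.
CMD_PRIME="ceph-bluestore-tool prime-osd-dir --dev $DEV --path $OSD_DIR"
echo "$CMD_PRIME"
```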

To export RocksDB from BlueStore:
$ ceph-bluestore-tool bluefs-export --path /var/lib/ceph/osd-0/ \
    --out-dir /tmp/bluefs-export-ceph-osd-0

To compact (BlueFS keeps the RocksDB files under the db/ subdirectory of the
export, so point the tool there):
$ ceph-kvstore-tool rocksdb /tmp/bluefs-export-ceph-osd-0/db compact

Put it back into Bluestore somehow. Profit??
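The two working steps above can be sketched as a script. This is a dry-run
that only prints the commands (the ceph tools obviously need the OSD to be
stopped, and the osd-0 path is the one from this thread - a stock install
usually mounts at /var/lib/ceph/osd/ceph-<id>). The missing re-import step
is exactly the open question:

```shell
#!/bin/sh
OSD_PATH=/var/lib/ceph/osd-0
OUT=/tmp/bluefs-export-ceph-osd-0

# 1. Export the BlueFS contents (including the RocksDB files) to a plain dir.
CMD_EXPORT="ceph-bluestore-tool bluefs-export --path $OSD_PATH --out-dir $OUT"

# 2. Compact the exported DB; BlueFS stores it under db/.
CMD_COMPACT="ceph-kvstore-tool rocksdb $OUT/db compact"

echo "$CMD_EXPORT"
echo "$CMD_COMPACT"

# 3. Putting the compacted DB back: no documented bluefs-import exists in
#    Mimic (13.2.8) as far as I can tell - that's the gap.
```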


On Fri, Dec 20, 2019 at 11:18 AM Paul Choi <pc...@nuro.ai> wrote:

> Hi,
>
> I have a weird situation where an OSD's rocksdb fails to compact, because
> the OSD became full and the osd-full-ratio was 1.0 (not a good idea, I
> know).
>
> Hitting "bluefs enospc" while compacting:
>    -376> 2019-12-18 15:48:16.492 7f2e0a5ac700  1 bluefs _allocate failed
> to allocate 0x40da486 on bdev 1, free 0x38b0000; fallback to bdev 2
>   -376> 2019-12-18 15:48:16.492 7f2e0a5ac700 -1 bluefs _allocate failed to
> allocate 0x on bdev 2, dne
>   -376> 2019-12-18 15:48:16.492 7f2e0a5ac700 -1 bluefs _flush_range
> allocated: 0x0 offset: 0x0 length: 0x40da486
>   -376> 2019-12-18 15:48:16.500 7f2e0a5ac700 -1
> /build/ceph-13.2.8/src/os/bluestore/BlueFS.cc: In function 'int
> BlueFS::_flush_range(BlueFS::FileWriter*, uin
> t64_t, uint64_t)' thread 7f2e0a5ac700 time 2019-12-18 15:48:16.499599
> /build/ceph-13.2.8/src/os/bluestore/BlueFS.cc: 1704: FAILED assert(0 ==
> "bluefs enospc")
>
> So my idea is to copy rocksdb out somewhere else (bluefs-export), compact,
> then copy it back in. Is there a way to do this? Mounting bluefs seems to
> be part of the OSD code, so there's no easy way to do it.
>
> Because the OSD died at 100% full, I can't do bluefs-bdev-expand, and
> repair/fsck fail too.
>
> Thanks in advance.
>
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io