Colleague of Harry's here...

Harald Staub writes:
> This is again about our bad cluster, with too many objects, and the
> HDD OSDs have a DB device that is (much) too small (e.g. 20 GB, i.e.
> 3 GB usable). Now several OSDs do not come up any more.
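
For readers hitting this later: the "3 GB usable" is the usual RocksDB
level-sizing effect, where BlueFS only keeps a level on the DB device
if the whole level fits. With the OSD stopped, something like this
shows how BlueFS sees the devices (path is the standard OSD mount
point):

  ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-$OSD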

> Typical error message:
> /build/ceph-14.2.8/src/os/bluestore/BlueFS.cc: 2261: FAILED
> ceph_assert(h->file->fnode.ino != 1)

The context of that line is "we should never run out of log space here":

  // previously allocated extents.
  bool must_dirty = false;
  if (allocated < offset + length) {
    // we should never run out of log space here; see the min runway check
    // in _flush_and_sync_log.
    ceph_assert(h->file->fnode.ino != 1);

So I guess we are violating that "should", and the BlueFS code does
not handle that case gracefully.  And the "min runway" check in
_flush_and_sync_log() may not be reliable.  Should we file a bug?
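
If we do file one, a log with BlueFS debugging turned up would
probably help. A minimal sketch, assuming the usual $OSD id variable:
run the (already down) OSD in the foreground and capture the output:

  ceph-osd -d --id $OSD --debug_bluefs 20 2>&1 | tee osd.$OSD.startup.log

The backtrace plus the BlueFS allocation messages right before the
assert are what a tracker ticket would want.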

> Also just tried to add a few GB to the DB device (lvextend,
> ceph-bluestore-tool bluefs-bdev-expand), but this also crashes, with
> the same message.
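
For the archives, the sequence Harry ran is roughly the following; the
VG/LV names are placeholders for whatever backs the DB device:

  systemctl stop ceph-osd@$OSD            # make sure the OSD is down
  lvextend -L +10G /dev/<db-vg>/<db-lv>   # grow the backing LV
  ceph-bluestore-tool bluefs-bdev-expand \
      --path /var/lib/ceph/osd/ceph-$OSD  # let BlueFS pick up the new size

It is the bluefs-bdev-expand step that dies with the same ceph_assert.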

> Options that helped us before (thanks Wido :-) do not help here, e.g.
> CEPH_ARGS="--bluestore-rocksdb-options compaction_readahead_size=0"
> ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-$OSD compact

> Any ideas that I could try to save these OSDs?

> Cheers
>  Harry

Again, help on how to proceed would be greatly appreciated...
-- 
Simon.