that were
outed were part of a successful recovery/backfill with no issues). I
tested the two originally outed OSDs; the underlying disks all pass smartctl
short tests, and I have seen no indication of hardware issues with these
spinning disks.
Looking over the ceph documentation, it looks like there was a similar issue
with multipart omap keys in Nautilus. Anyways, it sounds like you don't have
the index shards for that
bucket available, but you should be able to list and delete the objects using
librados.
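Something along these lines with the Python rados bindings is roughly what I
have in mind. This is an untested sketch: the pool name and bucket marker are
placeholders you'd confirm for your cluster (e.g. with radosgw-admin metadata
get bucket:<name>), and it relies on RGW prefixing the data-pool object names
with the bucket's marker ID:

    import rados

    POOL = 'default.rgw.buckets.data'     # placeholder: your RGW data pool
    BUCKET_MARKER = '<bucket-marker-id>'  # placeholder: the bucket's marker

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx(POOL)
        try:
            # RGW names the bucket's head/multipart objects with the marker
            # as a prefix, so filter on it and remove what matches.
            for obj in ioctx.list_objects():
                if obj.key.startswith(BUCKET_MARKER):
                    print('removing %s' % obj.key)
                    ioctx.remove_object(obj.key)
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()

I'd run it once with the remove_object line commented out to sanity-check the
listing before actually deleting anything.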
Best,
Nick
On Fri, Jan 29, 2021, at 5:00 AM, James, GleSYS wrote:
> Hi,
>
>
Hi Huang,
Thanks for offering to help, but this original issue with the ceph-mons not
connecting was already diagnosed last week as a possible networking error at
the hardware level. We originally removed all the mons except one to force it
to come online without waiting for a quorum, and the netw
Hi Ashley,
The only change I made was increasing osd_max_backfills from 3 to 10 at
first, and when that ended up causing more problems than it helped, it was
lowering the setting back down to 3 that took the cluster offline.
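(For anyone wanting to reproduce the setting change, the usual route is
ceph tell osd.* injectargs '--osd-max-backfills 3'. A rough, untested Python
equivalent through librados's command interface would look something like the
sketch below; the config path and the OSD id range are assumptions for the
example, not the real values from our cluster.)

    import json
    import rados

    # Sketch: push osd_max_backfills=3 to each OSD, the same effect as
    #   ceph tell osd.<id> injectargs '--osd-max-backfills=3'
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        cmd = json.dumps({'prefix': 'injectargs',
                          'injected_args': ['--osd-max-backfills=3']})
        for osd_id in range(60):   # assumed id range; adjust to your OSDs
            try:
                ret, outbuf, outs = cluster.osd_command(osd_id, cmd, b'')
                print('osd.%d: ret=%d %s' % (osd_id, ret, outs))
            except rados.Error as e:
                # e.g. a down or nonexistent OSD id
                print('osd.%d: %s' % (osd_id, e))
    finally:
        cluster.shutdown()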
I've actually been working on this issue for a week now and my compa
Hello,
I have an old ceph 0.94.10 cluster that had 10 storage nodes with one extra
management node used for running commands on the cluster. Over time we'd
had some hardware failures on some of the storage nodes, so we're down to
6, with ceph-mon running on the management server and 4 of the stora