Hi,
On 18.12.20 at 17:56, Dallas Jones wrote:
> As you can see from the partial output of ceph -s, I left a bunch of crap
> spread across the OSDs...
>
> pools: 8 pools, 32 pgs
> objects: 219 objects, 1.2 KiB
Just remove all pools and create new ones. Removing the pools also
removes the objects stored in them.
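A minimal sketch of that, assuming a pool called mypool (the name and PG
count are only placeholders) and that the mons allow pool deletion:

    # allow pool deletion on the monitors (disabled by default)
    ceph config set mon mon_allow_pool_delete true
    # delete and recreate the pool; "mypool" and the PG count are placeholders
    ceph osd pool delete mypool mypool --yes-i-really-really-mean-it
    ceph osd pool create mypool 32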
Depending on your actual OSD setup (separate RocksDB/WAL), simply
deleting pools won't immediately delete the remaining objects. The DBs
are cleaned up quite slowly, which can leave you with completely
saturated disks. This has been explained multiple times here, I just
don't have a link at hand.
On Fri, Dec 18, 2020 at 6:28 AM Stefan Kooman wrote:
> >> I have searched through documentation but I don't see anything
> >> related. It's also not described / suggested in the part about upgrading
> >> the MDS cluster (IMHO that would be a logical place) [1].
> >
> > You're the first person I'm
Hi,
I'm facing something strange! One of the PGs in my pool became inconsistent,
and when I ran `rados list-inconsistent-obj $PG_ID --format=json-pretty`
the `inconsistents` key was empty! What is this? Is it a bug in Ceph or...?
Thanks.
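For reference, roughly what I ran; re-triggering a deep scrub before
listing again is just a guess on my part:

    # force a fresh deep scrub of the PG, then list the inconsistencies again
    ceph pg deep-scrub $PG_ID
    rados list-inconsistent-obj $PG_ID --format=json-pretty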
Hi,
I used radosgw-admin reshard process to run a manual bucket reshard;
after it completes, it logs the error below:
ERROR: failed to process reshard logs, error=(2) No such file or directory
I've added a bucket to the resharding queue with radosgw-admin reshard add
--bucket bucket-tmp --num-shar
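For reference, the whole sequence I'm following looks roughly like this;
the shard count below is only an example, not the value I actually used:

    # queue the bucket, check the queue, process it, then check the result
    radosgw-admin reshard add --bucket bucket-tmp --num-shards 64
    radosgw-admin reshard list
    radosgw-admin reshard process
    radosgw-admin reshard status --bucket bucket-tmp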