Hello,

I've reviewed some recent posts on this list and also searched Google for
information about the autoscaler and overlapping roots.  Nothing I've found
so far explains, in terms I can follow, how to fix the issue - probably
because I don't work with CRUSH on a regular basis.

My particulars:  I have an up-to-date Reef cluster in production with some
EC pools and some replicated pools.  My error messages look like:

pool default.rgw.buckets.data won't scale due to overlapping roots: {-1, -2}
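
For reference, I've only been poking at this with the standard status and
dump commands, nothing exotic:

    ceph osd pool autoscale-status         # autoscaler's view of each pool
    ceph osd pool ls detail                # shows which crush_rule each pool uses
    ceph osd crush rule dump               # the rule definitions quoted below
    ceph osd crush tree --show-shadow      # shows the default~hdd shadow root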


From what I've read, and from the output of 'ceph osd crush rule dump', it
looks like the 8 replicated pools have

                    "op": "take",
                    "item": -1,
                    "item_name": "default"

whereas the 2 EC pools have

                    "op": "take",
                    "item": -2,
                    "item_name": "default~hdd"

To be clear, all of my OSDs are identical - HDD with SSD WAL/DB.
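
My best guess from what I've read is that the fix is to make all the rules
resolve to the same (shadow) root, e.g. by giving the replicated pools a
device-class-aware rule so they also take default~hdd.  Is it really as
simple as something like the sketch below?  (The rule name and pool name are
just placeholders, I've assumed a host failure domain, and I have not run
this against the cluster.)

    # Create a replicated rule restricted to the hdd device class,
    # so it takes the default~hdd shadow root like the EC rules do.
    ceph osd crush rule create-replicated replicated-hdd default host hdd

    # Then repoint each of the 8 replicated pools at the new rule:
    ceph osd pool set <pool-name> crush_rule replicated-hdd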

Please advise on how to fix this.

Thanks.

-Dave

--
Dave Hall
Binghamton University
kdh...@binghamton.edu
