The migration completed flawlessly, without any issues or slow requests.
Thanks.
k
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
On 02/01/2018 08:56 PM, David Turner wrote:
You can attempt to mitigate this by creating new, duplicate rules and
changing 1 pool at a time to start using them.
Yes, I'm already prepared to use this strategy.
k
It doesn't matter what your failure domain is; the data movement required
to change your crush rules to use device classes is significant. You can
attempt to mitigate this by creating new, duplicate rules and changing 1 pool
at a time to start using them. In that way you can somewhat control the
backfilling.
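For example, something along these lines on a Luminous cluster (the rule
names, the "default" root, the "host" failure domain and the pool name are
only placeholders, adjust them to your own map):

  # create device-class rules alongside the existing ones
  ceph osd crush rule create-replicated replicated_hdd default host hdd
  ceph osd crush rule create-replicated replicated_nvme default host nvme

  # keep backfill gentle while pools are switched over
  ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'

  # point one pool at the new rule, wait for HEALTH_OK, then do the next
  ceph osd pool set <pool> crush_rule replicated_hdd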
We had MASSIVE data movement upon changing the crush rules to
device-class-based ones. I'm not sure about the exact reasons, but I assume
that the order of hosts in the crush tree changed (hosts are ordered
lexically now...), which resulted in about 80% of the data being moved around.
What is your failure domain?
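One way to gauge the impact before applying such a change is to test an
edited crush map offline against the current osdmap. A rough sketch (file
names are arbitrary; only the two "get" commands talk to the cluster, and
they are read-only):

  ceph osd getmap -o osdmap.bin
  ceph osd getcrushmap -o crushmap.bin
  crushtool -d crushmap.bin -o crushmap.txt    # edit rules / device classes here
  crushtool -c crushmap.txt -o crushmap.new
  osdmaptool osdmap.bin --test-map-pgs-dump > pgs.before
  cp osdmap.bin osdmap.test
  osdmaptool osdmap.test --import-crush crushmap.new
  osdmaptool osdmap.test --test-map-pgs-dump > pgs.after
  diff pgs.before pgs.after | grep -c '^>'     # rough count of PG mappings that change

Comparing the two dumps shows roughly how many PGs would be remapped, so
surprises like the lexical reordering become visible before anything is
injected into the live cluster.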
Hi,
On 02/01/2018 10:43 AM, Konstantin Shalygin wrote:
Hi cephers.
I have a typical double-root crush hierarchy - separate roots for the nvme
pools and the hdd pools - created on a Kraken cluster (what I mean:
http://cephnotes.ksperis.com/blog/2015/02/02/crushmap-example-of-a-hierarchical-cluster-map).
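(For context, a minimal excerpt of what such a pre-Luminous double-root map
typically looks like; the bucket and host names below are invented, the real
layout follows the pattern from the linked post:)

  root hdd {
          id -1
          alg straw
          hash 0
          item node1-hdd weight 10.000
          item node2-hdd weight 10.000
  }
  root nvme {
          id -10
          alg straw
          hash 0
          item node1-nvme weight 1.000
          item node2-nvme weight 1.000
  }
  rule hdd {
          ruleset 1
          type replicated
          min_size 1
          max_size 10
          step take hdd
          step chooseleaf firstn 0 type host
          step emit
  }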
Now the cluster has been upgraded to