Michael;

I run a Nautilus cluster, but in my case all I had to do was change the CRUSH
rule associated with the pool, and Ceph moved the data automatically.
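A minimal sketch of that approach: create a replicated rule restricted to the
ssd device class, then point each pool at it. The rule name "replicated_ssd",
the failure domain "host", and the pool names below are example choices; the
actual rgw pool names can be listed with `ceph osd pool ls`.

```shell
# Create a replicated CRUSH rule that only selects OSDs with
# device class "ssd", with "host" as the failure domain.
ceph osd crush rule create-replicated replicated_ssd default host ssd

# Reassign a radosgw pool to the new rule; Ceph then migrates the
# pool's data onto the SSD OSDs on its own. Repeat per pool, e.g.:
ceph osd pool set .rgw.root crush_rule replicated_ssd
ceph osd pool set default.rgw.meta crush_rule replicated_ssd
```

Note that the migration triggers backfill; on a pool that already holds data
it is worth doing during a quiet period.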

Thank you,

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International Inc.
dhils...@performair.com 
www.PerformAir.com



-----Original Message-----
From: Michael Thomas [mailto:w...@caltech.edu] 
Sent: Tuesday, November 10, 2020 1:32 PM
To: ceph-users@ceph.io
Subject: [ceph-users] safest way to re-crush a pool

I'm setting up a radosgw for my ceph Octopus cluster.  As soon as I 
started the radosgw service, I noticed that it created a handful of new 
pools.  These pools were assigned the 'replicated_data' crush rule 
automatically.

I have a mixed hdd/ssd/nvme cluster, and this 'replicated_data' crush 
rule spans all device types.  I would like radosgw to use a replicated 
SSD pool and avoid the HDDs.  What is the recommended way to change the 
crush device class for these pools without risking the loss of any data 
in the pools?  I will note that I have not yet written any user data to 
the pools.  Everything in them was added by the radosgw process 
automatically.

--Mike
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
