> On 26/03/2015, at 21.07, J-P Methot <jpmet...@gtcomm.net> wrote:
> 
> That's a great idea. I know I can set up cinder (the openstack volume manager) 
> as a multi-backend manager and migrate from one backend to the other, each 
> backend linking to different pools of the same ceph cluster. What bugs me 
> though is that I'm pretty sure the image store, glance, wouldn't let me do 
> that. Additionally, since the compute component also has its own ceph pool, 
> I'm pretty sure it won't let me migrate the data through openstack.
Hm, wouldn’t it be possible to do something similar, along these lines:

# list objects in the source pool and copy each one via the local disk
# (add a grep after the ls if you need to filter on object ids)
rados -p pool-with-too-many-pgs ls | while read -r obj; do
     # export $obj from the old pool to a local file
     rados -p pool-with-too-many-pgs get "$obj" /tmp/"$obj"
     # import the local file into the new pool under the same name
     rados -p better-sized-pool put "$obj" /tmp/"$obj"
     # clean up the temporary copy
     rm -f /tmp/"$obj"
done
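
As a quick sanity check afterwards (just a sketch, reusing the same hypothetical pool names), the object counts of source and destination pools should end up matching:

# both counts should match once the copy loop above has finished
rados -p pool-with-too-many-pgs ls | wc -l
rados -p better-sized-pool ls | wc -l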

You could possibly split/partition the list of objects into multiple concurrent 
loops, possibly run from multiple boxes, as seems fit for the resources at hand 
(CPU, memory, network, Ceph performance); a sketch of that follows below.
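
A minimal sketch of that idea, assuming GNU split is available, reusing the same 
hypothetical pool names, and assuming object names contain no '/': the object 
list is dumped once and each chunk gets its own background copy loop:

# dump the object list once and split it into 4 roughly equal chunks
rados -p pool-with-too-many-pgs ls > objlist
split -n l/4 objlist objlist.part.
for part in objlist.part.*; do
     while read -r obj; do
          rados -p pool-with-too-many-pgs get "$obj" /tmp/"$obj"
          rados -p better-sized-pool put "$obj" /tmp/"$obj"
          rm -f /tmp/"$obj"
     done < "$part" &   # one concurrent loop per chunk
done
wait   # block until all background loops have finished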

/Steffen

>  
> 
> 
> On 3/26/2015 3:54 PM, Steffen W Sørensen wrote:
>>> On 26/03/2015, at 20.38, J-P Methot <jpmet...@gtcomm.net> wrote:
>>> 
>>> Lately I've been going back to work on one of my first ceph setup and now I 
>>> see that I have created way too many placement groups for the pools on that 
>>> setup (about 10 000 too many). I believe this may impact performance 
>>> negatively, as the performance on this ceph cluster is abysmal. Since it 
>>> is not possible to reduce the number of PGs in a pool, I was thinking of 
>>> creating new pools with a smaller number of PGs, moving the data from the 
>>> old pools to the new pools and then deleting the old pools.
>>> 
>>> I haven't seen any command to copy objects from one pool to another. Would 
>>> that be possible? I'm using ceph for block storage with openstack, so 
>>> surely there must be a way to move block devices from one pool to another, 
>>> right?
>> What I did at one point was going one layer higher in my storage abstraction: 
>> I created new Ceph pools and used those for new storage resources/pools in 
>> my VM env. (ProxMox) on top of Ceph RBD, and then did a live migration of 
>> the virtual disks there. I assume you could do the same in OpenStack.
>> 
>> My 0.02$
>> 
>> /Steffen
> 
> 
> -- 
> ======================
> Jean-Philippe Méthot
> Administrateur système / System administrator
> GloboTech Communications
> Phone: 1-514-907-0050
> Toll Free: 1-(888)-GTCOMM1
> Fax: 1-(514)-907-0750
> jpmet...@gtcomm.net
> http://www.gtcomm.net
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
