In practice, it's not a big deal. Just deploy your disks sequentially, and Ceph will sort it out.

Sure, you'll waste a bit of time watching data copy to a new disk, only to see it get remapped to an even newer disk. That's a small amount of time relative to how long it will take to remap data onto all of the new disks anyway. Ceph handles it fine, and won't lose data.


If you still want to, check out the "mon osd auto mark new in" setting: http://ceph.com/docs/master/rados/configuration/mon-osd-interaction/#configuration-settings
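
For example, a minimal ceph.conf sketch (the [mon] section and the value of false here are my assumption about how you'd use it, not something copied from that page):

    [mon]
        # Don't automatically mark newly created OSDs "in";
        # they stay "out" until an operator marks them in explicitly.
        mon osd auto mark new in = false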


Alfredo's comments about a race condition shouldn't apply, because you'll create the OSDs sequentially, and only mark them up and in after they're created.
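
For what it's worth, the sequential workflow would look roughly like this (host names, device names, and OSD ids are hypothetical; adjust the ceph-deploy host:disk arguments to your cluster):

    # Create the OSDs one at a time, waiting for each command to finish.
    ceph-deploy osd create ceph-node1:/dev/sdb
    ceph-deploy osd create ceph-node1:/dev/sdc
    ceph-deploy osd create ceph-node2:/dev/sdb

    # With "mon osd auto mark new in = false", the new OSDs stay out.
    # Once they're all created, mark them in and let the cluster rebalance.
    ceph osd in 12
    ceph osd in 13
    ceph osd in 14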



On 5/27/14 07:20, Alfredo Deza wrote:
There is no support for simultaneous deployment in ceph-deploy, as that is
not something needed by users setting up a cluster to try out
Ceph (the main objective of ceph-deploy).

However, some users have written scripts that call
ceph-deploy in parallel, so it might be possible to do so.

For OSDs specifically, be warned about a possible race condition; you
can find a bit of info in this ticket:
http://tracker.ceph.com/issues/3309
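
A rough sketch of that kind of wrapper, with hypothetical host and device names, and subject to the same race condition caveat:

    # Hypothetical parallel wrapper; host/device names are placeholders.
    # See http://tracker.ceph.com/issues/3309 before relying on this.
    for host in ceph-node1 ceph-node2 ceph-node3; do
        ceph-deploy osd prepare ${host}:/dev/sdb &
    done
    wait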



On Mon, May 26, 2014 at 2:14 AM, Cao, Buddy <buddy....@intel.com> wrote:
Hi, does ceph-deploy support deploying OSDs simultaneously in a large-scale
cluster? It looks like mkcephfs does not support simultaneous OSD deployment.

If there are many hosts with a very large number and size of OSDs/devices, how do
I improve the performance of deploying the whole cluster at once?

Wei Cao (Buddy)



--

Craig Lewis
Senior Systems Engineer
Office +1.714.602.1309
Email cle...@centraldesktop.com


