BTW, I'd like to know: after I change "rack" to "host" in the rule, if I add 
more racks with hosts/OSDs to the cluster, will Ceph choose the OSDs for a PG 
from only one rack, or will it choose randomly from several different racks?
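For example, would a rule like the following be needed to explicitly spread 
replicas across racks and then pick one host in each? (Just a sketch, assuming 
the same root bucket as in my rule below.)

    step take root
    # pick N distinct racks, one per replica
    step choose firstn 0 type rack
    # then pick one leaf (osd under a host) inside each chosen rack
    step chooseleaf firstn 1 type host
    step emit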


Wei Cao (Buddy)

-----Original Message-----
From: Cao, Buddy 
Sent: Wednesday, May 14, 2014 1:30 PM
To: 'Gregory Farnum'
Cc: ceph-users@lists.ceph.com
Subject: RE: [ceph-users] crushmap question

Thanks so much, Gregory, it solved the problem!


Wei Cao (Buddy)

-----Original Message-----
From: Gregory Farnum [mailto:g...@inktank.com]
Sent: Wednesday, May 14, 2014 2:00 AM
To: Cao, Buddy
Cc: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] crushmap question

You just use a type other than "rack" in your chooseleaf rule. In your case, 
"host". When using chooseleaf, the bucket type you specify is the failure 
domain which it must segregate across.
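For example, your rule with the failure domain changed to host would look like 
this (same structure as your rule, only the chooseleaf type changes):

    rule ssd {
        ruleset 1
        type replicated
        min_size 0
        max_size 10
        step take root
        # segregate replicas across hosts instead of racks
        step chooseleaf firstn 0 type host
        step emit
    }

With only one rack, CRUSH can then still find enough distinct hosts to place 
each replica.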
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com


On Tue, May 13, 2014 at 12:52 AM, Cao, Buddy <buddy....@intel.com> wrote:
> Hi,
>
>
>
> I have a crushmap structure like root->rack->host->osds. I designed 
> the rule below. Since I used “chooseleaf…rack” in the rule definition, if 
> there is only one rack in the cluster, the ceph pgs will always stay 
> stuck in the unclean state (that is because the default metadata/data/rbd 
> pools are set to 2 replicas).
> Could you let me know how to configure the rule so that it can also 
> work in a cluster with only one rack?
>
>
>
> rule ssd {
>
>     ruleset 1
>     type replicated
>     min_size 0
>     max_size 10
>     step take root
>     step chooseleaf firstn 0 type rack
>     step emit
>
> }
>
>
>
> BTW, if I add a new rack into the crushmap, the pg status will finally 
> get to active+clean. However, my customer has ONLY one rack in their 
> env, so it is hard for me to ask them to set up several racks as a workaround.
>
>
>
> Wei Cao (Buddy)
>
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
