The monitor requires at least <reports> total reports, coming from a set of distinct OSDs whose size is at least <reporters>. So with reporters = 9 and reports = 3, it would wait until 9 OSDs had reported an OSD down (effectively ignoring the reports setting, since it is the smaller of the two thresholds). -Greg
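For illustration, here is a minimal sketch of the acceptance logic described above. This is not Ceph's actual implementation; the function and variable names are hypothetical, and real monitors also weigh report lifetimes and other factors.

    # Sketch only: assumes one monitor-side tally of failure reports.
    from collections import defaultdict

    MON_OSD_MIN_DOWN_REPORTERS = 9   # distinct OSDs that must report the failure
    MON_OSD_MIN_DOWN_REPORTS = 3     # total failure reports required

    # reports_for[target_osd] -> list of reporter OSD ids, one entry per report
    reports_for = defaultdict(list)

    def mark_down(target):
        print(f"osd.{target} marked down")

    def handle_failure_report(reporter, target):
        """Record one failure report and decide whether to mark `target` down."""
        reports_for[target].append(reporter)
        total_reports = len(reports_for[target])
        distinct_reporters = len(set(reports_for[target]))
        # Both thresholds must be met. With reporters = 9 and reports = 3,
        # the reporters condition dominates: 9 distinct OSDs imply >= 9 reports.
        if (total_reports >= MON_OSD_MIN_DOWN_REPORTS
                and distinct_reporters >= MON_OSD_MIN_DOWN_REPORTERS):
            mark_down(target)

With these example values, once 9 distinct OSDs have each sent one report, the total report count is already well past 3, which is why the reports threshold is effectively a no-op in this configuration.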
On Friday, April 25, 2014, Craig Lewis <cle...@centraldesktop.com> wrote:
> I was reading about mon osd min down reports at
> http://ceph.com/docs/master/rados/configuration/mon-osd-interaction/, and
> I had a question.
>
> Are mon osd min down reporters and mon osd min down reports both required
> to mark an OSD down, or just one?
>
> For example, if I set
>
> [global]
> mon osd min down reporters = 9
> mon osd min down reports = 3
>
> Does that mean that 9 OSDs have to report an OSD down AND one of them has
> to report it 3 times? Or does it mean a single OSD reporting another OSD
> down 3 times will still mark it down?
>
> In the first case, reports = 3 seems like a good number. In the second
> case, it seems that reports should be (reporters * 3).
>
> I'm thinking that in real clusters, reporters should be:
>
> - larger than the largest number of OSDs in a single node
> - less than (Num OSDs / 2)
> - less than the smallest number of PGs per OSD
>
> Those rules might conflict for 1-3 node clusters, but shouldn't cause
> problems for larger clusters.
>
> Thoughts?
>
> --
> Craig Lewis
> Senior Systems Engineer
> Office +1.714.602.1309
> Email cle...@centraldesktop.com

--
Software Engineer #42 @ http://inktank.com | http://ceph.com