If you never ran "ceph osd rm" then the monitors still believe it's an existing
OSD. You can run that command after doing the crush rm steps, but you
should definitely run it.
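For reference, the full removal sequence usually looks something like this (a
sketch; it assumes the OSD id is N and its daemon has already been stopped):

  ceph osd out N                 # stop mapping data to it
  ceph osd crush remove osd.N    # take it out of the CRUSH map
  ceph auth del osd.N            # drop its cephx key
  ceph osd rm N                  # remove it from the monitors' OSD map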
On Friday, April 11, 2014, Chad Seys wrote:
> Hi Greg,
> > How many monitors do you have?
>
> 1 . :)
>
> It's also possible that re-used numbers won't get caught in this,
> depending on the process you went through to clean them up.
Hi Greg,
> How many monitors do you have?
1 . :)
> It's also possible that re-used numbers won't get caught in this,
> depending on the process you went through to clean them up, but I
> don't remember the details of the code here.
Yeah, too bad. I'm following the standard removal procedure in
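One way to check which ids the monitors still know about (and hence which
numbers may get re-used) is to compare the CRUSH map with the OSD map:

  ceph osd tree    # OSDs present in the CRUSH map
  ceph osd dump    # every id the monitors still carry in the OSD map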
On 04/11/2014 04:01 AM, Chad William Seys wrote:
Hi Greg,
Looks promising...
I added
[global]
...
mon osd auto mark new in = false
Or this one:
[osd]
osd crush update on start = false
That will prevent the OSDs from updating their weight or adding
themselves to the crushmap on start
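With that set, new OSDs will start but stay outside CRUSH until you place them
yourself, so you can add a whole batch in one go; a sketch with made-up ids,
weights and hostnames:

  ceph osd crush add osd.3 1.0 host=osd02
  ceph osd crush add osd.4 1.0 host=osd02
  ceph osd in 3    # only needed if they are not already marked "in"
  ceph osd in 4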
How many monitors do you have?
It's also possible that re-used numbers won't get caught in this,
depending on the process you went through to clean them up, but I
don't remember the details of the code here.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
Hi Greg,
Looks promising...
I added
[global]
...
mon osd auto mark new in = false
then pushed config to monitor
ceph-deploy --overwrite-conf config push mon01
then restart monitor
/etc/init.d/ceph restart mon
then tried
ceph-deploy --overwrite-conf disk prepare --zap-disk osd02:sde
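One way to confirm the monitor actually picked the option up after the restart
is its admin socket (the socket name will depend on your mon id):

  ceph --admin-daemon /var/run/ceph/ceph-mon.mon01.asok config show | grep mon_osd_auto_mark_new_in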
Sounds like you want to explore the auto-in settings, which can
prevent new OSDs from being automatically accepted into the cluster.
Should turn up if you search ceph.com/docs. :)
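For the record, the options in that family are (check the config reference for
the current defaults):

  mon osd auto mark new in       # mark brand-new OSDs "in" as they are created
  mon osd auto mark in           # mark any booting OSD "in"
  mon osd auto mark auto out in  # mark OSDs "in" again if they were automatically marked "out"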
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Thu, Apr 10, 2014 at 1:45 PM, Chad William Seys wrote:
> Hi All
Hi All,
Is there a way to prepare the drives of multiple OSDs and then bring them
into the CRUSH map all at once?
Right now I'm using:
ceph-deploy --overwrite-conf disk prepare --zap-disk $NODE:$DEV
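A sketch of how the pieces above fit together, with hypothetical hostnames,
devices, ids and weights: put "osd crush update on start = false" in ceph.conf
on the OSD hosts first, then:

  # prepare/zap every disk; the OSDs will start but stay out of the CRUSH map
  for DEV in sdb sdc sdd; do
      ceph-deploy --overwrite-conf disk prepare --zap-disk osd02:$DEV
  done

  # once they are all up, place them in CRUSH in one batch so the data
  # movement happens only once
  ceph osd crush add osd.10 1.0 host=osd02
  ceph osd crush add osd.11 1.0 host=osd02
  ceph osd crush add osd.12 1.0 host=osd02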