Yeah. If you run "ceph auth list" you'll get a dump of all the users and keys 
the cluster knows about; each of your daemons has its own key stored somewhere 
locally (generally in /var/lib/ceph/ceph-[osd|mds|mon].$id). You can create a 
new key or copy an unused MDS one. I believe the docs include information on 
how this works. 
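
For example, something like this should mint and store a key for a new MDS 
(the name "a", the caps, and the keyring path here are just illustrative): 

    ceph auth get-or-create mds.a mon 'allow rwx' osd 'allow *' mds 'allow' \
        -o /var/lib/ceph/mds/ceph-a/keyring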
-Greg

Software Engineer #42 @ http://inktank.com | http://ceph.com


On Wednesday, March 20, 2013 at 10:48 AM, Igor Laskovy wrote:

> Well, can you please clarify exactly which key I must use? Do I need to 
> get/generate it somehow from the working cluster?
> 
> 
> On Wed, Mar 20, 2013 at 7:41 PM, Greg Farnum <g...@inktank.com> wrote:
> > The MDS doesn't have any local state. You just need to start up the daemon 
> > somewhere with a name and key that are known to the cluster (these can be 
> > different from or the same as the ones that existed on the dead node; it 
> > doesn't matter!).
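> > 
> > For instance, assuming a key for "mds.a" is already known to the cluster 
> > and sits in the local keyring (the name is just an example), starting it 
> > on whichever host you pick is just something like: 
> > 
> > ceph-mds -i a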
> > -Greg
> > 
> > Software Engineer #42 @ http://inktank.com | http://ceph.com
> > 
> > 
> > On Wednesday, March 20, 2013 at 10:40 AM, Igor Laskovy wrote:
> > 
> > > Actually, I have already recovered the OSDs and the MON daemon back into 
> > > the cluster according to 
> > > http://ceph.com/docs/master/rados/operations/add-or-rm-osds/ and 
> > > http://ceph.com/docs/master/rados/operations/add-or-rm-mons/ .
> > > 
> > > But the docs are missing info about removing/adding an MDS.
> > > How can I recover the MDS daemon for a failed node?
> > > 
> > > 
> > > 
> > > On Wed, Mar 20, 2013 at 3:23 PM, Dave (Bob) <d...@bob-the-boat.me.uk> wrote:
> > > > Igor,
> > > > 
> > > > I'm fairly sure that you just have to create a new
> > > > filesystem (btrfs?) on the new block device, mount it, and then
> > > > initialise the osd with:
> > > > 
> > > > ceph-osd -i <the osd number> --mkfs
> > > > 
> > > > Then you can start the osd with:
> > > > 
> > > > ceph-osd -i <the osd number>
> > > > 
> > > > Since you are replacing an osd that already existed, the cluster knows
> > > > about it and already has a key for it.
> > > > 
> > > > I don't claim any great expertise, but this is what I've been doing, and
> > > > the cluster seems to adopt the new osd and sort everything out.
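> > > > 
> > > > For reference, the whole sequence is something like this (the device
> > > > /dev/sdb and the default osd data path are just examples from my setup):
> > > > 
> > > > mkfs.btrfs /dev/sdb
> > > > mount /dev/sdb /var/lib/ceph/osd/ceph-<the osd number>
> > > > ceph-osd -i <the osd number> --mkfs
> > > > ceph-osd -i <the osd number>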
> > > > 
> > > > David
> > > 
> > > --
> > > Igor Laskovy
> > > facebook.com/igor.laskovy
> > > Kiev, Ukraine
> 
> -- 
> Igor Laskovy
> facebook.com/igor.laskovy
> Kiev, Ukraine 



