On Fri, Sep 18, 2015 at 6:33 PM, Robert LeBlanc <rob...@leblancnet.us> wrote:
> Depends on how easy it is to rebuild an OS from scratch. If you have
> something like Puppet or Chef that configures a node completely for
> you, it may not be too much of a pain to forgo the RAID. We run our
> OSD nodes from a single SATADOM and use Puppet for configuration. We
> also don't use swap (not very effective on SATADOM), but have enough
> RAM that we feel comfortable with that decision.
>
> If you use ceph-disk or ceph-deploy to configure the OSDs, then they
> should automatically come back up when you lay down the new OS and set
> up the necessary ceph config items (ceph.conf and the OSD bootstrap
> keys).

Hello sir,

This sounds really interesting. Could you please elaborate on how, after
reinstalling the OS and installing the Ceph packages, Ceph detects the
OSDs that were previously hosted on that node?

I am using ceph-deploy to provision Ceph. What changes do I need to make
after reinstalling the OS of an OSD node so that it detects my OSD
daemons? Please help me understand this step by step.

Thanks in advance.
Vickey

> ----------------
> Robert LeBlanc
> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904 C70E E654 3BB2 FA62 B9F1
>
> On Fri, Sep 18, 2015 at 9:06 AM, Martin Palma wrote:
> > Hi,
> >
> > Is it a good idea to use software RAID for the system disk (operating
> > system) on a Ceph storage node? I mean only for the OS, not for the
> > OSD disks.
> >
> > And what about a swap partition? Is that needed?
> >
> > Best,
> > Martin
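
P.S. My tentative understanding of the steps, assuming ceph-deploy
defaults (osd-node1 and /dev/sdb1 below are just placeholders, and the
exact subcommands may differ between releases), would be roughly:

  # from the admin node: reinstall the Ceph packages on the rebuilt host
  ceph-deploy install osd-node1

  # push ceph.conf (and keys) back out to the node
  ceph-deploy admin osd-node1

  # put the OSD bootstrap key back in place on the OSD node, e.g.
  #   /var/lib/ceph/bootstrap-osd/ceph.keyring
  # (ceph-deploy keeps a ceph.bootstrap-osd.keyring copy on the admin node)

  # ceph-disk-prepared data partitions carry Ceph's GPT partition type
  # GUIDs, so udev/ceph-disk should find and mount them again on boot;
  # if an OSD does not come up on its own, activate it by hand:
  ceph-disk activate /dev/sdb1

Is that roughly right, or am I missing something?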
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com