Did you use ceph-disk before? Support for ceph-disk was removed in Nautilus; see the Nautilus upgrade instructions. You'll need to run "ceph-volume simple scan" to convert the OSDs to ceph-volume.
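Something like this should do it (a sketch only: the device path below is just a placeholder, and the legacy OSD data partitions need to be mounted, or passed to the scan explicitly, for it to find anything):

# ceph-volume simple scan
(or, for a single legacy data partition, e.g.: # ceph-volume simple scan /dev/sdb1)
# ceph-volume simple activate --all

As far as I know, the scan writes one JSON metadata file per OSD to /etc/ceph/osd/, and activate then remounts the data partitions under /var/lib/ceph/osd/ceph-*/ and enables the systemd units so the OSDs can start as before.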
Paul

--
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Wed, Jul 24, 2019 at 8:25 PM Xavier Trilla <xavier.tri...@clouding.io> wrote:
>
> Hi Peter,
>
> I'm not sure, but maybe after some changes the OSDs are not being
> recognized by the Ceph scripts.
>
> Ceph used to use udev to detect the OSDs and then moved to LVM. Which kind
> of OSDs are you running? Bluestore or filestore? Which version did you use
> to create them?
>
> Cheers!
>
> On 24 Jul 2019, at 20:04, Peter Eisch <peter.ei...@virginpulse.com> wrote:
>
> Hi,
>
> I'm working through updating from 12.2.12/luminous to 14.2.2/nautilus on
> CentOS 7.6. The managers are updated alright:
>
> # ceph -s
>   cluster:
>     id:     2fdb5976-1234-4b29-ad9c-1ca74a9466ec
>     health: HEALTH_WARN
>             Degraded data redundancy: 24177/9555955 objects degraded
>             (0.253%), 7 pgs degraded, 1285 pgs undersized
>             3 monitors have not enabled msgr2
> ...
>
> I updated ceph on an OSD host with 'yum update' and then rebooted to grab
> the current kernel. Along the way, the contents of all the directories in
> /var/lib/ceph/osd/ceph-*/ were deleted. Thus I have 16 OSDs down from this.
> I can manage the undersized PGs, but I'd like to get these drives working
> again without deleting each OSD and recreating them.
>
> So far I've pulled the respective cephx key into the 'keyring' file and
> populated 'bluestore' into the 'type' files, but I'm unsure how to get the
> lockboxes mounted so I can get the OSDs running. The osd-lockbox
> directory is otherwise untouched from when the OSDs were deployed.
>
> Is there a way to run ceph-deploy or some other tool to rebuild the mounts
> for the drives?
>
> peter
>
> Peter Eisch
> Senior Site Reliability Engineer
> T 1.612.659.3228
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com