Re: [ceph-users] Upgrading and lost OSDs

2019-11-25 Thread Brent Kennedy
From: ceph-users On Behalf Of Brent Kennedy. Sent: Friday, November 22, 2019 6:47 PM. To: 'Alfredo Deza'; 'Bob R'. Cc: ceph-users@lists.ceph.com. Subject: Re: [ceph-users] Upgrading and lost OSDs. I just ran into this today with a server we rebooted. The server has been upgrade

Re: [ceph-users] Upgrading and lost OSDs

2019-11-22 Thread Brent Kennedy
Subject: Re: [ceph-users] Upgrading and lost OSDs. On Thu, Jul 25, 2019 at 7:00 PM Bob R <b...@drinksbeer.org> wrote: I would try 'mv /etc/ceph/osd{,.old}' then run 'ceph-volume simple scan' again. We had some problems upgrading due to OSDs (perhaps initi
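A minimal sketch of that recovery path, assuming the stale scan output lives in /etc/ceph/osd and the OSDs were originally created with ceph-disk; the activate step is standard ceph-volume usage, not quoted from this message:

# mv /etc/ceph/osd{,.old}              # set aside the JSON files from the earlier scan
# ceph-volume simple scan              # re-detect the mounted/running OSDs and rewrite /etc/ceph/osd/*.json
# ceph-volume simple activate --all    # recreate the systemd units so the OSDs start on boot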

Re: [ceph-users] Upgrading and lost OSDs

2019-07-26 Thread Alfredo Deza

Re: [ceph-users] Upgrading and lost OSDs

2019-07-25 Thread Bob R

Re: [ceph-users] Upgrading and lost OSDs

2019-07-24 Thread Alfredo Deza

Re: [ceph-users] Upgrading and lost OSDs

2019-07-24 Thread Peter Eisch
From: Alfredo Deza. Date: Wednesday, July 24, 2019 at 3:02 PM. To: Peter Eisch. Cc: Paul Emmerich, "ceph-users@lists.ceph.com". Subject: Re: [ceph-users] Upgrading and lost OSDs. On Wed, Jul 24, 2019 at 3:49 PM Peter Eisch

Re: [ceph-users] Upgrading and lost OSDs

2019-07-24 Thread Alfredo Deza

Re: [ceph-users] Upgrading and lost OSDs

2019-07-24 Thread Peter Eisch
From: Alfredo Deza. Date: Wednesday, July 24, 2019 at 2:20 PM. To: Peter Eisch. Cc: Paul Emmerich, "ceph-users@lists.ceph.com". Subject: Re: [ceph-users] Upgrading and lost OSDs. On Wed, Jul 24, 2019 at 2:56 PM Peter Eisch <peter.ei...@virginpulse.com> wrote: Hi Paul,

Re: [ceph-users] Upgrading and lost OSDs

2019-07-24 Thread Alfredo Deza
> From: Paul Emmerich > Date: Wednesday, July 24, 2019 at 1:39 PM > To: Peter Eisch > Cc: Xavier Trilla, "ceph-users@lists.ceph.com" > Subject: Re: [ceph-users] Upgrading and lost OSDs > On Wed, Jul 24, 2019 at 8:36 PM Pete

Re: [ceph-users] Upgrading and lost OSDs

2019-07-24 Thread Peter Eisch
On Wed, Jul 24, 2019 at 8:36 PM Peter Eisch <peter.ei...@virginpulse.com> wrote:
# lsblk
NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda        8:0    0  1.7T  0 disk
├─sda1     8:1    0  100M  0 part
├─sda2     8:2    0  1.7T  0 part
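A layout like this (a ~100M partition plus a large block partition) is typical of a ceph-disk-created bluestore OSD. If the small partition is no longer mounted, one way to check what it holds is to mount it temporarily; the /mnt mount point below is an assumption for illustration:

# mount /dev/sda1 /mnt
# cat /mnt/whoami /mnt/type      # OSD id and objectstore type (e.g. bluestore)
# umount /mnt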

Re: [ceph-users] Upgrading and lost OSDs

2019-07-24 Thread Peter Eisch
From: Paul Emmerich. Date: Wednesday, July 24, 2019 at 1:32 PM. To: Xavier Trilla. Cc: Peter Eisch, "ceph-users@lists.ceph.com". Subject: Re: [ceph-users] Upgrading and lost OSDs. Did you use ceph-disk before? Support for ceph-disk was removed, see Nautil

Re: [ceph-users] Upgrading and lost OSDs

2019-07-24 Thread Paul Emmerich

Re: [ceph-users] Upgrading and lost OSDs

2019-07-24 Thread Peter Eisch
From: Xavier Trilla. Date: Wednesday, July 24, 2019 at 1:25 PM. To: Peter Eisch. Cc: "ceph-users@lists.ceph.com". Subject: Re: [ceph-users] Upgrading and lost OSDs. Hi Peter, I'm not sure, but maybe after some changes the OSDs are not being recognized by the ceph scripts. Ceph used to use udev

Re: [ceph-users] Upgrading and lost OSDs

2019-07-24 Thread Paul Emmerich
Did you use ceph-disk before? Support for ceph-disk was removed, see the Nautilus upgrade instructions. You'll need to run "ceph-volume simple scan" to convert them to ceph-volume. Paul -- Paul Emmerich Looking for help with your Ceph cluster? Contact us at https://croit.io croit GmbH Freseniusstr
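A hedged sketch of that conversion for a single OSD whose data directory is still mounted at /var/lib/ceph/osd/ceph-0; the OSD id, path, and fsid placeholder are illustrative, not taken from this thread:

# ceph-volume simple scan /var/lib/ceph/osd/ceph-0     # writes /etc/ceph/osd/0-<osd-fsid>.json
# ceph-volume simple activate 0 <osd-fsid>             # or: ceph-volume simple activate --all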

Re: [ceph-users] Upgrading and lost OSDs

2019-07-24 Thread Xavier Trilla
Hi Peter, I'm not sure, but maybe after some changes the OSDs are not being recognized by the ceph scripts. Ceph used to use udev to detect the OSDs and then moved to LVM. Which kind of OSDs are you running? Bluestore or filestore? Which version did you use to create them? Cheers! On 24 Jul 2019,
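If the monitors are reachable, one way to answer those questions is "ceph osd metadata", which records the objectstore and the Ceph release each OSD last reported; the OSD id 0 below is illustrative:

# ceph osd metadata 0 | grep -E '"osd_objectstore"|"ceph_version"'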

[ceph-users] Upgrading and lost OSDs

2019-07-24 Thread Peter Eisch
Hi, I'm working through updating from 12.2.12/luminous to 14.2.2/nautilus on CentOS 7.6. The managers are updated alright:
# ceph -s
  cluster:
    id:     2fdb5976-1234-4b29-ad9c-1ca74a9466ec
    health: HEALTH_WARN
            Degraded data redundancy: 24177/9555955 objects degraded (0.253%)
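During a rolling upgrade like this, two standard checks (not quoted from the thread) show which release each daemon is running and which OSDs are currently down:

# ceph versions        # per-release counts for mon, mgr, osd, mds
# ceph osd tree down   # only the OSDs (and their hosts) currently marked down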