5 pgs peering; 915 pgs stuck inactive;
I guess it's due to the failing OSD.
I guess I could remove the OSD and add it back as a new one, but it's always
interesting to know what's actually wrong.
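One way to see whether the stuck PGs line up with the failing OSD is to filter them by acting set. The sample text below is hypothetical output in the shape of `ceph pg dump_stuck inactive` (columns assumed here as pg_stat, state, up, acting); on a live cluster, pipe the real command instead, and note the column layout varies between releases:

```shell
# Hypothetical sample output of `ceph pg dump_stuck inactive`
# (columns assumed: pg_stat state up acting)
sample_output='1.a5 peering [3,7] [3,7]
1.b2 peering [3,2] [3,2]
2.1c inactive [5,1] [5,1]'

# Print PGs whose acting set contains the suspect OSD (osd.3 here)
echo "$sample_output" | awk -v osd=3 '$4 ~ ("[[,]" osd "[],]") { print $1 }'
# prints 1.a5 and 1.b2
```

If most of the stuck PGs map to the one failing OSD, that supports the guess above.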
/Regards Martin
Hello,
I have a ceph cluster where one OSD is failing to start. I have been
upgrading Ceph to see if the error disappeared. Now I'm running Jewel but I
still get the error message.
-31> 2016-07-13 17:03:30.474321 7fda18a8b700 2 -- 10.0.6.21:6800/1876
>> 10.0.5.71:6789/0 pipe(0x7fdb5712a
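If the OSD dies before logging anything conclusive, raising the debug levels for that daemon usually shows where startup aborts. A hedged ceph.conf fragment (the levels are illustrative; 20 is very verbose):

```ini
[osd]
    debug osd = 20
    debug filestore = 20
    debug journal = 20
    debug ms = 1
```

Then restart just the failing OSD and check `/var/log/ceph/ceph-osd.<id>.log` for the last lines before the crash.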
>
> Hello,
>
> On Fri, 22 Apr 2016 06:20:17 +0200 Martin Wilderoth wrote:
>
> > I have a ceph cluster and I will change my journal devices to new SSDs.
> >
> > In some instructions of doing this they refer to a journal file (link to
> > UUID of jou
On 22 April 2016 at 09:09, Csaba Tóth wrote:
> Hi!
>
> I use ceph hammer on ubuntu 14.04.
> Please advise on the best way to upgrade: first the OS to 16.04 and then
> Ceph to Jewel, or first Ceph and then the OS?
>
> Thanks,
> Csaba
>
> Hello,
I have not tested to upgrade to jew
I have a ceph cluster and I will change my journal devices to new SSDs.
Some instructions for doing this refer to a journal file (a link to the
UUID of the journal).
In my OSD folder this journal doesn't exist.
These instructions rename the UUID of the new device to the old UUID so as
not to break anything.
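For context on the missing file: in the default layout, an OSD's journal lives at `/var/lib/ceph/osd/ceph-<id>/journal`, either as a symlink to a partition (often via its `/dev/disk/by-partuuid` path) or as a plain file journal inside the data directory. A minimal sketch to check which case applies, assuming the default path and a hypothetical osd.2:

```shell
# Report how an OSD's journal is wired up. The data path and OSD id are
# assumptions -- adjust for the cluster at hand.
describe_journal() {
    j="$1"
    if [ -L "$j" ]; then
        # raw-partition journal: the symlink target is the device
        echo "symlink -> $(readlink "$j")"
    elif [ -f "$j" ]; then
        # co-located file journal inside the OSD data directory
        echo "file journal ($(stat -c %s "$j") bytes)"
    else
        echo "no journal at $j"
    fi
}

describe_journal /var/lib/ceph/osd/ceph-2/journal
```

When swapping in the new SSD, the documented journal-replacement sequence is: stop the OSD, run `ceph-osd -i <id> --flush-journal`, repoint the symlink at the new partition, run `ceph-osd -i <id> --mkjournal`, then start the OSD again. Renaming the new partition's UUID to the old one is essentially a way of skipping the symlink repointing step.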