Did you run an upstream Ceph version previously? Or did you shut down the
running ceph-osd daemons while upgrading the OSDs?

How many OSDs hit this problem?

This assert failure means that the OSD detected an upgraded PG meta object
but failed to read the expected meta keys from it (or at least one key is
missing).
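
Roughly, the check works like this (a Python sketch of the logic only;
Ceph itself is C++, and the key names below are assumptions for
illustration, not the exact ones):

    # Sketch: the OSD reads the meta keys back from an upgraded PG meta
    # object; if reading fails or even one key is absent, the assert
    # fires and the OSD aborts at startup.
    REQUIRED_KEYS = {"_infover", "_info", "_biginfo"}  # assumed names

    def check_pg_meta(omap_keys):
        missing = REQUIRED_KEYS - set(omap_keys)
        assert not missing, "pg meta object missing keys: %s" % sorted(missing)

So if even one expected key is gone after the upgrade, the OSD refuses to
start.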

On Thu, Jul 23, 2015 at 7:03 PM, Udo Lembke <ulem...@polarzone.de> wrote:
> Am 21.07.2015 12:06, schrieb Udo Lembke:
>> Hi all,
>> ...
>>
>> Normally I would say, if one OSD node dies, I simply reinstall the OS and Ceph
>> and I'm back again... but this looks bad for me.
>> Unfortunately the system also won't start 9 of the OSDs after I switched back
>> to the old system disk... (only three of the big OSDs are running well)
>>
>> What is the best solution for that? Empty one node (crush weight 0), freshly
>> reinstall OS/Ceph, and reinitialise all OSDs?
>> That would take a very long time, because we use 173 TB in this cluster...
>>
>>
>
> Hi,
> answering myself, in case anybody has similar issues and finds this posting.
>
> Emptying whole nodes takes too long.
> I used the Puppet wheezy system and had to recreate all OSDs (in this case I
> needed to zero the first blocks of the journal before creating each OSD
> again).
>
>
> Udo
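
For anyone who finds this thread later: zeroing the first blocks of a
journal partition, as Udo describes, can be scripted roughly as below.
This is only a sketch; the device path and the amount to zero are
placeholders you must adapt, and writing to the wrong device will destroy
data.

    # Python sketch: wipe the start of a journal device with zeros before
    # recreating the OSD. DEVICE and ZERO_BYTES are assumed placeholders.
    import os

    DEVICE = "/dev/sdb2"            # hypothetical journal partition
    ZERO_BYTES = 100 * 1024 * 1024  # first 100 MiB, an assumed size
    CHUNK = 1024 * 1024

    with open(DEVICE, "r+b") as dev:  # "r+b" overwrites in place
        written = 0
        while written < ZERO_BYTES:
            dev.write(b"\x00" * CHUNK)
            written += CHUNK
        dev.flush()
        os.fsync(dev.fileno())

The usual one-liner with dd achieves the same thing; the point is simply
that the stale journal header must be wiped before the OSD is created
again.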



-- 
Best Regards,

Wheat
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
