Hi!

I'm sorry, but I don't know how to help you. We moved OSDs from XFS to EXT4 on our test
cluster (Hammer 0.94.2), removing OSDs one by one and re-adding them after reformatting
to EXT4. This is the usual Ceph procedure ("Add/Remove OSDs" in the documentation) and it
completed without any data loss; a rough sketch of the commands is below the link.
We also changed the ruleset, as described in Sébastien Han's blog:

http://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
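
Roughly the per-OSD cycle we used (from memory, so treat it as a sketch and check the
"Add/Remove OSDs" documentation for your release; the OSD id, device path and
pool/ruleset names are just placeholders):

ceph osd out 12                             # let data drain, wait for active+clean
/etc/init.d/ceph stop osd.12                # or: systemctl stop ceph-osd@12 on systemd hosts
ceph osd crush remove osd.12                # drop it from the crush map
ceph auth del osd.12                        # and from auth
ceph osd rm 12                              # and from the osd map
ceph-disk prepare --fs-type ext4 /dev/sdX   # reformat and re-create the OSD on the same disk
ceph-disk activate /dev/sdX1
# ...wait for HEALTH_OK, then repeat for the next OSD.
# After the last one we pointed the pools at the new ruleset, as in the blog post:
ceph osd pool set <poolname> crush_ruleset <ruleset_id>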

That also did no harm to the data. But we don't use tiering; maybe something happens
to the data while removing a cache tier, e.g. not all objects were written back to the
lower-tier pool before it was removed?
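
For a writeback cache, the documented removal sequence flushes the cache first; if the
tier was dropped while objects were still dirty, that could leave objects missing from
the base pool. Roughly (pool names are placeholders, this is only a sketch to compare
against what was actually run):

ceph osd tier cache-mode cachepool forward      # stop new writes being cached
rados -p cachepool cache-flush-evict-all        # flush dirty objects back, evict the rest
ceph osd tier remove-overlay basepool           # only once the cache pool is empty
ceph osd tier remove basepool cachepool

So it may be worth checking whether the flush-evict step completed before the tier
was removed.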


Megov Igor
CIO, Yuterra


________________________________
From: Константин Сахинов <sakhi...@gmail.com>
Sent: 7 August 2015 15:39
To: Межов Игорь Александрович; ceph-users@lists.ceph.com
Subject: Re: [ceph-users] inconsistent pgs

It's hard to say now. I changed my 6 OSDs one by one from btrfs to xfs. During the
repair process I added 2 more OSDs. I changed the crush map from a root-host-osd to a
root-chassis-host-osd structure... There was SSD cache tiering set up when the first
inconsistency showed up. Then I removed the tiering to confirm that it was not the
cause of the inconsistencies.
Once there was a hardware problem with one node - a PCI slot issue. I shut down that
node and exchanged the motherboard for the same model.
I'm running CentOS Linux release 7.1.1503 (Core) with the 3.10.0-229.7.2.el7.x86_64
kernel.

Fri, 7 Aug 2015 at 15:18, Межов Игорь Александрович <me...@yuterra.ru>:
Hi!

When did the inconsistent PGs start to appear? Maybe after some event?
A hang, a node reboot, or after reconfiguration or a parameter change?
Can you say what triggers this behaviour? And, BTW, what system/kernel
do you use?

Megov Igor
CIO, Yuterra

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
