On Tue, Nov 5, 2013 at 3:02 PM, Dominik Mostowiec
wrote:
> After removing (ceph osd out X) an OSD from one server (11 OSDs), Ceph
> started the data migration process.
> It stopped at:
> 32424 pgs: 30635 active+clean, 191 active+remapped, 1596
> active+degraded, 2 active+clean+scrubbing;
> degraded (1.718%
I hope this helps:
crush: https://www.dropbox.com/s/inrmq3t40om26vf/crush.txt
ceph osd dump: https://www.dropbox.com/s/jsbt7iypyfnnbqm/ceph_osd_dump.txt
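
In case someone wants to reproduce them, the dumps were taken in the
standard way, something like this (file names are just the ones I picked):

  # decompile the current CRUSH map to plain text
  ceph osd getcrushmap -o crush.bin
  crushtool -d crush.bin -o crush.txt

  # dump the OSD map
  ceph osd dump > ceph_osd_dump.txt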
--
Regards
Dominik
2013/11/6 yy-nm :
> On 2013/11/5 22:02, Dominik Mostowiec wrote:
>>
>> Hi,
>> After removing (ceph osd out X) an OSD from one
Hi,
This is an S3/Ceph cluster; .rgw.buckets keeps 3 copies of the data.
Many PGs are only on 2 OSDs and are marked as 'degraded'.
Can scrubbing fix the degraded objects?
I haven't set any tunables in my CRUSH map; maybe that could help (is it safe)?
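
For concreteness, this is roughly what I had in mind (untested here, and
the PG id below is just an example, not one of my degraded PGs):

  # kick off a scrub / deep scrub of one degraded PG by hand
  ceph pg scrub 3.7b
  ceph pg deep-scrub 3.7b

  # switch CRUSH tunables to the bobtail profile
  # (I assume this will trigger some data movement)
  ceph osd crush tunables bobtail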
--
Regards
Dominik
2013/11/5 Dominik Mostowiec :
> H