Hi Zhang,
    From the ceph health detail output, it looks like your NTP servers need to be synchronized.

    Can you also share your CRUSH map output?

2016-03-22 18:28 GMT+08:00 Zhang Qiang <dotslash...@gmail.com>:

> Hi Reddy,
> It's over a thousand lines, I pasted it on gist:
> https://gist.github.com/dotSlashLu/22623b4cefa06a46e0d4
>
> On Tue, 22 Mar 2016 at 18:15 M Ranga Swami Reddy <swamire...@gmail.com>
> wrote:
>
>> Hi,
>> Can you please share the "ceph health detail" output?
>>
>> Thanks
>> Swami
>>
>> On Tue, Mar 22, 2016 at 3:32 PM, Zhang Qiang <dotslash...@gmail.com>
>> wrote:
>> > Hi all,
>> >
>> > I have 20 OSDs and 1 pool, and, as recommended by the doc
>> > (http://docs.ceph.com/docs/master/rados/operations/placement-groups/),
>> > I configured pg_num and pgp_num to 4096, size 2, min size 1.
>> >
>> > But ceph -s shows:
>> >
>> > HEALTH_WARN
>> > 534 pgs degraded
>> > 551 pgs stuck unclean
>> > 534 pgs undersized
>> > too many PGs per OSD (382 > max 300)
>> >
>> > Why doesn't the recommended value of 4096 for 10 ~ 50 OSDs work? And
>> > what does "too many PGs per OSD (382 > max 300)" mean? If each OSD has
>> > 382 PGs, I would have 7640 PGs in total.
>> >
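To answer the arithmetic question above: the "too many PGs per OSD" warning
counts every copy of a PG, so with size 2 each of the 4096 PGs lands on two
OSDs, and summing the per-OSD counts double-counts each PG. A rough sketch of
the numbers, assuming the single 20-OSD, size-2 pool you described (variable
names are just for illustration):

    # Back-of-the-envelope PG arithmetic for the cluster described above:
    # 20 OSDs, one pool with pg_num = 4096 and size = 2. The warning counts
    # every copy of a PG, so each PG contributes `size` placements.

    pg_num = 4096    # PGs in the pool
    size = 2         # replicas per PG
    num_osds = 20    # OSDs in the cluster

    pg_copies = pg_num * size            # 8192 PG placements cluster-wide
    pgs_per_osd = pg_copies / num_osds   # ~410 per OSD, vs. the 300 warning limit
    print("PG copies per OSD: %.0f" % pgs_per_osd)

    # A commonly used rule of thumb targets roughly 100 PG copies per OSD:
    #     pg_num ~= 100 * num_osds / size, rounded up to a power of 2
    suggested = 100 * num_osds / size    # 1000 -> round up to 1024
    print("suggested pg_num: %.0f (round to 1024)" % suggested)

That works out to roughly 410 copies per OSD against the 300 warning
threshold; the 382 ceph actually reports roughly matches
(4096 * 2 - 534 undersized copies) / 20 ~ 383, since the undersized PGs are
still missing a replica. With 20 OSDs and size 2, a pg_num around 1024 would
keep you near the ~100 PGs per OSD that is usually aimed for.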
