Hi,
Do you have any further information about this issue? I'm now seeing the
same symptoms with a similar configuration, with 240 OSDs, during a
scale-out process.
Thanks in advance,
Nuno Vargas
NOS
___
Hi,
I can only second this: revert all of them, but especially
net.core.netdev_max_backlog = 5
This definitely leads to bad behaviour (with a backlog of only 5 packets
per CPU, the kernel drops incoming frames during any burst), so go back
to 1000, or at most 2500, and re-check.
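For reference, a minimal sketch of applying and persisting the revert on
one node (the drop-in file name 90-revert-backlog.conf is only an
illustration):

# revert the runtime value immediately, no reboot needed
sysctl -w net.core.netdev_max_backlog=1000
# persist it across reboots in its own drop-in file
echo 'net.core.netdev_max_backlog = 1000' > /etc/sysctl.d/90-revert-backlog.conf
sysctl --system
# verify the active value
sysctl net.core.netdev_max_backlog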
Regards,
Oliver.
> On 12.09.2016 at 22:06, Wido den Hollander wrote:
>
>> net.core.netdev_max_backlog = 5
> On 12 September 2016 at 16:14, Василий Ангапов wrote:
>
>
> Hello, colleagues!
>
> I have a Ceph Jewel cluster of 10 nodes (CentOS 7, kernel 4.7.0), 290
> OSDs in total with journals on SSDs. The network is 2x10Gb public and
> 2x10Gb cluster.
> I constantly see periodic slow requests followed by "wrongly marked
> me down" records in ceph.log.
Hello, colleagues!
I have a Ceph Jewel cluster of 10 nodes (CentOS 7, kernel 4.7.0), 290
OSDs in total with journals on SSDs. The network is 2x10Gb public and
2x10Gb cluster.
I constantly see periodic slow requests followed by "wrongly marked
me down" records in ceph.log, like this:
root@ed-ds-c171
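For anyone debugging the same symptoms, a minimal sketch of checks to
run (the grep and health commands on a monitor node, the daemon query on
the host running the OSD; osd.42 is a placeholder):

# count how often OSDs complained about being wrongly marked down
grep -c 'wrongly marked me down' /var/log/ceph/ceph.log
# list current slow requests and other health problems
ceph health detail
# inspect the heartbeat grace a given OSD is actually running with
ceph daemon osd.42 config get osd_heartbeat_grace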