h nodes
> and ca. 550 client nodes, accounting for about 1500 active Ceph clients
> (1400 CephFS and 170 RBD images).
>
> Best regards,
>
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
>
>
> Fro
s blackholed on the switch, you could see some heartbeats
> impacted but not others.
>
> Also make sure you have the optimal reporters value.
>
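The "reporters value" mentioned above presumably refers to
mon_osd_min_down_reporters (possibly together with
mon_osd_reporter_subtree_level). Assuming a Nautilus-or-later cluster with the
centralized config store, checking and adjusting it could look roughly like
this (the value 3 below is only an example, tune it to your failure domains):

    # show the current setting (default is 2 distinct reporters)
    ceph config get mon mon_osd_min_down_reporters
    # require reports from more distinct failure domains before marking an OSD down
    ceph config set mon mon_osd_min_down_reporters 3
    # subtree level used for counting reporters (default: host)
    ceph config get mon mon_osd_reporter_subtree_level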
> > On Nov 27, 2019, at 7:31 AM, Vincent Godin wrote:
> >
Since I submitted the mail below a few days ago, we have found some clues.
We observed a lot of lossy connections like:
ceph-osd.9.log:2019-11-27 11:03:49.369 7f6bb77d0700 0 --
192.168.4.181:6818/2281415 >> 192.168.4.41:0/1962809518
conn(0x563979a9f600 :6818 s=STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH
pgs=0 cs=0
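To get a rough idea of which peers trigger these lossy accepts most often, a
quick sketch along these lines may help (log paths and message format are
assumed from the excerpt above):

    # count accept-side connection resets per peer address to spot the noisiest clients/OSDs
    grep 'STATE_ACCEPTING_WAIT_CONNECT_MSG_AUTH' /var/log/ceph/ceph-osd.*.log \
      | awk '{for (i = 1; i <= NF; i++) if ($i == ">>") print $(i+1)}' \
      | sort | uniq -c | sort -rn | head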