When the host boots, the loader hints set the epair queue max length
(net.link.epair.netisr_maxqlen) to 86016. Since setting it I have not seen
any related problem; it is stable.
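For reference, here is a minimal sketch of how that value can be checked, and
raised where the system permits it, through the standard sysctlbyname(3)
interface. The OID name and the 86016 figure come from this thread; whether
the OID is writable at run time is an assumption, and on a system where it is
a boot-time tunable it would have to be set in /boot/loader.conf instead.

/*
 * Sketch only: read, and optionally try to raise,
 * net.link.epair.netisr_maxqlen via sysctlbyname(3).
 * Assumes the OID is a plain int and writable at run time;
 * if not, set it from /boot/loader.conf at boot instead.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main(int argc, char **argv)
{
    int cur, want;
    size_t len = sizeof(cur);

    if (sysctlbyname("net.link.epair.netisr_maxqlen", &cur, &len,
        NULL, 0) == -1) {
        perror("sysctlbyname(read)");
        return (1);
    }
    printf("current net.link.epair.netisr_maxqlen: %d\n", cur);

    if (argc > 1) {
        want = atoi(argv[1]);   /* e.g. 86016, the value used above */
        if (sysctlbyname("net.link.epair.netisr_maxqlen", NULL, NULL,
            &want, sizeof(want)) == -1)
            fprintf(stderr, "set failed: %s (a boot-time tunable "
                "must go in /boot/loader.conf)\n", strerror(errno));
        else
            printf("set net.link.epair.netisr_maxqlen to %d\n", want);
    }
    return (0);
}

Compile with cc and run as root, passing the desired value (e.g. 86016) as
the only argument.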
But epair consumes a lot of CPU; it easily hits 100% CPU usage when it is
passed packets at a high rate.
Simon
20180703
On 2018/7/3 06:16, Bjoern A. Zeeb wrote:
On 2 Jul 2018, at 21:11, Dr Josef Karthauser wrote:
We’re experiencing a strange failure in production with epair
(which we’re using to talk vimage to jails).

FreeBSD s5 11.1-STABLE FreeBSD 11.1-STABLE #2 r328930: Tue Feb 6
16:05:59 GMT 2018 root@s5:/usr/obj/usr/src/sys/TRUESPEED amd64
On 2 Jul 2018, at 23:11, Dr Josef Karthauser wrote:
Break break. We’ve just seen Bugzilla bug report 22710, reporting
that epair fails when the queue limit
(net.link.epair.netisr_maxqlen) is hit. We’ve just introduced a
high-bandwidth service on this machine, so that’s probably what’s triggering it.
Looks like epair has suddenly stopped forwarding.
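As a cross-check on that theory, the sketch below (not taken from the bug
report) reads the epair queue limit alongside the global net.isr.maxqlimit
cap. It assumes that the netisr framework will not let a per-protocol queue
limit exceed that cap, which would mean a value as large as 86016 also needs
the global cap raised at boot; that assumption is worth verifying against
netisr(9).

/*
 * Sketch: print the epair netisr queue limit next to the global
 * netisr cap.  Assumes both OIDs are plain integers; the idea that
 * net.isr.maxqlimit bounds per-protocol limits is an assumption
 * to verify, not something stated in this thread.
 */
#include <sys/types.h>
#include <sys/sysctl.h>

#include <stdio.h>

static int
read_int_oid(const char *oid, int *out)
{
    size_t len = sizeof(*out);

    return (sysctlbyname(oid, out, &len, NULL, 0));
}

int
main(void)
{
    int epair_qlen, global_cap;

    if (read_int_oid("net.link.epair.netisr_maxqlen", &epair_qlen) == -1 ||
        read_int_oid("net.isr.maxqlimit", &global_cap) == -1) {
        perror("sysctlbyname");
        return (1);
    }
    printf("net.link.epair.netisr_maxqlen: %d\n", epair_qlen);
    printf("net.isr.maxqlimit:             %d\n", global_cap);
    if (epair_qlen > global_cap)
        printf("note: epair limit exceeds the global netisr cap\n");
    return (0);
}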
On Mon, Jul 2, 2018 at 1:34 AM, Suneet Singh wrote:
> Dear Professor,
>
> I am Suneet from India. I am a PhD student at Unicamp, Campinas, Brazil. I
> am testing the latency of the MACSAD P4 software switch. I found that the
> latency using netmap I/O is higher than using socket_mmap I/O, while in the case of DPDK