Robert Watson wrote:
I would suggest making just the HZ -> 4000 change for now and see how
it goes.
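Applied. For the record, kern.hz is a boot-time tunable rather than a
runtime sysctl, so the change goes into /boot/loader.conf (a minimal
sketch; 4000 ticks/s gives dummynet a 0.25 ms scheduling granularity
at the cost of more timer interrupts):

# /boot/loader.conf
kern.hz="4000"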
2018 users online; 73 drops just occurred.
P.S.: already 123 drops.
It will only get worse over time.
Traffic load: 440-450 Mbit/s.
top -HS:
last pid: 68314;  load averages: 1.35, 1.22, 1.25  up 0+05:13:28  17:53:49
145 processes: 11 running, 118 sleeping, 16 waiting
CPU: 1.4% user, 0.0% nice, 2.8% system, 10.3% interrupt, 85.5% idle
Mem: 1337M Active, 1683M Inact, 355M Wired, 40K Cache, 214M Buf, 560M Free
Swap: 2048M Total, 2048M Free
PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU COMMAND
14 root 171 ki31 0K 16K CPU4 4 257:35 99.41% idle: cpu4
12 root 171 ki31 0K 16K RUN 6 286:39 98.14% idle: cpu6
18 root 171 ki31 0K 16K RUN 0 225:16 92.72% idle: cpu0
15 root 171 ki31 0K 16K RUN 3 255:35 90.04% idle: cpu3
16 root 171 ki31 0K 16K CPU2 2 272:04 87.40% idle: cpu2
13 root 171 ki31 0K 16K CPU5 5 260:52 81.69% idle: cpu5
17 root 171 ki31 0K 16K CPU1 1 239:06 75.29% idle: cpu1
21 root -44 - 0K 16K CPU7 7 108:49 57.37% swi1: net
11 root 171 ki31 0K 16K CPU7 7 267:48 49.02% idle: cpu7
29 root -68 - 0K 16K WAIT 1 41:45 20.90% irq256: bce0
470 root -68 - 0K 16K - 5 27:01 9.18% dummynet
19 root -32 - 0K 16K WAIT 1 16:13 6.59% swi4: clock sio
31 root -68 - 0K 16K WAIT 2 9:35 4.35% irq257: bce1
Robert Watson wrote:
Suggestions like increasing the timer resolution are intended to spread
out dummynet's packet injection, reducing the peaks of burstiness that
occur when multiple queues inject packets in a burst exceeding the
queue depth supported by the hardware descriptor rings and software
transmit queue combined.
How do I tweak the software transmit queue?
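The only generic knob I know of is the net.link.ifqmaxlen loader
tunable, which sets the default length of the per-interface software
send queue; whether bce(4) honours it on this branch, or sizes its own
queue from the descriptor rings instead, is an assumption on my part,
so treat this as a sketch:

# /boot/loader.conf -- hypothetical value; drivers may override the
# default IFQ length with their own ring-derived sizing
net.link.ifqmaxlen="512"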
P.S.: still only 123 drops (while io_pkt_drop: 0 and
intr_queue_drops: 0), but I take it as a warning sign.
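For anyone following the counters: both are plain sysctls, and
per-interface drops show up in netstat's -d column, so this is how I'm
watching them:

sysctl net.inet.ip.dummynet.io_pkt_drop   # dummynet I/O drops
sysctl net.inet.ip.intr_queue_drops       # IP input queue drops
netstat -idn -I bce0                      # interface in/out drop counters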