rihad wrote:
Robert Watson wrote:
I would suggest making just the HZ -> 4000 change for now and see how
it goes.
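
For reference, HZ is a boot-time loader tunable on FreeBSD, so the suggested change would go into /boot/loader.conf and take effect after a reboot:

  kern.hz="4000"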

2018 users online, 73 drops have just occurred.
P.S.: already 123 drops.
It will only get worse over time.

Traffic load: 440-450 Mbps.

top -HS:
last pid: 68314;  load averages:  1.35,  1.22,  1.25   up 0+05:13:28  17:53:49
145 processes: 11 running, 118 sleeping, 16 waiting
CPU:  1.4% user,  0.0% nice,  2.8% system, 10.3% interrupt, 85.5% idle
Mem: 1337M Active, 1683M Inact, 355M Wired, 40K Cache, 214M Buf, 560M Free
Swap: 2048M Total, 2048M Free

  PID USERNAME   PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
   14 root       171 ki31     0K    16K CPU4   4 257:35 99.41% idle: cpu4
   12 root       171 ki31     0K    16K RUN    6 286:39 98.14% idle: cpu6
   18 root       171 ki31     0K    16K RUN    0 225:16 92.72% idle: cpu0
   15 root       171 ki31     0K    16K RUN    3 255:35 90.04% idle: cpu3
   16 root       171 ki31     0K    16K CPU2   2 272:04 87.40% idle: cpu2
   13 root       171 ki31     0K    16K CPU5   5 260:52 81.69% idle: cpu5
   17 root       171 ki31     0K    16K CPU1   1 239:06 75.29% idle: cpu1
   21 root       -44    -     0K    16K CPU7   7 108:49 57.37% swi1: net
   11 root       171 ki31     0K    16K CPU7   7 267:48 49.02% idle: cpu7
   29 root       -68    -     0K    16K WAIT   1  41:45 20.90% irq256: bce0
  470 root       -68    -     0K    16K -      5  27:01  9.18% dummynet
   19 root       -32    -     0K    16K WAIT   1  16:13  6.59% swi4: clock sio
   31 root       -68    -     0K    16K WAIT   2   9:35  4.35% irq257: bce1

Robert Watson wrote:
Suggestions like increasing the timer resolution are intended to spread
out the injection of packets by dummynet, to reduce the peaks of
burstiness that occur when multiple queues inject packets in a burst
that exceeds the queue depth supported by the combined hardware
descriptor rings and software transmit queue.
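
A rough sanity check with this thread's numbers, taking the ~450 Mbit/s aggregate as the worst-case release rate, shows why the higher timer resolution helps:

  450,000,000 bit/s / 1000 ticks/s = 450,000 bits ~= 55 KB per tick (HZ=1000, the then-default)
  450,000,000 bit/s / 4000 ticks/s = 112,500 bits ~= 14 KB per tick (HZ=4000)

so the worst-case burst handed downstream per tick shrinks by 4x.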

How do I tweak the software transmit queue?


P.S.: still only 123 drops (while io_pkt_drop: 0, intr_queue_drops: 0), but that was a warning sign.
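
(Assuming the usual MIB paths for those counters, which can vary between FreeBSD/dummynet versions, they can be watched from the shell with:

  sysctl net.inet.ip.intr_queue_drops
  sysctl net.inet.ip.dummynet.io_pkt_drop
)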


You cannot do anything about it if one of the customers sends a burst of 3000 UDP packets at their maximum speed (or some combination of customers sends something that results in an aggregate burst rate like that). In other words, you may always get moments when the pipe releases a bunch of traffic that has the potential to overrun something downstream.

Think of it as a dam in a stream...

If you have no dam, the water level goes up and down gradually and by small amounts, but if you have a dam, you can release water in such a way that the stream is flooded higher than it would normally ever get.


Work out how large a burst of data the pipe will release in 1/4000th of a second and, using a small packet size, work out how many packets that is. Then make sure that the driver's software input queue is capable of holding that many packets.
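
Carrying the per-tick figure from above forward (~14 KB per tick at HZ=4000) and assuming minimum-size 64-byte packets as the worst case:

  112,500 bits / 8 = 14,062 bytes per tick
  14,062 bytes / 64 bytes per packet ~= 220 packets per tick

so the input queue needs room for at least a few hundred packets. The IP input queue depth is a run-time sysctl whose default has historically been only 50, e.g.:

  sysctl net.inet.ip.intr_queue_maxlen=1024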
