On Sat, 23 Oct 2010, Lawrence Stewart wrote:

One observation though: net.inet.tcp.reass.cursegments was non-zero (it was just 1) after 30 rounds, where each round is (as earlier) 15 concurrent instances of netperf for 20s. This was on the netserver side, and it was zero before the netperf runs. On the other hand, Andre told me (in a separate mail) that this counter is not relevant anymore - so should I just ignore it?

It's relevant, just not guaranteed to be 100% accurate at any given point in time. The value is calculated from synchronised access to the UMA zone stats combined with unsynchronised access to the UMA per-CPU zone stats. The latter is safe, but means the overall result can be slightly off because stale per-CPU data may be used, roughly along the lines of the sketch below. The accuracy vs. overhead tradeoff was deemed worthwhile for informational counters like this one.
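To illustrate the idea, here is a minimal userland sketch of that calculation style; the structure and field names (zone_stats, pcpu_cache, zs_allocs, etc.) are hypothetical and do not match the real UMA implementation, and a pthread mutex stands in for the kernel zone lock:

	/*
	 * Hypothetical sketch: zone-wide counters are read under a lock,
	 * per-CPU cache counters are read without synchronisation.
	 */
	#include <pthread.h>
	#include <stdint.h>

	#define NCPU 8

	struct pcpu_cache {
		/* Updated by the owning CPU; read here without locking. */
		uint64_t	pc_allocs;
		uint64_t	pc_frees;
	};

	struct zone_stats {
		pthread_mutex_t	zs_lock;	/* protects zone-wide counters */
		uint64_t	zs_allocs;
		uint64_t	zs_frees;
		struct pcpu_cache zs_pcpu[NCPU];
	};

	/*
	 * "Current items" = allocs - frees.  The unsynchronised per-CPU
	 * reads are cheap and memory-safe, but a concurrent update on
	 * another CPU can leave the result slightly stale.
	 */
	uint64_t
	zone_cur_items(struct zone_stats *zs)
	{
		uint64_t allocs, frees;
		int cpu;

		pthread_mutex_lock(&zs->zs_lock);
		allocs = zs->zs_allocs;
		frees = zs->zs_frees;
		pthread_mutex_unlock(&zs->zs_lock);

		for (cpu = 0; cpu < NCPU; cpu++) {
			allocs += zs->zs_pcpu[cpu].pc_allocs;
			frees += zs->zs_pcpu[cpu].pc_frees;
		}
		return (allocs - frees);
	}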

That being said, I would not expect the value to remain persistently at 1 after all TCP activity on the machine has finished. It won't affect performance, but I'm curious to know whether the calculation method has a flaw. I'll try to reproduce locally, but can you please confirm whether the value stays at 1 even after many minutes of no TCP activity? A simple poller like the one sketched below would do.
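Something along these lines should be enough to watch the counter over time; it assumes the OID returns an unsigned int, so adjust the type if sysctlbyname() reports a different size:

	/* Poll net.inet.tcp.reass.cursegments once a minute. */
	#include <sys/types.h>
	#include <sys/sysctl.h>
	#include <stdio.h>
	#include <unistd.h>

	int
	main(void)
	{
		unsigned int cur;
		size_t len;

		for (;;) {
			len = sizeof(cur);
			if (sysctlbyname("net.inet.tcp.reass.cursegments",
			    &cur, &len, NULL, 0) == -1) {
				perror("sysctlbyname");
				return (1);
			}
			printf("cursegments: %u\n", cur);
			sleep(60);
		}
	}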

It's possible we should revisit the current synchronisation model for per-CPU caches in this regard. We switched to soft critical sections when the P4 Xeon was a popular CPU line -- it had extortionately expensive atomic operations, even when a cache line was in the local cache. If we were to move back to mutexes for per-CPU caches, then we could acquire all the locks in sequence and get an atomic snapshot across them all (if desired). This isn't a hard technical change, but would require very careful performance evaluation.
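For concreteness, a sketch of the mutex-per-cache alternative is below; again the names are hypothetical and pthread mutexes stand in for kernel mutexes. The point is that with a lock on each per-CPU cache, a stats reader can hold all of them at once and copy a snapshot that is consistent across every CPU, at the cost of an atomic operation on the per-CPU fast path:

	#include <pthread.h>
	#include <stdint.h>

	#define NCPU 8

	struct pcpu_cache {
		pthread_mutex_t	pc_lock;
		uint64_t	pc_allocs;
		uint64_t	pc_frees;
	};

	struct zone {
		struct pcpu_cache z_pcpu[NCPU];
	};

	/* Fast path: the owning CPU takes only its own cache lock. */
	void
	zone_note_alloc(struct zone *z, int cpu)
	{
		pthread_mutex_lock(&z->z_pcpu[cpu].pc_lock);
		z->z_pcpu[cpu].pc_allocs++;
		pthread_mutex_unlock(&z->z_pcpu[cpu].pc_lock);
	}

	/*
	 * Stats path: lock every per-CPU cache, copy the counters, then
	 * unlock.  While all locks are held no CPU can update its cache,
	 * so the copied values form a consistent cross-CPU snapshot.
	 */
	uint64_t
	zone_cur_items_snapshot(struct zone *z)
	{
		uint64_t allocs = 0, frees = 0;
		int cpu;

		for (cpu = 0; cpu < NCPU; cpu++)
			pthread_mutex_lock(&z->z_pcpu[cpu].pc_lock);
		for (cpu = 0; cpu < NCPU; cpu++) {
			allocs += z->z_pcpu[cpu].pc_allocs;
			frees += z->z_pcpu[cpu].pc_frees;
		}
		for (cpu = NCPU - 1; cpu >= 0; cpu--)
			pthread_mutex_unlock(&z->z_pcpu[cpu].pc_lock);
		return (allocs - frees);
	}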

Robert