On Wed, 26 Mar 2008, Julian Elischer wrote:
> It wouldn't.. you'd add them together before presenting them. But every time a packet changes a counter that is shared, there is a chance that it is being altered by another processor, so if you have fine-grained locking in ipfw, you really should use atomic adds, which are slow, or accept possible collisions (which might be OK) but still cause a lot of cross-CPU TLB flushing.
In malloc(9) and uma(9), we maintain per-CPU stats, coalescing them only for presentation, and rely on soft critical sections rather than locks to protect consistency. What's worth remembering, however, is that recent multicore machines have significantly reduced the cost of atomic operations on cache lines already held for write by the current CPU, so the cost of locking has fallen dramatically in the last few years. This re-emphasizes the importance of careful cache line management for per-CPU data structures (in particular, don't put data written by multiple CPUs in the same cache line if you want the benefits of per-CPU access).
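To make that concrete, here's a minimal sketch of the per-CPU counter pattern in the style of what malloc(9)/uma(9) do. pkt_counts, pkt_count_inc(), and pkt_count_read() are made-up names for illustration; critical_enter(9)/critical_exit(9), curcpu, mp_maxid, and CACHE_LINE_SIZE are the real kernel interfaces:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/pcpu.h>
    #include <sys/smp.h>

    /* One counter per CPU, each padded to a full cache line so that
     * no two CPUs ever write the same line (no false sharing). */
    static struct pkt_pcpu {
            u_long  count;
    } __aligned(CACHE_LINE_SIZE) pkt_counts[MAXCPU];

    static __inline void
    pkt_count_inc(void)
    {

            /* A soft critical section pins us to curcpu, so a plain
             * increment is safe: no lock, no atomic. */
            critical_enter();
            pkt_counts[curcpu].count++;
            critical_exit();
    }

    /* Coalesce only for presentation; the sum may be slightly stale. */
    static u_long
    pkt_count_read(void)
    {
            u_long total;
            u_int i;

            total = 0;
            for (i = 0; i <= mp_maxid; i++)
                    total += pkt_counts[i].count;
            return (total);
    }

The read side can see a momentarily inconsistent sum, which is fine for stats; the win is that the hot path only ever touches a cache line owned by the current CPU.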
Where read-write locking is the best model, Stephan Uphoff's recent work on rmlocks looks quite promising. In my micro-benchmarks on recent hardware, it performs extremely well for read locks on SMP, but still requires optimization for UP-compiled kernels. For stats and other writable structures, such as per-CPU caches, rmlocks aren't very helpful, but compared with replicating infrequently written data structures across many CPUs, rwlocks/rmlocks offer a much simpler and less error-prone programming model. We need to see more optimization and measurement done on rmlocks for 8.x, and the lack of full priority propagation for rwlocks has to be kept in mind.
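For anyone who hasn't looked at them yet, the rmlock(9) API mirrors rwlock(9), except that each read acquisition carries a tracker on the caller's stack. A sketch of protecting a hypothetical, rarely replaced ruleset pointer (ruleset_lock, active_ruleset, and struct ruleset are invented for illustration; the rm_* calls are the real API):

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/rmlock.h>

    struct ruleset;                         /* stands in for real state */

    static struct rmlock ruleset_lock;
    static struct ruleset *active_ruleset;  /* read often, replaced rarely */

    static void
    ruleset_init(void)
    {

            rm_init(&ruleset_lock, "ruleset");
    }

    /* Readers: cheap on SMP; the tracker lives on the caller's stack. */
    static void
    ruleset_apply(void)
    {
            struct rm_priotracker tracker;

            rm_rlock(&ruleset_lock, &tracker);
            /* ... consult *active_ruleset; readers run concurrently ... */
            rm_runlock(&ruleset_lock, &tracker);
    }

    /* Writers: pay the full synchronization cost, but writes are rare. */
    static void
    ruleset_replace(struct ruleset *new_rs)
    {

            rm_wlock(&ruleset_lock);
            active_ruleset = new_rs;
            rm_wunlock(&ruleset_lock);
    }

The read path stays cheap because the common case touches only per-CPU state; writers pay for synchronizing with every CPU, which is the right trade-off when writes are genuinely rare.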
Robert N M Watson
Computer Laboratory
University of Cambridge