On Mon, Mar 30, 2015 at 10:51:51AM +0200, Hans Petter Selasky wrote:
H> Hi,
H> 
H> As was mentioned here, maybe we need a global counter that is not
H> accessed that frequently, and per-CPU counters for the most frequent
H> accesses. To keep the ordering somewhat sane, we need a global counter:
H> 
H> Pseudo code:
H> 
H> static uint32_t V_ip_id;
H> 
H> PER_CPU(V_ip_id_start);
H> PER_CPU(V_ip_id_end);
H> 
H> static uint16_t
H> get_next_id(void)
H> {
H> 	uint32_t next, id;
H> 
H> 	if (PER_CPU(V_ip_id_start) == PER_CPU(V_ip_id_end)) {
H> 		/* Refill this CPU's batch of 256 IDs from the global counter. */
H> 		next = atomic_fetchadd_32(&V_ip_id, 256);
H> 		PER_CPU(V_ip_id_start) = next;
H> 		PER_CPU(V_ip_id_end) = next + 256;
H> 	}
H> 	id = PER_CPU(V_ip_id_start)++;
H> 	return (id);
H> }

What's the rationale behind this code? Trying to keep the CPUs' ID ranges
offset from each other by 256?

The suggested code suffers from migration more than what I suggested. For
example, you can assign V_ip_id_start on CPU 1, then migrate to CPU 2 and
assign V_ip_id_end, leaving the ID-generating machine in a broken state.
Or you can read start and end on different CPUs when comparing them, which
causes less harm.

And the code still doesn't protect against a full 65k wraparound. One CPU
can emit a burst of over 65k packets and then go on to reuse all the IDs
that the other CPUs are using right now.
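
Just to illustrate the migration point: the check-refill-allocate sequence
would at least have to be pinned to one CPU for its whole duration, e.g.
with a critical section. A rough sketch follows; it is not a patch against
anything in the tree, the DPCPU(9) variable and function names are made up,
and it still does nothing about the 65k wrap discussed above:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/pcpu.h>
#include <machine/atomic.h>

static uint32_t ip_id_global;

DPCPU_DEFINE(uint32_t, ip_id_next);
DPCPU_DEFINE(uint32_t, ip_id_limit);

static uint16_t
ip_id_alloc(void)
{
	uint32_t id;

	critical_enter();	/* no preemption, hence no migration */
	if (DPCPU_GET(ip_id_next) == DPCPU_GET(ip_id_limit)) {
		/* Take a fresh batch of 256 IDs from the global counter. */
		id = atomic_fetchadd_32(&ip_id_global, 256);
		DPCPU_SET(ip_id_next, id);
		DPCPU_SET(ip_id_limit, id + 256);
	}
	id = DPCPU_GET(ip_id_next);
	DPCPU_SET(ip_id_next, id + 1);
	critical_exit();

	return (id & 0xffff);	/* still wraps within 65k, see above */
}

With critical_enter()/critical_exit() around the whole sequence, all the
per-CPU accesses are guaranteed to hit the same CPU's slots, so the broken
interleaving described above cannot happen.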

-- 
Totus tuus, Glebius.
_______________________________________________
svn-src-head@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/svn-src-head
To unsubscribe, send any mail to "svn-src-head-unsubscr...@freebsd.org"

Reply via email to