On Aug 5, 2013, at 2:23 AM, Luigi Rizzo <ri...@iet.unipi.it> wrote:

> I am slightly unclear about what mechanisms we use to prevent races
> between the interface being reconfigured (up/down, multicast settings, etc.,
> all causing reinitialization of the rx and tx rings) and
> 
> i) packets from the host stack being sent out;
> ii) interrupts from the network card being processed.
> 
> I think in the old times IFF_DRV_RUNNING was used for this purpose,
> but now it is not enough.
> Acquiring the "core lock" in the NIC does not seem enough, either,
> because newer drivers, especially multiqueue ones, have per-queue
> rx and tx locks.
> 
> Does anyone know if there is a generic mechanism, or does each driver
> reimplement its own way?
> 

I'll speak to the RX side of the question.  Several years ago I modified the
if_em driver to use a fast interrupt handler that deferred the actual processing
to a taskqueue thread.  The result was a system with no more latency than
the classic ithread model, but with the ability to halt, drain, and restart RX
processing via an explicit API.  This in turn was extended so that RX processing
could be safely paused during the control events that you describe above.
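
The shape of it was roughly as follows (a sketch of the idea, not the actual
if_em code; the sc_* helpers and the rx_paused flag are placeholder names):

/*
 * Sketch only.  sc_disable_intr(), sc_enable_intr() and sc_rxeof() stand
 * in for the driver's own register-masking and ring-processing routines;
 * rx_paused is assumed to be serialized by the driver's core lock.
 */
#include <sys/param.h>
#include <sys/bus.h>
#include <sys/taskqueue.h>

struct sc_softc {
        struct task             rx_task;
        struct taskqueue        *tq;
        int                     rx_paused;
        /* device resources, rings, locks, ... */
};

void sc_disable_intr(struct sc_softc *);        /* placeholder */
void sc_enable_intr(struct sc_softc *);         /* placeholder */
void sc_rxeof(struct sc_softc *);               /* placeholder */

/*
 * At attach time the driver sets up a filter-only interrupt, roughly:
 *   TASK_INIT(&sc->rx_task, 0, sc_rx_task, sc);
 *   sc->tq = taskqueue_create_fast("sc_rxq", M_NOWAIT,
 *       taskqueue_thread_enqueue, &sc->tq);
 *   taskqueue_start_threads(&sc->tq, 1, PI_NET, "sc rxq");
 *   bus_setup_intr(dev, res, INTR_TYPE_NET | INTR_MPSAFE,
 *       sc_intr_filter, NULL, sc, &sc->intr_cookie);
 */

/* Fast (filter) handler: mask the NIC and defer the real work. */
static int
sc_intr_filter(void *arg)
{
        struct sc_softc *sc = arg;

        sc_disable_intr(sc);
        taskqueue_enqueue(sc->tq, &sc->rx_task);
        return (FILTER_HANDLED);
}

/* Taskqueue handler: all RX processing happens in thread context. */
static void
sc_rx_task(void *arg, int pending)
{
        struct sc_softc *sc = arg;

        if (sc->rx_paused)      /* control path is reconfiguring */
                return;
        sc_rxeof(sc);           /* process the RX ring */
        sc_enable_intr(sc);     /* unmask, wait for the next interrupt */
}

/*
 * Control path (ioctl up/down, multicast changes, ...): halt and drain
 * RX before the rings are touched, restart afterwards.
 */
static void
sc_pause_rx(struct sc_softc *sc)
{
        sc->rx_paused = 1;                      /* make the task a no-op */
        sc_disable_intr(sc);                    /* stop new filter runs */
        taskqueue_drain(sc->tq, &sc->rx_task);  /* wait out pending work */
        /* ... safe to reinitialize the rings here ... */
}

static void
sc_resume_rx(struct sc_softc *sc)
{
        sc->rx_paused = 0;
        sc_enable_intr(sc);
}

Because the filter only masks the interrupt and enqueues the task, draining
the task is enough to guarantee that nothing is walking the RX ring while
the control path rebuilds it.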

The system worked well on many fronts, but unfortunately I was unable to
actively maintain it, and it was slowly garbage collected over time.  I think
that it could have been extended without much effort to cover TX-complete
processing.  TX dispatch is a different matter, but I don't think that it would
be hard to have the if_transmit/if_start path respond to control synchronization
events.
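
For dispatch, the sort of thing I have in mind looks like this (illustrative
only; sc_select_txq() and sc_encap_and_kick() are made-up placeholders):

/*
 * Illustration only: if_transmit re-checks the running state under the
 * per-queue TX lock before dispatching anything to the hardware.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/mbuf.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/if_var.h>

struct sc_txq {
        struct mtx      mtx;    /* per-queue TX lock */
        /* descriptor ring, buffer maps, ... */
};

struct sc_txq *sc_select_txq(struct ifnet *, struct mbuf *);    /* placeholder */
int sc_encap_and_kick(struct sc_txq *, struct mbuf *);          /* placeholder */

static int
sc_transmit(struct ifnet *ifp, struct mbuf *m)
{
        struct sc_txq *txq = sc_select_txq(ifp, m);
        int error;

        mtx_lock(&txq->mtx);
        if ((ifp->if_drv_flags & IFF_DRV_RUNNING) == 0) {
                mtx_unlock(&txq->mtx);
                m_freem(m);
                return (ENETDOWN);
        }
        error = sc_encap_and_kick(txq, m);
        mtx_unlock(&txq->mtx);
        return (error);
}

The control path then gets its barrier cheaply: clear IFF_DRV_RUNNING and
take and drop each queue's TX lock once.  Any transmit that already passed
the check has finished by the time the lock is released, and no new one can
start until the flag is set again.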

Scott

