On Wed, Dec 10, 2014 at 03:38:37PM -0800, Stephen Hemminger wrote:
> On Wed, 10 Dec 2014 11:16:46 -0500
> Neil Horman <nhorman at tuxdriver.com> wrote:
> 
> > This really seems like a false savings to me.  If an application intends
> > to use multiple processes (which by all rights it seems like the use case
> > that the dpdk is mostly designed for) then you need locking one way or
> > another, and you've just made application coding harder, because the
> > application now needs to know which functions might have internal
> > critical sections that they need to provide locking for.
> 
> The DPDK is not Linux.
I never indicated that it was.

> See the examples of how to route without using locks by doing asymmetric 
> multiprocessing.
> I.e queues are only serviced by one CPU.
> 
Yes, I've seen it.
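For reference, the shape of that pattern is roughly the sketch below: a ring
created with the single-producer/single-consumer flags, enqueued to by exactly
one lcore and dequeued from by exactly one lcore.  The names and sizes are
only illustrative, and it assumes the EAL has already been initialized:

/* one ring, one producing lcore, one consuming lcore: no locked ops needed */
#include <rte_ring.h>
#include <rte_lcore.h>
#include <rte_errno.h>

static struct rte_ring *rx_to_worker;

static int
ring_setup(void)
{
	/* RING_F_SP_ENQ | RING_F_SC_DEQ: single producer, single consumer */
	rx_to_worker = rte_ring_create("rx_to_worker", 1024, rte_socket_id(),
				       RING_F_SP_ENQ | RING_F_SC_DEQ);
	return rx_to_worker != NULL ? 0 : -rte_errno;
}

/* called only from the I/O lcore */
static inline int
hand_off(void *pkt)
{
	return rte_ring_sp_enqueue(rx_to_worker, pkt);
}

/* called only from the worker lcore */
static inline void *
take_one(void)
{
	void *pkt;

	if (rte_ring_sc_dequeue(rx_to_worker, &pkt) != 0)
		return NULL;
	return pkt;
}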

> The cost of a locked operation (even uncontended) is often enough to drop
> packet performance by several million PPS.
Please re-read my note, I clearly stated that a single process use case was a
valid one, but that didn't preclude the need to provide mutual exclusion
internally to the API.  There's no reason that this locking can't be moved into
the API, with the spinlock calls either compiled in to do real locking, or
defined out as empty macros based on a build variable (CONFIG_SINGLE_ACCESSOR
or some such).  That way you save the application the headache of having to
guess which API calls need locking around them, and you still get maximal
performance if the application being written can guarantee single accessor
status to the dpdk library.
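To be concrete, all I'm picturing is something along these lines; the wrapper
macros and the wrapped call are purely illustrative, and CONFIG_SINGLE_ACCESSOR
is just a strawman name for the build option:

#include <rte_spinlock.h>

static rte_spinlock_t cfg_lock = RTE_SPINLOCK_INITIALIZER;

#ifdef CONFIG_SINGLE_ACCESSOR
/* app guarantees a single accessor: locking compiles away entirely */
#define LIB_LOCK(l)	do { (void)(l); } while (0)
#define LIB_UNLOCK(l)	do { (void)(l); } while (0)
#else
#define LIB_LOCK(l)	rte_spinlock_lock(l)
#define LIB_UNLOCK(l)	rte_spinlock_unlock(l)
#endif

/* illustrative library call carrying its own critical section */
void
lib_update_shared_state(int value)
{
	LIB_LOCK(&cfg_lock);
	/* ... modify whatever shared state this call owns ... */
	(void)value;
	LIB_UNLOCK(&cfg_lock);
}

With the option unset the library is safe for multi-process users out of the
box, and with it set the compiler discards the critical sections entirely, so
the single-accessor case pays nothing.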

Neil
