On Mon, Dec 30, 2013 at 1:34 PM, Ben Pfaff <b...@nicira.com> wrote:
> On Mon, Dec 30, 2013 at 11:10:22AM -0800, Pravin Shelar wrote:
>> On Mon, Dec 30, 2013 at 10:49 AM, Ben Pfaff <b...@nicira.com> wrote:
>> > On Mon, Dec 30, 2013 at 10:40:13AM -0800, Pravin Shelar wrote:
>> >> On Fri, Dec 27, 2013 at 8:03 PM, Ben Pfaff <b...@nicira.com> wrote:
>> >> > ovsthread_counter is an abstract interface that could be implemented
>> >> > in different ways.  The initial implementation is simple but less than
>> >> > optimally efficient.
>> >> >
>> >> > Signed-off-by: Ben Pfaff <b...@nicira.com>
>> >> > +void
>> >> > +ovsthread_counter_inc(struct ovsthread_counter *c, unsigned long long int n)
>> >> > +{
>> >> > +    c = &c[hash_int(ovsthread_id_self(), 0) % N_COUNTERS];
>> >> > +
>> >> Does it make sense to optimize this locking so that threads running
>> >> on the same NUMA node are likely to share a lock?
>> >> We can use processor id hashing to achieve this easily.
>> >
>> > Yes, that makes a lot of sense.  How do we do it?
>> >
>> Use the processor id (sched_getcpu()) to hash it. In case
>> sched_getcpu() is not available, we can read the thread affinity using
>> sched_getaffinity() and return the assigned CPU; in a properly
>> optimized environment we can assume that a thread would be pinned to
>> one CPU only. But I am not sure how to do this on platforms other
>> than Linux.
>
> That's reasonable.
>
> But, on second thought, I am not sure of the benefit from threads on
> the same node sharing a lock.  I see that there are benefits from
> threads on different nodes having different locks, but I'm not sure
> that using only one lock on a single node really saves anything.  What
> do you think?

Then how about having a per-CPU lock?
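
For illustration, here is a minimal standalone sketch of the per-CPU
striping idea being discussed, not the actual ovsthread_counter patch.
It picks a counter stripe with sched_getcpu() where that call is
available and falls back to hashing the thread id otherwise.  The names
(N_STRIPES, counter_stripe, pick_stripe) and the fallback hash are
assumptions made for this sketch only.

/* Sketch of per-CPU counter striping -- not the OVS implementation. */
/* Build on Linux/glibc with: cc -pthread -o stripe stripe.c */
#define _GNU_SOURCE 1
#include <pthread.h>
#include <sched.h>              /* sched_getcpu() (glibc extension) */
#include <stdio.h>

#define N_STRIPES 16            /* assumed stripe count, like N_COUNTERS */

struct counter_stripe {
    pthread_mutex_t mutex;
    unsigned long long value;
};

static struct counter_stripe stripes[N_STRIPES];

/* Picks a stripe by the CPU the caller is currently running on, so that
 * threads on different CPUs (and hence different NUMA nodes) usually
 * take different locks.  Falls back to hashing the thread id when
 * sched_getcpu() is unavailable or fails, as in the quoted patch. */
static struct counter_stripe *
pick_stripe(void)
{
    int cpu = sched_getcpu();
    if (cpu < 0) {
        /* Crude thread-id hash; pthread_t is an unsigned long on glibc. */
        cpu = (int) (((unsigned long) pthread_self() >> 4) & 0x7fffffff);
    }
    return &stripes[(unsigned int) cpu % N_STRIPES];
}

static void
counter_inc(unsigned long long n)
{
    struct counter_stripe *s = pick_stripe();

    pthread_mutex_lock(&s->mutex);
    s->value += n;
    pthread_mutex_unlock(&s->mutex);
}

/* A read sums every stripe under its lock, so it is consistent but
 * slower than an increment -- the usual statistics-counter trade-off. */
static unsigned long long
counter_read(void)
{
    unsigned long long total = 0;

    for (int i = 0; i < N_STRIPES; i++) {
        pthread_mutex_lock(&stripes[i].mutex);
        total += stripes[i].value;
        pthread_mutex_unlock(&stripes[i].mutex);
    }
    return total;
}

int
main(void)
{
    for (int i = 0; i < N_STRIPES; i++) {
        pthread_mutex_init(&stripes[i].mutex, NULL);
    }
    counter_inc(1);
    counter_inc(2);
    printf("total = %llu\n", counter_read());
    return 0;
}

Whether CPU-id or thread-id bucketing wins in practice depends on how
often threads migrate between CPUs, which is the open question in the
exchange above.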
