On Wed, Mar 30, 2016 at 01:53:55AM +0000, Daniele Di Proietto wrote:
>
>
> On 29/03/2016 06:08, "Flavio Leitner" wrote:
>
> >On Tue, Mar 29, 2016 at 02:13:18AM +0000, Daniele Di Proietto wrote:
> >> Hi Flavio and Karl,
> >>
> >> thanks for the patch! I have a couple of comments:
> >>
> >> Can you point out a configuration where this is the bottleneck? [...]
On Wed, Mar 30, 2016 at 03:20:33AM +0000, Daniele Di Proietto wrote:
> On 29/03/2016 06:44, "Karl Rister" wrote:
> >One other area of the sequence code that I thought was curious was a
> >single mutex that covered all sequences. If updating the API is a
> >possibility I would think going to a mutex [...]
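For readers without the code in front of them, here is a minimal sketch of the trade-off Karl raises: one process-wide mutex serializing every sequence, versus a mutex embedded in each sequence. The types and names below are illustrative only, not OVS's actual seq implementation.

#include <pthread.h>
#include <stdint.h>

/* Style 1: one global mutex guards every sequence, so unrelated
 * sequences contend on the same lock. */
static pthread_mutex_t global_seq_mutex = PTHREAD_MUTEX_INITIALIZER;

struct seq_global {
    uint64_t value;                 /* Protected by global_seq_mutex. */
};

uint64_t
seq_global_read(struct seq_global *seq)
{
    pthread_mutex_lock(&global_seq_mutex);
    uint64_t value = seq->value;
    pthread_mutex_unlock(&global_seq_mutex);
    return value;
}

/* Style 2: each sequence carries its own mutex, so touching one
 * sequence never blocks users of another. */
struct seq_local {
    pthread_mutex_t mutex;
    uint64_t value;                 /* Protected by 'mutex'. */
};

uint64_t
seq_local_read(struct seq_local *seq)
{
    pthread_mutex_lock(&seq->mutex);
    uint64_t value = seq->value;
    pthread_mutex_unlock(&seq->mutex);
    return value;
}

The catch, as Karl's comment anticipates, is that per-sequence locking may require updating the seq API and every one of its callers.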
On 29/03/2016 06:44, "Karl Rister" wrote:
>On 03/29/2016 08:08 AM, Flavio Leitner wrote:
>> On Tue, Mar 29, 2016 at 02:13:18AM +0000, Daniele Di Proietto wrote:
>>> Hi Flavio and Karl,
>>>
>>> thanks for the patch! I have a couple of comments:
>>>
>>> Can you point out a configuration where this is the bottleneck? [...]
On 29/03/2016 06:08, "Flavio Leitner" wrote:
>On Tue, Mar 29, 2016 at 02:13:18AM +0000, Daniele Di Proietto wrote:
>> Hi Flavio and Karl,
>>
>> thanks for the patch! I have a couple of comments:
>>
>> Can you point out a configuration where this is the bottleneck?
>> I'm interested in reproducing this. [...]
On 03/29/2016 08:08 AM, Flavio Leitner wrote:
> On Tue, Mar 29, 2016 at 02:13:18AM +0000, Daniele Di Proietto wrote:
>> Hi Flavio and Karl,
>>
>> thanks for the patch! I have a couple of comments:
>>
>> Can you point out a configuration where this is the bottleneck?
>> I'm interested in reproducing this. [...]
On Tue, Mar 29, 2016 at 02:13:18AM +0000, Daniele Di Proietto wrote:
> Hi Flavio and Karl,
>
> thanks for the patch! I have a couple of comments:
>
> Can you point out a configuration where this is the bottleneck?
> I'm interested in reproducing this.
Karl, since you did the tests, could you please [...]
Hi Flavio and Karl,
thanks for the patch! I have a couple of comments:
Can you point out a configuration where this is the bottleneck?
I'm interested in reproducing this.
I think the implementation would look simpler if we could
avoid explicitly taking the mutex in dpif-netdev and instead
have [...]
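The message is cut off above, so the exact proposal is lost, but one plausible reading is moving the locking inside the sequence library so that dpif-netdev never touches the mutex directly. A hypothetical sketch of that shape (seq_try_read is an invented name, not an actual OVS API):

#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

struct seq {
    pthread_mutex_t mutex;
    uint64_t value;
};

/* Non-blocking read: stores the current value in '*value' and returns
 * true if the lock was free, or returns false if it was contended.
 * The caller never touches the mutex itself. */
bool
seq_try_read(struct seq *seq, uint64_t *value)
{
    if (pthread_mutex_trylock(&seq->mutex)) {
        return false;               /* Contended: caller retries later. */
    }
    *value = seq->value;
    pthread_mutex_unlock(&seq->mutex);
    return true;
}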
On Thu, 24 Mar 2016 14:10:14 +0800
yewgang wrote:
> So basically, you replace ovs_mutex_rwlock (or something similar) with
> ovs_mutex_trylock in the loop of "other tasks after some time processing
> the RX queues". Is that right?
It isn't a replacement, since the original locking remains the same. But
yeah, it tries to [...]
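To make the distinction concrete: pthread_mutex_trylock(), which OVS's ovs_mutex_trylock() wraps, returns immediately with EBUSY instead of sleeping, so existing blocking callers of the same mutex keep working unchanged. A self-contained demo:

#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

int
main(void)
{
    /* Blocking form: would sleep until the mutex became free. */
    pthread_mutex_lock(&mutex);

    /* Non-blocking form: the mutex is already held, so trylock fails
     * immediately with EBUSY instead of waiting. */
    if (pthread_mutex_trylock(&mutex) == EBUSY) {
        printf("busy, skip this round and keep polling\n");
    }

    pthread_mutex_unlock(&mutex);
    return 0;
}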
So basically, you replace ovs_mutex_rwlock (or something similar) with
ovs_mutex_trylock in the loop of "other tasks after some time processing
the RX queues". Is that right?
2016-03-24 11:54 GMT+08:00 Flavio Leitner:
> The PMD thread needs to keep processing RX queues in order to
> achieve maximum throughput. However [...]
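The rest of the description is truncated here, but the loop shape under discussion would be roughly the following sketch; the function names and the interval constant are placeholders, not the actual dpif-netdev code:

#include <pthread.h>
#include <stdbool.h>

/* How many RX polling iterations to run between attempts at the
 * occasional housekeeping work (value is illustrative). */
#define OTHER_TASKS_INTERVAL 1024

static pthread_mutex_t other_tasks_mutex = PTHREAD_MUTEX_INITIALIZER;
static volatile bool exiting = false;

/* Placeholders for the real datapath work. */
static void poll_rx_queues(void) { /* receive and process packets */ }
static void do_other_tasks(void) { /* occasional housekeeping */ }

static void *
pmd_main_loop(void *aux)
{
    unsigned int iter = 0;
    (void) aux;

    while (!exiting) {
        poll_rx_queues();           /* Always keep draining RX queues. */

        if (++iter % OTHER_TASKS_INTERVAL == 0
            && pthread_mutex_trylock(&other_tasks_mutex) == 0) {
            /* The lock was free, so do the occasional work now.  Had it
             * been busy, we would skip it and return to RX processing
             * instead of stalling the datapath. */
            do_other_tasks();
            pthread_mutex_unlock(&other_tasks_mutex);
        }
    }
    return NULL;
}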