>
> warning: 1 line adds whitespace errors.
>
Fixed
>
> On Wed, Jun 11, 2014 at 2:24 PM, Andy Zhou <az...@nicira.com> wrote:
>> When deleting a mask from the mask array, we always move the last entry
>> into the freed slot. Another approach would be to write NULL into the
>> freed slot and periodically compact the array.
>>
>> The approach taken by this patch is more efficient at run time:
>> during lookup, fast-path packets don't have to skip over NULL pointers.
>>
>> A more important advantage of this approach is that it tries to
>> keep mask array indices stable by avoiding periodic index reshuffles.
>>
>> This patch implements an optimization to further promote index
>> stability.  By leaving the last entry's value intact when moving it to
>> a new location, stale cached indices can 'fix' themselves: a lookup
>> notices that the cached index lies outside the valid mask array region,
>> and the new index can be found by scanning the mask pointers within the
>> valid region.
>>
>> Signed-off-by: Andy Zhou <az...@nicira.com>
>>
> I have a couple of comments, mostly related to optimizations.

> We can go from 0 to old->count; is there any reason for looping till old->max?

So we can preserve the mask array layout and prolong the lifetime of cached indices.
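
The deletion and index-fixup scheme described in the patch can be illustrated with a plain-C userspace sketch (this is not the kernel code; the struct layout and the names mask_array_del and fixup_index are illustrative, and RCU is omitted):

```c
/* Userspace sketch of deletion-by-move with stale-index fixup.
 * Deleting an entry moves the last entry into the freed slot but
 * deliberately leaves the old tail slot's pointer intact, so a cache
 * entry still holding the tail's index can repair itself by scanning
 * the valid region for the same mask pointer. */
#include <assert.h>
#include <stddef.h>

#define MA_MAX 8

struct mask_array {
    int count;            /* number of valid entries */
    int max;              /* allocated size */
    void *masks[MA_MAX];  /* mask pointers */
};

/* Delete entry at idx by moving the last entry into its place. */
static void mask_array_del(struct mask_array *ma, int idx)
{
    int last = ma->count - 1;

    ma->masks[idx] = ma->masks[last];
    /* Do NOT null ma->masks[last]: stale cached indices use it. */
    ma->count--;
}

/* A cached index is stale when it points past count but still inside
 * max; find the mask's new home by scanning the valid region. */
static int fixup_index(const struct mask_array *ma, int stale_idx)
{
    void *mask = ma->masks[stale_idx];
    int i;

    for (i = 0; i < ma->count; i++)
        if (ma->masks[i] == mask)
            return i;
    return -1;            /* mask was removed entirely */
}
```

This is why the realloc loop copies up to old->max rather than old->count: the slots between count and max still hold the pointers that stale cached indices rely on.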

>
>>         rcu_assign_pointer(tbl->mask_array, new);
>>
>> @@ -260,6 +260,59 @@ static int tbl_mask_array_realloc(struct flow_table *tbl, int size)
>>         return 0;

> If you merge tbl_mask_array_find_idx() and fixup_cache_entry_index()
> you can save above check.
> tbl_mask_array_find_idx() is not used anywhere, so this should not be a problem.

Good idea. I rewrote fixup_cache_entry_index() and it is actually simpler now.
>
>>  /*
>>   * mask_cache maps a flow to a probable mask. This cache is not a tightly
>>   * coupled cache; updates to the mask list can result in inconsistent
>> @@ -578,13 +643,21 @@ struct sw_flow *ovs_flow_tbl_lookup_stats(struct flow_table *tbl,
>>
>>                 e = &entries[index];
>>                 if (e->skb_hash == skb_hash) {
>> -                       cache = rcu_dereference_ovsl(ma->masks[e->mask_index]);
>> +                       int i = e->mask_index;
>> +
>> +                       if (i < ma->max)
>> +                               cache = rcu_dereference_ovsl(ma->masks[i]);
>> +
>>                         if (cache) {
>> +                               if (unlikely(i >= ma->count))
>> +                                       fixup_cache_entry_index(e, ma, cache);
>> +
>>                                 flow = masked_flow_lookup(ti, key, cache,
>>                                                           n_mask_hit);
>>                                 if (flow) /* Cache hit. */
>>                                         return flow;
>> -                       }
>> +                       } else
>> +                               e->skb_hash = 0; /* Not a valid cache entry. */
>>
> A better sequence would be to check in the following order, so that in
> most cases we have only one condition check:
> if (i < count) {
>     guaranteed to have valid cache pointer for lookup
> } else {
>     handle exceptions.
> }
>
Good idea. I re-organized the logic according to your suggestion. I
will send out the updated patch soon.
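
For illustration, the reorganized check order can be sketched as a small plain-C classifier (hypothetical helper, not the actual patch code; the enum and function names are made up for this sketch):

```c
/* Sketch of the suggested lookup order: the common case, a cached
 * index inside the valid region, is decided with a single comparison;
 * stale and invalid indices fall into the exception path. */
#include <assert.h>

enum cache_state {
    CACHE_VALID,    /* i < count: mask pointer guaranteed valid    */
    CACHE_STALE,    /* count <= i < max: fixable by a region scan  */
    CACHE_INVALID,  /* i >= max: entry must be discarded           */
};

static enum cache_state classify_cached_index(int i, int count, int max)
{
    if (i < count)          /* fast path: one comparison */
        return CACHE_VALID;
    if (i < max)            /* exception: index went stale */
        return CACHE_STALE;
    return CACHE_INVALID;   /* exception: not a valid cache entry */
}
```

With this ordering, the fast-path lookup pays one branch in the common case, and the fixup and invalidation logic only runs on the exceptional paths.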
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev
