On 04/08/2014 12:00 AM, Pravin wrote:
From: Pravin Shelar <pshe...@nicira.com>

For every packet, OVS needs to look up the flow table once per mask:
the packet's flow key is first masked with each mask in the list, and
the masked key is then looked up in the flow table.  The number of
masks therefore directly affects packet-processing performance.
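To make the cost concrete, here is a minimal sketch of the per-mask masking step described above. The `flow_key` struct and field names are simplified placeholders; the real `struct sw_flow_key` is much larger.

```c
#include <stdint.h>

/* Hypothetical, simplified flow key; the real struct sw_flow_key is
 * far larger.  For each mask in the list, the packet's key is ANDed
 * with the mask and the result is used for the hash-table probe, so
 * lookup cost grows linearly with the number of masks. */
struct flow_key {
	uint32_t ip_src;
	uint32_t ip_dst;
};

static void mask_flow_key(struct flow_key *dst, const struct flow_key *src,
			  const struct flow_key *mask)
{
	dst->ip_src = src->ip_src & mask->ip_src;
	dst->ip_dst = src->ip_dst & mask->ip_dst;
}
```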

The following patch adds a per-CPU mask cache that remembers which mask
matched on the last packet lookup for a flow.  The index of that mask
is stored in the cache, which is indexed by the 5-tuple hash (skb
rxhash).
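The indexing scheme can be sketched as follows, using the `MC_HASH_*` constants from the patch below: the 32-bit skb hash is consumed 8 bits at a time, giving up to four candidate cache slots per packet.

```c
#include <stdint.h>

#define MC_HASH_SHIFT	8
#define MC_HASH_ENTRIES	(1u << MC_HASH_SHIFT)
#define MC_HASH_SEGS	((sizeof(uint32_t) * 8) / MC_HASH_SHIFT)

/* Which cache slot does probe number `seg` examine for this skb hash?
 * Each probe consumes MC_HASH_SHIFT bits of the hash, so a packet can
 * land in up to MC_HASH_SEGS (four) distinct slots before giving up. */
static unsigned int mc_hash_index(uint32_t skb_hash, unsigned int seg)
{
	return (skb_hash >> (seg * MC_HASH_SHIFT)) & (MC_HASH_ENTRIES - 1);
}
```

For example, an skb hash of 0xdeadbeef probes slots 0xef, 0xbe, 0xad and 0xde in that order, matching the `hash & (MC_HASH_ENTRIES - 1)` loop in the patch.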

Signed-off-by: Pravin B Shelar <pshe...@nicira.com>
---
  datapath/datapath.c   |    3 +-
  datapath/flow_table.c |   80 ++++++++++++++++++++++++++++++++++++++++++++-----
  datapath/flow_table.h |   11 +++++--
  3 files changed, 83 insertions(+), 11 deletions(-)

diff --git a/datapath/datapath.c b/datapath/datapath.c
index f4e415e..c5059e1 100644
--- a/datapath/datapath.c
+++ b/datapath/datapath.c
@@ -262,7 +262,8 @@ void ovs_dp_process_received_packet(struct vport *p, struct sk_buff *skb)
        }

        /* Look up flow. */
-       flow = ovs_flow_tbl_lookup_stats(&dp->table, &key, &n_mask_hit);
+       flow = ovs_flow_tbl_lookup_stats(&dp->table, &key, skb_get_rxhash(skb),
+                                        &n_mask_hit);
        if (unlikely(!flow)) {
                struct dp_upcall_info upcall;

diff --git a/datapath/flow_table.c b/datapath/flow_table.c
index 75c1b82..f12dd81 100644
--- a/datapath/flow_table.c
+++ b/datapath/flow_table.c
@@ -49,6 +49,10 @@
  #define TBL_MIN_BUCKETS               1024
  #define REHASH_INTERVAL               (10 * 60 * HZ)

+#define MC_HASH_SHIFT          8
+#define MC_HASH_ENTRIES                (1u << MC_HASH_SHIFT)
+#define MC_HASH_SEGS           ((sizeof(uint32_t) * 8) / MC_HASH_SHIFT)

A comment describing this cache with some ascii art and stuff
would help tremendously. Took me 30 minutes to figure it out ;-)
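For what it's worth, a sketch of what such a comment might look like (my reading of the scheme, not text from the patch):

```c
/*
 * Per-CPU flow-mask cache (sketch):
 *
 *            skb_hash (32 bits)
 *   +--------+--------+--------+--------+
 *   | seg 3  | seg 2  | seg 1  | seg 0  |   MC_HASH_SHIFT = 8 bits each
 *   +--------+--------+--------+--------+
 *       |        |        |        |
 *       v        v        v        v      each segment indexes one of
 *   mask_cache[0 .. MC_HASH_ENTRIES - 1]  MC_HASH_ENTRIES (256) slots
 *
 * Each slot caches {skb_hash, mask_index} from the last lookup that
 * hashed there; on a hit we try that mask first instead of walking
 * the whole mask list.
 */
```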


+struct sw_flow *ovs_flow_tbl_lookup_stats(struct flow_table *tbl,
+                                         const struct sw_flow_key *key,
+                                         u32 skb_hash,
+                                         u32 *n_mask_hit)
+{
+       struct table_instance *ti = rcu_dereference_ovsl(tbl->ti);
+       struct mask_cache_entry  *entries, *ce, *del;
+       struct sw_flow *flow;
+       u32 hash = skb_hash;
+       int seg;
+
+       del = NULL;
+       entries = this_cpu_ptr(tbl->mask_cache);
+
+       for (seg = 0; seg < MC_HASH_SEGS; seg++) {
+               int index;
+
+               index = hash & (MC_HASH_ENTRIES - 1);
+               ce = &entries[index];
+
+               if (ce->skb_hash == skb_hash) {

I believe skb_get_hash() can return 0, which would cause any empty
(zero-initialized) slot to be mistaken for a cached entry, so we would
only ever look at the first flow mask.
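One way to close that hole (an illustrative guess, not necessarily what a later revision of the patch did) is to never let a zero hash reach the cache, mirroring how the kernel's flow dissector avoids returning 0:

```c
#include <stdint.h>

/* Hypothetical guard, not from the patch: a zero skb_hash is
 * indistinguishable from a never-filled cache slot, so remap it to a
 * non-zero sentinel before it is compared against cached entries.
 * Empty slots can then never produce a false hit. */
static uint32_t cacheable_skb_hash(uint32_t skb_hash)
{
	return skb_hash ? skb_hash : 1u;
}
```

An alternative with the same effect would be to treat a zero hash as uncacheable and fall back to the full mask-list walk.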
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev