On Sun, Sep 29, 2019 at 7:09 PM <xiangxia.m....@gmail.com> wrote:
>
> From: Tonghao Zhang <xiangxia.m....@gmail.com>
>
> The most case *index < ma->max, we add likely for performance.
>
> Signed-off-by: Tonghao Zhang <xiangxia.m....@gmail.com>
> ---
>  net/openvswitch/flow_table.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/net/openvswitch/flow_table.c b/net/openvswitch/flow_table.c
> index c8e79c1..c21fd52 100644
> --- a/net/openvswitch/flow_table.c
> +++ b/net/openvswitch/flow_table.c
> @@ -526,7 +526,7 @@ static struct sw_flow *flow_lookup(struct flow_table *tbl,
>  	struct sw_flow_mask *mask;
>  	int i;
>
> -	if (*index < ma->max) {
> +	if (likely(*index < ma->max)) {
After the changes in patch 5, ma->count is the limit for the mask array, so why not use ma->count here?

>  		mask = rcu_dereference_ovsl(ma->masks[*index]);
>  		if (mask) {
>  			flow = masked_flow_lookup(ti, key, mask, n_mask_hit);
> --
> 1.8.3.1
>
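For illustration, a rough and untested sketch of what that suggestion might look like, assuming ma->count from patch 5 tracks the number of mask slots currently in use:

	/* Hypothetical variant of the hunk above: bound the cached-index
	 * lookup by the number of masks in use (ma->count) rather than
	 * the allocated array size (ma->max).
	 */
	if (likely(*index < ma->count)) {
		mask = rcu_dereference_ovsl(ma->masks[*index]);
		if (mask) {
			flow = masked_flow_lookup(ti, key, mask, n_mask_hit);
			if (flow)
				return flow;
		}
	}

Whether the two bounds are equivalent presumably depends on how a cached index is handled after a mask is removed, which the rest of the series would need to guarantee.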