On Wed, Feb 17, 2016 at 10:42 AM, Ryan Moats <rmo...@us.ibm.com> wrote:
>
> This is a prerequisite for incremental processing.
>
> Signed-off-by: Ryan Moats <rmo...@us.ibm.com>
> ---
>  ovn/controller/ofctrl.c         | 118 +++++++++++++++++++++++++++------------
>  ovn/controller/ofctrl.h         |   2 +
>  ovn/controller/ovn-controller.c |   4 +-
>  3 files changed, 87 insertions(+), 37 deletions(-)
>
> diff --git a/ovn/controller/ofctrl.c b/ovn/controller/ofctrl.c
> index 3297fb3..7280c8b 100644
> --- a/ovn/controller/ofctrl.c
> +++ b/ovn/controller/ofctrl.c
> @@ -484,16 +484,53 @@ ofctrl_add_flow(struct hmap *desired_flows,
>      f->ofpacts_len = actions->size;
>      f->hmap_node.hash = ovn_flow_hash(f);
>
> -    if (ovn_flow_lookup(desired_flows, f)) {
> -        static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 5);
> -        if (!VLOG_DROP_INFO(&rl)) {
> -            char *s = ovn_flow_to_string(f);
> -            VLOG_INFO("dropping duplicate flow: %s", s);
> -            free(s);
> -        }
> +    /* loop through all the flows to see if there is an old flow to be
> +     * removed - do so if the old flow has the same priority, table, and match
> +     * but a different action or if the old flow has the same priority, table
> +     * and action, but the new match is either a superset or subset of the
> +     * old match */
> +
> +    struct ovn_flow *d, *next;
> +    HMAP_FOR_EACH_SAFE (d, next, hmap_node, desired_flows) {
> +        if (f->table_id == d->table_id && f->priority == d->priority) {
> +            if ((match_equal(&f->match, &d->match)
> +                 && !ofpacts_equal(f->ofpacts, f->ofpacts_len,
> +                                   d->ofpacts, d->ofpacts_len))
> +                || (ofpacts_equal(f->ofpacts, f->ofpacts_len,
> +                                  d->ofpacts, d->ofpacts_len)
> +                    && ((flow_wildcards_has_extra(&f->match.wc, &d->match.wc)
> +                         && flow_equal_except(&f->match.flow, &d->match.flow,
> +                                              &f->match.wc))
> +                        || (flow_wildcards_has_extra(&d->match.wc, &f->match.wc)
> +                            && flow_equal_except(&d->match.flow,
> +                                                 &f->match.flow,
> +                                                 &d->match.wc))))) {
> +                static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 5);
> +                if (!VLOG_DROP_INFO(&rl)) {
> +                    char *s = ovn_flow_to_string(d);
> +                    VLOG_INFO("removing superceded flow: %s", s);
> +                    free(s);
> +                }
>
> -        ovn_flow_destroy(f);
> -        return;
> +                hmap_remove(desired_flows, &d->hmap_node);
> +                ovn_flow_destroy(d);
> +            }
> +
> +            /* if this is a complete duplicate, remove the new flow */
> +            if (match_equal(&f->match, &d->match)
> +                && ofpacts_equal(f->ofpacts, f->ofpacts_len,
> +                                 d->ofpacts, d->ofpacts_len)) {
> +                static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 5);
> +                if (!VLOG_DROP_INFO(&rl)) {
> +                    char *s = ovn_flow_to_string(f);
> +                    VLOG_INFO("dropping duplicate flow: %s", s);
> +                    free(s);
> +                }
> +
> +                ovn_flow_destroy(f);
> +                return;
> +            }
> +        }
>      }
>
Ryan, sorry that I don't quite understand the loop here. Could you explain why we need to examine the existing flows one by one to identify the old flows to be removed? Shouldn't ovsdb track deleted entries, so that we can simply delete whatever ovsdb reports? Otherwise, I wonder whether this approach may be less efficient when there are big changes in the SB lflow table, e.g. during fault recovery when the SB DB is rebuilt. Maybe I missed some point here; correct me if I am wrong :)

--
Best regards,
Han

_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev