On 24 August 2016 at 11:12, Ryan Moats <[email protected]> wrote:
>
>
>
> Guru Shetty <[email protected]> wrote on 08/24/2016 01:07:47 PM:
>
> > From: Guru Shetty <[email protected]>
> > To: Ryan Moats/Omaha/IBM@IBMUS
> > Cc: ovs dev <[email protected]>
> > Date: 08/24/2016 01:08 PM
> > Subject: Re: [ovs-dev] [PATCH v3] ovn-controller: Back out
> > incremental processing
> >
> > On 24 August 2016 at 11:03, Ryan Moats <[email protected]> wrote:
> >
> > Guru Shetty <[email protected]> wrote on 08/24/2016 12:46:55 PM:
> >
> > > From: Guru Shetty <[email protected]>
> > > To: Ryan Moats/Omaha/IBM@IBMUS
> > > Cc: ovs dev <[email protected]>
> > > Date: 08/24/2016 12:47 PM
> > > Subject: Re: [ovs-dev] [PATCH v3] ovn-controller: Back out
> > > incremental processing
> > >
> > > On 23 August 2016 at 22:40, Ryan Moats <[email protected]> wrote:
> > > As [1] indicates, incremental processing hasn't resulted
> > > in an improvement worth the complexity it has added.
> > > This patch backs out all of the code specific to incremental
> > > processing, along with the persisting of OF flows,
> > > logical ports and multicast groups.
> > >
> > > [1] http://openvswitch.org/pipermail/dev/2016-August/078272.html
> > >
> > > Signed-off-by: Ryan Moats <[email protected]>
> > >
> > > This is not a full review. But I have a few comments.
> > >
> > > * sparse gives the following warning
> > > ovn/controller/ofctrl.c:675:1: warning: symbol
> > > 'ovn_flow_table_clear' was not declared. Should it be static?
> >
> > Hmm... I think I lost something, because I remember needing to declare
> > that static...
> >
> > > * struct group_info still has lflow_uuid. Do we need it? It looks to
> > > me that it is not needed.
> >
> > I wanted to keep the change minimal to start with - I will look and
> > see if I can pull this out...
> >
> > > While you are at it, please replace the comment on top of ofctrl_put
> > > around groups to just read:
> > > * Replaces the group table on the switch, if possible, by the
> > > 'groups->desired_groups'
> >
> > I can do that
> >
> > >
> > > * I notice that conntrack zone allocation for logical ports is still
> > > broken. I am not sure when it broke (it has been broken for a
> > > long time), but I remember some patches for a fix around it in
> > > incremental processing.
> > > For example, if you add the following test, you will notice that it fails.
> >
> > With your permission, I'll add this to PS 4 and get it fixed.
> > You have my permission (I don't know what PS 4 is). The test is
> > very minimal and just tries to see whether at least one logical
> > port has REG13 loaded, as an example.
>
> Thanks, and PS 4 = Patch Set 4 (i.e. the next version)
>
Just an FYI: if you fix the conntrack zone allocation correctly, two load
balancing tests will fail, and that should just be because the output has an
extra zone (which was previously just zero; I had missed the implication
of that until yesterday).
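(Illustrative only — the exact match/action text depends on the flow table and register layout, so treat these two lines as a hypothetical before/after of the ct action in the dumped flows, not real test output:)

```
ct(commit,zone=0,...)                      # before the fix: constant zero zone
ct(commit,zone=NXM_NX_REG13[0..15],...)    # after: per-port zone taken from REG13
```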
>
>
> >
> > >
> > > AT_SETUP([ovn -- conntrack zone allocation])
> > > AT_KEYWORDS([ovnconntrack])
> > > AT_SKIP_IF([test $HAVE_PYTHON = no])
> > > ovn_start
> > >
> > > # Logical network:
> > > # 2 logical switches "foo" (192.168.1.0/24) and "bar" (172.16.1.0/24)
> > > # connected to a router R1.
> > > # foo has foo1 to act as a client.
> > > # bar has bar1, bar2, bar3 to act as servers.
> > >
> > > net_add n1
> > >
> > > sim_add hv1
> > > as hv1
> > > ovs-vsctl add-br br-phys
> > > ovn_attach n1 br-phys 192.168.0.1
> > > for i in foo1 bar1 bar2 bar3; do
> > > ovs-vsctl -- add-port br-int $i -- \
> > > set interface $i external-ids:iface-id=$i \
> > > options:tx_pcap=hv1/$i-tx.pcap \
> > > options:rxq_pcap=hv1/$i-rx.pcap
> > > done
> > >
> > > ovn-nbctl create Logical_Router name=R1
> > > ovn-nbctl ls-add foo
> > > ovn-nbctl ls-add bar
> > >
> > > # Connect foo to R1
> > > ovn-nbctl lrp-add R1 foo 00:00:01:01:02:03 192.168.1.1/24
> > > ovn-nbctl lsp-add foo rp-foo -- set Logical_Switch_Port rp-foo \
> > > type=router options:router-port=foo addresses=\"00:00:01:01:02:03\
> "
> > >
> > > # Connect bar to R1
> > > ovn-nbctl lrp-add R1 bar 00:00:01:01:02:04 172.16.1.1/24
> > > ovn-nbctl lsp-add bar rp-bar -- set Logical_Switch_Port rp-bar \
> > > type=router options:router-port=bar addresses=\"00:00:01:01:02:04\
> "
> > >
> > > # Create logical port foo1 in foo
> > > ovn-nbctl lsp-add foo foo1 \
> > > -- lsp-set-addresses foo1 "f0:00:00:01:02:03 192.168.1.2"
> > >
> > > # Create logical port bar1, bar2 and bar3 in bar
> > > for i in `seq 1 3`; do
> > > ip=`expr $i + 1`
> > > ovn-nbctl lsp-add bar bar$i \
> > > -- lsp-set-addresses bar$i "f0:00:0a:01:02:$i 172.16.1.$ip"
> > > done
> > >
> > > OVS_WAIT_UNTIL([ovs-ofctl dump-flows br-int table=0 | grep REG13])
> > >
> > > OVN_CLEANUP([hv1])
> > >
> > > AT_CLEANUP
> > >
> > >
> > >
> > >
> > > ---
> > > v1->v2:
> > > - removed some obvious memory leaks left behind in physical.c
> > > v2->v3:
> > > - removed some less obvious memory leaks created by
> > > non-persisting ofctrl.c
> > >
> > > ovn/controller/binding.c | 120 +++++----------
> > > ovn/controller/binding.h | 1 -
> > > ovn/controller/encaps.c | 111 ++++++--------
> > > ovn/controller/lflow.c | 101 ++++---------
> > > ovn/controller/lflow.h | 4 +-
> > > ovn/controller/lport.c | 220 +++++----------------------
> > > ovn/controller/lport.h | 24 +--
> > > ovn/controller/ofctrl.c | 323 ++++++++++
> > > +-----------------------------
> > > ovn/controller/ofctrl.h | 16 +-
> > > ovn/controller/ovn-controller.c | 26 ++--
> > > ovn/controller/patch.c | 6 -
> > > ovn/controller/physical.c | 166 +++++----------------
> > > ovn/controller/physical.h | 3 +-
> > > 13 files changed, 295 insertions(+), 826 deletions(-)
> >
> > >
> > > diff --git a/ovn/controller/binding.c b/ovn/controller/binding.c
> > > index c26007d..f2552fa 100644
> > > --- a/ovn/controller/binding.c
> > > +++ b/ovn/controller/binding.c
> > > @@ -33,15 +33,6 @@ VLOG_DEFINE_THIS_MODULE(binding);
> > > /* A set of the iface-id values of local interfaces on this chassis.
> */
> > > static struct sset local_ids = SSET_INITIALIZER(&local_ids);
> > >
> > > -/* When this gets set to true, the next run will re-check all
> > > binding records. */
> > > -static bool process_full_binding = false;
> > > -
> > > -void
> > > -binding_reset_processing(void)
> > > -{
> > > - process_full_binding = true;
> > > -}
> > > -
> > > void
> > > binding_register_ovs_idl(struct ovsdb_idl *ovs_idl)
> > > {
> > > @@ -139,7 +130,6 @@ remove_local_datapath(struct hmap
> > > *local_datapaths, struct local_datapath *ld)
> > > }
> > > hmap_remove(local_datapaths, &ld->hmap_node);
> > > free(ld);
> > > - lflow_reset_processing();
> > > }
> > >
> > > static void
> > > @@ -156,9 +146,6 @@ add_local_datapath(struct hmap *local_datapaths,
> > > memcpy(&ld->uuid, &binding_rec->header_.uuid, sizeof ld->uuid);
> > > hmap_insert(local_datapaths, &ld->hmap_node,
> > > binding_rec->datapath->tunnel_key);
> > > - lport_index_reset();
> > > - mcgroup_index_reset();
> > > - lflow_reset_processing();
> > > }
> > >
> > > static void
> > > @@ -268,80 +255,49 @@ binding_run(struct controller_ctx *ctx, const
> > > struct ovsrec_bridge *br_int,
> > > }
> > >
> > > if (br_int) {
> > > - if (ctx->ovnsb_idl_txn && get_local_iface_ids(br_int,
> > > &lport_to_iface,
> > > - all_lports)) {
> > > - process_full_binding = true;
> > > - }
> > > - } else {
> > > - /* We have no integration bridge, therefore no local logical
> > ports.
> > > - * We'll remove our chassis from all port binding records
> below.
> > */
> > > - process_full_binding = true;
> > > + get_local_iface_ids(br_int, &lport_to_iface, all_lports);
> > > }
> > >
> > > - /* Run through each binding record to see if it is resident on
> this
> > > - * chassis and update the binding accordingly. This includes both
> > > - * directly connected logical ports and children of those ports.
> */
> > > - if (process_full_binding) {
> > > - /* Detect any entries in all_lports that have been deleted.
> > > - * In particular, this will catch localnet ports that we
> > > - * put in all_lports. */
> > > - struct sset removed_lports = SSET_INITIALIZER
> (&removed_lports);
> > > - sset_clone(&removed_lports, all_lports);
> > > -
> > > - struct hmap keep_local_datapath_by_uuid =
> > > - HMAP_INITIALIZER(&keep_local_datapath_by_uuid);
> > > - SBREC_PORT_BINDING_FOR_EACH(binding_rec, ctx->ovnsb_idl) {
> > > - sset_find_and_delete(&removed_lports, binding_rec->
> > logical_port);
> > > - consider_local_datapath(ctx, chassis_rec, binding_rec,
> > > - local_datapaths, &lport_to_iface,
> > > - all_lports);
> > > - struct local_datapath *ld = xzalloc(sizeof *ld);
> > > - memcpy(&ld->uuid, &binding_rec->header_.uuid, sizeof ld->
> > uuid);
> > > - hmap_insert(&keep_local_datapath_by_uuid, &ld->
> > uuid_hmap_node,
> > > - uuid_hash(&ld->uuid));
> > > - }
> > > - struct local_datapath *old_ld, *next;
> > > - HMAP_FOR_EACH_SAFE (old_ld, next, hmap_node, local_datapaths)
> {
> > > - if (!local_datapath_lookup_by_uuid
> > (&keep_local_datapath_by_uuid,
> > > - &old_ld->uuid)) {
> > > - remove_local_datapath(local_datapaths, old_ld);
> > > - }
> > > - }
> > > - struct local_datapath *ld;
> > > - HMAP_FOR_EACH_SAFE (ld, next, uuid_hmap_node,
> > > - &keep_local_datapath_by_uuid) {
> > > - hmap_remove(&keep_local_datapath_by_uuid, &ld->
> > uuid_hmap_node);
> > > - free(ld);
> > > - }
> > > - hmap_destroy(&keep_local_datapath_by_uuid);
> > > + /* Detect any entries in all_lports that have been deleted.
> > > + * In particular, this will catch localnet ports that we
> > > + * put in all_lports. */
> > > + struct sset removed_lports = SSET_INITIALIZER(&removed_lports);
> > > + sset_clone(&removed_lports, all_lports);
> > >
> > > - /* Any remaining entries in removed_lports are logical ports
> > that
> > > - * have been deleted and should also be removed from
> all_ports.
> > */
> > > - const char *cur_id;
> > > - SSET_FOR_EACH(cur_id, &removed_lports) {
> > > - sset_find_and_delete(all_lports, cur_id);
> > > - }
> > > - sset_destroy(&removed_lports);
> > > -
> > > - process_full_binding = false;
> > > - } else {
> > > - SBREC_PORT_BINDING_FOR_EACH_TRACKED(binding_rec, ctx->
> ovnsb_idl)
> > {
> > > - if (sbrec_port_binding_is_deleted(binding_rec)) {
> > > - /* If a port binding was bound to this chassis and
> > > removed before
> > > - * the ovs interface was removed, we'll catch that
> > > here and trigger
> > > - * a full bindings refresh. This is to see if we
> > > need to clear
> > > - * an entry out of local_datapaths. */
> > > - if (binding_rec->chassis == chassis_rec) {
> > > - process_full_binding = true;
> > > - poll_immediate_wake();
> > > - }
> > > - } else {
> > > - consider_local_datapath(ctx, chassis_rec, binding_rec,
> > > - local_datapaths,
> > &lport_to_iface,
> > > - all_lports);
> > > - }
> > > + struct hmap keep_local_datapath_by_uuid =
> > > + HMAP_INITIALIZER(&keep_local_datapath_by_uuid);
> > > + SBREC_PORT_BINDING_FOR_EACH(binding_rec, ctx->ovnsb_idl) {
> > > + sset_find_and_delete(&removed_lports, binding_rec->
> > logical_port);
> > > + consider_local_datapath(ctx, chassis_rec, binding_rec,
> > > + local_datapaths, &lport_to_iface,
> > > + all_lports);
> > > + struct local_datapath *ld = xzalloc(sizeof *ld);
> > > + memcpy(&ld->uuid, &binding_rec->header_.uuid, sizeof ld->
> uuid);
> > > + hmap_insert(&keep_local_datapath_by_uuid,
> &ld->uuid_hmap_node,
> > > + uuid_hash(&ld->uuid));
> > > + }
> > > + struct local_datapath *old_ld, *next;
> > > + HMAP_FOR_EACH_SAFE (old_ld, next, hmap_node, local_datapaths) {
> > > + if (!local_datapath_lookup_by_uuid
> (&keep_local_datapath_by_uuid,
> > > + &old_ld->uuid)) {
> > > + remove_local_datapath(local_datapaths, old_ld);
> > > }
> > > }
> > > + struct local_datapath *ld;
> > > + HMAP_FOR_EACH_SAFE (ld, next, uuid_hmap_node,
> > > + &keep_local_datapath_by_uuid) {
> > > + hmap_remove(&keep_local_datapath_by_uuid, &ld->
> uuid_hmap_node);
> > > + free(ld);
> > > + }
> > > + hmap_destroy(&keep_local_datapath_by_uuid);
> > > +
> > > + /* Any remaining entries in removed_lports are logical ports that
> > > + * have been deleted and should also be removed from all_ports. */
> > > + const char *cur_id;
> > > + SSET_FOR_EACH(cur_id, &removed_lports) {
> > > + sset_find_and_delete(all_lports, cur_id);
> > > + }
> > > + sset_destroy(&removed_lports);
> > >
> > > shash_destroy(&lport_to_iface);
> > > }
> > > diff --git a/ovn/controller/binding.h b/ovn/controller/binding.h
> > > index fbd16c8..dd764f2 100644
> > > --- a/ovn/controller/binding.h
> > > +++ b/ovn/controller/binding.h
> > > @@ -27,7 +27,6 @@ struct simap;
> > > struct sset;
> > >
> > > void binding_register_ovs_idl(struct ovsdb_idl *);
> > > -void binding_reset_processing(void);
> > > void binding_run(struct controller_ctx *, const struct ovsrec_bridge
> > *br_int,
> > > const char *chassis_id, struct hmap *local_datapaths,
> > > struct sset *all_lports);
> > > diff --git a/ovn/controller/encaps.c b/ovn/controller/encaps.c
> > > index d745e99..d99ba05 100644
> > > --- a/ovn/controller/encaps.c
> > > +++ b/ovn/controller/encaps.c
> > > @@ -68,11 +68,6 @@ static struct tunnel_ctx tc = {
> > > .port_names = SSET_INITIALIZER(&tc.port_names),
> > > };
> > >
> > > -/* Iterate over the full set of tunnels in both the OVS and SB
> databases
> > on
> > > - * the next wakeup. This is necessary when we add or remove a port in
> > OVS to
> > > - * handle the case where validation fails. */
> > > -static bool process_full_bridge = false;
> > > -
> > > static char *
> > > tunnel_create_name(const char *chassis_id)
> > > {
> > > @@ -229,11 +224,6 @@ tunnel_add(const struct sbrec_chassis
> *chassis_rec,
> > > sset_add(&tc.port_names, port_name);
> > > free(port_name);
> > > free(ports);
> > > - binding_reset_processing();
> > > - lport_index_reset();
> > > - mcgroup_index_reset();
> > > - lflow_reset_processing();
> > > - process_full_bridge = true;
> > > }
> > >
> > > static void
> > > @@ -259,10 +249,6 @@ bridge_delete_port(const struct ovsrec_bridge *br,
> > > ovsrec_bridge_verify_ports(br);
> > > ovsrec_bridge_set_ports(br, ports, n);
> > > free(ports);
> > > -
> > > - binding_reset_processing();
> > > - lflow_reset_processing();
> > > - process_full_bridge = true;
> > > }
> > > }
> > >
> > > @@ -365,65 +351,61 @@ encaps_run(struct controller_ctx *ctx, const
> > > struct ovsrec_bridge *br_int,
> > > * common. It would also require more bookkeeping to match up
> ports
> > and
> > > * interfaces. */
> > >
> > > - if (process_full_bridge || ovsrec_port_track_get_first(ctx->
> ovs_idl)
> > ||
> > > - ovsrec_interface_track_get_first(ctx->ovs_idl)) {
> > > - const struct ovsrec_port *port_rec;
> > > - struct chassis_hash_node *chassis_node, *next;
> > > + const struct ovsrec_port *port_rec;
> > > + struct chassis_hash_node *chassis_node, *next;
> > >
> > > - process_full_bridge = false;
> > > - sset_clear(&tc.port_names);
> > > + sset_clear(&tc.port_names);
> > >
> > > - /* Find all of the tunnel ports to remote chassis.
> > > - * Delete the tunnel ports from unknown remote chassis. */
> > > - OVSREC_PORT_FOR_EACH (port_rec, ctx->ovs_idl) {
> > > - sset_add(&tc.port_names, port_rec->name);
> > > - for (int i = 0; i < port_rec->n_interfaces; i++) {
> > > - sset_add(&tc.port_names, port_rec->interfaces[i]->
> name);
> > > - }
> > > + /* Find all of the tunnel ports to remote chassis.
> > > + * Delete the tunnel ports from unknown remote chassis. */
> > > + OVSREC_PORT_FOR_EACH (port_rec, ctx->ovs_idl) {
> > > + sset_add(&tc.port_names, port_rec->name);
> > > + for (int i = 0; i < port_rec->n_interfaces; i++) {
> > > + sset_add(&tc.port_names, port_rec->interfaces[i]->name);
> > > + }
> > >
> > > - const char *chassis_id = smap_get(&port_rec->external_
> ids,
> > > - "ovn-chassis-id");
> > > - if (chassis_id) {
> > > - chassis_node = lookup_chassis_id(chassis_id);
> > > - if (chassis_node) {
> > > - /* Populate the port's UUID the first time we
> > > see it after
> > > - * the port was added. */
> > > - if (uuid_is_zero(&chassis_node->port_uuid)) {
> > > - chassis_node->port_uuid = port_rec->
> > header_.uuid;
> > > - }
> > > - } else {
> > > - for (int i = 0; i < port_rec->n_interfaces; i++) {
> > > - sset_find_and_delete(&tc.port_names,
> > > - port_rec->interfaces[i]->
> > name);
> > > - }
> > > - sset_find_and_delete(&tc.port_names, port_rec->
> > name);
> > > - bridge_delete_port(tc.br_int, port_rec, NULL);
> > > + const char *chassis_id = smap_get(&port_rec->external_ids,
> > > + "ovn-chassis-id");
> > > + if (chassis_id) {
> > > + chassis_node = lookup_chassis_id(chassis_id);
> > > + if (chassis_node) {
> > > + /* Populate the port's UUID the first time we see it
> > after
> > > + * the port was added. */
> > > + if (uuid_is_zero(&chassis_node->port_uuid)) {
> > > + chassis_node->port_uuid = port_rec->header_.uuid;
> > > + }
> > > + } else {
> > > + for (int i = 0; i < port_rec->n_interfaces; i++) {
> > > + sset_find_and_delete(&tc.port_names,
> > > + port_rec->interfaces[i]->
> name);
> > > }
> > > + sset_find_and_delete(&tc.port_names, port_rec->name);
> > > + bridge_delete_port(tc.br_int, port_rec, NULL);
> > > }
> > > }
> > > + }
> > >
> > > - /* For each chassis that we previously created, check that
> both
> > the
> > > - * chassis and port still exist and are current. */
> > > - HMAP_FOR_EACH_SAFE (chassis_node, next, node,
> &tc.chassis_hmap)
> > {
> > > - chassis_rec = sbrec_chassis_get_for_uuid(ctx->ovnsb_idl,
> > > -
> > > &chassis_node->chassis_uuid);
> > > - port_rec = ovsrec_port_get_for_uuid(ctx->ovs_idl,
> > > - &chassis_node->
> > port_uuid);
> > > + /* For each chassis that we previously created, check that both
> the
> > > + * chassis and port still exist and are current. */
> > > + HMAP_FOR_EACH_SAFE (chassis_node, next, node, &tc.chassis_hmap) {
> > > + chassis_rec = sbrec_chassis_get_for_uuid(ctx->ovnsb_idl,
> > > + &chassis_node->
> > chassis_uuid);
> > > + port_rec = ovsrec_port_get_for_uuid(ctx->ovs_idl,
> > > + &chassis_node->port_uuid);
> > >
> > > - if (!chassis_rec) {
> > > - /* Delete tunnel port (if present) for missing
> chassis.
> > */
> > > - bridge_delete_port(tc.br_int, port_rec, chassis_node);
> > > - continue;
> > > - }
> > > + if (!chassis_rec) {
> > > + /* Delete tunnel port (if present) for missing chassis. */
> > > + bridge_delete_port(tc.br_int, port_rec, chassis_node);
> > > + continue;
> > > + }
> > >
> > > - if (!port_rec) {
> > > - /* Delete our representation of the chassis, then
> > > add back. */
> > > - bridge_delete_port(tc.br_int, NULL, chassis_node);
> > > - check_and_add_tunnel(chassis_rec, local_chassis_id);
> > > - } else {
> > > - /* Update tunnel. */
> > > - check_and_update_tunnel(port_rec, chassis_rec);
> > > - }
> > > + if (!port_rec) {
> > > + /* Delete our representation of the chassis, then add
> back.
> > */
> > > + bridge_delete_port(tc.br_int, NULL, chassis_node);
> > > + check_and_add_tunnel(chassis_rec, local_chassis_id);
> > > + } else {
> > > + /* Update tunnel. */
> > > + check_and_update_tunnel(port_rec, chassis_rec);
> > > }
> > > }
> > >
> > > @@ -444,7 +426,6 @@ encaps_run(struct controller_ctx *ctx, const
> > > struct ovsrec_bridge *br_int,
> > > * new version and then iterate over everything
> > > to see if it
> > > * is OK. */
> > > delete_encap_uuid(encap_hash_node);
> > > - process_full_bridge = true;
> > > poll_immediate_wake();
> > > }
> > >
> > > diff --git a/ovn/controller/lflow.c b/ovn/controller/lflow.c
> > > index 341ca08..6b13208 100644
> > > --- a/ovn/controller/lflow.c
> > > +++ b/ovn/controller/lflow.c
> > > @@ -41,17 +41,6 @@ static struct shash symtab;
> > > /* Contains an internal expr datastructure that represents an address
> > set. */
> > > static struct shash expr_address_sets;
> > >
> > > -static bool full_flow_processing = false;
> > > -static bool full_logical_flow_processing = false;
> > > -static bool full_neighbor_flow_processing = false;
> > > -
> > > -void
> > > -lflow_reset_processing(void)
> > > -{
> > > - full_flow_processing = true;
> > > - physical_reset_processing();
> > > -}
> > > -
> > > void
> > > lflow_init(void)
> > > {
> > > @@ -219,7 +208,8 @@ static void consider_logical_flow(const struct
> > > lport_index *lports,
> > > const struct simap *ct_zones,
> > > struct hmap *dhcp_opts_p,
> > > struct hmap *dhcpv6_opts_p,
> > > - uint32_t *conj_id_ofs_p);
> > > + uint32_t *conj_id_ofs_p,
> > > + struct hmap *flow_table);
> > >
> > > static bool
> > > lookup_port_cb(const void *aux_, const char *port_name, unsigned int
> > *portp)
> > > @@ -257,19 +247,13 @@ add_logical_flows(struct controller_ctx *ctx,
> > > const struct lport_index *lports,
> > > const struct hmap *local_datapaths,
> > > const struct hmap *patched_datapaths,
> > > struct group_table *group_table,
> > > - const struct simap *ct_zones)
> > > + const struct simap *ct_zones,
> > > + struct hmap *flow_table)
> > > {
> > > uint32_t conj_id_ofs = 1;
> > > const struct sbrec_logical_flow *lflow;
> > >
> > > - if (full_flow_processing) {
> > > - ovn_flow_table_clear();
> > > - ovn_group_table_clear(group_table, false);
> > > - full_logical_flow_processing = true;
> > > - full_neighbor_flow_processing = true;
> > > - full_flow_processing = false;
> > > - physical_reset_processing();
> > > - }
> > > + ovn_group_table_clear(group_table, false);
> > >
> > > struct hmap dhcp_opts = HMAP_INITIALIZER(&dhcp_opts);
> > > struct hmap dhcpv6_opts = HMAP_INITIALIZER(&dhcpv6_opts);
> > > @@ -286,31 +270,11 @@ add_logical_flows(struct controller_ctx *ctx,
> > > const struct lport_index *lports,
> > > dhcpv6_opt_row->type);
> > > }
> > >
> > > - if (full_logical_flow_processing) {
> > > - SBREC_LOGICAL_FLOW_FOR_EACH (lflow, ctx->ovnsb_idl) {
> > > - consider_logical_flow(lports, mcgroups, lflow,
> > local_datapaths,
> > > - patched_datapaths, group_table,
> > ct_zones,
> > > - &dhcp_opts, &dhcpv6_opts,
> > &conj_id_ofs);
> > > - }
> > > - full_logical_flow_processing = false;
> > > - } else {
> > > - SBREC_LOGICAL_FLOW_FOR_EACH_TRACKED (lflow, ctx->ovnsb_idl) {
> > > - /* Remove any flows that should be removed. */
> > > - if (sbrec_logical_flow_is_deleted(lflow)) {
> > > - ofctrl_remove_flows(&lflow->header_.uuid);
> > > - } else {
> > > - /* Now, add/modify existing flows. If the logical
> > > - * flow is a modification, just remove the flows
> > > - * for this row, and then add new flows. */
> > > - if (!sbrec_logical_flow_is_new(lflow)) {
> > > - ofctrl_remove_flows(&lflow->header_.uuid);
> > > - }
> > > - consider_logical_flow(lports, mcgroups, lflow,
> > > - local_datapaths,
> > patched_datapaths,
> > > - group_table, ct_zones,
> > > - &dhcp_opts, &dhcpv6_opts,
> > > &conj_id_ofs);
> > > - }
> > > - }
> > > + SBREC_LOGICAL_FLOW_FOR_EACH (lflow, ctx->ovnsb_idl) {
> > > + consider_logical_flow(lports, mcgroups, lflow,
> local_datapaths,
> > > + patched_datapaths, group_table,
> ct_zones,
> > > + &dhcp_opts, &dhcpv6_opts, &conj_id_ofs,
> > > + flow_table);
> > > }
> > >
> > > dhcp_opts_destroy(&dhcp_opts);
> > > @@ -327,7 +291,8 @@ consider_logical_flow(const struct lport_index
> > *lports,
> > > const struct simap *ct_zones,
> > > struct hmap *dhcp_opts_p,
> > > struct hmap *dhcpv6_opts_p,
> > > - uint32_t *conj_id_ofs_p)
> > > + uint32_t *conj_id_ofs_p,
> > > + struct hmap *flow_table)
> > > {
> > > /* Determine translation of logical table IDs to physical table
> IDs.
> > */
> > > bool ingress = !strcmp(lflow->pipeline, "ingress");
> > > @@ -468,8 +433,8 @@ consider_logical_flow(const struct lport_index
> > *lports,
> > > m->match.flow.conj_id += *conj_id_ofs_p;
> > > }
> > > if (!m->n) {
> > > - ofctrl_add_flow(ptable, lflow->priority, &m->match,
> > &ofpacts,
> > > - &lflow->header_.uuid);
> > > + ofctrl_add_flow(flow_table, ptable, lflow->priority, &m->
> > match,
> > > + &ofpacts);
> > > } else {
> > > uint64_t conj_stubs[64 / 8];
> > > struct ofpbuf conj;
> > > @@ -484,8 +449,8 @@ consider_logical_flow(const struct lport_index
> > *lports,
> > > dst->clause = src->clause;
> > > dst->n_clauses = src->n_clauses;
> > > }
> > > - ofctrl_add_flow(ptable, lflow->priority, &m->match, &conj,
> > > - &lflow->header_.uuid);
> > > + ofctrl_add_flow(flow_table, ptable, lflow->priority, &m->
> > match,
> > > + &conj);
> > > ofpbuf_uninit(&conj);
> > > }
> > > }
> > > @@ -513,7 +478,8 @@ static void
> > > consider_neighbor_flow(const struct lport_index *lports,
> > > const struct sbrec_mac_binding *b,
> > > struct ofpbuf *ofpacts_p,
> > > - struct match *match_p)
> > > + struct match *match_p,
> > > + struct hmap *flow_table)
>
> > > {
> > > const struct sbrec_port_binding *pb
> > > = lport_lookup_by_name(lports, b->logical_port);
> > > @@ -555,8 +521,7 @@ consider_neighbor_flow(const struct lport_index
> > *lports,
> > > ofpbuf_clear(ofpacts_p);
> > > put_load(mac.ea, sizeof mac.ea, MFF_ETH_DST, 0, 48, ofpacts_p);
> > >
> > > - ofctrl_add_flow(OFTABLE_MAC_BINDING, 100, match_p, ofpacts_p,
> > > - &b->header_.uuid);
> > > + ofctrl_add_flow(flow_table, OFTABLE_MAC_BINDING, 100, match_p,
> > > ofpacts_p);
> > > }
> > >
> > > /* Adds an OpenFlow flow to flow tables for each MAC binding in the
> OVN
> > > @@ -564,7 +529,8 @@ consider_neighbor_flow(const struct lport_index
> > *lports,
> > > * numbers. */
> > > static void
> > > add_neighbor_flows(struct controller_ctx *ctx,
> > > - const struct lport_index *lports)
> > > + const struct lport_index *lports,
> > > + struct hmap *flow_table)
> > > {
> > > struct ofpbuf ofpacts;
> > > struct match match;
> > > @@ -572,22 +538,8 @@ add_neighbor_flows(struct controller_ctx *ctx,
> > > ofpbuf_init(&ofpacts, 0);
> > >
> > > const struct sbrec_mac_binding *b;
> > > - if (full_neighbor_flow_processing) {
> > > - SBREC_MAC_BINDING_FOR_EACH (b, ctx->ovnsb_idl) {
> > > - consider_neighbor_flow(lports, b, &ofpacts, &match);
> > > - }
> > > - full_neighbor_flow_processing = false;
> > > - } else {
> > > - SBREC_MAC_BINDING_FOR_EACH_TRACKED (b, ctx->ovnsb_idl) {
> > > - if (sbrec_mac_binding_is_deleted(b)) {
> > > - ofctrl_remove_flows(&b->header_.uuid);
> > > - } else {
> > > - if (!sbrec_mac_binding_is_new(b)) {
> > > - ofctrl_remove_flows(&b->header_.uuid);
> > > - }
> > > - consider_neighbor_flow(lports, b, &ofpacts, &match);
> > > - }
> > > - }
> > > + SBREC_MAC_BINDING_FOR_EACH (b, ctx->ovnsb_idl) {
> > > + consider_neighbor_flow(lports, b, &ofpacts, &match,
> flow_table);
> > > }
> > >
> > > ofpbuf_uninit(&ofpacts);
> > > @@ -601,12 +553,13 @@ lflow_run(struct controller_ctx *ctx, const
> > > struct lport_index *lports,
> > > const struct hmap *local_datapaths,
> > > const struct hmap *patched_datapaths,
> > > struct group_table *group_table,
> > > - const struct simap *ct_zones)
> > > + const struct simap *ct_zones,
> > > + struct hmap *flow_table)
> > > {
> > > update_address_sets(ctx);
> > > add_logical_flows(ctx, lports, mcgroups, local_datapaths,
> > > - patched_datapaths, group_table, ct_zones);
> > > - add_neighbor_flows(ctx, lports);
> > > + patched_datapaths, group_table, ct_zones,
> > flow_table);
> > > + add_neighbor_flows(ctx, lports, flow_table);
> > > }
> > >
> > > void
> > > diff --git a/ovn/controller/lflow.h b/ovn/controller/lflow.h
> > > index ac058ff..d3ca5d1 100644
> > > --- a/ovn/controller/lflow.h
> > > +++ b/ovn/controller/lflow.h
> > > @@ -66,8 +66,8 @@ void lflow_run(struct controller_ctx *, const
> > > struct lport_index *,
> > > const struct hmap *local_datapaths,
> > > const struct hmap *patched_datapaths,
> > > struct group_table *group_table,
> > > - const struct simap *ct_zones);
> > > + const struct simap *ct_zones,
> > > + struct hmap *flow_table);
> > > void lflow_destroy(void);
> > > -void lflow_reset_processing(void);
> > >
> > > #endif /* ovn/lflow.h */
> > > diff --git a/ovn/controller/lport.c b/ovn/controller/lport.c
> > > index 5d8d0d0..e1ecf21 100644
> > > --- a/ovn/controller/lport.c
> > > +++ b/ovn/controller/lport.c
> > > @@ -17,7 +17,6 @@
> > >
> > > #include "lport.h"
> > > #include "hash.h"
> > > -#include "lflow.h"
> > > #include "openvswitch/vlog.h"
> > > #include "ovn/lib/ovn-sb-idl.h"
> > >
> > > @@ -25,112 +24,49 @@ VLOG_DEFINE_THIS_MODULE(lport);
> > >
> > > /* A logical port. */
> > > struct lport {
> > > - struct hmap_node name_node; /* Index by name. */
> > > - struct hmap_node key_node; /* Index by (dp_key, port_key). */
> > > - struct hmap_node uuid_node; /* Index by row uuid. */
> > > - struct uuid uuid;
> > > + struct hmap_node name_node; /* Index by name. */
> > > + struct hmap_node key_node; /* Index by (dp_key, port_key). */
> > > const struct sbrec_port_binding *pb;
> > > };
> > >
> > > -static bool full_lport_rebuild = false;
> > > -
> > > -void
> > > -lport_index_reset(void)
> > > -{
> > > - full_lport_rebuild = true;
> > > -}
> > > -
> > > void
> > > -lport_index_init(struct lport_index *lports)
> > > +lport_index_init(struct lport_index *lports, struct ovsdb_idl
> > *ovnsb_idl)
> > > {
> > > hmap_init(&lports->by_name);
> > > hmap_init(&lports->by_key);
> > > - hmap_init(&lports->by_uuid);
> > > -}
> > >
> > > -bool
> > > -lport_index_remove(struct lport_index *lports, const struct uuid
> *uuid)
> > > -{
> > > - const struct lport *port_ = lport_lookup_by_uuid(lports, uuid);
> > > - struct lport *port = CONST_CAST(struct lport *, port_);
> > > - if (port) {
> > > - hmap_remove(&lports->by_name, &port->name_node);
> > > - hmap_remove(&lports->by_key, &port->key_node);
> > > - hmap_remove(&lports->by_uuid, &port->uuid_node);
> > > - free(port);
> > > - return true;
> > > + const struct sbrec_port_binding *pb;
> > > + SBREC_PORT_BINDING_FOR_EACH (pb, ovnsb_idl) {
> > > + if (lport_lookup_by_name(lports, pb->logical_port)) {
> > > + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1,
> > 1);
> > > + VLOG_WARN_RL(&rl, "duplicate logical port name '%s'",
> > > + pb->logical_port);
> > > + continue;
> > > + }
> > > +
> > > + struct lport *p = xmalloc(sizeof *p);
> > > + hmap_insert(&lports->by_name, &p->name_node,
> > > + hash_string(pb->logical_port, 0));
> > > + hmap_insert(&lports->by_key, &p->key_node,
> > > + hash_int(pb->tunnel_key, pb->datapath->
> tunnel_key));
> > > + p->pb = pb;
> > > }
> > > - return false;
> > > }
> > >
> > > void
> > > -lport_index_clear(struct lport_index *lports)
> > > +lport_index_destroy(struct lport_index *lports)
> > > {
> > > /* Destroy all of the "struct lport"s.
> > > *
> > > - * We have to remove the node from all indexes. */
> > > + * We don't have to remove the node from both indexes. */
> > > struct lport *port, *next;
> > > HMAP_FOR_EACH_SAFE (port, next, name_node, &lports->by_name) {
> > > hmap_remove(&lports->by_name, &port->name_node);
> > > - hmap_remove(&lports->by_key, &port->key_node);
> > > - hmap_remove(&lports->by_uuid, &port->uuid_node);
> > > free(port);
> > > }
> > > - lflow_reset_processing();
> > > -}
> > > -
> > > -static void
> > > -consider_lport_index(struct lport_index *lports,
> > > - const struct sbrec_port_binding *pb)
> > > -{
> > > - if (lport_lookup_by_name(lports, pb->logical_port)) {
> > > - return;
> > > - }
> > > -
> > > - struct lport *p = xmalloc(sizeof *p);
> > > - hmap_insert(&lports->by_name, &p->name_node,
> > > - hash_string(pb->logical_port, 0));
> > > - hmap_insert(&lports->by_key, &p->key_node,
> > > - hash_int(pb->tunnel_key, pb->datapath->tunnel_key));
> > > - hmap_insert(&lports->by_uuid, &p->uuid_node,
> > > - uuid_hash(&pb->header_.uuid));
> > > - memcpy(&p->uuid, &pb->header_.uuid, sizeof p->uuid);
> > > - p->pb = pb;
> > > - lflow_reset_processing();
> > > -}
> > > -
> > > -void
> > > -lport_index_fill(struct lport_index *lports, struct ovsdb_idl
> > *ovnsb_idl)
> > > -{
> > > - const struct sbrec_port_binding *pb;
> > > - if (full_lport_rebuild) {
> > > - lport_index_clear(lports);
> > > - SBREC_PORT_BINDING_FOR_EACH (pb, ovnsb_idl) {
> > > - consider_lport_index(lports, pb);
> > > - }
> > > - full_lport_rebuild = false;
> > > - } else {
> > > - SBREC_PORT_BINDING_FOR_EACH_TRACKED (pb, ovnsb_idl) {
> > > - if (sbrec_port_binding_is_deleted(pb)) {
> > > - while (lport_index_remove(lports, &pb->header_.uuid)) {
> > > - ;
> > > - }
> > > - lflow_reset_processing();
> > > - } else {
> > > - consider_lport_index(lports, pb);
> > > - }
> > > - }
> > > - }
> > > -}
> > > -
> > > -void
> > > -lport_index_destroy(struct lport_index *lports)
> > > -{
> > > - lport_index_clear(lports);
> > >
> > > hmap_destroy(&lports->by_name);
> > > hmap_destroy(&lports->by_key);
> > > - hmap_destroy(&lports->by_uuid);
> > > }
> > >
> > > /* Finds and returns the lport with the given 'name', or NULL if no such lport
> > > @@ -148,20 +84,6 @@ lport_lookup_by_name(const struct lport_index *lports, const char *name)
> > > return NULL;
> > > }
> > >
> > > -const struct lport *
> > > -lport_lookup_by_uuid(const struct lport_index *lports,
> > > - const struct uuid *uuid)
> > > -{
> > > - const struct lport *lport;
> > > - HMAP_FOR_EACH_WITH_HASH (lport, uuid_node, uuid_hash(uuid),
> > > - &lports->by_uuid) {
> > > - if (uuid_equals(uuid, &lport->uuid)) {
> > > - return lport;
> > > - }
> > > - }
> > > - return NULL;
> > > -}
> > > -
> > > const struct sbrec_port_binding *
> > > lport_lookup_by_key(const struct lport_index *lports,
> > > uint32_t dp_key, uint16_t port_key)
> > > @@ -179,113 +101,43 @@ lport_lookup_by_key(const struct lport_index *lports,
> > >
> > > struct mcgroup {
> > > struct hmap_node dp_name_node; /* Index by (logical datapath, name). */
> > > - struct hmap_node uuid_node; /* Index by insert uuid. */
> > > - struct uuid uuid;
> > > const struct sbrec_multicast_group *mg;
> > > };
> > >
> > > -static bool full_mc_rebuild = false;
> > > -
> > > void
> > > -mcgroup_index_reset(void)
> > > -{
> > > - full_mc_rebuild = true;
> > > -}
> > > -
> > > -void
> > > -mcgroup_index_init(struct mcgroup_index *mcgroups)
> > > +mcgroup_index_init(struct mcgroup_index *mcgroups, struct ovsdb_idl *ovnsb_idl)
> > > {
> > > hmap_init(&mcgroups->by_dp_name);
> > > - hmap_init(&mcgroups->by_uuid);
> > > -}
> > >
> > > -void
> > > -mcgroup_index_remove(struct mcgroup_index *mcgroups, const struct uuid *uuid)
> > > -{
> > > - const struct mcgroup *mcgroup_ = mcgroup_lookup_by_uuid(mcgroups, uuid);
> > > - struct mcgroup *mcgroup = CONST_CAST(struct mcgroup *, mcgroup_);
> > > - if (mcgroup) {
> > > - hmap_remove(&mcgroups->by_dp_name, &mcgroup->dp_name_node);
> > > - hmap_remove(&mcgroups->by_uuid, &mcgroup->uuid_node);
> > > - free(mcgroup);
> > > + const struct sbrec_multicast_group *mg;
> > > + SBREC_MULTICAST_GROUP_FOR_EACH (mg, ovnsb_idl) {
> > > + const struct uuid *dp_uuid = &mg->datapath->header_.uuid;
> > > + if (mcgroup_lookup_by_dp_name(mcgroups, mg->datapath, mg->name)) {
> > > + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(1, 1);
> > > + VLOG_WARN_RL(&rl, "datapath "UUID_FMT" contains duplicate "
> > > + "multicast group '%s'", UUID_ARGS(dp_uuid), mg->name);
> > > + continue;
> > > + }
> > > +
> > > + struct mcgroup *m = xmalloc(sizeof *m);
> > > + hmap_insert(&mcgroups->by_dp_name, &m->dp_name_node,
> > > + hash_string(mg->name, uuid_hash(dp_uuid)));
> > > + m->mg = mg;
> > > }
> > > - lflow_reset_processing();
> > > }
> > >
> > > void
> > > -mcgroup_index_clear(struct mcgroup_index *mcgroups)
> > > +mcgroup_index_destroy(struct mcgroup_index *mcgroups)
> > > {
> > > struct mcgroup *mcgroup, *next;
> > > HMAP_FOR_EACH_SAFE (mcgroup, next, dp_name_node, &mcgroups->by_dp_name) {
> > > hmap_remove(&mcgroups->by_dp_name, &mcgroup->dp_name_node);
> > > - hmap_remove(&mcgroups->by_uuid, &mcgroup->uuid_node);
> > > free(mcgroup);
> > > }
> > > -}
> > > -
> > > -static void
> > > -consider_mcgroup_index(struct mcgroup_index *mcgroups,
> > > - const struct sbrec_multicast_group *mg)
> > > -{
> > > - const struct uuid *dp_uuid = &mg->datapath->header_.uuid;
> > > - if (mcgroup_lookup_by_dp_name(mcgroups, mg->datapath, mg->name)) {
> > > - return;
> > > - }
> > > -
> > > - struct mcgroup *m = xmalloc(sizeof *m);
> > > - hmap_insert(&mcgroups->by_dp_name, &m->dp_name_node,
> > > - hash_string(mg->name, uuid_hash(dp_uuid)));
> > > - hmap_insert(&mcgroups->by_uuid, &m->uuid_node,
> > > - uuid_hash(&mg->header_.uuid));
> > > - memcpy(&m->uuid, &mg->header_.uuid, sizeof m->uuid);
> > > - m->mg = mg;
> > > - lflow_reset_processing();
> > > -}
> > > -
> > > -void
> > > -mcgroup_index_fill(struct mcgroup_index *mcgroups, struct ovsdb_idl *ovnsb_idl)
> > > -{
> > > - const struct sbrec_multicast_group *mg;
> > > - if (full_mc_rebuild) {
> > > - mcgroup_index_clear(mcgroups);
> > > - SBREC_MULTICAST_GROUP_FOR_EACH (mg, ovnsb_idl) {
> > > - consider_mcgroup_index(mcgroups, mg);
> > > - }
> > > - full_mc_rebuild = false;
> > > - } else {
> > > - SBREC_MULTICAST_GROUP_FOR_EACH_TRACKED (mg, ovnsb_idl) {
> > > - if (sbrec_multicast_group_is_deleted(mg)) {
> > > - mcgroup_index_remove(mcgroups, &mg->header_.uuid);
> > > - lflow_reset_processing();
> > > - } else {
> > > - consider_mcgroup_index(mcgroups, mg);
> > > - }
> > > - }
> > > - }
> > > -}
> > > -
> > > -void
> > > -mcgroup_index_destroy(struct mcgroup_index *mcgroups)
> > > -{
> > > - mcgroup_index_clear(mcgroups);
> > >
> > > hmap_destroy(&mcgroups->by_dp_name);
> > > }
> > >
> > > -const struct mcgroup *
> > > -mcgroup_lookup_by_uuid(const struct mcgroup_index *mcgroups,
> > > - const struct uuid *uuid)
> > > -{
> > > - const struct mcgroup *mcgroup;
> > > - HMAP_FOR_EACH_WITH_HASH (mcgroup, uuid_node, uuid_hash(uuid),
> > > - &mcgroups->by_uuid) {
> > > - if (uuid_equals(&mcgroup->uuid, uuid)) {
> > > - return mcgroup;
> > > - }
> > > - }
> > > - return NULL;
> > > -}
> > > -
> > > const struct sbrec_multicast_group *
> > > mcgroup_lookup_by_dp_name(const struct mcgroup_index *mcgroups,
> > > const struct sbrec_datapath_binding *dp,
> > > diff --git a/ovn/controller/lport.h b/ovn/controller/lport.h
> > > index 9e9c6d3..0cad74a 100644
> > > --- a/ovn/controller/lport.h
> > > +++ b/ovn/controller/lport.h
> > > @@ -18,7 +18,6 @@
> > >
> > > #include <stdint.h>
> > > #include "openvswitch/hmap.h"
> > > -#include "uuid.h"
> > >
> > > struct ovsdb_idl;
> > > struct sbrec_datapath_binding;
> > > @@ -33,25 +32,15 @@ struct sbrec_datapath_binding;
> > > struct lport_index {
> > > struct hmap by_name;
> > > struct hmap by_key;
> > > - struct hmap by_uuid;
> > > };
> > >
> > > -void lport_index_reset(void);
> > > -void lport_index_init(struct lport_index *);
> > > -void lport_index_fill(struct lport_index *, struct ovsdb_idl *);
> > > -bool lport_index_remove(struct lport_index *, const struct uuid *);
> > > -void lport_index_clear(struct lport_index *);
> > > +void lport_index_init(struct lport_index *, struct ovsdb_idl *);
> > > void lport_index_destroy(struct lport_index *);
> > > -void lport_index_rebuild(void);
> > >
> > > const struct sbrec_port_binding *lport_lookup_by_name(
> > > const struct lport_index *, const char *name);
> > > const struct sbrec_port_binding *lport_lookup_by_key(
> > > const struct lport_index *, uint32_t dp_key, uint16_t port_key);
> > > -
> > > -const struct lport *lport_lookup_by_uuid(
> > > - const struct lport_index *, const struct uuid *uuid);
> > > -
> > >
> > > /* Multicast group index
> > > * =====================
> > > @@ -65,23 +54,14 @@ const struct lport *lport_lookup_by_uuid(
> > >
> > > struct mcgroup_index {
> > > struct hmap by_dp_name;
> > > - struct hmap by_uuid;
> > > };
> > >
> > > -void mcgroup_index_reset(void);
> > > -void mcgroup_index_init(struct mcgroup_index *);
> > > -void mcgroup_index_fill(struct mcgroup_index *, struct ovsdb_idl *);
> > > -void mcgroup_index_remove(struct mcgroup_index *, const struct uuid *);
> > > -void mcgroup_index_clear(struct mcgroup_index *);
> > > +void mcgroup_index_init(struct mcgroup_index *, struct ovsdb_idl *);
> > > void mcgroup_index_destroy(struct mcgroup_index *);
> > > -void mcgroup_index_rebuild(void);
> > >
> > > const struct sbrec_multicast_group *mcgroup_lookup_by_dp_name(
> > > const struct mcgroup_index *,
> > > const struct sbrec_datapath_binding *,
> > > const char *name);
> > >
> > > -const struct mcgroup *mcgroup_lookup_by_uuid(
> > > - const struct mcgroup_index *, const struct uuid *uuid);
> > > -
> > > #endif /* ovn/lport.h */
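The simplified lport.h interface above means the index is rebuilt from scratch on every main-loop iteration rather than maintained incrementally. The pattern can be sketched in plain C — a toy list-based index standing in for OVS's `struct hmap`, with `port_rec` and all other names here hypothetical:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for a Southbound Port_Binding row. */
struct port_rec { const char *logical_port; unsigned tunnel_key; };

struct lport {
    const struct port_rec *pb;
    struct lport *next;          /* simple list in place of hmap chaining */
};

struct lport_index { struct lport *head; };

/* Build the whole index from the current rows, as the patch now does
 * once per main-loop iteration instead of maintaining it incrementally. */
static void
lport_index_init(struct lport_index *idx,
                 const struct port_rec *rows, size_t n)
{
    idx->head = NULL;
    for (size_t i = 0; i < n; i++) {
        struct lport *p = malloc(sizeof *p);
        p->pb = &rows[i];
        p->next = idx->head;
        idx->head = p;
    }
}

static const struct port_rec *
lport_lookup_by_name(const struct lport_index *idx, const char *name)
{
    for (const struct lport *p = idx->head; p; p = p->next) {
        if (!strcmp(p->pb->logical_port, name)) {
            return p->pb;
        }
    }
    return NULL;
}

/* Freeing every entry is enough here; no per-index hmap_remove() calls
 * are needed when the whole structure dies with the iteration. */
static void
lport_index_destroy(struct lport_index *idx)
{
    for (struct lport *p = idx->head, *next; p; p = next) {
        next = p->next;
        free(p);
    }
    idx->head = NULL;
}
```

The trade the patch makes is visible even in this sketch: init/destroy cost is paid every iteration, but no removal, reset, or change-tracking paths exist to get out of sync.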
> > > diff --git a/ovn/controller/ofctrl.c b/ovn/controller/ofctrl.c
> > > index d7b3d3e..723998b 100644
> > > --- a/ovn/controller/ofctrl.c
> > > +++ b/ovn/controller/ofctrl.c
> > > @@ -19,7 +19,6 @@
> > > #include "dirs.h"
> > > #include "flow.h"
> > > #include "hash.h"
> > > -#include "hindex.h"
> > > #include "lflow.h"
> > > #include "ofctrl.h"
> > > #include "openflow/openflow.h"
> > > @@ -47,8 +46,7 @@ VLOG_DEFINE_THIS_MODULE(ofctrl);
> > >
> > > /* An OpenFlow flow. */
> > > struct ovn_flow {
> > > - struct hmap_node match_hmap_node; /* For match based hashing. */
> > > - struct hindex_node uuid_hindex_node; /* For uuid based hashing. */
> > > + struct hmap_node hmap_node; /* For match based hashing. */
> > > struct ovs_list list_node; /* For handling lists of flows. */
> > >
> > > /* Key. */
> > > @@ -56,15 +54,14 @@ struct ovn_flow {
> > > uint16_t priority;
> > > struct match match;
> > >
> > > - /* Data. UUID is used for disambiguation. */
> > > - struct uuid uuid;
> > > + /* Data. */
> > > struct ofpact *ofpacts;
> > > size_t ofpacts_len;
> > > };
> > >
> > > -static uint32_t ovn_flow_match_hash(const struct ovn_flow *);
> > > -static void ovn_flow_lookup(struct hmap *, const struct ovn_flow *target,
> > > - struct ovs_list *answers);
> > > +static uint32_t ovn_flow_hash(const struct ovn_flow *);
> > > +static struct ovn_flow *ovn_flow_lookup(struct hmap *flow_table,
> > > + const struct ovn_flow *target);
> > > static char *ovn_flow_to_string(const struct ovn_flow *);
> > > static void ovn_flow_log(const struct ovn_flow *, const char *action);
> > > static void ovn_flow_destroy(struct ovn_flow *);
> > > @@ -138,15 +135,14 @@ static enum mf_field_id mff_ovn_geneve;
> > >
> > > static ovs_be32 queue_msg(struct ofpbuf *);
> > >
> > > -static void ovn_flow_table_destroy(void);
> > > static struct ofpbuf *encode_flow_mod(struct ofputil_flow_mod *);
> > >
> > > static struct ofpbuf *encode_group_mod(const struct ofputil_group_mod *);
> > >
> > > -static void ofctrl_recv(const struct ofp_header *, enum ofptype);
> > > +static void ovn_flow_table_clear(struct hmap *flow_table);
> > > +static void ovn_flow_table_destroy(struct hmap *flow_table);
> > >
> > > -static struct hmap match_flow_table = HMAP_INITIALIZER(&match_flow_table);
> > > -static struct hindex uuid_flow_table = HINDEX_INITIALIZER(&uuid_flow_table);
> > > +static void ofctrl_recv(const struct ofp_header *, enum ofptype);
> > >
> > > void
> > > ofctrl_init(struct group_table *group_table)
> > > @@ -357,6 +353,9 @@ run_S_CLEAR_FLOWS(void)
> > > queue_msg(encode_flow_mod(&fm));
> > > VLOG_DBG("clearing all flows");
> > >
> > > + /* Clear installed_flows, to match the state of the switch. */
> > > + ovn_flow_table_clear(&installed_flows);
> > > +
> > > /* Send a group_mod to delete all groups. */
> > > struct ofputil_group_mod gm;
> > > memset(&gm, 0, sizeof gm);
> > > @@ -367,10 +366,6 @@ run_S_CLEAR_FLOWS(void)
> > > queue_msg(encode_group_mod(&gm));
> > > ofputil_uninit_group_mod(&gm);
> > >
> > > - /* Clear installed_flows, to match the state of the switch. */
> > > - ovn_flow_table_clear();
> > > - lflow_reset_processing();
> > > -
> > > /* Clear existing groups, to match the state of the switch. */
> > > if (groups) {
> > > ovn_group_table_clear(groups, true);
> > > @@ -513,7 +508,7 @@ void
> > > ofctrl_destroy(void)
> > > {
> > > rconn_destroy(swconn);
> > > - ovn_flow_table_destroy();
> > > + ovn_flow_table_destroy(&installed_flows);
> > > rconn_packet_counter_destroy(tx_counter);
> > > }
> > >
> > > @@ -563,144 +558,54 @@ ofctrl_recv(const struct ofp_header *oh, enum ofptype type)
> > >
> > > /* Flow table interfaces to the rest of ovn-controller. */
> > >
> > > -static void
> > > -log_ovn_flow_rl(struct vlog_rate_limit *rl, enum vlog_level level,
> > > - const struct ovn_flow *flow, const char *title)
> > > -{
> > > - if (!vlog_should_drop(&this_module, level, rl)) {
> > > - char *s = ovn_flow_to_string(flow);
> > > - vlog(&this_module, level, "%s for parent "UUID_FMT": %s",
> > > - title, UUID_ARGS(&flow->uuid), s);
> > > - free(s);
> > > - }
> > > -}
> > > -
> > > -/* Adds a flow to the collection associated with 'uuid'. The flow has the
> > > - * specified 'match' and 'actions' to the OpenFlow table numbered 'table_id'
> > > - * with the given 'priority'. The caller retains ownership of 'match' and
> > > - * 'actions'.
> > > +/* Adds a flow to 'desired_flows' with the specified 'match' and 'actions' to
> > > + * the OpenFlow table numbered 'table_id' with the given 'priority'. The
> > > + * caller retains ownership of 'match' and 'actions'.
> > > *
> > > - * Any number of flows may be associated with a given UUID. The flows with a
> > > - * given UUID must have a unique (table_id, priority, match) tuple. A
> > > - * duplicate within a generally indicates a bug in the ovn-controller code that
> > > - * generated it, so this functions logs a warning.
> > > + * This just assembles the desired flow table in memory. Nothing is actually
> > > + * sent to the switch until a later call to ofctrl_run().
> > > *
> > > - * (table_id, priority, match) tuples should also be unique for flows with
> > > - * different UUIDs, but it doesn't necessarily indicate a bug in
> > > - * ovn-controller, for two reasons. First, these duplicates could be caused by
> > > - * logical flows generated by ovn-northd, which aren't ovn-controller's fault;
> > > - * perhaps something should warn about these but the root cause is different.
> > > - * Second, these duplicates might be transient, that is, they might go away
> > > - * before the next call to ofctrl_run() if a call to ofctrl_remove_flows()
> > > - * removes one or the other.
> > > - *
> > > - * This just assembles the desired flow tables in memory. Nothing is actually
> > > - * sent to the switch until a later call to ofctrl_run(). */
> > > + * The caller should initialize its own hmap to hold the flows. */
> > > void
> > > -ofctrl_add_flow(uint8_t table_id, uint16_t priority,
> > > - const struct match *match, const struct ofpbuf *actions,
> > > - const struct uuid *uuid)
> > > +ofctrl_add_flow(struct hmap *desired_flows,
> > > + uint8_t table_id, uint16_t priority,
> > > + const struct match *match, const struct ofpbuf *actions)
> > > {
> > > - /* Structure that uses table_id+priority+various things as hashes. */
> > > struct ovn_flow *f = xmalloc(sizeof *f);
> > > f->table_id = table_id;
> > > f->priority = priority;
> > > f->match = *match;
> > > f->ofpacts = xmemdup(actions->data, actions->size);
> > > f->ofpacts_len = actions->size;
> > > - f->uuid = *uuid;
> > > - f->match_hmap_node.hash = ovn_flow_match_hash(f);
> > > - f->uuid_hindex_node.hash = uuid_hash(&f->uuid);
> > > -
> > > - /* Check to see if other flows exist with the same key (table_id priority,
> > > - * match criteria) and uuid. If so, discard this flow and log a
> > > - * warning. */
> > > - struct ovs_list existing;
> > > - ovn_flow_lookup(&match_flow_table, f, &existing);
> > > - struct ovn_flow *d;
> > > - LIST_FOR_EACH (d, list_node, &existing) {
> > > - if (uuid_equals(&f->uuid, &d->uuid)) {
> > > - /* Duplicate flows with the same UUID indicate some kind of bug
> > > - * (see the function-level comment), but we distinguish two
> > > - * cases:
> > > - *
> > > - * - If the actions for the duplicate flow are the same, then
> > > - * it's benign; it's hard to imagine how there could be a
> > > - * real problem. Log at INFO level.
> > > - *
> > > - * - If the actions are different, then one or the other set of
> > > - * actions must be wrong or (perhaps more likely) we've got a
> > > - * new set of actions replacing an old set but the caller
> > > - * neglected to use ofctrl_remove_flows() or
> > > - * ofctrl_set_flow() to do it properly. Log at WARN level to
> > > - * get some attention.
> > > - */
> > > - if (ofpacts_equal(f->ofpacts, f->ofpacts_len,
> > > - d->ofpacts, d->ofpacts_len)) {
> > > - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1);
> > > - log_ovn_flow_rl(&rl, VLL_INFO, f, "duplicate flow");
> > > - } else {
> > > - static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 1);
> > > - log_ovn_flow_rl(&rl, VLL_WARN, f,
> > > - "duplicate flow with modified action");
> > > -
> > > - /* It seems likely that the newer actions are the correct
> > > - * ones. */
> > > - free(d->ofpacts);
> > > - d->ofpacts = f->ofpacts;
> > > - d->ofpacts_len = f->ofpacts_len;
> > > - f->ofpacts = NULL;
> > > - }
> > > - ovn_flow_destroy(f);
> > > - return;
> > > + f->hmap_node.hash = ovn_flow_hash(f);
> > > +
> > > + if (ovn_flow_lookup(desired_flows, f)) {
> > > + static struct vlog_rate_limit rl = VLOG_RATE_LIMIT_INIT(5, 5);
> > > + if (!VLOG_DROP_INFO(&rl)) {
> > > + char *s = ovn_flow_to_string(f);
> > > + VLOG_INFO("dropping duplicate flow: %s", s);
> > > + free(s);
> > > }
> > > - }
> > > -
> > > - /* Otherwise, add the flow. */
> > > - hmap_insert(&match_flow_table, &f->match_hmap_node,
> > > - f->match_hmap_node.hash);
> > > - hindex_insert(&uuid_flow_table, &f->uuid_hindex_node,
> > > - f->uuid_hindex_node.hash);
> > > -}
> > >
> > > -/* Removes a bundles of flows from the flow table. */
> > > -void
> > > -ofctrl_remove_flows(const struct uuid *uuid)
> > > -{
> > > - struct ovn_flow *f, *next;
> > > - HINDEX_FOR_EACH_WITH_HASH_SAFE (f, next, uuid_hindex_node, uuid_hash(uuid),
> > > - &uuid_flow_table) {
> > > - if (uuid_equals(&f->uuid, uuid)) {
> > > - hmap_remove(&match_flow_table, &f->match_hmap_node);
> > > - hindex_remove(&uuid_flow_table, &f->uuid_hindex_node);
> > > - ovn_flow_destroy(f);
> > > - }
> > > + ovn_flow_destroy(f);
> > > + return;
> > > }
> > >
> > > - /* Remove any group_info information created by this logical flow. */
> > > - struct group_info *g, *next_g;
> > > - HMAP_FOR_EACH_SAFE (g, next_g, hmap_node, &groups->desired_groups) {
> > > - if (uuid_equals(&g->lflow_uuid, uuid)) {
> > > - hmap_remove(&groups->desired_groups, &g->hmap_node);
> > > - ds_destroy(&g->group);
> > > - free(g);
> > > - }
> > > - }
> > > + hmap_insert(desired_flows, &f->hmap_node, f->hmap_node.hash);
> > > }
> > >
> > > -/* Shortcut to remove all flows matching the supplied UUID and add this
> > > - * flow. */
> > > -void
> > > -ofctrl_set_flow(uint8_t table_id, uint16_t priority,
> > > - const struct match *match, const struct ofpbuf *actions,
> > > - const struct uuid *uuid)
> > > -{
> > > - ofctrl_remove_flows(uuid);
> > > - ofctrl_add_flow(table_id, priority, match, actions, uuid);
> > > -}
> > >
> > > /* ovn_flow. */
> > >
> > > +/* Returns a hash of the key in 'f'. */
> > > +static uint32_t
> > > +ovn_flow_hash(const struct ovn_flow *f)
> > > +{
> > > + return hash_2words((f->table_id << 16) | f->priority,
> > > + match_hash(&f->match, 0));
> > > +
> > > +}
> > > +
> > > /* Duplicate an ovn_flow structure. */
> > > struct ovn_flow *
> > > ofctrl_dup_flow(struct ovn_flow *src)
> > > @@ -711,60 +616,26 @@ ofctrl_dup_flow(struct ovn_flow *src)
> > > dst->match = src->match;
> > > dst->ofpacts = xmemdup(src->ofpacts, src->ofpacts_len);
> > > dst->ofpacts_len = src->ofpacts_len;
> > > - dst->uuid = src->uuid;
> > > - dst->match_hmap_node.hash = ovn_flow_match_hash(dst);
> > > - dst->uuid_hindex_node.hash = uuid_hash(&src->uuid);
> > > + dst->hmap_node.hash = ovn_flow_hash(dst);
> > > return dst;
> > > }
> > >
> > > -/* Returns a hash of the match key in 'f'. */
> > > -static uint32_t
> > > -ovn_flow_match_hash(const struct ovn_flow *f)
> > > -{
> > > - return hash_2words((f->table_id << 16) | f->priority,
> > > - match_hash(&f->match, 0));
> > > -}
> > > -
> > > -/* Compare two flows and return -1, 0, 1 based on whether a if less than,
> > > - * equal to or greater than b. */
> > > -static int
> > > -ovn_flow_compare_flows(struct ovn_flow *a, struct ovn_flow *b)
> > > -{
> > > - return uuid_compare_3way(&a->uuid, &b->uuid);
> > > -}
> > > -
> > > -/* Given a list of ovn_flows, goes through the list and returns
> > > - * a single flow, in a deterministic way. */
> > > +/* Finds and returns an ovn_flow in 'flow_table' whose key is identical to
> > > + * 'target''s key, or NULL if there is none. */
> > > static struct ovn_flow *
> > > -ovn_flow_select_from_list(struct ovs_list *flows)
> > > -{
> > > - struct ovn_flow *candidate;
> > > - struct ovn_flow *answer = NULL;
> > > - LIST_FOR_EACH (candidate, list_node, flows) {
> > > - if (!answer || ovn_flow_compare_flows(candidate, answer) < 0)
> {
> > > - answer = candidate;
> > > - }
> > > - }
> > > - return answer;
> > > -}
> > > -
> > > -/* Initializes and files in the supplied list with ovn_flows from 'flow_table'
> > > - * whose key is identical to 'target''s key. */
> > > -static void
> > > -ovn_flow_lookup(struct hmap *flow_table, const struct ovn_flow *target,
> > > - struct ovs_list *answer)
> > > +ovn_flow_lookup(struct hmap *flow_table, const struct ovn_flow *target)
> > > {
> > > struct ovn_flow *f;
> > >
> > > - ovs_list_init(answer);
> > > - HMAP_FOR_EACH_WITH_HASH (f, match_hmap_node, target->match_hmap_node.hash,
> > > + HMAP_FOR_EACH_WITH_HASH (f, hmap_node, target->hmap_node.hash,
> > > flow_table) {
> > > if (f->table_id == target->table_id
> > > && f->priority == target->priority
> > > && match_equal(&f->match, &target->match)) {
> > > - ovs_list_push_back(answer, &f->list_node);
> > > + return f;
> > > }
> > > }
> > > + return NULL;
> > > }
> > >
> > > static char *
> > > @@ -801,27 +672,20 @@ ovn_flow_destroy(struct ovn_flow *f)
> > > /* Flow tables of struct ovn_flow. */
> > >
> > > void
> > > -ovn_flow_table_clear(void)
> > > +ovn_flow_table_clear(struct hmap *flow_table)
> > > {
> > > struct ovn_flow *f, *next;
> > > - HMAP_FOR_EACH_SAFE (f, next, match_hmap_node, &match_flow_table) {
> > > - hmap_remove(&match_flow_table, &f->match_hmap_node);
> > > - hindex_remove(&uuid_flow_table, &f->uuid_hindex_node);
> > > - ovn_flow_destroy(f);
> > > - }
> > > -
> > > - HMAP_FOR_EACH_SAFE (f, next, match_hmap_node, &installed_flows) {
> > > - hmap_remove(&installed_flows, &f->match_hmap_node);
> > > + HMAP_FOR_EACH_SAFE (f, next, hmap_node, flow_table) {
> > > + hmap_remove(flow_table, &f->hmap_node);
> > > ovn_flow_destroy(f);
> > > }
> > > }
> > >
> > > static void
> > > -ovn_flow_table_destroy(void)
> > > +ovn_flow_table_destroy(struct hmap *flow_table)
> > > {
> > > - ovn_flow_table_clear();
> > > - hmap_destroy(&match_flow_table);
> > > - hindex_destroy(&uuid_flow_table);
> > > + ovn_flow_table_clear(flow_table);
> > > + hmap_destroy(flow_table);
> > > }
> > >
> > > /* Flow table update. */
> > > @@ -912,7 +776,7 @@ add_group_mod(const struct ofputil_group_mod *gm, struct ovs_list *msgs)
> > > *
> > > * This should be called after ofctrl_run() within the main loop. */
> > > void
> > > -ofctrl_put(int64_t nb_cfg)
> > > +ofctrl_put(struct hmap *flow_table, int64_t nb_cfg)
> > > {
> > > /* The flow table can be updated if the connection to the switch is up and
> > > * in the correct state and not backlogged with existing flow_mods. (Our
> > > @@ -920,6 +784,7 @@ ofctrl_put(int64_t nb_cfg)
> > > * between ovn-controller and OVS provides some buffering.) */
> > > if (state != S_UPDATE_FLOWS
> > > || rconn_packet_counter_n_packets(tx_counter)) {
> > > + ovn_flow_table_clear(flow_table);
> > > ovn_group_table_clear(groups, false);
> > > return;
> > > }
> > > @@ -960,10 +825,9 @@ ofctrl_put(int64_t nb_cfg)
> > > * longer desired, delete them; if any of them should have different
> > > * actions, update them. */
> > > struct ovn_flow *i, *next;
> > > - HMAP_FOR_EACH_SAFE (i, next, match_hmap_node, &installed_flows) {
> > > - struct ovs_list matches;
> > > - ovn_flow_lookup(&match_flow_table, i, &matches);
> > > - if (ovs_list_is_empty(&matches)) {
> > > + HMAP_FOR_EACH_SAFE (i, next, hmap_node, &installed_flows) {
> > > + struct ovn_flow *d = ovn_flow_lookup(flow_table, i);
> > > + if (!d) {
> > > /* Installed flow is no longer desirable. Delete it from the
> > > * switch and from installed_flows. */
> > > struct ofputil_flow_mod fm = {
> > > @@ -975,19 +839,9 @@ ofctrl_put(int64_t nb_cfg)
> > > add_flow_mod(&fm, &msgs);
> > > ovn_flow_log(i, "removing installed");
> > >
> > > - hmap_remove(&installed_flows, &i->match_hmap_node);
> > > + hmap_remove(&installed_flows, &i->hmap_node);
> > > ovn_flow_destroy(i);
> > > } else {
> > > - /* Since we still have desired flows that match this key,
> > > - * select one and compare both its actions and uuid.
> > > - * If the actions aren't the same, queue and update
> > > - * action for the install flow. If the uuid has changed
> > > - * update that as well. */
> > > - struct ovn_flow *d = ovn_flow_select_from_list(&matches);
> > > - if (!uuid_equals(&i->uuid, &d->uuid)) {
> > > - /* Update installed flow's UUID. */
> > > - i->uuid = d->uuid;
> > > - }
> > > if (!ofpacts_equal(i->ofpacts, i->ofpacts_len,
> > > d->ofpacts, d->ofpacts_len)) {
> > > /* Update actions in installed flow. */
> > > @@ -1004,42 +858,37 @@ ofctrl_put(int64_t nb_cfg)
> > >
> > > /* Replace 'i''s actions by 'd''s. */
> > > free(i->ofpacts);
> > > - i->ofpacts = xmemdup(d->ofpacts, d->ofpacts_len);
> > > + i->ofpacts = d->ofpacts;
> > > i->ofpacts_len = d->ofpacts_len;
> > > + d->ofpacts = NULL;
> > > + d->ofpacts_len = 0;
> > > }
> > > +
> > > + hmap_remove(flow_table, &d->hmap_node);
> > > + ovn_flow_destroy(d);
> > > }
> > > }
> > >
> > > - /* Iterate through the desired flows and add those that aren't found
> > > - * in the installed flow table. */
> > > - struct ovn_flow *c;
> > > - HMAP_FOR_EACH (c, match_hmap_node, &match_flow_table) {
> > > - struct ovs_list matches;
> > > - ovn_flow_lookup(&installed_flows, c, &matches);
> > > - if (ovs_list_is_empty(&matches)) {
> > > - /* We have a key that isn't in the installed flows, so
> > > - * look back into the desired flow list for all flows
> > > - * that match this key, and select the one to be installed. */
> > > - struct ovs_list candidates;
> > > - ovn_flow_lookup(&match_flow_table, c, &candidates);
> > > - struct ovn_flow *d = ovn_flow_select_from_list(&candidates);
> > > - /* Send flow_mod to add flow. */
> > > - struct ofputil_flow_mod fm = {
> > > - .match = d->match,
> > > - .priority = d->priority,
> > > - .table_id = d->table_id,
> > > - .ofpacts = d->ofpacts,
> > > - .ofpacts_len = d->ofpacts_len,
> > > - .command = OFPFC_ADD,
> > > - };
> > > - add_flow_mod(&fm, &msgs);
> > > - ovn_flow_log(d, "adding installed");
> > > -
> > > - /* Copy 'd' from 'flow_table' to installed_flows. */
> > > - struct ovn_flow *new_node = ofctrl_dup_flow(d);
> > > - hmap_insert(&installed_flows, &new_node->match_hmap_node,
> > > - new_node->match_hmap_node.hash);
> > > - }
> > > + /* The previous loop removed from 'flow_table' all of the flows that are
> > > + * already installed. Thus, any flows remaining in 'flow_table' need to
> > > + * be added to the flow table. */
> > > + struct ovn_flow *d;
> > > + HMAP_FOR_EACH_SAFE (d, next, hmap_node, flow_table) {
> > > + /* Send flow_mod to add flow. */
> > > + struct ofputil_flow_mod fm = {
> > > + .match = d->match,
> > > + .priority = d->priority,
> > > + .table_id = d->table_id,
> > > + .ofpacts = d->ofpacts,
> > > + .ofpacts_len = d->ofpacts_len,
> > > + .command = OFPFC_ADD,
> > > + };
> > > + add_flow_mod(&fm, &msgs);
> > > + ovn_flow_log(d, "adding installed");
> > > +
> > > + /* Move 'd' from 'flow_table' to installed_flows. */
> > > + hmap_remove(flow_table, &d->hmap_node);
> > > + hmap_insert(&installed_flows, &d->hmap_node, d->hmap_node.hash);
> > > }
> > >
> > > /* Iterate through the installed groups from previous runs. If they
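The core of the reworked ofctrl_put() above is a plain set reconciliation: installed flows missing from the desired table are deleted, flows with differing actions are modified in place, and whatever remains in the desired table is added. A minimal standalone sketch of that logic, with integer keys standing in for (table_id, priority, match) and integer actions for ofpacts (all names here hypothetical, not the OVS API):

```c
#include <stddef.h>

/* Toy flow: 'key' models (table_id, priority, match), 'action' models
 * the ofpacts, 'present' models membership in the hmap. */
struct flow { int key; int action; int present; };

/* Walk 'installed': delete entries missing from 'desired', update those
 * whose actions differ, and consume each matched desired entry (like
 * the hmap_remove()/ovn_flow_destroy() in the patch).  Anything left
 * over in 'desired' would then be sent as OFPFC_ADD. */
static void
reconcile(struct flow *installed, size_t n_inst,
          struct flow *desired, size_t n_des,
          int *n_del, int *n_upd, int *n_add)
{
    *n_del = *n_upd = *n_add = 0;
    for (size_t i = 0; i < n_inst; i++) {
        struct flow *d = NULL;
        for (size_t j = 0; j < n_des; j++) {
            if (desired[j].present && desired[j].key == installed[i].key) {
                d = &desired[j];
                break;
            }
        }
        if (!d) {
            installed[i].present = 0;             /* ~OFPFC_DELETE_STRICT */
            (*n_del)++;
        } else {
            if (d->action != installed[i].action) {
                installed[i].action = d->action;  /* ~OFPFC_MODIFY_STRICT */
                (*n_upd)++;
            }
            d->present = 0;                       /* consumed */
        }
    }
    for (size_t j = 0; j < n_des; j++) {
        if (desired[j].present) {                 /* leftover => ~OFPFC_ADD */
            (*n_add)++;
        }
    }
}
```

The real code gets the inner lookup in O(1) from the hmap hash; the observable effect — one pass over installed flows, then a sweep of the leftovers — is the same.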
> > > diff --git a/ovn/controller/ofctrl.h b/ovn/controller/ofctrl.h
> > > index d21a7fe..5cd4128 100644
> > > --- a/ovn/controller/ofctrl.h
> > > +++ b/ovn/controller/ofctrl.h
> > > @@ -32,7 +32,7 @@ struct group_table;
> > > /* Interface for OVN main loop. */
> > > void ofctrl_init(struct group_table *group_table);
> > > enum mf_field_id ofctrl_run(const struct ovsrec_bridge *br_int);
> > > -void ofctrl_put(int64_t nb_cfg);
> > > +void ofctrl_put(struct hmap *flow_table, int64_t nb_cfg);
> > > void ofctrl_wait(void);
> > > void ofctrl_destroy(void);
> > > int64_t ofctrl_get_cur_cfg(void);
> > > @@ -40,20 +40,12 @@ int64_t ofctrl_get_cur_cfg(void);
> > > struct ovn_flow *ofctrl_dup_flow(struct ovn_flow *source);
> > >
> > > /* Flow table interfaces to the rest of ovn-controller. */
> > > -void ofctrl_add_flow(uint8_t table_id, uint16_t priority,
> > > - const struct match *, const struct ofpbuf *ofpacts,
> > > - const struct uuid *uuid);
> > > -
> > > -void ofctrl_remove_flows(const struct uuid *uuid);
> > > -
> > > -void ofctrl_set_flow(uint8_t table_id, uint16_t priority,
> > > - const struct match *, const struct ofpbuf *ofpacts,
> > > - const struct uuid *uuid);
> > > +void ofctrl_add_flow(struct hmap *desired_flows, uint8_t table_id,
> > > + uint16_t priority, const struct match *,
> > > + const struct ofpbuf *ofpacts);
> > >
> > > void ofctrl_flow_table_clear(void);
> > >
> > > -void ovn_flow_table_clear(void);
> > > -
> >
> > > void ovn_group_table_clear(struct group_table *group_table,
> > > bool existing);
> > >
> > > diff --git a/ovn/controller/ovn-controller.c b/ovn/controller/ovn-controller.c
> > > index 66a364f..414e6b5 100644
> > > --- a/ovn/controller/ovn-controller.c
> > > +++ b/ovn/controller/ovn-controller.c
> > > @@ -309,9 +309,6 @@ get_nb_cfg(struct ovsdb_idl *idl)
> > > static struct hmap local_datapaths = HMAP_INITIALIZER(&local_datapaths);
> > > static struct hmap patched_datapaths = HMAP_INITIALIZER(&patched_datapaths);
> > >
> > > -static struct lport_index lports;
> > > -static struct mcgroup_index mcgroups;
> > > -
> > > /* Contains the names of all logical ports currently bound to the chassis
> > > * managed by this instance of ovn-controller. The contents are managed
> > > * in binding.c, but consumed elsewhere. */
> > > @@ -354,9 +351,6 @@ main(int argc, char *argv[])
> > > pinctrl_init();
> > > lflow_init();
> > >
> > > - lport_index_init(&lports);
> > > - mcgroup_index_init(&mcgroups);
> > > -
> > > /* Connect to OVS OVSDB instance. We do not monitor all tables by
> > > * default, so modules must register their interest explicitly. */
> > > struct ovsdb_idl_loop ovs_idl_loop = OVSDB_IDL_LOOP_INITIALIZER(
> > > @@ -412,9 +406,6 @@ main(int argc, char *argv[])
> > > free(ovnsb_remote);
> > > ovnsb_remote = new_ovnsb_remote;
> > > ovsdb_idl_set_remote(ovnsb_idl_loop.idl, ovnsb_remote, true);
> > > - binding_reset_processing();
> > > - lport_index_clear(&lports);
> > > - mcgroup_index_clear(&mcgroups);
> > > } else {
> > > free(new_ovnsb_remote);
> > > }
> > > @@ -443,8 +434,10 @@ main(int argc, char *argv[])
> > > patch_run(&ctx, br_int, chassis_id, &local_datapaths,
> > > &patched_datapaths);
> > >
> > > - lport_index_fill(&lports, ctx.ovnsb_idl);
> > > - mcgroup_index_fill(&mcgroups, ctx.ovnsb_idl);
> > > + static struct lport_index lports;
> > > + static struct mcgroup_index mcgroups;
> > > + lport_index_init(&lports, ctx.ovnsb_idl);
> > > + mcgroup_index_init(&mcgroups, ctx.ovnsb_idl);
> > >
> > > enum mf_field_id mff_ovn_geneve = ofctrl_run(br_int);
> > >
> > > @@ -452,20 +445,25 @@ main(int argc, char *argv[])
> > > update_ct_zones(&all_lports, &patched_datapaths, &ct_zones,
> > > ct_zone_bitmap);
> > >
> > > + struct hmap flow_table = HMAP_INITIALIZER(&flow_table);
> > > lflow_run(&ctx, &lports, &mcgroups, &local_datapaths,
> > > - &patched_datapaths, &group_table, &ct_zones);
> > > + &patched_datapaths, &group_table, &ct_zones,
> > > + &flow_table);
> > >
> > > physical_run(&ctx, mff_ovn_geneve,
> > > - br_int, chassis_id, &ct_zones,
> > > + br_int, chassis_id, &ct_zones, &flow_table,
> > > &local_datapaths, &patched_datapaths);
> > >
> > > - ofctrl_put(get_nb_cfg(ctx.ovnsb_idl));
> > > + ofctrl_put(&flow_table, get_nb_cfg(ctx.ovnsb_idl));
> > > + hmap_destroy(&flow_table);
> > > if (ctx.ovnsb_idl_txn) {
> > > int64_t cur_cfg = ofctrl_get_cur_cfg();
> > > if (cur_cfg && cur_cfg != chassis->nb_cfg) {
> > > sbrec_chassis_set_nb_cfg(chassis, cur_cfg);
> > > }
> > > }
> > > + mcgroup_index_destroy(&mcgroups);
> > > + lport_index_destroy(&lports);
> > > }
> > >
> > > unixctl_server_run(unixctl);
> > > diff --git a/ovn/controller/patch.c b/ovn/controller/patch.c
> > > index 02bf450..c8e47b4 100644
> > > --- a/ovn/controller/patch.c
> > > +++ b/ovn/controller/patch.c
> > > @@ -95,9 +95,6 @@ create_patch_port(struct controller_ctx *ctx,
> > > ovsrec_bridge_verify_ports(src);
> > > ovsrec_bridge_set_ports(src, ports, src->n_ports + 1);
> > >
> > > - lport_index_reset();
> > > - mcgroup_index_reset();
> > > - lflow_reset_processing();
> > > free(ports);
> > > }
> > >
> > > @@ -130,9 +127,6 @@ remove_port(struct controller_ctx *ctx,
> > > return;
> > > }
> > > }
> > > - lport_index_reset();
> > > - mcgroup_index_reset();
> > > - lflow_reset_processing();
> > > }
> > >
> > > /* Obtains external-ids:ovn-bridge-mappings from OVSDB and adds patch ports for
> > > diff --git a/ovn/controller/physical.c b/ovn/controller/physical.c
> > > index 23e3e2c..bf63300 100644
> > > --- a/ovn/controller/physical.c
> > > +++ b/ovn/controller/physical.c
> > > @@ -68,16 +68,6 @@ static struct hmap port_binding_uuids =
> > > HMAP_INITIALIZER(&port_binding_uuids);
> > > static struct hmap multicast_group_uuids =
> > > HMAP_INITIALIZER(&multicast_group_uuids);
> > >
> > > -/* UUID to identify OF flows not associated with ovsdb rows. */
> > > -static struct uuid *hc_uuid = NULL;
> > > -static bool full_binding_processing = false;
> > > -
> > > -void
> > > -physical_reset_processing(void)
> > > -{
> > > - full_binding_processing = true;
> > > -}
> > > -
> > > /* Maps from a chassis to the OpenFlow port number of the tunnel that can be
> > > * used to reach that chassis. */
> > > struct chassis_tunnel {
> > > @@ -179,7 +169,8 @@ consider_port_binding(enum mf_field_id mff_ovn_geneve,
> > > struct hmap *local_datapaths,
> > > struct hmap *patched_datapaths,
> > > const struct sbrec_port_binding *binding,
> > > - struct ofpbuf *ofpacts_p)
> > > + struct ofpbuf *ofpacts_p,
> > > + struct hmap *flow_table)
> > > {
> > > /* Skip the port binding if the port is on a datapath that is neither
> > > * local nor with any logical patch port connected, because local ports
> > > @@ -343,9 +334,8 @@ consider_port_binding(enum mf_field_id mff_ovn_geneve,
> > >
> > > /* Resubmit to first logical ingress pipeline table. */
> > > put_resubmit(OFTABLE_LOG_INGRESS_PIPELINE, ofpacts_p);
> > > - ofctrl_add_flow(OFTABLE_PHY_TO_LOG,
> > > - tag ? 150 : 100, &match, ofpacts_p,
> > > - &binding->header_.uuid);
> > > + ofctrl_add_flow(flow_table, OFTABLE_PHY_TO_LOG,
> > > + tag ? 150 : 100, &match, ofpacts_p);
> > >
> > > if (!tag && (!strcmp(binding->type, "localnet")
> > > || !strcmp(binding->type, "l2gateway"))) {
> > > @@ -355,8 +345,7 @@ consider_port_binding(enum mf_field_id mff_ovn_geneve,
> > > * action. */
> > > ofpbuf_pull(ofpacts_p, ofpacts_orig_size);
> > > match_set_dl_tci_masked(&match, 0, htons(VLAN_CFI));
> > > - ofctrl_add_flow(0, 100, &match, ofpacts_p,
> > > - &binding->header_.uuid);
> > > + ofctrl_add_flow(flow_table, 0, 100, &match, ofpacts_p);
> > > }
> > >
> > > /* Table 33, priority 100.
> > > @@ -386,8 +375,8 @@ consider_port_binding(enum mf_field_id mff_ovn_geneve,
> > >
> > > /* Resubmit to table 34. */
> > > put_resubmit(OFTABLE_CHECK_LOOPBACK, ofpacts_p);
> > > - ofctrl_add_flow(OFTABLE_LOCAL_OUTPUT, 100,
> > > - &match, ofpacts_p, &binding->header_.uuid);
> > > + ofctrl_add_flow(flow_table, OFTABLE_LOCAL_OUTPUT, 100,
> > > + &match, ofpacts_p);
> > >
> > > /* Table 34, Priority 100.
> > > * =======================
> > > @@ -401,8 +390,8 @@ consider_port_binding(enum mf_field_id mff_ovn_geneve,
> > > 0, MLF_ALLOW_LOOPBACK);
> > > match_set_reg(&match, MFF_LOG_INPORT - MFF_REG0, port_key);
> > > match_set_reg(&match, MFF_LOG_OUTPORT - MFF_REG0, port_key);
> > > - ofctrl_add_flow(OFTABLE_CHECK_LOOPBACK, 100,
> > > - &match, ofpacts_p, &binding->header_.uuid);
> > > + ofctrl_add_flow(flow_table, OFTABLE_CHECK_LOOPBACK, 100,
> > > + &match, ofpacts_p);
> > >
> > > /* Table 64, Priority 100.
> > > * =======================
> > > @@ -422,8 +411,8 @@ consider_port_binding(enum mf_field_id mff_ovn_geneve,
> > > put_load(0, MFF_IN_PORT, 0, 16, ofpacts_p);
> > > put_resubmit(OFTABLE_LOG_TO_PHY, ofpacts_p);
> > > put_stack(MFF_IN_PORT, ofpact_put_STACK_POP(ofpacts_p));
> > > - ofctrl_add_flow(OFTABLE_SAVE_INPORT, 100,
> > > - &match, ofpacts_p, &binding->header_.uuid);
> > > + ofctrl_add_flow(flow_table, OFTABLE_SAVE_INPORT, 100,
> > > + &match, ofpacts_p);
> > >
> > > /* Table 65, Priority 100.
> > > * =======================
> > > @@ -457,8 +446,8 @@ consider_port_binding(enum mf_field_id mff_ovn_geneve,
> > > ofpact_put_STRIP_VLAN(ofpacts_p);
> > > put_stack(MFF_IN_PORT, ofpact_put_STACK_POP(ofpacts_p));
> > > }
> > > - ofctrl_add_flow(OFTABLE_LOG_TO_PHY, 100,
> > > - &match, ofpacts_p, &binding->header_.uuid);
> > > + ofctrl_add_flow(flow_table, OFTABLE_LOG_TO_PHY, 100,
> > > + &match, ofpacts_p);
> > > } else if (!tun) {
> > > /* Remote port connected by localnet port */
> > > /* Table 33, priority 100.
> > > @@ -480,8 +469,8 @@ consider_port_binding(enum mf_field_id mff_ovn_geneve,
> > >
> > > /* Resubmit to table 33. */
> > > put_resubmit(OFTABLE_LOCAL_OUTPUT, ofpacts_p);
> > > - ofctrl_add_flow(OFTABLE_LOCAL_OUTPUT, 100,
> > > - &match, ofpacts_p, &binding->header_.uuid);
> > > + ofctrl_add_flow(flow_table, OFTABLE_LOCAL_OUTPUT, 100,
> > > + &match, ofpacts_p);
> > > } else {
> > > /* Remote port connected by tunnel */
> > >
> > > @@ -506,8 +495,8 @@ consider_port_binding(enum mf_field_id mff_ovn_geneve,
> > >
> > > /* Resubmit to table 33. */
> > > put_resubmit(OFTABLE_LOCAL_OUTPUT, ofpacts_p);
> > > - ofctrl_add_flow(OFTABLE_REMOTE_OUTPUT, 150, &match, ofpacts_p,
> > > - &binding->header_.uuid);
> > > + ofctrl_add_flow(flow_table, OFTABLE_REMOTE_OUTPUT, 150, &match,
> > > + ofpacts_p);
> > >
> > >
> > > match_init_catchall(&match);
> > > @@ -522,8 +511,8 @@ consider_port_binding(enum mf_field_id mff_ovn_geneve,
> > >
> > > /* Output to tunnel. */
> > > ofpact_put_OUTPUT(ofpacts_p)->port = ofport;
> > > - ofctrl_add_flow(OFTABLE_REMOTE_OUTPUT, 100,
> > > - &match, ofpacts_p, &binding->header_.uuid);
> > > + ofctrl_add_flow(flow_table, OFTABLE_REMOTE_OUTPUT, 100,
> > > + &match, ofpacts_p);
> > > }
> > > }
> > >
> > > @@ -533,7 +522,8 @@ consider_mc_group(enum mf_field_id mff_ovn_geneve,
> > > struct hmap *local_datapaths,
> > > const struct sbrec_multicast_group *mc,
> > > struct ofpbuf *ofpacts_p,
> > > - struct ofpbuf *remote_ofpacts_p)
> > > + struct ofpbuf *remote_ofpacts_p,
> > > + struct hmap *flow_table)
> > > {
> > > struct sset remote_chassis = SSET_INITIALIZER(&remote_chassis);
> > > struct match match;
> > > @@ -601,8 +591,8 @@ consider_mc_group(enum mf_field_id mff_ovn_geneve,
> > > * group as the logical output port. */
> > > put_load(mc->tunnel_key, MFF_LOG_OUTPORT, 0, 32, ofpacts_p);
> > >
> > > - ofctrl_add_flow(OFTABLE_LOCAL_OUTPUT, 100,
> > > - &match, ofpacts_p, &mc->header_.uuid);
> > > + ofctrl_add_flow(flow_table, OFTABLE_LOCAL_OUTPUT, 100,
> > > + &match, ofpacts_p);
> > > }
> > >
> > > /* Table 32, priority 100.
> > > @@ -639,52 +629,19 @@ consider_mc_group(enum mf_field_id mff_ovn_geneve,
> > > if (local_ports) {
> > > put_resubmit(OFTABLE_LOCAL_OUTPUT, remote_ofpacts_p);
> > > }
> > > - ofctrl_add_flow(OFTABLE_REMOTE_OUTPUT, 100,
> > > - &match, remote_ofpacts_p, &mc->header_.uuid);
> > > + ofctrl_add_flow(flow_table, OFTABLE_REMOTE_OUTPUT, 100,
> > > + &match, remote_ofpacts_p);
> > > }
> > > }
> > > sset_destroy(&remote_chassis);
> > > }
> > >
> > > -static bool
> > > -find_uuid_in_hmap(struct hmap *hmap_p, struct uuid *uuid)
> > > -{
> > > - struct uuid_hash_node *candidate;
> > > - HMAP_FOR_EACH_WITH_HASH (candidate, node, uuid_hash(uuid), hmap_p) {
> > > - if (uuid_equals(uuid, &candidate->uuid)) {
> > > - return true;
> > > - }
> > > - }
> > > - return false;
> > > -}
> > > -
> > > -/* Deletes the flows whose UUIDs are in 'old' but not 'new', and then replaces
> > > - * 'old' by 'new'. */
> > > -static void
> > > -rationalize_hmap_and_delete_flows(struct hmap *old, struct hmap *new)
> > > -{
> > > - struct uuid_hash_node *uuid_node, *old_node;
> > > - HMAP_FOR_EACH_SAFE (uuid_node, old_node, node, old) {
> > > - if (!find_uuid_in_hmap(new, &uuid_node->uuid)) {
> > > - ofctrl_remove_flows(&uuid_node->uuid);
> > > - }
> > > - }
> > > - hmap_swap(old, new);
> > > - HMAP_FOR_EACH_POP(uuid_node, node, new) {
> > > - free(uuid_node);
> > > - }
> > > -}
> > > -
> > > void
> > > physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
> > > const struct ovsrec_bridge *br_int, const char *this_chassis_id,
> > > - const struct simap *ct_zones,
> > > + const struct simap *ct_zones, struct hmap *flow_table,
> > > struct hmap *local_datapaths, struct hmap *patched_datapaths)
> > > {
> > > - if (!hc_uuid) {
> > > - hc_uuid = xmalloc(sizeof(struct uuid));
> > > - uuid_generate(hc_uuid);
> > > - }
> > >
> > > /* This bool tracks physical mapping changes. */
> > > bool physical_map_changed = false;
> > > @@ -822,10 +779,7 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
> > > }
> > > }
> > > if (physical_map_changed) {
> > > - full_binding_processing = true;
> > > -
> > > /* Reprocess logical flow table immediately. */
> > > - lflow_reset_processing();
> > > poll_immediate_wake();
> > > }
> > >
> > > @@ -835,68 +789,30 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
> > > /* Set up flows in table 0 for physical-to-logical translation and in table
> > > * 64 for logical-to-physical translation. */
> > > const struct sbrec_port_binding *binding;
> > > - if (full_binding_processing) {
> > > - struct hmap new_port_binding_uuids =
> > > - HMAP_INITIALIZER(&new_port_binding_uuids);
> > > - SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
> > > - /* Because it is possible in the above code to enter this
> > > - * for loop without having cleared the flow table first, we
> > > - * should clear the old flows to avoid collisions. */
> > > - ofctrl_remove_flows(&binding->header_.uuid);
> > > - consider_port_binding(mff_ovn_geneve, ct_zones, local_datapaths,
> > > - patched_datapaths, binding, &ofpacts);
> > > - struct uuid_hash_node *hash_node = xzalloc(sizeof *hash_node);
> > > - hash_node->uuid = binding->header_.uuid;
> > > - hmap_insert(&new_port_binding_uuids, &hash_node->node,
> > > - uuid_hash(&hash_node->uuid));
> > > - }
> > > - rationalize_hmap_and_delete_flows(&port_binding_uuids,
> > > - &new_port_binding_uuids);
> > > - hmap_destroy(&new_port_binding_uuids);
> > > - full_binding_processing = false;
> > > - } else {
> > > - SBREC_PORT_BINDING_FOR_EACH_TRACKED (binding, ctx->ovnsb_idl) {
> > > - if (sbrec_port_binding_is_deleted(binding)) {
> > > - ofctrl_remove_flows(&binding->header_.uuid);
> > > - } else {
> > > - if (!sbrec_port_binding_is_new(binding)) {
> > > - ofctrl_remove_flows(&binding->header_.uuid);
> > > - }
> > > - consider_port_binding(mff_ovn_geneve, ct_zones, local_datapaths,
> > > - patched_datapaths, binding, &ofpacts);
> > > - }
> > > - }
> > > + SBREC_PORT_BINDING_FOR_EACH (binding, ctx->ovnsb_idl) {
> > > + /* 'flow_table' starts empty on each trip through the main loop,
> > > + * so every port binding's flows are recomputed from scratch here
> > > + * and there are no stale flows to remove first. */
> > > + consider_port_binding(mff_ovn_geneve, ct_zones, local_datapaths,
> > > + patched_datapaths, binding, &ofpacts,
> > > + flow_table);
> > > }
> > >
> > > /* Handle output to multicast groups, in tables 32 and 33. */
> > > const struct sbrec_multicast_group *mc;
> > > struct ofpbuf remote_ofpacts;
> > > ofpbuf_init(&remote_ofpacts, 0);
> > > - struct hmap new_multicast_group_uuids =
> > > - HMAP_INITIALIZER(&new_multicast_group_uuids);
> > > SBREC_MULTICAST_GROUP_FOR_EACH (mc, ctx->ovnsb_idl) {
> > > /* As multicast groups are always reprocessed each time,
> > > * the first step is to clean the old flows for the group
> > > * so that we avoid warning messages on collisions. */
> > > - ofctrl_remove_flows(&mc->header_.uuid);
> > > consider_mc_group(mff_ovn_geneve, ct_zones,
> > > - local_datapaths, mc, &ofpacts, &remote_ofpacts);
> > > - struct uuid_hash_node *hash_node = xzalloc(sizeof *hash_node);
> > > - hash_node->uuid = mc->header_.uuid;
> > > - hmap_insert(&new_multicast_group_uuids, &hash_node->node,
> > > - uuid_hash(&hash_node->uuid));
> > > + local_datapaths, mc, &ofpacts, &remote_ofpacts,
> > > + flow_table);
> > > }
> > > - rationalize_hmap_and_delete_flows(&multicast_group_uuids,
> > > - &new_multicast_group_uuids);
> > > - hmap_destroy(&new_multicast_group_uuids);
> > >
> > > ofpbuf_uninit(&remote_ofpacts);
> > >
> > > - /* Because flows using the hard-coded uuid are recalculated each
> > > - * cycle, let's first remove the old flows to avoid duplicate flow
> > > - * warnings. */
> > > - ofctrl_remove_flows(hc_uuid);
> > > -
> > > /* Table 0, priority 100.
> > > * ======================
> > > *
> > > @@ -932,8 +848,7 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
> > >
> > > put_resubmit(OFTABLE_LOCAL_OUTPUT, &ofpacts);
> > >
> > > - ofctrl_add_flow(OFTABLE_PHY_TO_LOG, 100, &match, &ofpacts,
> > > - hc_uuid);
> > > + ofctrl_add_flow(flow_table, OFTABLE_PHY_TO_LOG, 100, &match, &ofpacts);
> > > }
> > >
> > > /* Add flows for VXLAN encapsulations. Due to the limited amount of
> > > @@ -965,7 +880,8 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
> > > put_load(1, MFF_LOG_FLAGS, MLF_RCV_FROM_VXLAN_BIT, 1, &ofpacts);
> > > put_resubmit(OFTABLE_LOG_INGRESS_PIPELINE, &ofpacts);
> > >
> > > - ofctrl_add_flow(OFTABLE_PHY_TO_LOG, 100, &match, &ofpacts, hc_uuid);
> > > + ofctrl_add_flow(flow_table, OFTABLE_PHY_TO_LOG, 100, &match,
> > > + &ofpacts);
> > > }
> > > }
> > >
> > > @@ -978,7 +894,7 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
> > > match_init_catchall(&match);
> > > ofpbuf_clear(&ofpacts);
> > > put_resubmit(OFTABLE_LOCAL_OUTPUT, &ofpacts);
> > > - ofctrl_add_flow(OFTABLE_REMOTE_OUTPUT, 0, &match, &ofpacts, hc_uuid);
> > > + ofctrl_add_flow(flow_table, OFTABLE_REMOTE_OUTPUT, 0, &match, &ofpacts);
> > >
> > > /* Table 34, Priority 0.
> > > * =======================
> > > @@ -992,7 +908,7 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
> > > put_load(0, MFF_REG0 + i, 0, 32, &ofpacts);
> > > }
> > > put_resubmit(OFTABLE_LOG_EGRESS_PIPELINE, &ofpacts);
> > > - ofctrl_add_flow(OFTABLE_CHECK_LOOPBACK, 0, &match, &ofpacts, hc_uuid);
> > > + ofctrl_add_flow(flow_table, OFTABLE_CHECK_LOOPBACK, 0, &match, &ofpacts);
> > >
> > > /* Table 64, Priority 0.
> > > * =======================
> > > @@ -1002,7 +918,7 @@ physical_run(struct controller_ctx *ctx, enum mf_field_id mff_ovn_geneve,
> > > match_init_catchall(&match);
> > > ofpbuf_clear(&ofpacts);
> > > put_resubmit(OFTABLE_LOG_TO_PHY, &ofpacts);
> > > - ofctrl_add_flow(OFTABLE_SAVE_INPORT, 0, &match, &ofpacts, hc_uuid);
> > > + ofctrl_add_flow(flow_table, OFTABLE_SAVE_INPORT, 0, &match, &ofpacts);
> > >
> > > ofpbuf_uninit(&ofpacts);
> > >
> > > diff --git a/ovn/controller/physical.h b/ovn/controller/physical.h
> > > index 28845b2..86ce93c 100644
> > > --- a/ovn/controller/physical.h
> > > +++ b/ovn/controller/physical.h
> > > @@ -43,8 +43,7 @@ struct simap;
> > > void physical_register_ovs_idl(struct ovsdb_idl *);
> > > void physical_run(struct controller_ctx *, enum mf_field_id mff_ovn_geneve,
> > > const struct ovsrec_bridge *br_int, const char *chassis_id,
> > > - const struct simap *ct_zones,
> > > + const struct simap *ct_zones, struct hmap *flow_table,
> > > struct hmap *local_datapaths, struct hmap *patched_datapaths);
> > > -void physical_reset_processing(void);
> > >
> > > #endif /* ovn/physical.h */
> > > --
> > > 2.7.4
> > >
> > > _______________________________________________
> > > dev mailing list
> > > [email protected]
> > > http://openvswitch.org/mailman/listinfo/dev
>