>>> +        /* Check the need to update status. */
>>> +        seq = seq_read(connectivity_seq_get());
>>> +        if (seq != connectivity_seqno) {
>>> +            struct ovsdb_idl_txn *txn;
>>> +
>>> +            connectivity_seqno = seq;
>>> +            txn = ovsdb_idl_txn_create(idl);
>>> +            HMAP_FOR_EACH (br, node, &all_bridges) {
>>> +                struct port *port;
>>> +
>>> +                br_refresh_stp_status(br);
>>> +                HMAP_FOR_EACH (port, hmap_node, &br->ports) {
>>> +                    struct iface *iface;
>>> +
>>> +                    port_refresh_stp_status(port);
>>> +                    LIST_FOR_EACH (iface, port_elem, &port->ifaces) {
>>> +                        iface_refresh_netdev_status(iface);
>>> +                        iface_refresh_ofproto_status(iface);
>>> +                    }
>>> +                }
>>> +            }
>>> +            ovsdb_idl_txn_commit(txn);
>>> +            ovsdb_idl_txn_destroy(txn);   /* XXX */
>>> +        }
>>> +
>>>         run_system_stats();
>>> -        instant_stats_run();
>>>     }
>>
>> This looks tidier and harder to miss :-). Regarding the idl_txn, I see a
>> logical difference when the transaction returns TXN_INCOMPLETE. The
>> description above ovsdb_idl_txn_commit() says that the caller should call
>> again later if this code is returned. Presumably this allows reuse of the
>> same transaction object when the transaction has not completed. I don't
>> know whether it is strictly required, or whether it makes a difference in
>> this situation---perhaps someone else could chime in on this?
>
> I'll discuss this with others and conduct more experiments. I think there
> may be a problem when a lot of netdev/tunnel statuses are changing fast.
> In that case, there will be many large transactions to OVSDB, jamming the
> RPC queue.
In an experiment with 5K BFD-monitored tunnels, flapping all sessions
together as fast as possible did reproduce the memory consumption issue.
So I'll add the TXN_INCOMPLETE check back and repost the series.
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev