Thanks for the comments, Joe. Please see my replies inline:
On Thu, Apr 10, 2014 at 7:49 PM, Joe Stringer <j...@wand.net.nz> wrote:

> I see two primary functional changes here:

Great summary. This also makes me feel bad about my commit message. I'll
add these two points! ;D

> (1) Do not wake up the main thread every 100ms.
> This means that if nothing is changing, the main thread will wake up less
> often, so it does less work. This sounds good.

Yes, I do not like the instant stats logic.

> (2) Allow the database to be updated more often than every 100ms for
> instant_stats (if things change often).
> This means that whenever any connectivity changes, it will iterate through
> all of the ports/bfd/etc, and push the statuses, regardless of whether it
> has done this recently. This may result in more traffic to the database
> under constantly changing conditions. My main concern here would be if
> vswitchd sends more transactions to the database, then the database will
> send more responses, so vswitchd will spend more time processing JSON
> responses when it is already inundated with work to do. Do you think this
> is likely to be a problem?

The previous patch optimizes the update of bfd/cfm status (if the
status_changed flag is false, there is no bfd/cfm status update). This
patch also optimizes the update of netdev status. Therefore, although the
status-update check can run more often, there will be no duplicate
netdev/bfd/cfm status updates.

Regarding more transactions, the experiment did not show an observable
increase in CPU utilization when the (bfd/cfm/netdev) status is flapping,
so I do not want to worry about limiting the update frequency for now.

> More feedback inline.
> On 11 April 2014 10:59, Alex Wang <al...@nicira.com> wrote:
>
>> diff --git a/ofproto/ofproto-dpif.c b/ofproto/ofproto-dpif.c
>> index 04d5454..7f327b9 100644
>> --- a/ofproto/ofproto-dpif.c
>> +++ b/ofproto/ofproto-dpif.c
>> @@ -375,6 +375,8 @@ ofproto_dpif_send_packet_in(struct ofproto_dpif
>> *ofproto,
>>          COVERAGE_INC(packet_in_overflow);
>>          free(CONST_CAST(void *, pin->up.packet));
>>          free(pin);
>> +    } else {
>> +        seq_change(connectivity_seq_get());
>>      }
>>  }
>
> Why do we modify the connectivity status each time there is a packet_in?

This relates to your first summary point. ;D Originally,
ofproto_dpif_send_packet_in() depended on ovs-vswitchd being woken up
every 100ms to schedule the transmission in the connmgr module. Now that
the instant stats logic is removed, we need to explicitly notify the
global seq; otherwise, the main thread will keep sleeping. I found this
via a unit test failure.

>> +    /* Check the need to update status. */
>> +    seq = seq_read(connectivity_seq_get());
>> +    if (seq != connectivity_seqno) {
>> +        struct ovsdb_idl_txn *txn;
>> +
>> +        connectivity_seqno = seq;
>> +        txn = ovsdb_idl_txn_create(idl);
>> +        HMAP_FOR_EACH (br, node, &all_bridges) {
>> +            struct port *port;
>> +
>> +            br_refresh_stp_status(br);
>> +            HMAP_FOR_EACH (port, hmap_node, &br->ports) {
>> +                struct iface *iface;
>> +
>> +                port_refresh_stp_status(port);
>> +                LIST_FOR_EACH (iface, port_elem, &port->ifaces) {
>> +                    iface_refresh_netdev_status(iface);
>> +                    iface_refresh_ofproto_status(iface);
>> +                }
>> +            }
>> +        }
>> +        ovsdb_idl_txn_commit(txn);
>> +        ovsdb_idl_txn_destroy(txn);   /* XXX */
>> +    }
>> +
>>      run_system_stats();
>> -    instant_stats_run();
>>  }
>
> This looks tidier and harder to miss :-) . Regarding the idl_txn, I see a
> logical difference when the transaction returns TXN_INCOMPLETE. The
> description above ovsdb_idl_txn_commit() says that the caller should call
> again later if this code is returned.
> Presumably this allows re-use of the same transaction object if the
> transaction was not completed. I don't know whether it is strictly
> required, or if it makes a difference in this situation---perhaps someone
> else could chime in on this?

I'll discuss this with others and conduct more experiments. I think there
may be a problem when a lot of netdev/tunnel statuses are changing fast.
In that case, there will be many large transactions to OVSDB, jamming the
RPC queue.

>>  void
>> @@ -2482,7 +2421,6 @@ bridge_wait(void)
>>      }
>>
>>      system_stats_wait();
>> -    instant_stats_wait();
>>  }
>
> Previously the instant_stats_run() relied on the 100ms wakeup to ensure
> that its logic was hit frequently. If that logic is removed, it would be
> good to make sure that bridge waits on connectivity_seq() for these
> instant_stats. (While vswitchd does wait on that seq for ofproto, I think
> it would be better to add an additional seq_wait here to track against the
> bridge's "connectivity_seqno", in case the other logic is changed.)

Yeah, I didn't do it because ofproto_wait() waits on it. Still, I'll take
your advice and add it, in case of any future change in ofproto_wait().
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev