I see two primary functional changes here:

(1) Do not wake up the main thread every 100ms.
This means that if nothing is changing, the main thread will wake up less
often, so it does less work. This sounds good.

(2) Allow the database to be updated more often than every 100ms for
instant_stats (if things change often).
This means that whenever any connectivity changes, vswitchd will iterate
through all of the ports/bfd/etc. and push the statuses, regardless of
whether it has done so recently. Under constantly changing conditions,
this may generate more traffic to the database. My main concern is that
if vswitchd sends more transactions to the database, then the database
will send more responses, so vswitchd will spend more time processing
JSON replies precisely when it is already inundated with work. Do you
think this is likely to be a problem?
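If it does turn out to be a problem, one way to bound it would be a
minimal throttle on the pushes. A rough sketch (STATUS_UPDATE_INTERVAL,
last_status_update, and the surrounding structure are purely
illustrative, not from the patch):

    /* Hypothetical throttle: push statuses at most once per interval,
     * even if connectivity_seq changes more often than that. */
    #define STATUS_UPDATE_INTERVAL 100      /* ms */

    static long long int last_status_update = LLONG_MIN;

    seq = seq_read(connectivity_seq_get());
    if (seq != connectivity_seqno
        && time_msec() >= last_status_update + STATUS_UPDATE_INTERVAL) {
        last_status_update = time_msec();
        connectivity_seqno = seq;
        /* ... build and commit the status transaction ... */
    }

(A real version would also need a timer wait while throttled, so that a
change which arrives during the quiet period is eventually pushed.)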

More feedback inline.

On 11 April 2014 10:59, Alex Wang <al...@nicira.com> wrote:

> diff --git a/ofproto/ofproto-dpif.c b/ofproto/ofproto-dpif.c
> index 04d5454..7f327b9 100644
> --- a/ofproto/ofproto-dpif.c
> +++ b/ofproto/ofproto-dpif.c
> @@ -375,6 +375,8 @@ ofproto_dpif_send_packet_in(struct ofproto_dpif *ofproto,
>          COVERAGE_INC(packet_in_overflow);
>          free(CONST_CAST(void *, pin->up.packet));
>          free(pin);
> +    } else {
> +        seq_change(connectivity_seq_get());
>      }
>  }
>
>
Why do we signal a connectivity change (seq_change()) each time there is
a packet_in?


> +    /* Check the need to update status. */
> +    seq = seq_read(connectivity_seq_get());
> +    if (seq != connectivity_seqno) {
> +        struct ovsdb_idl_txn *txn;
> +
> +        connectivity_seqno = seq;
> +        txn = ovsdb_idl_txn_create(idl);
> +        HMAP_FOR_EACH (br, node, &all_bridges) {
> +            struct port *port;
> +
> +            br_refresh_stp_status(br);
> +            HMAP_FOR_EACH (port, hmap_node, &br->ports) {
> +                struct iface *iface;
> +
> +                port_refresh_stp_status(port);
> +                LIST_FOR_EACH (iface, port_elem, &port->ifaces) {
> +                    iface_refresh_netdev_status(iface);
> +                    iface_refresh_ofproto_status(iface);
> +                }
> +            }
> +        }
> +        ovsdb_idl_txn_commit(txn);
> +        ovsdb_idl_txn_destroy(txn); /* XXX */
> +    }
> +
>      run_system_stats();
> -    instant_stats_run();
>  }
>

This looks tidier and harder to miss. :-)

Regarding the idl txn, I see a logical difference when the transaction
returns TXN_INCOMPLETE: the comment above ovsdb_idl_txn_commit() says the
caller should call it again later if that code is returned, which
presumably allows re-use of the same transaction object until it
completes. Here the transaction is destroyed unconditionally (hence the
XXX, I assume). I don't know whether retrying is strictly required, or
whether it makes a difference in this situation; perhaps someone else
could chime in on this.
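If re-use does matter, here is a minimal sketch of what I have in mind,
keeping the transaction alive across iterations (the static "status_txn"
variable and the surrounding structure are hypothetical; only the
ovsdb_idl_txn_*() calls are the real IDL API):

    /* Retain the transaction until it completes, rather than destroying
     * it unconditionally after a single commit attempt. */
    static struct ovsdb_idl_txn *status_txn;

    if (!status_txn && seq != connectivity_seqno) {
        connectivity_seqno = seq;
        status_txn = ovsdb_idl_txn_create(idl);
        /* ... refresh the stp/netdev/ofproto statuses as above ... */
    }
    if (status_txn
        && ovsdb_idl_txn_commit(status_txn) != TXN_INCOMPLETE) {
        /* Success, no-op, abort, or error: in every case we are done
         * with this transaction object. */
        ovsdb_idl_txn_destroy(status_txn);
        status_txn = NULL;
    }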



>
>  void
> @@ -2482,7 +2421,6 @@ bridge_wait(void)
>      }
>
>      system_stats_wait();
> -    instant_stats_wait();
>  }
>

Previously, instant_stats_run() relied on the 100 ms wakeup to ensure
that its logic ran frequently. Now that that wakeup is removed, it would
be good to make sure that bridge_wait() waits on connectivity_seq_get()
for these instant stats. (While vswitchd does wait on that seq for
ofproto, I think it would be better to add an additional seq_wait() here,
tracking the bridge's own "connectivity_seqno", in case the other logic
is changed.)
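Concretely, assuming bridge.c keeps its own "connectivity_seqno" as in
the run path above, I am picturing something like:

    void
    bridge_wait(void)
    {
        ...
        /* Wake up when connectivity changes, so that the status update
         * above runs even if ofproto's own wait logic changes. */
        seq_wait(connectivity_seq_get(), connectivity_seqno);

        system_stats_wait();
    }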