On Fri, Oct 11, 2013 at 3:55 PM, Andy Zhou <az...@nicira.com> wrote:
> diff --git a/datapath/datapath.c b/datapath/datapath.c
> index 9e6df12..859a3b1 100644
> --- a/datapath/datapath.c
> +++ b/datapath/datapath.c
> +static void get_dp_megaflow_stats(struct datapath *dp,
> +               struct ovs_dp_megaflow_stats *stats)
> +{

Could we have get_dp_stats() return this information as well? It seems
like that would eliminate a lot of the boilerplate per-cpu collection
code.

> +static struct sw_flow *ovs_flow_tbl_lookup__ (struct flow_table *tbl,
> +                                             const struct sw_flow_key *key)
> +{
> +       u32 __attribute__((unused)) n_mask_hit;

There's a kernel macro that you could use here - __always_unused.

> diff --git a/datapath/datapath.h b/datapath/datapath.h
> index 64920de..ae4975d 100644
> --- a/datapath/datapath.h
> +++ b/datapath/datapath.h
> @@ -48,11 +48,16 @@
>   * @n_lost: Number of received packets that had no matching flow in the flow
>   * table that could not be sent to userspace (normally due to an overflow in
>   * one of the datapath's queues).
> + * @n_mask_hit: Number of masks looked up for flow match.
> + * (@n_hit + @n_missed) / @n_mask_hit will be the average masks looked up per
> + * packet.

Should the numerator and denominator be reversed in this comment? As
written, the ratio gives packets per mask lookup; the average number of
masks looked up per packet would be @n_mask_hit / (@n_hit + @n_missed).

> diff --git a/datapath/flow_table.c b/datapath/flow_table.c
> index 98eb809..367befe 100644
> --- a/datapath/flow_table.c
> +++ b/datapath/flow_table.c
> @@ -422,6 +423,7 @@ static struct sw_flow *masked_flow_lookup(struct table_instance *ti,
>         hash = flow_hash(&masked_key, key_start, key_end);
>         head = find_bucket(ti, hash);
>         hlist_for_each_entry_rcu(flow, head, hash_node[ti->node_ver]) {
> +               (*n_mask_hit)++;
>                 if (flow->mask == mask &&
>                     flow_cmp_masked_key(flow, &masked_key,
>                                           key_start, key_end))

This surprised me because it includes the number of hash collisions in
addition to the mask lookups. Obviously collisions impact performance
as well but it's not what I would intuitively expect to get from this
stat.

> +int ovs_flow_tbl_num_masks(const struct flow_table *table)
> +{
> +       struct sw_flow_mask *mask;
> +       int num = 0;
> +
> +       list_for_each_entry(mask, &table->mask_list, list)
> +               num++;
> +
> +       return num;
> +}
> +
> +

Extra blank line here.
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev