On Fri, Aug 07, 2015 at 04:57:37PM -0700, Jarno Rajahalme wrote:
> This makes stage mask computation happen only when a subtable is
> inserted and allows simplification of the main lookup function.
> 
> Classifier benchmark shows that this speeds up the classification
> (with wildcards) about 5%.
> 
> Signed-off-by: Jarno Rajahalme <jrajaha...@nicira.com>

I don't think that flow_hash_in_minimask_range() actually does what its
name implies anymore (where's the range?).  It at least needs a comment
update since the comment still talks about parameters it doesn't have.
Ditto for minimatch_hash_range().

In a construction like this:
    if (map->tnl_map) {
        MAP_FOR_EACH_INDEX(idx, map->tnl_map) {
            dst_u64[idx] |= *p++;
        }
    }
it appears to me that the 'if' is redundant, since MAP_FOR_EACH_INDEX
simply iterates zero times when the map is empty.  In one case I see
there's an OVS_UNLIKELY on the 'if', but I wonder whether that provides
enough benefit to be worth keeping.
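If that's right, the whole construction collapses to just:

    MAP_FOR_EACH_INDEX(idx, map->tnl_map) {
        dst_u64[idx] |= *p++;
    }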

If we believe that tnl_map is usually 0, then it would be profitable to
test pkt_map first in the function below, so that the '&&' short-circuits
and bails out earlier when the maps differ:

    static inline bool
    miniflow_equal_maps(const struct miniflow *a, const struct miniflow *b)
    {
        return a->tnl_map == b->tnl_map && a->pkt_map == b->pkt_map;
    }
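
That is, something like:

    static inline bool
    miniflow_equal_maps(const struct miniflow *a, const struct miniflow *b)
    {
        /* pkt_map is more likely to differ than tnl_map, so compare it
         * first to let the '&&' bail out earlier in the common case. */
        return a->pkt_map == b->pkt_map && a->tnl_map == b->tnl_map;
    }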


While reading code, I came up with some additional comments that are not
directly related to the changes here.

First, classifier.c has a lot of "static inline" functions.  Does the
"inline" actually produce a measurable performance improvement?  If not,
then it's better to avoid "inline" in .c files, since it suppresses
otherwise useful "unused function" warnings.
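For example (a made-up fragment, not from the tree), with -Wall GCC
(and, I believe, Clang) warns about an unused plain "static" function
but says nothing about a "static inline" one, so dead code can linger
unnoticed:

    /* Hypothetical illustration only. */
    static int
    unused_helper(int x)            /* -Wunused-function fires if no caller. */
    {
        return x + 1;
    }

    static inline int
    unused_inline_helper(int x)     /* No warning, even with no caller. */
    {
        return x + 1;
    }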

Second, the amount of duplicated code because of tnl_map versus pkt_map
is starting to bug me.  If these were just a 2-element array, for
example, then miniflow_and_mask_matches_flow_wc() could, in my opinion,
be tons cleaner, as something like this:

static inline bool
miniflow_and_mask_matches_flow_wc(const struct miniflow *flow,
                                  const struct minimask *mask,
                                  const struct flow *target,
                                  struct flow_wildcards *wc)
{
    const uint64_t *flowp = miniflow_get_values(flow);
    const uint64_t *maskp = miniflow_get_values(&mask->masks);
    const uint64_t *target_u64 = (const uint64_t *)target;
    uint64_t *wc_u64 = (uint64_t *)&wc->masks;

    /* Pass 0 covers the tunnel fields (maps[0] == tnl_map), pass 1 the
     * rest of the flow (maps[1] == pkt_map). */
    for (int i = 0; i < 2; i++) {
        size_t idx;

        MAP_FOR_EACH_INDEX(idx, mask->masks.maps[i]) {
            uint64_t msk = *maskp++;

            uint64_t diff = (*flowp++ ^ target_u64[idx]) & msk;
            if (diff) {
                /* Only unwildcard if none of the differing bits is already
                 * exact-matched. */
                if (!(wc_u64[idx] & diff)) {
                    /* Keep one bit of the difference.  The selected bit may be
                     * different in big-endian vs. little-endian systems. */
                    wc_u64[idx] |= rightmost_1bit(diff);
                }
                return false;
            }

            /* Fill in the bits that were looked at. */
            wc_u64[idx] |= msk;
        }
        /* pkt_map indices are relative to the end of the tunnel metadata,
         * so skip past it before the second pass. */
        target_u64 += FLOW_TNL_U64S;
        wc_u64 += FLOW_TNL_U64S;
    }

    return true;
}

I noticed that MINIFLOW_IN_MAP could be a function, so I sent out a
patch (which applies on top of this one):
        http://openvswitch.org/pipermail/dev/2015-August/059029.html
