On Mon, Mar 24, 2014 at 10:59:06AM -0700, Jarno Rajahalme wrote:
> This patch shrinks the struct ofpbuf from 104 to 48 bytes on 64-bit
> systems, or from 52 to 36 bytes on 32-bit systems (counting in the
> 'l7' removal by the previous patch).  This may help contribute to
> cache efficiency, and will speed up initializing, copying and
> manipulating ofpbufs.  This is potentially important for the DPDK
> datapath, but the rest of the code base may also see a little benefit.
> 
> Changes are:
> 
> - Remove 'l7' pointer (previous patch).
> - Use offsets instead of layer pointers for l2_5, l3, and l4 using
>   'l2' as basis.  Usually 'data' is the same as 'l2', but this is not
>   always the case (e.g., when parsing or constructing a packet), so it
>   cannot easily be used as the offset basis.  Also, packet parsing is
>   faster if we do not need to maintain the offsets each time we pull
>   data from the ofpbuf.
> - Use uint32_t for 'allocated' and 'size', as 2^32 is enough even for
>   largest possible messages/packets.
> - Use packed enum for 'source'.
> - Rearrange to avoid unnecessary padding.
> - Remove 'private_p', which was used only in two cases, both of which
>   had the invariant ('l2' == 'data'), so we can temporarily use 'l2'
>   as a private pointer.
> 
> Signed-off-by: Jarno Rajahalme <jrajaha...@nicira.com>
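For illustration, my reading of the layout described above is roughly the
sketch below.  This is a guess from the commit message, not the actual
patch -- the field names, widths, and ordering here are illustrative, with
UINT16_MAX standing in for "layer not present":

```c
#include <stdint.h>
#include <stddef.h>

/* Sketch of the shrunk ofpbuf -- illustrative only, not the patch. */
enum ofpbuf_source_sketch { OFPBUF_MALLOC_, OFPBUF_STACK_ };

struct ofpbuf_sketch {
    void *base;                 /* Start of allocated space. */
    void *data;                 /* Start of in-use data. */
    char *l2;                   /* Link layer; basis for the offsets. */
    uint32_t allocated;         /* 2^32 covers any message or packet. */
    uint32_t size;
    uint16_t l2_5_ofs;          /* Offset from 'l2', or UINT16_MAX if
                                 * the layer is not present. */
    uint16_t l3_ofs;            /* Ditto. */
    uint16_t l4_ofs;            /* Ditto. */
    enum ofpbuf_source_sketch source; /* Packed enum in the patch. */
};

/* Layer pointers get reconstructed from 'l2' on demand. */
static inline void *
ofpbuf_layer(const struct ofpbuf_sketch *b, uint16_t ofs)
{
    return ofs != UINT16_MAX ? (void *) (b->l2 + ofs) : NULL;
}
```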

ofpbuf_resize_l2() and ofpbuf_resize_l2_5() look kind of big to me for
inlining.

ofpbuf_resize_l2() looks rather redundant with ofpbuf_resize_l2_5().
I think it could be written as:

    static inline void *
    ofpbuf_resize_l2(struct ofpbuf *b, int increment)
    {
        ofpbuf_resize_l2_5(b, increment);
        OFPBUF_ADJUST_LAYER_OFFSET(b, l2_5, increment);
        return b->l2;
    }

I don't think OFPBUF_ADJUST_LAYER_OFFSET() really needs to be a macro,
e.g.:
    static inline void
    ofpbuf_adjust_layer_offset(uint16_t *offset, int increment)
    {
        if (*offset != UINT16_MAX) {
            *offset += increment;
        }
    }
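Spelled out as a self-contained demo (repeating the helper from above so it
compiles on its own): offsets still at the UINT16_MAX "no such layer"
sentinel pass through untouched, while everything else shifts by
'increment':

```c
#include <stdint.h>

/* Shift a layer offset only if the layer is actually present;
 * UINT16_MAX means "not present" and must stay UINT16_MAX. */
static inline void
ofpbuf_adjust_layer_offset(uint16_t *offset, int increment)
{
    if (*offset != UINT16_MAX) {
        *offset += increment;
    }
}
```

so e.g. a 4-byte header push would move a set l3/l4 offset forward by 4
while leaving an unset l2_5 offset alone.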

Acked-by: Ben Pfaff <b...@nicira.com>
_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev
