> On Jun 8, 2015, at 9:36 AM, Ben Pfaff <[email protected]> wrote:
>
> +/* miniflow_builder */
> +
> +void
> +miniflow_builder_init(struct miniflow_builder *b)
> +{
> +    b->map = 0;
> +}
> +
> +void
> +miniflow_builder_to_miniflow(struct miniflow_builder *b,
> +                             struct miniflow *miniflow)
> +{
> +    miniflow->map = 0;
> +
> +    int max = count_1bits(b->map);
> +    miniflow->values_inline = max <= MINI_N_INLINE;
> +    if (!miniflow->values_inline) {
> +        miniflow->offline_values = xmalloc(max
> +                                           * sizeof *miniflow->offline_values);
In the current implementation the callers always reserve enough buffer space so
that miniflow_extract() always extracts to the inline buffer.
miniflow_extract() is called for every packet, but when the caches match we do
not need to copy the resulting miniflow anywhere. And when we do need to copy
the result, there is no separate memory allocation for it, as the miniflow
(with the right amount of inline buffer) is inlined into the cache entry.
So, this malloc, and the necessary free(), are going to significantly decrease
performance.
This could be circumvented by making MINI_N_INLINE big enough to fit the
extracted packet key in all cases, but then it would not be so “mini” any more.
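To make the inlining point concrete, here is a rough sketch of what such a
cache entry looks like (the names below are made up for illustration, they are
not the actual dpif-netdev definitions); the entry reserves the value space
itself, so storing an extracted key is a plain copy, with no per-packet
malloc()/free():

#define EXAMPLE_KEY_U64S 64   /* Illustrative upper bound on key size. */

/* Illustration only; not the real cache entry layout. */
struct example_cache_entry {
    uint32_t hash;                        /* Lookup hash, for illustration. */
    struct miniflow key;                  /* Map + inline values. */
    uint64_t extra_buf[EXAMPLE_KEY_U64S]; /* Extends the inline value space so
                                           * any extracted key fits without a
                                           * separate allocation. */
};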
Have you considered this kind of design:
struct miniflow_builder_flow {
    uint64_t map;
    struct flow flow;
};

struct miniflow_builder {
    union {
        struct miniflow miniflow;
        struct miniflow_builder_flow builder;
    };
};
The caller of miniflow_extract() would supply this from its stack. The flow is
transformed to a miniflow in-place, so the miniflow data is always inline. The
number of accessed cache lines is still more than in the current case, but
strictly less than in your current proposal. And this avoids the malloc/free.
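To make this more concrete, here is an untested sketch of the in-place
conversion (the function name is just for illustration); it reuses the helpers
from your patch and assumes the values immediately follow the map in both
union members, so the destination of value n is never past its source at u64
index idx (n <= idx always holds):

void
miniflow_builder_compact(struct miniflow_builder *b)
{
    struct miniflow *miniflow = &b->miniflow;
    uint64_t map = b->builder.map;    /* Save before clobbering the union. */
    uint64_t *values;
    int n = 0;
    int idx;

    miniflow->map = 0;
    miniflow->values_inline = true;   /* The data always stays inline. */
    values = miniflow_values(miniflow);

    MAP_FOR_EACH_INDEX (idx, map) {
        uint64_t value = flow_u64_value(&b->builder.flow, idx);

        if (OVS_LIKELY(value)) {
            miniflow->map |= UINT64_C(1) << idx;
            values[n++] = value;      /* Compacts forward, in place. */
        }
    }
}

The caller could then use b->miniflow directly for the lookup, and only copy it
out (it is always inline) on a cache miss.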
Jarno
> +    }
> +
> +    uint64_t *values = miniflow_values(miniflow);
> +    int n = 0;
> +    int idx;
> +    MAP_FOR_EACH_INDEX (idx, b->map) {
> +        uint64_t value = flow_u64_value(&b->flow, idx);
> +        if (OVS_LIKELY(value)) {
> +            miniflow->map |= UINT64_C(1) << idx;
> +            values[n++] = value;
> +        }
> +    }
> +}
> +
> +