Hey Pravin & OVS wizards.

Just found something fun. If you're updating to DPDK 2.2, the TX handling
support is better: rte_mempool_create() is now OK to use with a non-zero
per-mbuf private data size.

You won't need the funky pointer arithmetic in __rte_pktmbuf_init() any
more, as you can call the standard rte_pktmbuf_init() from inside your
ovs_rte_pktmbuf_init().
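
Something like this, as a minimal sketch. It assumes struct dp_packet embeds
its struct rte_mbuf as the first member, and that dp_packet_init_dpdk() is
the usual OVS helper for initialising the rest of the struct:

static void
ovs_rte_pktmbuf_init(struct rte_mempool *mp, void *opaque_arg,
                     void *_m, unsigned i)
{
    struct rte_mbuf *m = _m;

    /* Standard DPDK init: it reads the per-mbuf private size from the
     * pool's rte_pktmbuf_pool_private and sets buf_addr to rte_mbuf +
     * priv_size, i.e. just past the dp_packet header, with no
     * hand-rolled pointer arithmetic needed. */
    rte_pktmbuf_init(mp, opaque_arg, _m, i);

    /* Initialise the OVS-specific part of the dp_packet. */
    dp_packet_init_dpdk((struct dp_packet *) m, m->buf_len);
}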

And use the following to set the private data size to fit your dp_packet:

dmp->mp = rte_mempool_create(mp_name, mp_size - 1, MBUF_SIZE(mtu),
                             MP_CACHE_SZ,
                             sizeof(struct dp_packet) - sizeof(struct rte_mbuf),
                             rte_pktmbuf_pool_init, NULL,
                             ovs_rte_pktmbuf_init, NULL,
                             socket_id, 0);
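
For what it's worth, the documented way to hand these sizes to DPDK 2.2 is
through a struct rte_pktmbuf_pool_private passed as the opaque argument to
rte_pktmbuf_pool_init(); rte_pktmbuf_init() then picks the private size up
from the pool. A sketch, reusing mp_name, mp_size, MBUF_SIZE(), MP_CACHE_SZ
and socket_id from above, and assuming MBUF_SIZE(mtu) leaves room for the
dp_packet overhead:

struct rte_pktmbuf_pool_private mbp_priv;

/* Per-mbuf private area: the dp_packet metadata that sits between
 * struct rte_mbuf and the packet data buffer. */
mbp_priv.mbuf_priv_size = sizeof(struct dp_packet)
                          - sizeof(struct rte_mbuf);
mbp_priv.mbuf_data_room_size = MBUF_SIZE(mtu) - sizeof(struct dp_packet);

dmp->mp = rte_mempool_create(mp_name, mp_size - 1, MBUF_SIZE(mtu),
                             MP_CACHE_SZ,
                             sizeof(struct rte_pktmbuf_pool_private),
                             rte_pktmbuf_pool_init, &mbp_priv,
                             ovs_rte_pktmbuf_init, NULL,
                             socket_id, 0);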

This way, if you ever decide to extend to things like ipv4_fragmentation
(e.g. if you have outgoing packets you want to fit the MTU), where the DPDK
frag library uses indirect buffers, you can be sure that all the pointers
will line up right for attach / detach / free.

So you can make one of these:

dmp->mp_indirect = rte_mempool_create(mp_name_indirect, mp_size - 1,
                                      sizeof(struct dp_packet), 32,
                                      sizeof(struct dp_packet) - sizeof(struct rte_mbuf),
                                      NULL, NULL,
                                      rte_pktmbuf_init, NULL,
                                      socket_id, 0);
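
As with the direct pool, a by-the-book variant initialises the pool's
private area explicitly, which matters here because rte_pktmbuf_detach()
reads the private size back from the indirect pool when it rebuilds
buf_addr. A sketch; the zero data room reflects that clones borrow the
direct mbuf's buffer:

struct rte_pktmbuf_pool_private ind_priv;

/* Must match the direct pool's priv size, so that detach puts
 * buf_addr just past the dp_packet header again. */
ind_priv.mbuf_priv_size = sizeof(struct dp_packet)
                          - sizeof(struct rte_mbuf);
ind_priv.mbuf_data_room_size = 0;   /* indirect mbufs carry no payload */

dmp->mp_indirect = rte_mempool_create(mp_name_indirect, mp_size - 1,
                                      sizeof(struct dp_packet), 32,
                                      sizeof(struct rte_pktmbuf_pool_private),
                                      rte_pktmbuf_pool_init, &ind_priv,
                                      rte_pktmbuf_init, NULL,
                                      socket_id, 0);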

and then, based on packet size, do the same as the ip_fragmentation example
in DPDK.
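
The core of that example boils down to roughly the following. A sketch:
pkt is the outgoing direct mbuf, already adjusted so it starts at the IPv4
header (the example strips the Ethernet header first and prepends fresh
ones to the fragments afterwards), and MAX_FRAGS is a hypothetical bound:

struct rte_mbuf *frags[MAX_FRAGS];
int nb_frags;

if (rte_pktmbuf_pkt_len(pkt) > mtu) {
    /* Fragment headers come from the direct pool; the payload pieces
     * are indirect mbufs attached to pkt's buffer, so no payload
     * bytes are copied. */
    nb_frags = rte_ipv4_fragment_packet(pkt, frags, MAX_FRAGS, mtu,
                                        dmp->mp, dmp->mp_indirect);

    /* The fragments hold refcounts on pkt's buffer, so the original
     * header mbuf can go straight away (as the example does). */
    rte_pktmbuf_free(pkt);

    if (nb_frags < 0) {
        /* fragmentation failed, e.g. the DF bit was set */
    } else {
        /* prepend Ethernet headers, transmit frags[0..nb_frags-1] */
    }
}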

Inside the rte_pktmbuf_free_seg() routine is some pointer arithmetic that
needs the private data size to be set correctly to find the original direct
buffer from the indirect one.
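
Paraphrasing what DPDK 2.2 does there (see rte_mbuf_from_indirect() in
rte_mbuf.h; this is an illustration, not a verbatim copy), with mi being
the indirect segment:

/* mi->buf_addr points at the direct mbuf's data buffer, which lives
 * priv_size bytes past the end of the direct struct rte_mbuf. With
 * priv_size == sizeof(struct dp_packet) - sizeof(struct rte_mbuf),
 * stepping back sizeof(*mi) + priv_size lands exactly on the mbuf
 * embedded in the original dp_packet. */
struct rte_mbuf *md = (struct rte_mbuf *)
    RTE_PTR_SUB(mi->buf_addr, sizeof(*mi) + mi->priv_size);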

Hope this is useful. I don't want to submit a patch myself, as my code has
all manner of other crazy application-specific tricks beyond this.

Regards,
Dave.