Thank you, Daniele. I like the way the multi-queue round-robin assignment works; I just need to be able to define the mask of cores for the DPDK port, and to identify the primary one for managing timers or the tx queue.
I found it quite awkward to nail down a pmd lcore id to do rte_timer_manage for each netdev. The other_config field would be fine for a start, I guess. You could set a mask for distributing the rxqs, plus a core number for the lcore to tie the tx queue and other future netdev housekeeping (rte_timer management) to.

Regards,

Dave.

On 9/30/15, 12:50 PM, "Daniele Di Proietto" <diproiet...@vmware.com> wrote:

>
>
>On 30/09/2015 04:44, "David Evans" <davidjoshuaev...@gmail.com> wrote:
>
>>Hi OVS (Ben particularly :) )
>>
>>How do I get OVS to assign ports to the PMDs that I choose?
>>
>>If I have, say, 6 or 12 ports and I want them distributed evenly across a
>>mask of 12 or more cores on a multi-node NUMA system (or where I know
>>the NICs are even on separate PCI buses), what code do I touch to have
>>more deterministic control over this?
>>
>
>Currently each pmd thread loads the rx queues from the NICs on its NUMA
>socket.
>If you create more than one pmd thread per NUMA socket, the queues will be
>assigned in a round-robin fashion.
>
>The function that does this is pmd_load_queues(). It is called in
>pmd_thread_main().
>
>We're discussing a way to provide more control for the user.
>
>>Cheers,
>>
>>Dave.
>>

_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss