Thanks. I've just sent v2:
http://openvswitch.org/pipermail/dev/2016-May/071603.html

On 23.05.2016 23:18, Mauricio Vásquez wrote:
> Thanks, Ilya, for your answer,
> I'll try to take a look at your patch-set this week.
> 
> On Fri, May 20, 2016 at 11:55 AM, Ilya Maximets <i.maxim...@samsung.com 
> <mailto:i.maxim...@samsung.com>> wrote:
> 
>     Hi, Mauricio.
> 
>     My thoughts about TX queue management are described in the patch-set
>     "[PATCH RFC 0/6] dpif-netdev: Manual pinnig of RX queues + XPS."
>     ( http://openvswitch.org/pipermail/dev/2016-May/070902.html ).
> 
>     Shortly:
>     My solution is to allow the user to set the number of TX queues
>     for each device (options:n_txq), just like it is done for the number of
>     RX queues (options:n_rxq). PMD threads will choose an appropriate
>     TX queue dynamically using some kind of XPS logic.
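> 
>     As a rough sketch, the configuration could then look like the following
>     (a hypothetical example: 'dpdk0' is just an example port name, and
>     'options:n_txq' would only exist with the proposed patch-set, while
>     'options:n_rxq' is already available):
> 
>         # already supported: number of RX queues for the port
>         ovs-vsctl set Interface dpdk0 options:n_rxq=2
>         # proposed by the patch-set above, not in mainline yet
>         ovs-vsctl set Interface dpdk0 options:n_txq=2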
> 
>     This solution, I think, is more convenient for users, and it allows
>     solving various issues connected to the static 'tx_qid' distribution.
> 
>     Manual setting of 'n_txq' will allow users to manage the performance
>     of OVS more accurately, and it also simplifies the configuration code.
> 
>     Best regards, Ilya Maximets.
> 
>     On 20.05.2016 12:23, Mauricio Vásquez wrote:
>     > Hello,
>     >
>     > I noticed that pmd threads are created in a per-NUMA-node fashion;
>     > that is, pmd threads are only created on a specific NUMA node when
>     > there is at least one DPDK port on that node.
>     >
>     > My concern is that with this approach more threads than needed are
>     > created in some cases. For example, let's suppose a system with two
>     > ports, each with a single rx queue; in this case at most 2 pmd threads
>     > would be necessary, but if the user has set a pmd core mask including
>     > more than two cores, all of them would be created. (Some cores would do
>     > "nothing", just polling to check whether they have to be restarted.)
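>     >
>     > As a concrete sketch of that scenario (hypothetical setup, assuming
>     > cores 1-4 are on the same NUMA node as the two ports):
>     >
>     >     # two DPDK ports, each with a single rx queue (the default)
>     >     ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
>     >     ovs-vsctl add-port br0 dpdk1 -- set Interface dpdk1 type=dpdk
>     >     # pmd-cpu-mask enables cores 1-4 (0x1e); only two of the four pmd
>     >     # threads will have an rx queue to poll, the other two stay idle
>     >     ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x1e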
>     >
>     > Is there any particular reason for this behavior?
>     > Would it make sense to consider starting pmd threads on demand, i.e.,
>     > only when they are actually needed?
>     >
>     > Another related question is about the number of tx queues. I noticed
>     > that it is set to n_cores + 1, where n_cores is the total number of
>     > cores in the system. I think it is not common to have all cpu cores
>     > assigned to ovs, so many of those queues will not be used.
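>     >
>     > As a rough illustration (assumed numbers): on a host with 16 cores,
>     > 17 tx queues would be allocated for each DPDK device even if the
>     > pmd-cpu-mask enables only 2 cores; presumably the extra "+1" queue is
>     > meant for the non-pmd threads.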
>     >
>     > Would it make sense to determine the number of tx queues based on the
>     > core_mask, so that all of the queues are actually used?
>     >
>     > Thank you very much for your attention,
>     >
>     > Mauricio V,
> 
> 