On Thu, Sep 11, 2014 at 11:29 PM, Alex Wang <al...@nicira.com> wrote:
> You mean netdev_open(), or rxq_open(), or some other function?
>
I was referring to netdev_open().

> netdev_open() does not seem like a good place; it can be called from
> multiple places, and we need to keep a record of the n_rxq config in some
> higher-level module.
>
In that case you can add netdev_open_mq(n_rxq, n_txq) and call the new
function only when we need multiple queues.
Since the configuration information comes from the higher layer, we should
keep the tx and rx queue configuration at that layer.  I think we can keep
it in the dp_netdev structure.
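
Roughly what I have in mind, as a sketch only (netdev_open_mq() and the
dp_netdev n_rxq/n_txq fields below are proposed names from this thread, not
existing code):

    /* Proposed: like netdev_open(), but the caller asks for the number of
     * rx and tx queues up front.  Callers that do not need multiple queues
     * keep using plain netdev_open(). */
    int netdev_open_mq(const char *name, const char *type,
                       int n_rxq, int n_txq, struct netdev **netdevp);

    /* dpif-netdev would remember the requested counts in dp_netdev and
     * pass them down when it opens a port, e.g. from do_add_port(): */
    error = netdev_open_mq(devname, open_type, dp->n_rxq, dp->n_txq, &netdev);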


> As for rxq_open(), it is currently used to create a 'struct netdev_rxq',
> not for specifying the number of queues.
>
> On Thu, Sep 11, 2014 at 11:22 PM, Pravin Shelar <pshe...@nicira.com> wrote:
>>
>> On Thu, Sep 11, 2014 at 10:56 PM, Alex Wang <al...@nicira.com> wrote:
>> >> >
>> >> > Specifically, the default number of rx queues will be the number
>> >> > of dpdk interfaces on the numa node.  And the upcoming work
>> >> > will assign each rx queue to a different poll thread.  The default
>> >> > number of tx queues will be the number of cpu cores on the machine.
>> >> > Although not all the tx queues will be used, each poll thread will
>> >> > have its own queue for transmission on the dpdk interface.
>> >> >
>> >> I thought we had decided to create one rx queue for each core on the
>> >> local NUMA node.  Is there a problem with this?
>> >> Creating one rx queue per core is more predictable than the number of
>> >> devices on the switch at a given point.
>> >
>> >
>> >
>> > Actually, I was not aware of that.  But I'm okay with it.
>> >
>> >
>> >> > +    netdev->tx_q = dpdk_rte_mzalloc(n_cores * sizeof *netdev->tx_q);
>> >> > +    for (i = 0; i < n_cores; i++) {
>> >> >          rte_spinlock_init(&netdev->tx_q[i].tx_lock);
>> >> >      }
>> >> > +    netdev_->n_txq = n_cores;
>> >> > +    netdev_->n_rxq = dpdk_get_n_devs(netdev->socket_id);
>> >> >
>> >>
>> >> Rather than calculating n_tx_q and n_rx_q here, these values should be
>> >> calculated by dpif-netdev and passed down to the netdev implementation.
>> >
>> >
>> >
>> > I see what you mean.  I'll bring forward the netdev_set_multiq() patch
>> > (not posted yet) and use it to configure the n_rxq from dpif-netdev.
>> >
>> > For n_txq, since we always specify one per core, I'd like to still
>> > initialize it in netdev_dpdk_init(), so netdev_set_multiq() will just
>> > mark the n_txq argument as OVS_UNUSED.  What do you think?
>> >
>>
>> If we pass the number of queues via open, we can avoid the
>> netdev_set_multiq() function.
>> In case of any change in the number of queues, we can close the device
>> and reopen it with the new configuration.
>
>
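
To make that close-and-reopen approach a bit more concrete, the
reconfiguration path in dpif-netdev could look roughly like the sketch
below.  netdev_open_mq(), the dp->n_rxq/n_txq fields, and
reconfigure_port_queues() are all hypothetical names from this discussion,
not existing code, and rebuilding the port's rxq array afterwards is left
out:

    /* Sketch: when the requested queue counts change, drop the old netdev
     * and reopen it with the new configuration. */
    static int
    reconfigure_port_queues(struct dp_netdev *dp, struct dp_netdev_port *port,
                            int n_rxq, int n_txq)
    {
        char *name = xstrdup(netdev_get_name(port->netdev));
        int error;

        dp->n_rxq = n_rxq;          /* keep the config at the dpif layer */
        dp->n_txq = n_txq;

        netdev_close(port->netdev); /* tear down the old queue setup */
        error = netdev_open_mq(name, port->type, n_rxq, n_txq, &port->netdev);

        free(name);
        return error;
    }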