On Thu, Sep 11, 2014 at 10:56 PM, Alex Wang <al...@nicira.com> wrote:
>> >
>> > Specifically, the default number of rx queues will be the number
>> > of dpdk interfaces on the numa node.  And the upcoming work
>> > will assign each rx queue to a different poll thread.  The default
>> > number of tx queues will be the number of cpu cores on the machine.
>> > Although not all the tx queues will be used, each poll thread will
>> > have its own queue for transmission on the dpdk interface.
>> >
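[A minimal sketch of the per-thread tx queue idea described above; the
struct fields, names, and the core-id-to-queue mapping are illustrative
assumptions, not the actual patch:]

    #include <rte_ethdev.h>
    #include <rte_spinlock.h>

    /* Hypothetical per-device state mirroring the tx_q/tx_lock fields in
     * the snippet below; names are illustrative. */
    struct dpdk_tx_queue_sketch {
        rte_spinlock_t tx_lock;
    };

    struct netdev_dpdk_sketch {
        uint16_t port_id;
        struct dpdk_tx_queue_sketch *tx_q;  /* one entry per cpu core */
    };

    /* Each poll thread transmits on the tx queue indexed by its own core
     * id, so threads do not contend for the same queue in the common
     * case; the spinlock still guards against other senders sharing a
     * queue.  Return value of the burst call is not handled here. */
    static void
    sketch_send(struct netdev_dpdk_sketch *dev, unsigned int core_id,
                struct rte_mbuf **pkts, uint16_t cnt)
    {
        uint16_t qid = (uint16_t) core_id;   /* one tx queue per core */

        rte_spinlock_lock(&dev->tx_q[qid].tx_lock);
        rte_eth_tx_burst(dev->port_id, qid, pkts, cnt);
        rte_spinlock_unlock(&dev->tx_q[qid].tx_lock);
    }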
>> I thought we had decided to create one rx queue for each core on the
>> local numa node.  Is there a problem with this?  Creating one rx queue
>> for each core is more predictable than the number of devices on the
>> switch at a given point.
>
>
>
> Actually, I was not aware of that.  But I'm okay with it.
>
>
>> > +    netdev->tx_q = dpdk_rte_mzalloc(n_cores * sizeof *netdev->tx_q);
>> > +    for (i = 0; i < n_cores; i++) {
>> >          rte_spinlock_init(&netdev->tx_q[i].tx_lock);
>> >      }
>> > +    netdev_->n_txq = n_cores;
>> > +    netdev_->n_rxq = dpdk_get_n_devs(netdev->socket_id);
>> >
>>
>> Rather than calculating n_tx_q and n_rx_q here, these values should be
>> calculated by dpif-netdev and passed down to the netdev implementation.
>
>
>
> I see what you mean.  I'll bring forward the netdev_set_multiq() patch (not
> posted yet) and use it to configure n_rxq from dpif-netdev.
>
> For n_txq, since we always specify one per core, I'd like to still
> initialize it in netdev_dpdk_init(), so netdev_set_multiq will just mark
> the n_txq argument as OVS_UNUSED.  What do you think?
>
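[A hedged sketch of the set_multiq idea being discussed: dpif-netdev picks
n_rxq and hands it down, while the DPDK netdev keeps its one-tx-queue-per-
core policy and ignores n_txq.  The names and signature are assumptions
for illustration, not the posted patch:]

    struct netdev_sketch {
        int n_txq;   /* fixed to the number of cpu cores in netdev_dpdk_init() */
        int n_rxq;   /* chosen by dpif-netdev */
    };

    static int
    set_multiq_sketch(struct netdev_sketch *dev, unsigned int n_txq,
                      unsigned int n_rxq)
    {
        (void) n_txq;             /* effectively OVS_UNUSED, as proposed above */
        dev->n_rxq = (int) n_rxq;
        return 0;
    }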

If we pass the number of queues via open, we can avoid the
netdev_set_multiq() function.  If the number of queues changes, we can
close the device and reopen it with the new configuration.
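
[A hedged sketch of the "close and reopen with the new configuration"
idea, in terms of the underlying DPDK calls: stop the port, reconfigure
it with the new queue counts, set the queues back up, and start it again.
The descriptor count, the default port config, and the mempool argument
are placeholders, not values from any actual patch:]

    #include <rte_ethdev.h>

    static int
    reconfigure_queues_sketch(uint16_t port_id, uint16_t n_rxq, uint16_t n_txq,
                              unsigned int socket_id, struct rte_mempool *mp)
    {
        static const struct rte_eth_conf conf;   /* default port configuration */
        uint16_t q;
        int err;

        rte_eth_dev_stop(port_id);

        err = rte_eth_dev_configure(port_id, n_rxq, n_txq, &conf);
        if (err) {
            return err;
        }
        for (q = 0; q < n_rxq; q++) {
            err = rte_eth_rx_queue_setup(port_id, q, 512, socket_id, NULL, mp);
            if (err) {
                return err;
            }
        }
        for (q = 0; q < n_txq; q++) {
            err = rte_eth_tx_queue_setup(port_id, q, 512, socket_id, NULL);
            if (err) {
                return err;
            }
        }
        return rte_eth_dev_start(port_id);
    }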