On 2015/6/10 19:16, Traynor, Kevin wrote:
-----Original Message-----
From: dev [mailto:dev-boun...@openvswitch.org] On Behalf Of Dongjun
Sent: Tuesday, June 9, 2015 4:36 AM
To: dev@openvswitch.org
Subject: [ovs-dev] Is the tx spinlock in __netdev_dpdk_vhost_send necessary?

This is the source code of "__netdev_dpdk_vhost_send" in master branch:
"
      ...
      /* There is vHost TX single queue, So we need to lock it for TX. */
      rte_spinlock_lock(&vhost_dev->vhost_tx_lock);

      do {
          unsigned int tx_pkts;

          tx_pkts = rte_vhost_enqueue_burst(virtio_dev, VIRTIO_RXQ,
                                            cur_pkts, cnt);
      ...
"

There is a spinlock for the vhost TX single queue, but it seems the DPDK functions
"virtio_dev_rx" and "virtio_dev_merge_rx", called from
"rte_vhost_enqueue_burst", already have a lock-free mechanism.
I tried removing the spinlock and ran a simple concurrency test: two
TCP flows, one from the main thread (core 0) and one from a pmd thread
(core 1), were sent to the same guest, and it worked fine.

I would greatly appreciate some help in clearing up my confusion.
The current thinking is that the locking should be provided by the application
(OVS) rather than the vhost library, as the application knows whether it is
needed. There was a recent DPDK patch to make locking optional in the vhost
library; however, I think it may be superseded by some other changes in the
DPDK vhost library. In any case, we are looking to remove the additional lock.



_______________________________________________
dev mailing list
dev@openvswitch.org
http://openvswitch.org/mailman/listinfo/dev
I get it, thank you.

