On 2015/10/22 21:49, Bruce Richardson wrote:
> On Thu, Oct 22, 2015 at 06:45:50PM +0900, Tetsuya Mukawa wrote:
>> The patch introduces a new PMD. This PMD is implemented as a thin wrapper
>> around librte_vhost, so librte_vhost is also needed to compile the PMD.
>> The vhost messages will be handled only when a port is started, so start
>> a port first, then invoke QEMU.
>>
>> The PMD has 2 parameters.
>>  - iface: The parameter is used to specify a path to connect to a
>>           virtio-net device.
>>  - queues: The parameter is used to specify the number of queues the
>>            virtio-net device has. (Default: 1)
>>
>> Here is an example.
>> $ ./testpmd -c f -n 4 --vdev 'eth_vhost0,iface=/tmp/sock0,queues=3' -- -i
>>
>> To connect to the above testpmd, here is a QEMU command-line example.
>>
>> $ qemu-system-x86_64 \
>> <snip>
>>     -chardev socket,id=chr0,path=/tmp/sock0 \
>>     -netdev vhost-user,id=net0,chardev=chr0,vhostforce \
>>     -device virtio-net-pci,netdev=net0
>>
>> Signed-off-by: Tetsuya Mukawa <mukawa at igel.co.jp>
> Hi Tetsuya,
>
> a few comments inline below.
>
> /Bruce
>
>> ---
>>  config/common_linuxapp | 6 +
> <snip>
>> index 0000000..66bfc2b
>> --- /dev/null
>> +++ b/drivers/net/vhost/rte_eth_vhost.c
>> @@ -0,0 +1,735 @@
>> +/*-
>> + * BSD LICENSE
>> + *
>> + * Copyright (c) 2010-2015 Intel Corporation.
> This is probably not the copyright line you want on your new files.
Hi Bruce,

I appreciate your comments.
Yes, I will change the above.

>> +
>> +static uint16_t
>> +eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>> +{
>> +	struct vhost_queue *r = q;
>> +	uint16_t nb_rx = 0;
>> +
>> +	if (unlikely(r->internal == NULL))
>> +		return 0;
>> +
>> +	if (unlikely(rte_atomic32_read(&r->allow_queuing) == 0))
>> +		return 0;
>> +
>> +	rte_atomic32_set(&r->while_queuing, 1);
>> +
>> +	if (unlikely(rte_atomic32_read(&r->allow_queuing) == 0))
>> +		goto out;
>> +
>> +	/* Dequeue packets from guest TX queue */
>> +	nb_rx = (uint16_t)rte_vhost_dequeue_burst(r->device,
>> +			VIRTIO_TXQ, r->mb_pool, bufs, nb_bufs);
>> +
>> +	rte_atomic64_add(&(r->rx_pkts), nb_rx);
> Do we really need to use atomics here? It will slow things down a lot. For
> other PMDs the assumption is always that only a single thread can access each
> queue at a time - it's up to the app to use locks to enforce that restriction
> if necessary.

I agree we don't need to use atomics here.
I will change it in the next patches.

>> +static int
>> +new_device(struct virtio_net *dev)
>> +{
>> +	struct rte_eth_dev *eth_dev;
>> +	struct pmd_internal *internal;
>> +	struct vhost_queue *vq;
>> +	uint16_t queues;
>> +	unsigned i;
>> +
>> +	if (dev == NULL) {
>> +		RTE_LOG(INFO, PMD, "invalid argument\n");
>> +		return -1;
>> +	}
>> +
>> +	internal = find_internal_resource(dev->ifname);
>> +	if (internal == NULL) {
>> +		RTE_LOG(INFO, PMD, "invalid device name\n");
>> +		return -1;
>> +	}
>> +
>> +	/*
>> +	 * Todo: To support multi queue, get the number of queues here.
>> +	 * So far, vhost provides only one queue.
>> +	 */
>> +	queues = 1;
>> +
>> +	if ((queues < internal->nb_rx_queues) ||
>> +	    (queues < internal->nb_tx_queues)) {
>> +		RTE_LOG(INFO, PMD, "Not enough queues\n");
>> +		return -1;
>> +	}
>> +
>> +	eth_dev = rte_eth_dev_allocated(internal->dev_name);
>> +	if (eth_dev == NULL) {
>> +		RTE_LOG(INFO, PMD, "failuer to find ethdev\n");
> typo "failure".
> Probably should also be written just as "Failed to find
> ethdev".

Thanks, I will fix it.

>> +static struct eth_driver rte_vhost_pmd = {
>> +	.pci_drv = {
>> +		.name = "rte_vhost_pmd",
>> +		.drv_flags = RTE_PCI_DRV_DETACHABLE,
>> +	},
>> +};
> If you base this patchset on top of Bernard's patchset to remove the PCI
> devices then you shouldn't need these pci_dev and id_table structures.

Sure, I will check his latest patches and rebase on them.

Regards,
Tetsuya