[dpdk-dev] 82599 SR-IOV with passthrough

2013-10-17 Thread Prashant Upadhyaya
Hi Qinglai, I would say that SRIOV is 'useless' if the VF gets only one queue. At the heart of performance is using one queue per core so that tx and rx remain lockless. Locks 'destroy' performance. So with one queue, if we want to remain lockless, that automatically means that the usec

[dpdk-dev] sending and receiving packets

2013-10-17 Thread Gal Sagie
Assigning an IP is a function of the network stack; DPDK and the NIC do not need to be aware of that. DPDK is just used to poll packets from the NIC and insert them into the network stack's entry point inside rump kernels. In the tcp_http_get example it is assumed you are connected to a DHCP server

[dpdk-dev] 82599 SR-IOV with passthrough

2013-10-17 Thread Prashant Upadhyaya
Hi Qinglai, Why are you using the kernel driver at all? Use the DPDK driver to control the PF on the host. The guest would communicate with the PF on the host using the mailbox as usual. Then the changes would be limited to DPDK, wouldn't they? Regards -Prashant -Original Message- From: dev [mailto:

[dpdk-dev] 82599 SR-IOV with passthrough

2013-10-17 Thread Prashant Upadhyaya
Hi Qinglai, Even with 1 queue, were you able to run the DPDK app in the guest OS? If you were, please let me know which version of DPDK you used. I am trying to run the DPDK app in a guest OS using QEMU/KVM with an SR-IOV virtual function of an 82599 NIC. I can see the vf pci address i

[dpdk-dev] sending and receiving packets

2013-10-17 Thread Prashant Upadhyaya
Hi Gopi, I have not worked with the rump kernel TCP/IP stack. Does it run 'with' DPDK in userspace, and is your TCP client application interacting over sockets with that TCP/IP stack in user space? If your stack is running in the kernel, then of course you have to use a tap interface to interface

[dpdk-dev] 82599 SR-IOV with passthrough

2013-10-17 Thread jigsaw
>>Is it possible to get the number of configured queues via mailbox ? Yes, this is exactly why I need to patch the ixgbe PF. Otherwise it always returns 1, even if the number is configured as 4. I will make some more experiments before sending the patch for review in one or two days. On Thu, Oct 17, 201

[dpdk-dev] sending and receiving packets

2013-10-17 Thread Gopi Krishna B
On Thu, Oct 17, 2013 at 3:01 PM, Gal Sagie wrote: > Rump kernels is a flexible kernel architecture which runs in user space > and is a very interesting project, you can read more about it > here => http://www.netbsd.org/docs/rump/#rump-about > It is currently part of the NetBSD source tree. > > A

[dpdk-dev] 82599 SR-IOV with passthrough

2013-10-17 Thread jigsaw
Hi Prashant, The problem is that my patch has to be applied to the ixgbe PF driver as well. I have no idea how to make that happen. So even if DPDK accepts my patch, users won't benefit from it unless they patch the ixgbe PF themselves. I also hate the fact that SR-IOV cannot get more queues to the VF. But there's

[dpdk-dev] 82599 SR-IOV with passthrough

2013-10-17 Thread jigsaw
Hi Prashant, I'm using CentOS 6 as both host and guest, managed by virsh: a quite old kernel with the latest ixgbe PF driver, plus the DPDK trunk version running in the guest. BTW, DPDK cannot take over the ixgbe PF, at least not for now. For instance, the PF mailbox is not implemented in DPDK. thx & rg

[dpdk-dev] 82599 SR-IOV with passthrough

2013-10-17 Thread Thomas Monjalon
17/10/2013 14:43, jigsaw : > I patched both Intel ixgbe PF driver and DPDK 1.5 VF driver, so that > DPDK gets 4 queues in one VF. It works fine with all 4 Tx queues. The > only trick is to set proper mac address for all outgoing packets, > which must be the same mac as you set to the VF. This trick

[dpdk-dev] 82599 SR-IOV with passthrough

2013-10-17 Thread jigsaw
Hi Prashant, I patched both the Intel ixgbe PF driver and the DPDK 1.5 VF driver, so that DPDK gets 4 queues in one VF. It works fine with all 4 Tx queues. The only trick is to set the proper MAC address on all outgoing packets, which must be the same MAC as assigned to the VF. This trick is described in the

[dpdk-dev] sending and receiving packets

2013-10-17 Thread Gal Sagie
The rump kernel is a flexible kernel architecture which runs in user space and is a very interesting project; you can read more about it here => http://www.netbsd.org/docs/rump/#rump-about It is currently part of the NetBSD source tree. A project was made to integrate Intel DPDK inside the rump kernel ne

[dpdk-dev] Multiple LCore receiving from same port/queue

2013-10-17 Thread Vladimir Medvedkin
Hi, By default, you can't poll the same queue on the same port from different lcores. If you need to poll the same queue from several lcores, use locks to avoid race conditions. 2013/10/17 Sambath Kumar Balasubramanian > Hi, > > I have a test dpdk application with 2 lcores receiving packets > using rte_eth_rx