[dpdk-dev] [dpdk-pdump] Problem with dpdk-pdump

2017-01-30 Thread Clarylin L
Hi all, I just started to use dpdk-pdump and ran into a problem with it; I'd appreciate your help! Following the document, I got testpmd running with the pdump framework initialized and tried to start dpdk-pdump as a secondary process. However, dpdk-pdump failed to start. My testpmd was running like: …

[dpdk-dev] is ixgbe supporting multi-segment mbuf?

2016-03-28 Thread Clarylin L
ixgbe_recv_scattered_pkts was set as the rx function. Receiving packets smaller than the mbuf size works perfectly. However, if an incoming packet is greater than the maximum acceptable length of one mbuf's data size, receiving does not work. In this case, isn't it supposed to use mbuf chaining to receive…

[dpdk-dev] Unable to get multi-segment mbuf working for ixgbe

2016-03-28 Thread Clarylin L
Any pointers to what the issue could be? Thanks. On Fri, Mar 25, 2016 at 4:13 PM, Clarylin L wrote: > Hello, > > I am trying to use multi-segment mbuf to receive large packets. I enabled > jumbo_frame and enable_scatter for the port and was expecting mbuf chaining > would be used…

[dpdk-dev] Unable to get multi-segment mbuf working for ixgbe

2016-03-25 Thread Clarylin L
Hello, I am trying to use multi-segment mbuf to receive large packets. I enabled jumbo_frame and enable_scatter for the port and was expecting mbuf chaining to be used to receive packets larger than the mbuf size (which was set to 2048). When sending a 3000-byte packet (without fragmentation) from…
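In the DPDK 2.x API, the two settings mentioned here are bit-fields in struct rte_eth_rxmode, and scattered rx also needs max_rx_pkt_len raised above one mbuf's data room. A minimal configuration sketch, assuming a hypothetical 9000-byte limit and one rx/tx queue (both placeholders, not from the thread):

```c
#include <rte_ethdev.h>

/* Sketch of a DPDK 2.x port configuration for scattered rx.
 * The 9000-byte limit and the queue counts are placeholders. */
static int configure_scattered_rx(uint8_t port_id)
{
    struct rte_eth_conf conf = {
        .rxmode = {
            .jumbo_frame    = 1,    /* accept frames larger than 1518 bytes */
            .enable_scatter = 1,    /* chain mbufs for oversized frames     */
            .max_rx_pkt_len = 9000, /* must exceed one mbuf's data room     */
        },
    };
    /* 1 rx queue, 1 tx queue */
    return rte_eth_dev_configure(port_id, 1, 1, &conf);
}
```

Note that the PMD must also support scattered rx for this to take effect; as the thread title asks, that support varies by driver.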

[dpdk-dev] multi-segment mbuf

2016-03-22 Thread Clarylin L
…chained by next pointer. It is a bug in the creator of the mbuf if the number of segments and the next chain don't match. There is a rte_pktmbuf_dump() you can use to look at how your mbuf is formatted. On Tue, Mar 22, 2016 at 1:36 PM, Clarylin L wrote: …

[dpdk-dev] multi-segment mbuf

2016-03-22 Thread Clarylin L
…It is a bug in the creator of the mbuf if the number of segments and the next chain don't match. There is a rte_pktmbuf_dump() you can use to look at how your mbuf is formatted. On Tue, Mar 22, 2016 at 1:36 PM, Clarylin L wrote: Sorry, my bad. The mbuf size had been accidentally…

[dpdk-dev] multi-segment mbuf

2016-03-22 Thread Clarylin L
Is enic supporting multi-segment mbuf? The dpdk version is 2.0.0. I have enabled jumbo-frame and enable_scatter for the port. On Tue, Mar 22, 2016 at 3:27 AM, Bruce Richardson <bruce.richardson at intel.com> wrote: > On Mon, Mar 21, 2016 at 04:34:50PM -0700, Clarylin L wrote: > > I am…

[dpdk-dev] multi-segment mbuf

2016-03-21 Thread Clarylin L
I am trying multi-segment mbuf, but it does not seem to be working. On my target host, the mbuf size is set to 2048 and I am trying to send a large packet to it (say 2500 bytes, without fragmentation) from another host. I enabled both jumbo_frame and enable_scatter for the port, but I saw on the target only one…

[dpdk-dev] [dpdk-virtio] DPDK stopped working on virtio

2016-02-25 Thread Clarylin L
Played with it a little more: it can receive packets only if promiscuous mode is enabled in DPDK. On Mon, Feb 22, 2016 at 11:00 PM, Clarylin L wrote: > I am working with DPDK 2.0. > > I guess it's not a DPDK code issue, but more likely an environment issue (as > the same code has…
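The workaround the poster found corresponds to a single call in the DPDK 2.x API (testpmd exposes the same switch as "set promisc all on" in its CLI). A sketch, with the port id as a placeholder:

```c
#include <rte_ethdev.h>

/* Without promiscuous mode the device only delivers frames addressed
 * to its own MAC (plus broadcast); enabling it accepts everything.
 * The port id passed in is a placeholder. */
static void enable_promisc(uint8_t port_id)
{
    rte_eth_promiscuous_enable(port_id);
}
```

That this is needed at all hints the frames arriving at the virtio device carry a destination MAC other than the one the guest port reports.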

[dpdk-dev] [dpdk-virtio] DPDK stopped working on virtio

2016-02-22 Thread Clarylin L
…Liu wrote: > On Mon, Feb 22, 2016 at 11:15:57AM -0800, Clarylin L wrote: > > I am running a DPDK application (testpmd) within a VM based on virtio. With > > the same hypervisor and same DPDK code, it used to work well. But it > > stopped working since last week. The VM's port could…

[dpdk-dev] [dpdk-virtio] DPDK stopped working on virtio

2016-02-22 Thread Clarylin L
I am running a DPDK application (testpmd) within a VM based on virtio. With the same hypervisor and the same DPDK code, it used to work well, but it stopped working last week: the VM's port could not receive anything. I ran tcpdump on the host's physical port, the bridge interface, as well as the vnet interface…

[dpdk-dev] L3 Forwarding performance of DPDK on virtio

2016-01-20 Thread Clarylin L
…Jan 20, 2016 at 5:25 PM, Tan, Jianfeng wrote: > Hello! > On 1/21/2016 7:51 AM, Clarylin L wrote: >> I am running dpdk within a virtual guest as an L3 forwarder. >> The VM has two ports connecting to two linux bridges (in turn connecting…

[dpdk-dev] L3 Forwarding performance of DPDK on virtio

2016-01-20 Thread Clarylin L
I am running dpdk within a virtual guest as an L3 forwarder. The VM has two ports connecting to two linux bridges (which in turn connect two physical ports). DPDK is used to forward between these two ports (one connected to a traffic generator and the other connected to a sink). I used iperf to test…

[dpdk-dev] [igb_uio] ethtool: Operation not supported

2015-08-13 Thread Clarylin L
I am running dpdk in a KVM-based virtual machine. Two ports were bound to the igb_uio driver, and KNI generated two ports, vEth0 and vEth1. I was trying to use ethtool to get information about these two ports, but failed: it reported "Operation not supported". How can I address this issue? Thanks.

[dpdk-dev] [dpdk-virtio] Performance tuning for dpdk with virtio?

2015-07-17 Thread Clarylin L
…Stephen Hemminger <stephen at networkplumber.org> wrote: > On Fri, 17 Jul 2015 11:03:15 -0700, > Clarylin L wrote: > > I am running dpdk with a virtual guest as an L2 forwarder. > > If the virtual guest is on passthrough, dpdk can achieve around 10G > > throughput. However…

[dpdk-dev] [dpdk-virtio] Performance tuning for dpdk with virtio?

2015-07-17 Thread Clarylin L
I am running dpdk with a virtual guest as an L2 forwarder. If the virtual guest is on passthrough, dpdk can achieve around 10G throughput. However, if the virtual guest is on virtio, dpdk achieves just 150M throughput, which is a huge degradation. Any idea what could be the cause of such poor performance?

[dpdk-dev] [dpdk-virtio]: cannot start testpmd after binding virtio devices to gib_uio

2015-07-16 Thread Clarylin L
> I am running a virtual guest on Ubuntu and trying to use dpdk testpmd as a
> packet forwarder.
>
> After starting the virtual guest, I do
>   insmod igb_uio.ko
>   insmod rte_kni.ko
>   echo ":00:06.0" > /sys/bus/pci/drivers/virtio-pci/unbind
>   echo ":00:07.0" > /sys/bus/pci/drivers/virtio-pci/…