[dpdk-dev] [dpdk-pdump] Problem with dpdk-pdump

2017-01-30 Thread Clarylin L
Hi all,

I just started using dpdk-pdump and ran into a problem with it. I'd
appreciate your help!

Following the documentation, I got testpmd running with the pdump framework
initialized and then tried to start dpdk-pdump as a secondary process.
However, dpdk-pdump failed to start.

testpmd was started like this:

testpmd -w :00:05.0 -w :00:06.0 -n 2 -c 0xf --socket-mem 1024 0
--proc-type primary --syslog local4


Then I started dpdk-pdump like this:


dpdk-pdump -w :00:05.0 -w :00:06.0 -- --pdump
'port=0,queue=*,rx-dev=/tmp/rx.pcap'


but got the error messages shown below. I double-checked the PCI device
:00:05.0 and it was bound to the DPDK driver (I specified a whitelist
because I have more than two ports in my system and only two of them are
given to DPDK).


EAL: Detected 8 lcore(s)

Mon Jan 30 17:23:35 2017EAL: WARNING: cpu flags constant_tsc=yes
nonstop_tsc=no -> using unreliable clock cycles !

Mon Jan 30 17:23:35 2017PMD: bnxt_rte_pmd_init() called for (null)

Mon Jan 30 17:23:35 2017EAL: PCI device :00:05.0 on NUMA socket -1

Mon Jan 30 17:23:35 2017EAL:   probe driver: 1137:43 rte_enic_pmd

Mon Jan 30 17:23:35 2017PMD: rte_enic_pmd: vNIC BAR0 res hdr not mem-mapped

Mon Jan 30 17:23:35 2017PMD: rte_enic_pmd: vNIC registration failed,
aborting

Mon Jan 30 17:23:35 2017EAL: Error - exiting with code: 1

  Cause: Mon Jan 30 17:23:35 2017Requested device :00:05.0 cannot be
used
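
For reference, dpdk-pdump can only attach as a secondary process if the
primary has initialized the pdump framework. A minimal sketch of that
initialization is below; it assumes the rte_pdump_init(const char *path)
signature used by DPDK releases of that era (newer releases take no
argument), and testpmd does the equivalent internally when built with
CONFIG_RTE_LIBRTE_PDUMP:

#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_pdump.h>

int
main(int argc, char **argv)
{
    /* Start the EAL as the primary process. */
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    /* Register the pdump server; NULL selects the default socket directory.
     * Without this, a secondary dpdk-pdump has nothing to connect to. */
    if (rte_pdump_init(NULL) < 0)
        rte_exit(EXIT_FAILURE, "pdump init failed\n");

    /* ... configure and start ports, run the datapath ... */

    rte_pdump_uninit();
    return 0;
}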


[dpdk-dev] [igb_uio] ethtool: Operation not supported

2015-08-13 Thread Clarylin L
I am running DPDK in a KVM-based virtual machine. Two ports were bound to the
igb_uio driver, and KNI created two interfaces, vEth0 and vEth1. I tried to
use ethtool to get information about these two interfaces, but it failed with
"Operation not supported".

How can I address this issue?

Thanks


[dpdk-dev] [dpdk-virtio] DPDK stopped working on virtio

2016-02-22 Thread Clarylin L
I am running a DPDK application (testpmd) inside a VM using virtio. With the
same hypervisor and the same DPDK code it used to work well, but it stopped
working last week: the VM's port cannot receive anything. I ran tcpdump on
the host's physical port, the bridge interface and the vnet interface, and
did see packets coming in; however, nothing arrived on the VM's port.

I ran gdb to debug; it hit virtio_recv_mergeable_pkts(), but
nb_used = VIRTQUEUE_NUSED(rxvq) always returned 0. I guess that is because
the queue was empty.
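
For reference, in the virtio PMD of that era VIRTQUEUE_NUSED is roughly the
following (paraphrased, not a verbatim copy of virtqueue.h):

#define VIRTQUEUE_NUSED(vq) \
    ((uint16_t)((vq)->vq_ring.used->idx - (vq)->vq_used_cons_idx))

i.e. the difference between the used-ring index written by the host and the
index the guest driver has already consumed, so a constant 0 means the
host/vhost side never completed any buffer for this queue.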

I enabled PMD debug logging, and the only thing that might be an issue was
the following output. Other than this, I could not see anything that would
indicate a problem.

Thu Feb 18 19:20:10 2016^@PMD: get_uio_dev(): Could not find uio resource
Thu Feb 18 19:20:10 2016^@PMD: virtio_resource_init_by_ioports(): PCI Port
IO found start=0xc040 with size=0x40

If someone can give me any pointers on what to look into further, that would
be very helpful. I appreciate your help!


[dpdk-dev] [dpdk-virtio] DPDK stopped working on virtio

2016-02-22 Thread Clarylin L
I am working with DPDK 2.0.

I guess it's not a DPDK code issue, but rather an environment issue (the same
code has worked fine before and is even working on another setup now).
Someone might have accidentally changed my setup, and I want to find out what
made dpdk-virtio stop working.

Does DPDK virtio depend on any specific modules on the hypervisor? Are there
any hypervisor configurations that would impact virtio functionality, or
anything else I should look into and check? My VM is running on Ubuntu KVM.

On Mon, Feb 22, 2016 at 7:50 PM, Yuanhan Liu 
wrote:

> On Mon, Feb 22, 2016 at 11:15:57AM -0800, Clarylin L wrote:
> > I am running DPDK application (testpmd) within a VM based on virtio. With
> > the same hypervisor and same DPDK code, it used to work well. But it
> > stopped working since last week. The VM's port could not receive
> anything.
> > I ran tcpdump on host's physical port, bridge interface as well as vnet
> > interface, and did see packets coming in. However on VM's port there was
> > nothing.
> >
> > I ran gdb trying to debug, it hit function virtio_recv_mergeable_pkts(),
> > but nb_used=VIRTQUEUE_NUSED(rxvq) always gave 0. I guess it's because the
> > queue was empty.
> >
> > I enabled the PMD debug logging and the only thing that might be an issue
> > was the following part. Other than this I could not see any thing that
> > could indicate potential issues.
> >
> > Thu Feb 18 19:20:10 2016^@PMD: get_uio_dev(): Could not find uio resource
> > Thu Feb 18 19:20:10 2016^@PMD: virtio_resource_init_by_ioports(): PCI
> Port
> > IO found start=0xc040 with size=0x40
>
> That could be normal, when you don't bind the driver to igb_uio.
>
> > If someone can give any pointers that I should further look into, that'd
> be
> > very helpful. Appreciate your help!
>
> What's the last commit you are testing? And what are the steps
> to reproduce it? I have a quick try with vhost-switch example,
> with pkts injected by IXIA; it works fine here.
>
> Or better, mind do a git bisect? There aren't too many commits
> there. It should be a pretty fast bisect.
>
> --yliu
>


[dpdk-dev] [dpdk-virtio] DPDK stopped working on virtio

2016-02-25 Thread Clarylin L
I played with it a little more. It can receive packets only if promiscuous
mode is enabled in DPDK.
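
For reference, a minimal sketch of turning promiscuous mode on from
application code (in the testpmd CLI the equivalent is "set promisc all on");
port_id is just a placeholder:

#include <stdio.h>
#include <rte_ethdev.h>

/* Enable promiscuous RX on one port; call after the port is configured
 * and started. */
static void
enable_promisc(uint8_t port_id)
{
    rte_eth_promiscuous_enable(port_id);
    /* rte_eth_promiscuous_get() returns 1 once the mode is active. */
    if (rte_eth_promiscuous_get(port_id) != 1)
        printf("port %u: promiscuous mode not enabled\n", port_id);
}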

On Mon, Feb 22, 2016 at 11:00 PM, Clarylin L  wrote:

> I am working with DPDK 2.0.
>
> I guess it's not DPDK code issue , but more like an environment issue (as
> same code has been working fine before. It's even working on another setup
> now). Someone might have accidentally changed my setup, and I want to find
> out what made dpdk-virtio stop working.
>
> Is DPDK-virtio dependent on any specific modules on the hypervisor? or are
> there any configurations on the hypervisor that would impact the
> functionality of  virtio? or anything else I need to look into and check?
> My VM is running on Ubuntu KVM.
>
> On Mon, Feb 22, 2016 at 7:50 PM, Yuanhan Liu 
> wrote:
>
>> On Mon, Feb 22, 2016 at 11:15:57AM -0800, Clarylin L wrote:
>> > I am running DPDK application (testpmd) within a VM based on virtio.
>> With
>> > the same hypervisor and same DPDK code, it used to work well. But it
>> > stopped working since last week. The VM's port could not receive
>> anything.
>> > I ran tcpdump on host's physical port, bridge interface as well as vnet
>> > interface, and did see packets coming in. However on VM's port there was
>> > nothing.
>> >
>> > I ran gdb trying to debug, it hit function virtio_recv_mergeable_pkts(),
>> > but nb_used=VIRTQUEUE_NUSED(rxvq) always gave 0. I guess it's because
>> the
>> > queue was empty.
>> >
>> > I enabled the PMD debug logging and the only thing that might be an
>> issue
>> > was the following part. Other than this I could not see any thing that
>> > could indicate potential issues.
>> >
>> > Thu Feb 18 19:20:10 2016^@PMD: get_uio_dev(): Could not find uio
>> resource
>> > Thu Feb 18 19:20:10 2016^@PMD: virtio_resource_init_by_ioports(): PCI
>> Port
>> > IO found start=0xc040 with size=0x40
>>
>> That could be normal, when you don't bind the driver to igb_uio.
>>
>> > If someone can give any pointers that I should further look into,
>> that'd be
>> > very helpful. Appreciate your help!
>>
>> What's the last commit you are testing? And what are the steps
>> to reproduce it? I have a quick try with vhost-switch example,
>> with pkts injected by IXIA; it works fine here.
>>
>> Or better, mind do a git bisect? There aren't too many commits
>> there. It should be a pretty fast bisect.
>>
>> --yliu
>>
>
>


[dpdk-dev] L3 Forwarding performance of DPDK on virtio

2016-01-20 Thread Clarylin L
I am running DPDK inside a virtual guest as an L3 forwarder.


The VM has two ports connected to two Linux bridges (which in turn connect to
two physical ports). DPDK forwards between these two ports (one connected to
a traffic generator and the other to a sink). I used iperf to test the
throughput.


With passthrough, the VM/DPDK achieves around 10G end-to-end throughput (from
traffic generator to sink). With virtio (virtio-net-pmd), however, it
achieves just 150M, which is a huge degradation.


On the virtio setup I also measured the throughput between the traffic
generator and its VM port, as well as between the sink and its VM port. Both
legs show around 7.5G. So I suspect forwarding inside the VM (from one port
to the other) is the main performance killer.


Any suggestions on how to root-cause the poor performance, or any
performance-tuning techniques for virtio? Thanks a lot!


[dpdk-dev] L3 Forwarding performance of DPDK on virtio

2016-01-20 Thread Clarylin L
Sorry. It's L2 forwarding.

I used testpmd in MAC forwarding mode, like this:
testpmd --pci-blacklist :00:05.0 -c f -n 4  -- --portmask 3 -i
--total-num-mbufs=2 --nb-cores=3 --mbcache=512 --burst=512
--forward-mode=mac --eth-peer=0,90:e2:ba:9f:95:94
--eth-peer=1,90:e2:ba:9f:95:95
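
For reference, a simplified sketch of what the mac forward mode conceptually
does per burst (the real implementation is testpmd's macfwd.c; the port
numbers and the 32-packet burst size here are just placeholders):

#include <rte_ethdev.h>
#include <rte_ether.h>
#include <rte_mbuf.h>

#define BURST 32

/* One RX->rewrite->TX step between two ports; peer_mac corresponds to the
 * --eth-peer address configured for the TX port. */
static void
mac_fwd_once(uint8_t rx_port, uint8_t tx_port,
             const struct ether_addr *self_mac,
             const struct ether_addr *peer_mac)
{
    struct rte_mbuf *bufs[BURST];
    uint16_t nb_rx = rte_eth_rx_burst(rx_port, 0, bufs, BURST);
    uint16_t i;

    for (i = 0; i < nb_rx; i++) {
        struct ether_hdr *eth =
            rte_pktmbuf_mtod(bufs[i], struct ether_hdr *);
        ether_addr_copy(peer_mac, &eth->d_addr);   /* new destination MAC */
        ether_addr_copy(self_mac, &eth->s_addr);   /* our MAC as source */
    }

    uint16_t nb_tx = rte_eth_tx_burst(tx_port, 0, bufs, nb_rx);
    for (i = nb_tx; i < nb_rx; i++)
        rte_pktmbuf_free(bufs[i]);                 /* drop what didn't fit */
}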

On Wed, Jan 20, 2016 at 5:25 PM, Tan, Jianfeng 
wrote:

>
> Hello!
>
>
> On 1/21/2016 7:51 AM, Clarylin L wrote:
>
>> I am running dpdk within a virtual guest as a L3 forwarder.
>>
>>
>> The VM has two ports connecting to two linux bridges (in turn connecting
>> two physical ports). DPDK is used to forward between these two ports (one
>> port connected to traffic generator and the other connected to sink). I
>> used iperf to test the throughput.
>>
>>
>> If the VM/DPDK is running on passthrough, it can achieve around 10G
>> end-to-end (from traffic generator to sink) throughput. However if the
>> VM/DPDK is running on virtio (virtio-net-pmd), it achieves just 150M
>> throughput, which is a huge degrade.
>>
>>
>> On the virtio, I also measured the throughput between the traffic
>> generator
>> and its connected port on VM, as well as throughput between the sink and
>> it's VM port. Both legs show around 7.5G throughput. So I guess forwarding
>> within the VM (from one port to the other) would be a big killer of the
>> performance.
>>
>>
>> Any suggestion on how I can root cause the poor performance issue, or any
>> idea on performance tuning techniques for virtio? thanks a lot!
>>
>
> The L3 forwarder, you mentioned, is the l3fwd example in DPDK? If so, I
> doubt it can work well with virtio, see another thread "Add API to get
> packet type info".
>
> Thanks,
> Jianfeng
>


[dpdk-dev] [dpdk-virtio]: cannot start testpmd after binding virtio devices to igb_uio

2015-07-16 Thread Clarylin L
> I am running a virtual guest on Ubuntu and trying to use dpdk testpmd as a
> packet forwarder.
>
> After starting the virtual guest, I do
> insmod igb_uio.ko
> insmod rte_kni.ko
> echo ":00:06.0" > /sys/bus/pci/drivers/virtio-pci/unbind
> echo ":00:07.0" > /sys/bus/pci/drivers/virtio-pci/unbind
> echo "1af4 1000" > /sys/bus/pci/drivers/igb_uio/new_id
> mkdir -p /tmp/huge
> mount -t hugetlbfs nodev /tmp/huge
> echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
>
> Where :00:06.0 and :00:07.0 are the two virtio devices I am going to
> use, and 1af4 1000 is the corresponding vendor and device ID.
>
> After the above steps, I verified that the virtio devices are actually
> bound to igb_uio:
>
> lspci -s 00:06.0 -vvv | grep driver
>
> Kernel driver in use: igb_uio
>
>
> However, I could not start testpmd; it hung at the last line below
> ("PMD: rte_eth_dev_config_restore ...").
>
> ...
>
> EAL: PCI device :00:05.0 on NUMA socket -1
>
> EAL:   probe driver: 1af4:1000 rte_virtio_pmd
>
> EAL:   Device is blacklisted, not initializing
>
> EAL: PCI device :00:06.0 on NUMA socket -1
>
> EAL:   probe driver: 1af4:1000 rte_virtio_pmd
>
> EAL: PCI device :00:07.0 on NUMA socket -1
>
> EAL:   probe driver: 1af4:1000 rte_virtio_pmd
>
> Interactive-mode selected
>
> Set mac packet forwarding mode
>
> Configuring Port 0 (socket 0)
>
> PMD: rte_eth_dev_config_restore: port 0: MAC address array not supported
>
>
> If I do not bind the interfaces to igb_uio, testpmd starts successfully and
> also shows "probe driver: 1af4:1000 rte_virtio_pmd" during startup. However,
> even after testpmd has started, the virtio devices are bound to nothing
> ("lspci -s 00:06.0 -vvv | grep driver" shows nothing).
>
>
> I am also attaching my virtual guest configuration below. Thanks for your
> help; it is highly appreciated!
>
>
>
> lab at vpc-2:~$ ps aux | grep qemu
>
> libvirt+ 12020  228  0.0 102832508 52860 ? Sl   14:54  61:06 
> qemu-system-x86_64
> -enable-kvm -name dpdk-perftest -S -machine
> pc-i440fx-trusty,accel=kvm,usb=off,mem-merge=off -cpu host -m 98304
> -mem-prealloc -mem-path /dev/hugepages/libvirt/qemu -realtime mlock=off
> -smp 24,sockets=2,cores=12,threads=1 -numa
> node,nodeid=0,cpus=0-11,mem=49152 -numa node,nodeid=1,cpus=12-23,mem=49152
> -uuid eb5f8848-9983-4f13-983c-e3bd4c59387d -no-user-config -nodefaults
> -chardev 
> socket,id=charmonitor,path=/var/lib/libvirt/*qemu*/dpdk-perftest.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
> -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
> file=/var/lib/libvirt/images/dpdk-perftest-hda.img,if=none,id=drive-ide0-0-0,format=qcow2
> -device
> ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -drive
> file=/var/lib/libvirt/images/dpdk-perftest-hdb.img,if=none,id=drive-ide0-0-1,format=qcow2
> -device ide-hd,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1 -drive
> if=none,id=drive-ide0-1-0,readonly=on,format=raw -device
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=2
> -netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=25 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:45:ff:5e,bus=pci.0,addr=0x5
> -netdev
> tap,fds=26:27:28:29:30:31:32:33,id=hostnet1,vhost=on,vhostfds=34:35:36:37:38:39:40:41
> -device
> virtio-net-pci,mq=on,vectors=17,netdev=hostnet1,id=net1,mac=52:54:00:7e:b5:6b,bus=pci.0,addr=0x6
> -netdev
> tap,fds=42:43:44:45:46:47:48:49,id=hostnet2,vhost=on,vhostfds=50:51:52:53:54:55:56:57
> -device
> virtio-net-pci,mq=on,vectors=17,netdev=hostnet2,id=net2,mac=52:54:00:f1:a5:20,bus=pci.0,addr=0x7
> -chardev pty,id=charserial0 -device
> isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1
> -device isa-serial,chardev=charserial1,id=serial1 -vnc 127.0.0.1:0
> -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device
> i6300esb,id=watchdog0,bus=pci.0,addr=0x3 -watchdog-action reset -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
>


[dpdk-dev] [dpdk-virtio] Performance tuning for dpdk with virtio?

2015-07-17 Thread Clarylin L
I am running DPDK in a virtual guest as an L2 forwarder.

With passthrough, DPDK can achieve around 10G throughput. With virtio,
however, it achieves just 150M, which is a huge degradation. Any idea what
could cause such poor performance on virtio, and any performance-tuning
techniques I could try? Thanks a lot!

lab at vpc-2:~$ ps aux | grep qemu

libvirt+ 12020  228  0.0 102832508 52860 ? Sl   14:54  61:06
qemu-system-x86_64
-enable-kvm -name dpdk-perftest -S -machine
pc-i440fx-trusty,accel=kvm,usb=off,mem-merge=off -cpu host -m 98304
-mem-prealloc -mem-path /dev/hugepages/libvirt/qemu -realtime mlock=off
-smp 24,sockets=2,cores=12,threads=1 -numa
node,nodeid=0,cpus=0-11,mem=49152 -numa node,nodeid=1,cpus=12-23,mem=49152
-uuid eb5f8848-9983-4f13-983c-e3bd4c59387d -no-user-config -nodefaults
-chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/dpdk-perftest.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
-boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=/var/lib/libvirt/images/dpdk-perftest-hda.img,if=none,id=drive-ide0-0-0,format=qcow2
-device
ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -drive
file=/var/lib/libvirt/images/dpdk-perftest-hdb.img,if=none,id=drive-ide0-0-1,format=qcow2
-device ide-hd,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1 -drive
if=none,id=drive-ide0-1-0,readonly=on,format=raw -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=2
-netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=25 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:45:ff:5e,bus=pci.0,addr=0x5
-netdev
tap,fds=26:27:28:29:30:31:32:33,id=hostnet1,vhost=on,vhostfds=34:35:36:37:38:39:40:41
-device
virtio-net-pci,mq=on,vectors=17,netdev=hostnet1,id=net1,mac=52:54:00:7e:b5:6b,bus=pci.0,addr=0x6
-netdev
tap,fds=42:43:44:45:46:47:48:49,id=hostnet2,vhost=on,vhostfds=50:51:52:53:54:55:56:57
-device
virtio-net-pci,mq=on,vectors=17,netdev=hostnet2,id=net2,mac=52:54:00:f1:a5:20,bus=pci.0,addr=0x7
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1
-device isa-serial,chardev=charserial1,id=serial1 -vnc 127.0.0.1:0 -device
cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device
i6300esb,id=watchdog0,bus=pci.0,addr=0x3 -watchdog-action reset -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4


[dpdk-dev] [dpdk-virtio] Performance tuning for dpdk with virtio?

2015-07-17 Thread Clarylin L
My VM has two ports connected to two Linux bridges (which in turn connect to
two physical ports). DPDK forwards between these two ports (one connected to
a traffic generator and the other to a sink). I used iperf to test the
throughput between the traffic generator and one VM port, as well as between
the other VM port and the sink. Both legs show around 7.5G.

Traffic has to go through the bridge to reach the VM ports anyway, so the
Linux bridge does seem to support much higher throughput, doesn't it?

On Fri, Jul 17, 2015 at 2:20 PM, Stephen Hemminger <
stephen at networkplumber.org> wrote:

> On Fri, 17 Jul 2015 11:03:15 -0700
> Clarylin L  wrote:
>
> > I am running dpdk with a virtual guest as a L2 forwarder.
> >
> > If the virtual guest is on passthrough, dpdk can achieve around 10G
> > throughput. However if the virtual guest is on virtio, dpdk achieves just
> > 150M throughput, which is a huge degrade. Any idea what could be the
> cause
> > of such poor performance on virtio? and any performance tuning
> techniques I
> > could try? Thanks a lot!
>
> The default Linux bridge (and OVS) switch are your bottleneck.
> It is not DPDK virtio issue in general. There are some small performance
> gains still possible with virtio enhancements (like offloading).
>
> Did you try running OVS-DPDK on the host?
>


[dpdk-dev] multi-segment mbuf

2016-03-21 Thread Clarylin L
I am trying to use multi-segment mbufs, but it does not seem to work.

On my target host the mbuf size is set to 2048, and I am sending large
packets to it (say 2500 bytes, unfragmented) from another host. I enabled
both jumbo_frame and enable_scatter for the port, but on the target only one
mbuf is received, with data_len equal to 2500 (it should be a two-mbuf
chain). Although the mbuf is not what I expected, ping between the two hosts
with a large payload and no fragmentation succeeds.

1. My mbuf size is only 2048; how can a packet this large be received in a
single mbuf?

2. How do I make it work as expected (enable multi-segment mbufs and receive
via an mbuf chain when needed)? A sketch of the configuration I mean follows
below.
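
For reference, a minimal sketch of that configuration (field names as in the
DPDK 2.x rte_eth_conf; the 9000-byte max_rx_pkt_len, queue sizes and the rest
of the port setup are placeholders, and error handling is omitted):

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Enable scattered RX so frames larger than one mbuf data room (2048 here)
 * are delivered as an mbuf chain instead of a single oversized mbuf. */
static void
setup_scatter_port(uint8_t port_id, struct rte_mempool *mb_pool)
{
    struct rte_eth_conf port_conf = {
        .rxmode = {
            .max_rx_pkt_len = 9000,  /* accept jumbo frames */
            .jumbo_frame    = 1,
            .enable_scatter = 1,
        },
    };

    rte_eth_dev_configure(port_id, 1, 1, &port_conf);
    rte_eth_rx_queue_setup(port_id, 0, 512, rte_eth_dev_socket_id(port_id),
                           NULL, mb_pool);
    rte_eth_tx_queue_setup(port_id, 0, 512, rte_eth_dev_socket_id(port_id),
                           NULL);
    rte_eth_dev_start(port_id);
}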

Appreciate your help.


[dpdk-dev] multi-segment mbuf

2016-03-22 Thread Clarylin L
Sorry, my bad. The mbuf size had accidentally been changed to 3000.

After fixing this by setting the mbuf size to 2048, multi-segment mbufs still
don't work. I sent 2500-byte packets to the target system, expecting to see a
two-segment mbuf chain, but got the errors below.

Tue Mar 22 14:52:00 2016^@PMD: rte_enic_pmd: packet error


Tue Mar 22 14:52:01 2016^@PMD: rte_enic_pmd: packet error


Tue Mar 22 14:52:02 2016^@PMD: rte_enic_pmd: packet error


Tue Mar 22 14:52:03 2016^@PMD: rte_enic_pmd: packet error


Tue Mar 22 14:52:04 2016^@PMD: rte_enic_pmd: packet error


Does enic support multi-segment mbufs? The DPDK version is 2.0.0. I have
enabled jumbo_frame and enable_scatter for the port.
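
To see exactly what the PMD hands up (and to answer the buf_len question
below), this is the kind of check that can be added wherever
rte_eth_rx_burst() is called; inspect_rx_mbuf is just a hypothetical helper:

#include <stdio.h>
#include <rte_mbuf.h>

/* Print the layout of a received packet so buf_len, data_len, pkt_len and
 * nb_segs can be compared against expectations. */
static void
inspect_rx_mbuf(const struct rte_mbuf *m)
{
    printf("pkt_len=%u nb_segs=%u first seg: buf_len=%u data_len=%u\n",
           (unsigned)m->pkt_len, (unsigned)m->nb_segs,
           (unsigned)m->buf_len, (unsigned)m->data_len);
    /* rte_pktmbuf_dump() walks the whole chain and also prints the first
     * bytes of each segment. */
    rte_pktmbuf_dump(stdout, m, 64);
}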

On Tue, Mar 22, 2016 at 3:27 AM, Bruce Richardson <
bruce.richardson at intel.com> wrote:

> On Mon, Mar 21, 2016 at 04:34:50PM -0700, Clarylin L wrote:
> > I am trying multi-segment mbuf, but it seems not working.
> >
> > On my target host, the mbuf size is set to 2048 and I am trying to send
> > large packet to it (say 2500 bytes without fragmentation) from another
> > host. I enabled both jumbo_frame and enable_scatter for the port. But I
> saw
> > on the target only one mbuf is received with data_len equal to 2500 (it's
> > supposed to be a two-mbuf chain).  Although mbuf itself is not working as
> > expected, ping between two hosts succeeded (large ping size; no
> > fragmentation).
> >
> > 1. my mbuf size is only 2048. how can it support receiving such large
> > packet in one mbuf?
> >
> > 2.how to make it work as expected (enable multi-segment mbuf and receive
> > using mbuf chain when needed)?
> >
> > Appreciate your help.
>
> Hi,
>
> when you get the single mbuf with data_len == 2500, what is the buf_len
> value
> reported as?
>
> /Bruce
>


[dpdk-dev] multi-segment mbuf

2016-03-22 Thread Clarylin L
On my setup the sending host is a regular one, not running DPDK. It sends
2500-byte packets, unfragmented, to the DPDK node.

Isn't the enic PMD responsible for fetching the packet and building the mbuf
(or mbuf chain if required)? Or do you mean I need to write my own code to
build the mbuf in the driver?

On Tue, Mar 22, 2016 at 3:13 PM, Stephen Hemminger <
stephen at networkplumber.org> wrote:

> Read the source.
>
> A multi-segment mbuf has the first mbuf with nb_segs > 1 and chained by
> next pointer.
> It is a bug in the creator of the mbuf, if number of segments and next
> chain don't
> match.
>
> There is a rte_pktmbuf_dump(), you can use to look at how your mbuf is
> formatted.
>
> On Tue, Mar 22, 2016 at 1:36 PM, Clarylin L  wrote:
>
>> Sorry my bad. The mbuf size has been accidentally changed to 3000.
>>
>> After fixing this by setting mbuf size to 2048, multi-segment mbuf still
>> doesn't work. I was trying to send 2500-byte packets to the target system
>> and was expecting to see two-segment mbuf chain), but got errors on it.
>>
>> Tue Mar 22 14:52:00 2016^@PMD: rte_enic_pmd: packet error
>>
>>
>> Tue Mar 22 14:52:01 2016^@PMD: rte_enic_pmd: packet error
>>
>>
>> Tue Mar 22 14:52:02 2016^@PMD: rte_enic_pmd: packet error
>>
>>
>> Tue Mar 22 14:52:03 2016^@PMD: rte_enic_pmd: packet error
>>
>>
>> Tue Mar 22 14:52:04 2016^@PMD: rte_enic_pmd: packet error
>>
>>
>> Is enic supporting multi-segment mbuf? The dpdk version is 2.0.0. I have
>> enabled jumbo-frame and enable_scatter for the port.
>>
>> On Tue, Mar 22, 2016 at 3:27 AM, Bruce Richardson <
>> bruce.richardson at intel.com> wrote:
>>
>> > On Mon, Mar 21, 2016 at 04:34:50PM -0700, Clarylin L wrote:
>> > > I am trying multi-segment mbuf, but it seems not working.
>> > >
>> > > On my target host, the mbuf size is set to 2048 and I am trying to
>> send
>> > > large packet to it (say 2500 bytes without fragmentation) from another
>> > > host. I enabled both jumbo_frame and enable_scatter for the port. But
>> I
>> > saw
>> > > on the target only one mbuf is received with data_len equal to 2500
>> (it's
>> > > supposed to be a two-mbuf chain).  Although mbuf itself is not
>> working as
>> > > expected, ping between two hosts succeeded (large ping size; no
>> > > fragmentation).
>> > >
>> > > 1. my mbuf size is only 2048. how can it support receiving such large
>> > > packet in one mbuf?
>> > >
>> > > 2.how to make it work as expected (enable multi-segment mbuf and
>> receive
>> > > using mbuf chain when needed)?
>> > >
>> > > Appreciate your help.
>> >
>> > Hi,
>> >
>> > when you get the single mbuf with data_len == 2500, what is the buf_len
>> > value
>> > reported as?
>> >
>> > /Bruce
>> >
>>
>
>


[dpdk-dev] multi-segment mbuf

2016-03-22 Thread Clarylin L
OK, I think you mean there is a bug in the driver code when it builds a
multi-segment mbuf.

On Tue, Mar 22, 2016 at 3:13 PM, Stephen Hemminger <
stephen at networkplumber.org> wrote:

> Read the source.
>
> A multi-segment mbuf has the first mbuf with nb_segs > 1 and chained by
> next pointer.
> It is a bug in the creator of the mbuf, if number of segments and next
> chain don't
> match.
>
> There is a rte_pktmbuf_dump(), you can use to look at how your mbuf is
> formatted.
>
> On Tue, Mar 22, 2016 at 1:36 PM, Clarylin L  wrote:
>
>> Sorry my bad. The mbuf size has been accidentally changed to 3000.
>>
>> After fixing this by setting mbuf size to 2048, multi-segment mbuf still
>> doesn't work. I was trying to send 2500-byte packets to the target system
>> and was expecting to see two-segment mbuf chain), but got errors on it.
>>
>> Tue Mar 22 14:52:00 2016^@PMD: rte_enic_pmd: packet error
>>
>>
>> Tue Mar 22 14:52:01 2016^@PMD: rte_enic_pmd: packet error
>>
>>
>> Tue Mar 22 14:52:02 2016^@PMD: rte_enic_pmd: packet error
>>
>>
>> Tue Mar 22 14:52:03 2016^@PMD: rte_enic_pmd: packet error
>>
>>
>> Tue Mar 22 14:52:04 2016^@PMD: rte_enic_pmd: packet error
>>
>>
>> Is enic supporting multi-segment mbuf? The dpdk version is 2.0.0. I have
>> enabled jumbo-frame and enable_scatter for the port.
>>
>> On Tue, Mar 22, 2016 at 3:27 AM, Bruce Richardson <
>> bruce.richardson at intel.com> wrote:
>>
>> > On Mon, Mar 21, 2016 at 04:34:50PM -0700, Clarylin L wrote:
>> > > I am trying multi-segment mbuf, but it seems not working.
>> > >
>> > > On my target host, the mbuf size is set to 2048 and I am trying to
>> send
>> > > large packet to it (say 2500 bytes without fragmentation) from another
>> > > host. I enabled both jumbo_frame and enable_scatter for the port. But
>> I
>> > saw
>> > > on the target only one mbuf is received with data_len equal to 2500
>> (it's
>> > > supposed to be a two-mbuf chain).  Although mbuf itself is not
>> working as
>> > > expected, ping between two hosts succeeded (large ping size; no
>> > > fragmentation).
>> > >
>> > > 1. my mbuf size is only 2048. how can it support receiving such large
>> > > packet in one mbuf?
>> > >
>> > > 2.how to make it work as expected (enable multi-segment mbuf and
>> receive
>> > > using mbuf chain when needed)?
>> > >
>> > > Appreciate your help.
>> >
>> > Hi,
>> >
>> > when you get the single mbuf with data_len == 2500, what is the buf_len
>> > value
>> > reported as?
>> >
>> > /Bruce
>> >
>>
>
>


[dpdk-dev] Unable to get multi-segment mbuf working for ixgbe

2016-03-25 Thread Clarylin L
Hello,

I am trying to use multi-segment mbufs to receive large packets. I enabled
jumbo_frame and enable_scatter for the port and expected mbuf chaining to be
used for packets larger than the mbuf size (which is set to 2048).

When sending a 3000-byte, unfragmented packet from another non-DPDK host, I
did not see the packet being received by the ixgbe PMD.

After a quick debugging session I found that the following check in
ixgbe_recv_scattered_pkts() (ixgbe_rxtx.c) always takes the break for large
packets, while it does not for small packets (smaller than the mbuf size):

if (!(staterr & rte_cpu_to_le32(IXGBE_RXDADV_STAT_DD)))
        break;

Is enabling jumbo_frame and enable_scatter enough to get mbuf chaining going?
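
For reference, this is the kind of check I run on the RX side to see whether
a chain actually arrives (a sketch; the port number, queue 0 and the burst
size are placeholders):

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST 32

/* Receive one burst and report, per packet, how many segments the PMD
 * delivered and whether the segment lengths add up to pkt_len. */
static void
check_rx_chains(uint8_t rx_port)
{
    struct rte_mbuf *bufs[BURST];
    uint16_t nb_rx = rte_eth_rx_burst(rx_port, 0, bufs, BURST);
    uint16_t i;

    for (i = 0; i < nb_rx; i++) {
        const struct rte_mbuf *seg = bufs[i];
        uint32_t total = 0;

        while (seg != NULL) {          /* walk the segment chain */
            total += seg->data_len;
            seg = seg->next;
        }
        printf("pkt_len=%u nb_segs=%u sum(data_len)=%u\n",
               (unsigned)bufs[i]->pkt_len, (unsigned)bufs[i]->nb_segs,
               (unsigned)total);
        rte_pktmbuf_free(bufs[i]);     /* frees the whole chain */
    }
}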

Appreciate any input! Thanks.


[dpdk-dev] Unable to get multi-segment mbuf working for ixgbe

2016-03-28 Thread Clarylin L
Any pointers to what the issue could be? Thanks.

On Fri, Mar 25, 2016 at 4:13 PM, Clarylin L  wrote:

> Hello,
>
> I am trying to use multi-segment mbuf to receive large packet. I enabled
> jumbo_frame and enable_scatter for the port and was expecting mbuf chaining
> would be used to receive packets larger than the mbuf size (which was set
> to 2048).
>
> When sending 3000-byte (without fragmentation) packet from another
> non-dpdk host, I didn't see packet was received by the ixgbe PMD driver.
>
> After a quick debugging session I found that the following statement in 
> ixgbe_recv_scattered_pkts
> (ixgbe_rxtx.c) is
> always true and break the loop in case of large packet, while it's not the
> case for small packet (smaller than mbuf size):
>
> if (! staterr & rte_cpu_to_le32(IXGBE_RXDADV_STAT_DD))
> break;
>
> Is enabling jumbo_frame and enable_scatter good enough to get started the
> mbuf chaining?
>
> Appreciate any input! Thanks.
>


[dpdk-dev] is ixgbe supporting multi-segment mbuf?

2016-03-28 Thread Clarylin L
ixgbe_recv_scattered_pkts() was selected as the RX function. Receiving
packets smaller than the mbuf size works perfectly. However, if an incoming
packet is larger than the data size of one mbuf, receiving does not work. In
that case, isn't it supposed to use mbuf chaining to receive?

The port has both jumbo_frame and enable_scatter turned on. Are these two
flags enough to make mbuf chaining work?