[dpdk-dev] running l2fwd with endless error message: PMD: Unhandled Msg 00000006

2013-11-08 Thread cheng.luo...@hitachi.com
Hi,

Recently I have been trying to run l2fwd.
When I run ifconfig on a VF (created from an igb_uio-bound PF) inside a VM,
l2fwd prints the following errors endlessly:
PMD: Unhandled Msg 00000006
PMD: Unhandled Msg 00000008
---

The VM also prints the following message endlessly:
---
[  447.339765] ixgbevf: eth1: ixgbevf_watchdog_task: NIC Link is Up, 10 Gbps
[  447.748556] ixgbevf: eth1: ixgbevf_watchdog_task: NIC Link is Up, 10 Gbps
[  447.844914] ixgbevf: eth1: ixgbevf_watchdog_task: NIC Link is Up, 10 Gbps


I have one Intel X520 with two ports.
I bind both PFs to igb_uio and then create one VF per PF.
I assign each VF to one VM and start the two VMs on the same physical server.
I run l2fwd in the host to take over the PFs.
When I run ifconfig in a VM, the errors above occur.

I use DPDK 1.5.0r0 in the host, with kernel 3.6.10-4.fc18.x86_64.
In the virtual machines I use ixgbevf v2.11.3.

Has anyone seen the same problem?
Or is there something wrong in how I run l2fwd?

Any advice is appreciated.

Cheng Luo.


[dpdk-dev] DPDK fails to handle VF initialization messages

2013-11-12 Thread cheng.luo...@hitachi.com
Hi,

I am using DPDK with SR-IOV.
I have one Intel X520 with two PFs.
After binding the two PFs to igb_uio, I create two VFs and assign
them to a virtual machine.
I found that, to bring up a VF, the VF sends mailbox messages to the PF
during initialization, such as IXGBE_VF_SET_MACVLAN (0x06) and
IXGBE_VF_API_NEGOTIATE (0x08).

However, DPDK's PMD does not handle these messages.
In the source file lib/librte_pmd_ixgbe/ixgbe_pf.c,
the function ixgbe_rcv_msg_from_vf only handles the following messages:
IXGBE_VF_SET_MAC_ADDR   (0x02)
IXGBE_VF_SET_MULTICAST  (0x03)
IXGBE_VF_SET_LPE  (0x04)
IXGBE_VF_SET_VLAN   (0x05)
All other messages are treated as errors and no operation is taken.
Therefore, if DPDK controls the PF, I cannot use the VFs with the
ixgbevf driver in the VMs.
The PMD endlessly prints the following error message:
-
PMD: Unhandled Msg 00000006
PMD: Unhandled Msg 00000008
---

In the VMs, the following message is also printed endlessly:
-
[  447.339765] ixgbevf: eth1: ixgbevf_watchdog_task: NIC Link is Up, 10 Gbps
[  447.748556] ixgbevf: eth1: ixgbevf_watchdog_task: NIC Link is Up, 10 Gbps
[  447.844914] ixgbevf: eth1: ixgbevf_watchdog_task: NIC Link is Up, 10 Gbps


I also read the code of the kernel ixgbe driver and found that it does handle
messages 0x06 and 0x08; a simplified sketch of what adding such handling on the
DPDK side might look like is shown below.
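
To make the point concrete, here is a minimal standalone sketch of a PF-side
mailbox dispatch with cases added for 0x06 and 0x08. This is not the real
ixgbe_rcv_msg_from_vf(): the function name pf_dispatch_vf_msg, the handler
stubs, and the message-ID masking are placeholders of mine, and the numeric
IDs are copied from the list above (they should be checked against
ixgbe_mbx.h in the tree being used).
-----
/*
 * Sketch only: a simplified PF-side mailbox dispatcher showing where
 * IXGBE_VF_SET_MACVLAN and IXGBE_VF_API_NEGOTIATE could be handled.
 */
#include <stdint.h>
#include <stdio.h>

/* Message IDs as quoted in this thread; verify against ixgbe_mbx.h. */
#define IXGBE_VF_SET_MAC_ADDR   0x02
#define IXGBE_VF_SET_MULTICAST  0x03
#define IXGBE_VF_SET_LPE        0x04
#define IXGBE_VF_SET_VLAN       0x05
#define IXGBE_VF_SET_MACVLAN    0x06
#define IXGBE_VF_API_NEGOTIATE  0x08

/* Hypothetical per-message handlers; the real driver also receives the
 * device handle and the full mailbox buffer. */
static int handle_set_mac_addr(uint16_t vf, uint32_t *msg)  { (void)vf; (void)msg; return 0; }
static int handle_set_multicast(uint16_t vf, uint32_t *msg) { (void)vf; (void)msg; return 0; }
static int handle_set_lpe(uint16_t vf, uint32_t *msg)       { (void)vf; (void)msg; return 0; }
static int handle_set_vlan(uint16_t vf, uint32_t *msg)      { (void)vf; (void)msg; return 0; }
static int handle_set_macvlan(uint16_t vf, uint32_t *msg)   { (void)vf; (void)msg; return 0; }
static int handle_api_negotiate(uint16_t vf, uint32_t *msg) { (void)vf; (void)msg; return 0; }

static int
pf_dispatch_vf_msg(uint16_t vf, uint32_t *msgbuf)
{
    /* Assume the low 16 bits of the first mailbox word carry the ID. */
    switch (msgbuf[0] & 0xFFFF) {
    case IXGBE_VF_SET_MAC_ADDR:
        return handle_set_mac_addr(vf, msgbuf);
    case IXGBE_VF_SET_MULTICAST:
        return handle_set_multicast(vf, msgbuf);
    case IXGBE_VF_SET_LPE:
        return handle_set_lpe(vf, msgbuf);
    case IXGBE_VF_SET_VLAN:
        return handle_set_vlan(vf, msgbuf);
    /* The two messages reported as unhandled in this thread: */
    case IXGBE_VF_SET_MACVLAN:
        return handle_set_macvlan(vf, msgbuf);
    case IXGBE_VF_API_NEGOTIATE:
        return handle_api_negotiate(vf, msgbuf);
    default:
        /* Current behaviour: log the message and do nothing. */
        printf("PMD: Unhandled Msg %8.8x\n", (unsigned)msgbuf[0]);
        return -1;
    }
}

int main(void)
{
    /* Usage example: dispatch an API-negotiate message from VF 0. */
    uint32_t msg[4] = { IXGBE_VF_API_NEGOTIATE, 0, 0, 0 };
    return pf_dispatch_vf_msg(0, msg) == 0 ? 0 : 1;
}
-----
Even just acknowledging these messages instead of silently dropping them might
stop the retry loop on the VF side, but I have not verified that.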
Is this a bug in DPDK?
Or is some step in my configuration wrong?

I think it should be possible to run DPDK in the host while the VFs in the VMs
use a normal driver such as ixgbevf.







[dpdk-dev] failure to run testpmd: EAL fails to bind socket

2013-10-24 Thread cheng.luo...@hitachi.com
Dear all,

I am a beginner with DPDK.
I just installed it on Fedora 18 with two X540 Ethernet cards.

After compiling it and running testpmd, I got the following output:

[root at localhost test-pmd]# ./testpmd -c7 -n3 -- -i --nb-cores=2 --nb-ports=2
EAL: Detected lcore 0 as core 0 on socket 0
EAL: Detected lcore 1 as core 1 on socket 0
EAL: Detected lcore 2 as core 2 on socket 0
EAL: Setting up hugepage memory...
EAL: Ask a virtual area of 0x2097152 bytes
EAL: Virtual area found at 0x2a80 (size = 0x20)
EAL: Ask a virtual area of 0x2143289344 bytes
EAL: Virtual area found at 0x2aaa2aa0 (size = 0x7fc0)
EAL: Ask a virtual area of 0x2097152 bytes
EAL: Virtual area found at 0x7fc6aec0 (size = 0x20)
EAL: Requesting 1024 pages of size 2MB from socket 0
EAL: TSC frequency is ~1795673 KHz
EAL: Master core 0 is ready (tid=af35b880)
EAL: Core 2 is ready (tid=ad7f1700)
EAL: Core 1 is ready (tid=adff2700)
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL:   probe driver: 8086:1528 rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7fc6af128000
EAL:   PCI memory mapped at 0x7fc6af361000
EAL: PCI device 0000:02:00.1 on NUMA socket -1
EAL:   probe driver: 8086:1528 rte_ixgbe_pmd
EAL:   PCI memory mapped at 0x7fc6aef28000
EAL:   PCI memory mapped at 0x7fc6aef24000
Interactive-mode selected
Configuring Port 0 (socket -1)
Configuring Port 1 (socket -1)
Checking link statuses...
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd>

Although I can get into the testpmd command prompt, it seems that EAL fails to
bind the PCI devices to a socket (they are reported as socket -1).
Therefore, when I run start tx_first and then stop, no packets are forwarded,
as shown below:

=
testpmd> start tx_first
  io packet forwarding - CRC stripping disabled - packets/burst=16
  nb forwarding cores=2 - nb forwarding ports=2
  RX queues=1 - RX desc=128 - RX free threshold=0
  RX threshold registers: pthresh=8 hthresh=8 wthresh=4
  TX queues=1 - TX desc=512 - TX free threshold=0
  TX threshold registers: pthresh=36 hthresh=0 wthresh=0
  TX RS bit threshold=0 - TXQ flags=0x0
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  -- Forward statistics for port 0  --
  RX-packets: 0  RX-dropped: 0 RX-total: 0
  TX-packets: 16 TX-dropped: 0 TX-total: 16
  

  -- Forward statistics for port 1  --
  RX-packets: 0  RX-dropped: 0 RX-total: 0
  TX-packets: 16 TX-dropped: 0 TX-total: 16
  

  +++ Accumulated forward statistics for all ports+++
  RX-packets: 0  RX-dropped: 0 RX-total: 0
  TX-packets: 32 TX-dropped: 0 TX-total: 32
  

Done.


Can anyone tell me how to fix this?

Thank you very much.


[dpdk-dev] /sys/bus/pci/devices/[interface]/numa_node is -1

2013-10-31 Thread cheng.luo...@hitachi.com
Hi,

I tried to run the DPDK testpmd and it failed.
I found that EAL reads the /sys/bus/pci/devices/[interface]/numa_node file
when initializing the PCI devices.
My PC does not support NUMA; it has only one processor.
However, the numa_node value of the device is -1.
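
For illustration, reading that attribute and treating -1 as "no NUMA
information" could look roughly like the standalone sketch below (this is not
the EAL code, and the PCI address in the path is only an example):
-----
/* Sketch only: read numa_node for one PCI device and fall back to
 * socket 0 when the kernel reports -1. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Example device; substitute the address of the device being probed. */
    const char *path = "/sys/bus/pci/devices/0000:02:00.0/numa_node";
    FILE *f = fopen(path, "r");
    long node;

    if (f == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }
    if (fscanf(f, "%ld", &node) != 1) {
        fprintf(stderr, "could not parse %s\n", path);
        fclose(f);
        return EXIT_FAILURE;
    }
    fclose(f);

    if (node < 0) {
        /* -1 means the kernel has no NUMA affinity information for the
         * device; on a single-socket machine socket 0 is the only choice. */
        printf("numa_node = %ld (unknown), treating it as socket 0\n", node);
        node = 0;
    } else {
        printf("numa_node = %ld\n", node);
    }
    return EXIT_SUCCESS;
}
-----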

I found suggestions that the BIOS must expose the proper ACPI features
for the kernel to detect PCI device affinity to a CPU;
otherwise, numa_node will be -1.

However, I cannot find such an option in my BIOS.
Has anyone had the same problem, or can anyone offer a suggestion?

Thank you.

Cheng Luo.