Hi Thadeu,

Thank you for your helpful advice and patience.

I set the bridge and the interface to type dpdk, and found that the failure is caused by
insufficient memory: there is not enough hugepage memory available on NUMA socket 2,
where the NIC is attached. Now that I know the cause, I will adjust the NUMA memory
policy and try to solve it. Thanks again.
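For reference, here is what I plan to try. This is only a rough sketch; the sizes are my
own guesses and have to be adjusted to the hugepages actually reserved on each node. The
idea is to give socket 2, where the 82599 NIC is attached, its own share of
dpdk-socket-mem, and then restart ovs-vswitchd, since the EAL arguments are read only at
startup:

  # Check how many 2 MB hugepages are reserved on NUMA node 2.
  cat /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages

  # Reserve 512 pages (1024 MB) on node 2 if there are none.
  echo 512 > /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages

  # Give DPDK memory on sockets 0 and 2 (the values are MB per NUMA socket 0,1,2,3),
  # then restart ovs-vswitchd so that it re-runs the EAL initialization.
  ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=256,0,1024,0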
Here are the logs:

2016-10-26T13:50:29Z|00022|dpdk|ERR|Insufficient memory to create memory pool for netdev dpdk0, with MTU 1500 on socket 2
2016-10-26T13:50:29Z|00023|bridge|WARN|could not open network device dpdk0 (Cannot allocate memory)

2016-10-26 21:31 GMT+08:00 Thadeu Lima de Souza Cascardo <casca...@redhat.com>:

> On Wed, Oct 26, 2016 at 09:25:38PM +0800, Bo Sun wrote:
> > It also failed.
> > I think that if dpdk0 were available, it would appear in the logs, but I
> > never see dpdk0 in the logs.
> >
> > Here are the details of adding dpdk0:
> >
> > # ovs-vsctl add-port ovsbr dpdk0
> > ovs-vsctl: Error detected while setting up 'dpdk0'.  See ovs-vswitchd log
> > for details.
> >
> > Log details:
> >
> > 2016-10-26T13:16:39.224Z|00048|bridge|WARN|could not open network device dpdk0 (No such device)
> >
> > # ovs-vsctl show
> > cc1f6d24-9196-4976-8b6f-09581a9d996a
> >     Bridge ovsbr
> >         Port "dpdk0"
> >             Interface "dpdk0"
> >                 error: "could not open network device dpdk0 (No such device)"
> >         Port "ens5f0"
> >             Interface "ens5f0"
> >                 error: "could not open network device ens5f0 (No such device)"
> >         Port ovsbr
> >             Interface ovsbr
> >                 type: internal
>
> My bad. You need to set it as type dpdk.
>
> ovs-vsctl set bridge ovsbr datapath_type=netdev
> ovs-vsctl set iface dpdk0 type=dpdk
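That was indeed the missing step: with datapath_type=netdev on the bridge and type=dpdk
on the interface, OVS recognizes dpdk0 (it then fails on memory allocation, as in the
logs at the top). As I understand it, dpdk0 simply names the first vfio-pci-bound
device, 0000:8b:00.0 in my case. For the archive, the same thing can be done in a single
step when the port is added; a sketch equivalent to the two commands above:

  ovs-vsctl add-port ovsbr dpdk0 -- set Interface dpdk0 type=dpdk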
> > 2016-10-26 21:15 GMT+08:00 Thadeu Lima de Souza Cascardo <casca...@redhat.com>:
> >
> > > On Wed, Oct 26, 2016 at 09:13:46PM +0800, Bo Sun wrote:
> > > > Sorry for forgetting to include the mailing list; I'm re-sending my reply.
> > > >
> > > > The problem is that no dpdk0 or dpdk1 port shows up.
> > > > So I can't add a dpdk port to the bridge, which means that I can't use
> > > > dpdk to transfer packets.
> > > > I didn't add any other ports to the bridge, because I intend to use
> > > > OVS-DPDK.
> > >
> > > Did you try adding the dpdk0 port? It will not appear in the list of
> > > kernel interfaces.
> > >
> > > ovs-vsctl add-port ovsbr dpdk0
> > >
> > > > The result of dpdk-devbind.py --status:
> > > >
> > > > Network devices using DPDK-compatible driver
> > > > ============================================
> > > > 0000:8b:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' drv=vfio-pci unused=
> > > >
> > > > Network devices using kernel driver
> > > > ===================================
> > > > 0000:01:00.0 'I350 Gigabit Network Connection' if=eno24 drv=igb unused=vfio-pci *Active*
> > > > 0000:01:00.1 'I350 Gigabit Network Connection' if=eno25 drv=igb unused=vfio-pci
> > > > 0000:01:00.2 'I350 Gigabit Network Connection' if=eno26 drv=igb unused=vfio-pci
> > > > 0000:01:00.3 'I350 Gigabit Network Connection' if=eno27 drv=igb unused=vfio-pci
> > > > 0000:8b:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=ens5f1 drv=ixgbe unused=vfio-pci
> > > >
> > > > Since there isn't any dpdk port, I tried to add the 10G Ethernet NIC (its
> > > > name is ens5f0) to the bridge, and got this:
> > > >
> > > > # ovs-vsctl add-port ovsbr ens5f0
> > > > ovs-vsctl: Error detected while setting up 'ens5f0'.  See ovs-vswitchd log
> > > > for details.
> > > >
> > > > The log detail is:
> > > >
> > > > 2016-10-26T12:53:01.806Z|00047|bridge|WARN|could not open network device ens5f0 (No such device)
> > > >
> > > > The result of ovs-vsctl show:
> > > >
> > > > cc1f6d24-9196-4976-8b6f-09581a9d996a
> > > >     Bridge ovsbr
> > > >         Port "ens5f0"
> > > >             Interface "ens5f0"
> > > >                 error: "could not open network device ens5f0 (No such device)"
> > > >         Port ovsbr
> > > >             Interface ovsbr
> > > >                 type: internal
> > > >
> > > > 2016-10-26 19:56 GMT+08:00 Thadeu Lima de Souza Cascardo <casca...@redhat.com>:
> > > >
> > > > > On Wed, Oct 26, 2016 at 06:08:24PM +0800, Bo Sun wrote:
> > > > > > Hi list,
> > > > > >
> > > > > > I'm using OVS 2.6.0 and DPDK 16.07 on CentOS 7.2, following the
> > > > > > instructions in ovs/INSTALL.DPDK.rst.
> > > > > >
> > > > > > When I start ovs-vswitchd, it doesn't function properly.
> > > > > >
> > > > > > I get warnings like "ovs|00014|timeval|WARN|Unreasonably long 6567ms
> > > > > > poll interval (232ms user, 5958ms system)" and many others. I searched
> > > > > > the Internet and found a similar situation that was fixed by upgrading
> > > > > > the kernel. It seems that ovs-vswitchd hangs, so I upgraded the kernel
> > > > > > too, but that didn't solve the problem.
> > > > > >
> > > > > > The DPDK logs seem strange. I'm using an Intel 82599 10Gb Ethernet
> > > > > > Adapter bound to the vfio-pci kernel module, and I have mounted
> > > > > > hugepages. I can't figure out why it goes wrong.
> > > > > >
> > > > > > I'd appreciate any suggestions and help on this issue.
> > > > > >
> > > > > > Here are the OVS logs:
> > > > > >
> > > > > > Oct 26 05:27:41 localhost.localdomain ovsdb-server[13795]: ovs|00001|ovsdb_server|INFO|ovsdb-server (Open vSwitch) 2.6.0
> > > > > > Oct 26 05:27:48 localhost.localdomain ovs-vsctl[13796]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-init=true
> > > > > > Oct 26 05:27:51 localhost.localdomain ovsdb-server[13795]: ovs|00002|memory|INFO|2300 kB peak resident set size after 10.0 seconds
> > > > > > Oct 26 05:27:51 localhost.localdomain ovsdb-server[13795]: ovs|00003|memory|INFO|cells:16 monitors:0
> > > > > > Oct 26 05:27:52 localhost.localdomain ovs-vsctl[13797]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . other_config:dpdk-socket-mem=256,256,256,256
> > > > > > Oct 26 05:28:00 localhost.localdomain ovs-vswitchd[13798]: ovs|00001|socket_util|ERR|localhost: port must be specified
> > > > > > Oct 26 05:28:00 localhost.localdomain ovs-vswitchd[13798]: ovs|00002|vlog|INFO|opened log file /usr/local/var/log/openvswitch/ovs-vsctl.log
> > > > > > Oct 26 05:28:00 localhost.localdomain ovs-vswitchd[13799]: ovs|00003|ovs_numa|INFO|Discovered 20 CPU cores on NUMA node 0
> > > > > > Oct 26 05:28:00 localhost.localdomain ovs-vswitchd[13799]: ovs|00004|ovs_numa|INFO|Discovered 20 CPU cores on NUMA node 1
> > > > > > Oct 26 05:28:00 localhost.localdomain ovs-vswitchd[13799]: ovs|00005|ovs_numa|INFO|Discovered 20 CPU cores on NUMA node 2
> > > > > > Oct 26 05:28:00 localhost.localdomain ovs-vswitchd[13799]: ovs|00006|ovs_numa|INFO|Discovered 20 CPU cores on NUMA node 3
> > > > > > Oct 26 05:28:00 localhost.localdomain ovs-vswitchd[13799]: ovs|00007|ovs_numa|INFO|Discovered 4 NUMA nodes and 80 CPU cores
> > > > > > Oct 26 05:28:00 localhost.localdomain ovs-vswitchd[13799]: ovs|00008|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock: connecting...
> > > > > > Oct 26 05:28:00 localhost.localdomain ovs-vswitchd[13799]: ovs|00009|reconnect|INFO|unix:/usr/local/var/run/openvswitch/db.sock: connected
> > > > > > Oct 26 05:28:00 localhost.localdomain ovs-vswitchd[13799]: ovs|00010|dpdk|INFO|DPDK Enabled, initializing
> > > > > > Oct 26 05:28:00 localhost.localdomain ovs-vswitchd[13799]: ovs|00011|dpdk|INFO|No vhost-sock-dir provided - defaulting to /usr/local/var/run/openvswitch
> > > > > > Oct 26 05:28:00 localhost.localdomain ovs-vswitchd[13799]: ovs|00012|dpdk|INFO|EAL ARGS: ovs-vswitchd --socket-mem 256,256,256,256 -c 0x00000001
> > > > > > Oct 26 05:28:06 localhost.localdomain ovs-vswitchd[13799]: PMD: bnxt_rte_pmd_init() called for (null)
> > > > > > Oct 26 05:28:06 localhost.localdomain ovs-vswitchd[13799]: EAL: PCI device 0000:01:00.0 on NUMA socket 0
> > > > > > Oct 26 05:28:06 localhost.localdomain ovs-vswitchd[13799]: EAL:   probe driver: 8086:1521 rte_igb_pmd
> > > > > > Oct 26 05:28:06 localhost.localdomain ovs-vswitchd[13799]: EAL: PCI device 0000:01:00.1 on NUMA socket 0
> > > > > > Oct 26 05:28:06 localhost.localdomain ovs-vswitchd[13799]: EAL:   probe driver: 8086:1521 rte_igb_pmd
> > > > > > Oct 26 05:28:06 localhost.localdomain ovs-vswitchd[13799]: EAL: PCI device 0000:01:00.2 on NUMA socket 0
> > > > > > Oct 26 05:28:06 localhost.localdomain ovs-vswitchd[13799]: EAL:   probe driver: 8086:1521 rte_igb_pmd
> > > > > > Oct 26 05:28:06 localhost.localdomain ovs-vswitchd[13799]: EAL: PCI device 0000:01:00.3 on NUMA socket 0
> > > > > > Oct 26 05:28:06 localhost.localdomain ovs-vswitchd[13799]: EAL:   probe driver: 8086:1521 rte_igb_pmd
> > > > > > Oct 26 05:28:06 localhost.localdomain ovs-vswitchd[13799]: EAL: PCI device 0000:8b:00.0 on NUMA socket 2
> > > > > > Oct 26 05:28:06 localhost.localdomain ovs-vswitchd[13799]: EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> > > > > > Oct 26 05:28:06 localhost.localdomain ovs-vswitchd[13799]: EAL: using IOMMU type 1 (Type 1)
> > > > > > Oct 26 05:28:06 localhost.localdomain ovs-vswitchd[13799]: EAL: Ignore mapping IO port bar(2) addr: ffc1
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: EAL: PCI device 0000:8b:00.1 on NUMA socket 2
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00013|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.6.0
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00014|timeval|WARN|Unreasonably long 6567ms poll interval (232ms user, 5958ms system)
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00015|timeval|WARN|faults: 9659 minor, 10 major
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00016|timeval|WARN|disk: 2384 reads, 0 writes
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00017|timeval|WARN|context switches: 19 voluntary, 62 involuntary
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]:
> > > > >
> > > > > This message appears right after startup, which might explain it, since
> > > > > DPDK seems to take a long time to initialize (6 seconds in this case).
> > > > >
> > > > > Do you see any other messages like that after you try to operate?
> > > > >
> > > > > What is the problem that you see exactly? What is the configuration of
> > > > > the bridges? Can you send the output of ovs-vsctl show?
> > > > >
> > > > > Cascardo.
> > > > >
> > > > > > ovs|00018|coverage|INFO|Event coverage, avg rate over last: 5 seconds, last minute, last hour, hash=ec4c58ab:
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00019|coverage|INFO|bridge_reconfigure   0.0/sec  0.000/sec  0.0000/sec  total: 1
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00020|coverage|INFO|cmap_expand          0.0/sec  0.000/sec  0.0000/sec  total: 9
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00021|coverage|INFO|miniflow_malloc      0.0/sec  0.000/sec  0.0000/sec  total: 11
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00022|coverage|INFO|hmap_pathological    0.0/sec  0.000/sec  0.0000/sec  total: 3
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00023|coverage|INFO|hmap_expand          0.0/sec  0.000/sec  0.0000/sec  total: 646
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00024|coverage|INFO|txn_unchanged        0.0/sec  0.000/sec  0.0000/sec  total: 3
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00025|coverage|INFO|poll_create_node     0.0/sec  0.000/sec  0.0000/sec  total: 42
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00026|coverage|INFO|seq_change           0.0/sec  0.000/sec  0.0000/sec  total: 49
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00027|coverage|INFO|pstream_open         0.0/sec  0.000/sec  0.0000/sec  total: 1
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00028|coverage|INFO|stream_open          0.0/sec  0.000/sec  0.0000/sec  total: 1
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00029|coverage|INFO|util_xalloc          0.0/sec  0.000/sec  0.0000/sec  total: 11218
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00030|coverage|INFO|netdev_get_hwaddr    0.0/sec  0.000/sec  0.0000/sec  total: 2
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00031|coverage|INFO|netlink_received     0.0/sec  0.000/sec  0.0000/sec  total: 3
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00032|coverage|INFO|netlink_sent         0.0/sec  0.000/sec  0.0000/sec  total: 1
> > > > > > Oct 26 05:28:07 localhost.localdomain ovs-vswitchd[13799]: ovs|00033|coverage|INFO|89 events never hit
> > > > > >
> > > > > > Here are the DPDK logs:
> > > > > >
> > > > > > EAL: Detected 80 lcore(s)
> > > > > > EAL: No free hugepages reported in hugepages-1048576kB
> > > > > > EAL: Probing VFIO support...
> > > > > > EAL: VFIO support initialized
> > > > > > PMD: bnxt_rte_pmd_init() called for (null)
> > > > > > EAL: PCI device 0000:01:00.0 on NUMA socket 0
> > > > > > EAL:   probe driver: 8086:1521 rte_igb_pmd
> > > > > > EAL: PCI device 0000:01:00.1 on NUMA socket 0
> > > > > > EAL:   probe driver: 8086:1521 rte_igb_pmd
> > > > > > EAL: PCI device 0000:01:00.2 on NUMA socket 0
> > > > > > EAL:   probe driver: 8086:1521 rte_igb_pmd
> > > > > > EAL: PCI device 0000:01:00.3 on NUMA socket 0
> > > > > > EAL:   probe driver: 8086:1521 rte_igb_pmd
> > > > > > EAL: PCI device 0000:8b:00.0 on NUMA socket 2
> > > > > > EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> > > > > > EAL: using IOMMU type 1 (Type 1)
> > > > > > EAL: Ignore mapping IO port bar(2) addr: ffc1
> > > > > > EAL: PCI device 0000:8b:00.1 on NUMA socket 2
> > > > > > EAL:   probe driver: 8086:10fb rte_ixgbe_pmd
> > > > > > Zone 0: name:<rte_eth_dev_data>, phys:0x98dced40, len:0x30100, virt:0x7f5232fced40, socket_id:0, flags:0
> > > > > >
> > > > > > --
> > > > > > -----------------------------
> > > > > > Sincerely,
> > > > > > Bo Sun
> > > > > >
> > > > > > _______________________________________________
> > > > > > discuss mailing list
> > > > > > discuss@openvswitch.org
> > > > > > http://openvswitch.org/mailman/listinfo/discuss
> > > >
> > > > --
> > > > -----------------------------
> > > > Sincerely,
> > > > Bo Sun
> >
> > --
> > -----------------------------
> > Sincerely,
> > Bo Sun
>
--
-----------------------------
Sincerely,
Bo Sun
_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss