Hi
I am running OVS with DPDK and have two different kinds of DPDK-compatible
NICs. When I bind the ports of the 10-Gigabit SFI/SFP+ cards, I am able to
add ports and so on. But when I bind the I350 Gigabit Network cards and try
to add a port, ovs-vswitchd crashes with the following output:
2016-04-07T12:59:51Z|00020|bridge|INFO|bridge br0: using datapath ID
0000469c6610e84c
2016-04-07T12:59:51Z|00021|connmgr|INFO|br0: added service controller
"punix:/usr/local/var/run/openvswitch/br0.mgmt"
2016-04-07T13:00:01Z|00022|memory|INFO|peak resident set size grew 97%
in last 10.0 seconds, from 8040 kB to 15840 kB
2016-04-07T13:00:01Z|00023|memory|INFO|handlers:20 ports:1
revalidators:8 rules:5
PMD: eth_igb_tx_queue_setup(): To improve 1G driver performance,
consider setting the TX WTHRESH value to 4, 8, or 16.
PMD: eth_igb_tx_queue_setup(): sw_ring=0x7fe8127a1ac0
hw_ring=0x7fe8127a9b00 dma_addr=0x3d27a9b00
PMD: eth_igb_rx_queue_setup(): sw_ring=0x7fe81278d940
hw_ring=0x7fe812791980 dma_addr=0x3d2791980
Let me describe the commands I ran to reach this point:
ovs-vswitchd --dpdk -c 0x2 -n 4 --socket-mem 2048 -- unix:/usr/local/var/run/openvswitch/db.sock --pidfile
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
sudo ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
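For completeness: the --socket-mem 2048 option requires hugepages to be
reserved before ovs-vswitchd starts. A quick sanity check (a generic Linux
check I use, not one of the original commands) is:

```shell
# Confirm hugepages are reserved and some are still free for DPDK.
# HugePages_Total should be nonzero and HugePages_Free > 0 before
# launching ovs-vswitchd with --socket-mem.
grep Huge /proc/meminfo
```

In my case hugepages appear to be fine, since the SFI/SFP+ ports come up
with the same setup.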
The same commands work for the other NIC, but not for this one.
To check whether the card itself has a problem, I connected this port to a
NetXtreme BCM5719 Gigabit Ethernet PCI port, sent pings from the NetXtreme
to the I350, and listened on the I350 with tcpdump; I saw the incoming ARP
requests.
Is there any information about this kind of situation, or do you have any
suggestions?
Thanks in advance,
kursat
_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss