> -----Original Message-----
> From: Flavio Leitner [mailto:f...@sysclose.org]
> Sent: Tuesday, May 26, 2015 12:34 PM
> To: Rao, Ravi
> Cc: dev@openvswitch.org
> Subject: Re: [ovs-dev] FW: performance issue with ovs + dpdk2.0 with vhost
> 
> On Sun, May 24, 2015 at 08:55:59AM -0500, Ravi Rao wrote:
> > Hi
> >   Below are the exact sequence of steps that I followed..
> >
> > This is what I am trying to Do.
> > Below is the setup..
> >
> >                   +----------------------+ __
> >                   | guest                |   |
> >                   |                      |   |
> >                   |                      |   |  guest
> >                   |  eth0   L3fwd  eth1  |   |
> >                   |   |              |   |   |
> >                   +---+--------------+---+ __|
> >                           ^      :
> >                           |      |
> >                           :      v                       __
> >     +-----------------+--------------+-----------------+   |
> >     |                 | ovs-br0      |                 |
> >     |                 +--------------+                 |   |
> >     |                     ^      :                     |   |
> >     |          +----------+      +---------+           |   |  host
> >     |          :                           v           |   |
> >     |   +--------------+            +--------------+   |   |
> >     |   |   dpdk0      |  ovs-dpdk  |   dpdk1      |   |   |
> >     +---+--------------+------------+--------------+---+ __|
> >                ^                           :
> >                |                           |
> >                :                           v
> >     +--------------------------------------------------+
> >     |                                                  |
> >     |                traffic generator                 |
> >     |                                                  |
> >     +--------------------------------------------------+
> >
> >
> > Step1: Use the latest ovs and dpdk2.0 to get the ovs running with 2
> > dpdk interfaces that are bound to 2 10GB physical interfaces
> > #** Insert the required modules
> > cd /root/dpdk-2.0.0
> > modprobe uio
> > modprobe cuse
> > rmmod igb_uio
> > rmmod rte_kni
> > insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
> >
> > #**** Assign the dpdk capable interfaces to igb_uio driver
> > tools/dpdk_nic_bind.py --status
> > tools/dpdk_nic_bind.py -b igb_uio 0000:02:00.0
> > tools/dpdk_nic_bind.py -b igb_uio 0000:02:00.1
> > tools/dpdk_nic_bind.py --status
> >
> > #--- Setup the openVswitch
> > cd /root/ovs
> > pkill -9 ovs
> > mkdir -p /usr/local/etc/openvswitch
> > mkdir -p /usr/local/var/run/openvswitch
> > rm -rf /usr/local/etc/openvswitch/conf.db
> > ovsdb/ovsdb-tool create /usr/local/etc/openvswitch/conf.db \
> >     vswitchd/vswitch.ovsschema
> >
> > #Start ovsdb-server
> > ovsdb/ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock \
> >     --remote=db:Open_vSwitch,Open_vSwitch,manager_options --pidfile --detach
> > utilities/ovs-vsctl --no-wait init
> >
> > #Start vswitchd:
> > export DB_SOCK=/usr/local/var/run/openvswitch/db.sock
> > rm /dev/vhost-net
> > vswitchd/ovs-vswitchd --dpdk -c 0x3 -n 4 --socket-mem 1024,0 \
> >     -- unix:$DB_SOCK --pidfile --detach
> >
> > #Add bridge & ports
> > utilities/ovs-vsctl add-br ovs-br0 -- set bridge ovs-br0 datapath_type=netdev
> > utilities/ovs-vsctl add-port ovs-br0 dpdk0 -- set Interface dpdk0 type=dpdk
> > utilities/ovs-vsctl add-port ovs-br0 dpdk1 -- set Interface dpdk1 type=dpdk
> >
> > Step2: Create the dpdkvhost interfaces and bring up the guest VM using QEMU
> > export DPDK_DIR=/root/dpdk-2.0.0
> > insmod $DPDK_DIR/lib/librte_vhost/eventfd_link/eventfd_link.ko
> > cd /root/ovs
> > utilities/ovs-vsctl add-port ovs-br0 dpdkvhost0 -- set Interface dpdkvhost0 type=dpdkvhost
> > utilities/ovs-vsctl add-port ovs-br0 dpdkvhost1 -- set Interface dpdkvhost1 type=dpdkvhost
> >
> > #**** Start the guest ubuntu VM1 from a terminal that is logged in as root
> > qemu-system-x86_64 --enable-kvm -k fr -m 1G \
> >         -cpu host -smp cores=2,threads=1,sockets=1 \
> >         -serial telnet::4444,server,nowait -monitor telnet::5555,server,nowait \
> >         -hda /root/VMs/images/ubuntu-14.04-template.qcow2 \
> >         -object memory-backend-file,id=mem,size=1G,mem-path=/mnt/huge_1GB,share=on \
> >         -numa node,memdev=mem \
> >         -netdev type=tap,id=dpdkvhost0,script=no,downscript=no,ifname=dpdkvhost0,vhost=on \
> >         -device virtio-net-pci,netdev=dpdkvhost0,mac=52:54:00:12:34:56,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
> >         -netdev type=tap,id=dpdkvhost1,script=no,downscript=no,ifname=dpdkvhost1,vhost=on \
> >         -device virtio-net-pci,netdev=dpdkvhost1,mac=52:54:00:12:34:57,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
> >         -device ne2k_pci,mac=DE:AD:DE:01:02:03,netdev=user.0 \
> >         -netdev user,id=user.0,hostfwd=tcp::2222-:22 &
> >
> > # **** Add flows between ports
> > utilities/ovs-ofctl del-flows ovs-br0
> > utilities/ovs-ofctl add-flow ovs-br0 in_port=1,action=output:3
> > utilities/ovs-ofctl add-flow ovs-br0 in_port=2,action=output:4
> > utilities/ovs-ofctl add-flow ovs-br0 in_port=3,action=output:1
> > utilities/ovs-ofctl add-flow ovs-br0 in_port=4,action=output:2
> >
> > Once I complete the above settings, I log into the VM and enable IPv4
> > forwarding so that it can do L3 forwarding between eth0 and eth1.
> >
> > The issue I am seeing is when I start pumping packets on IXIA port
> > connected to physical port dpdk0 I see lots of tx_errors on dpdk0.
> > I can only pass about 1000 pps without getting any errors. Is there
> > anything I am doing wrong or missing in the above setup?
> 
> 
> I looked at your setup and I am not finding anything wrong.
> Maybe there is an error inside the guest and packets are being flooded?
> What happens if you change the flows so that they loop the packets
> back to IXIA instead of through the guest? Does that work?
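That loopback sanity check could look something like the following (a sketch only; the port numbers 1 = dpdk0 and 2 = dpdk1 are assumed from the order the ports were added above):

```shell
# Loop traffic straight back between the two physical DPDK ports,
# bypassing the vhost/guest path entirely.
utilities/ovs-ofctl del-flows ovs-br0
utilities/ovs-ofctl add-flow ovs-br0 in_port=1,action=output:2
utilities/ovs-ofctl add-flow ovs-br0 in_port=2,action=output:1
```

If the generator's traffic passes cleanly at high rates here, the bottleneck is on the vhost/guest side rather than the physical ports.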

How are all your threads affinitized? Is the qemu vcpu anti-affinitized with
respect to the ovs pmd threads?
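As a rough sketch of keeping them apart (the core numbers here are only an example, not a recommendation for your machine):

```shell
# Example: pin the OVS PMD threads to cores 2-3 (mask 0x0c) ...
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x0c
# ... then move qemu's threads onto different cores (4-5) so the
# vcpus never preempt the poll-mode drivers.
for tid in $(ps -L -o tid= -C qemu-system-x86_64); do
    taskset -p -c 4-5 "$tid"
done
```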

Are flows getting installed in the datapath (ovs-appctl dpctl/dump-flows)?
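It is also worth checking where the errors are actually being counted, e.g.:

```shell
# Flows installed in the datapath (as opposed to the OpenFlow table):
ovs-appctl dpctl/dump-flows
# Per-port rx/tx and error/drop counters on the bridge:
ovs-ofctl dump-ports ovs-br0
```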

What commit are you working off of? Are you using

95e9881f843896751a76481cfe7869e2c0c1270
(netdev-dpdk: Add vhost enqueue retries)?


> 
> fbl
> 
> >
> > Qemu version is 2.2.1
> > Thanks & Regards,
> > Ravi..
> >
> > On 05/22/15 19:38, Flavio Leitner wrote:
> > >On Fri, May 15, 2015 at 02:07:07PM +0000, Rao, Ravi wrote:
> > >>Hi All,
> > >>    I am trying to get a Guest VM connected to the dpdkvhost
> > >>    interface on a host which has the ovs running from the latest ovs
> > >>    git and dpdk2.0. Looks like I am missing something as 95% of
> > >>    traffic is not getting to the VM. Can one of you please let me
> > >>    know which mailing list I should be posting the details for
> > >>    getting a resolution. Is it this dpdk list OR would it be the
> > >>    openvswitch list?
> > >This is the right place, but it would be great if you could tell us
> > >the configuration, how you're testing and the qemu version too.
> > >
> > >fbl
> > >
> >
> > _______________________________________________
> > dev mailing list
> > dev@openvswitch.org
> > http://openvswitch.org/mailman/listinfo/dev
> 