Hi Przemek,
Thank you for your response; it really gave us a breakthrough.
After setting up DPDK on the compute node for stable/kilo, we are now trying to
set up an OpenStack stable/liberty all-in-one environment. At present we are not
able to get IP allocation for the vhost-user instances through DHCP. We also
tried assigning IPs to them manually, but inter-VM communication is not working
either.
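(For reference, a quick way to confirm whether the instances' DHCP requests
reach br-int at all is to watch the per-port counters while a guest retries
DHCP; the rx counters on the vhu* ports shown in the output below should then
increase. This is only a sketch of the check, not something we claim is
conclusive:)
  sudo ovs-ofctl dump-ports br-int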
neutron agent-list output#
root@nfv-dpdk-devstack:/etc/neutron# neutron agent-list
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host              | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
| 3b29e93c-3a25-4f7d-bf6c-6bb309db5ec0 | DPDK OVS Agent     | nfv-dpdk-devstack | :-)   | True           | neutron-openvswitch-agent |
| 62593b2c-c10f-4d93-8551-c46ce24895a6 | L3 agent           | nfv-dpdk-devstack | :-)   | True           | neutron-l3-agent          |
| 7cb97af9-cc20-41f8-90fb-aba97d39dfbd | DHCP agent         | nfv-dpdk-devstack | :-)   | True           | neutron-dhcp-agent        |
| b613c654-99b7-437e-9317-20fa651a1310 | Linux bridge agent | nfv-dpdk-devstack | :-)   | True           | neutron-linuxbridge-agent |
| c2dd0384-6517-4b44-9c25-0d2825d23f57 | Metadata agent     | nfv-dpdk-devstack | :-)   | True           | neutron-metadata-agent    |
| f23dde40-7dc0-4f20-8b3e-eb90ddb15e49 | Open vSwitch agent | nfv-dpdk-devstack | xxx   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-------------------+-------+----------------+---------------------------+
ovs-vsctl show output#
--------------------------------------------------------
Bridge br-dpdk
    Port br-dpdk
        Interface br-dpdk
            type: internal
    Port phy-br-dpdk
        Interface phy-br-dpdk
            type: patch
            options: {peer=int-br-dpdk}
Bridge br-int
    fail_mode: secure
    Port "vhufa41e799-f2"
        tag: 5
        Interface "vhufa41e799-f2"
            type: dpdkvhostuser
    Port int-br-dpdk
        Interface int-br-dpdk
            type: patch
            options: {peer=phy-br-dpdk}
    Port "tap4e19f8e1-59"
        tag: 5
        Interface "tap4e19f8e1-59"
            type: internal
    Port "vhu05734c49-3b"
        tag: 5
        Interface "vhu05734c49-3b"
            type: dpdkvhostuser
    Port "vhu10c06b4d-84"
        tag: 5
        Interface "vhu10c06b4d-84"
            type: dpdkvhostuser
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port "vhue169c581-ef"
        tag: 5
        Interface "vhue169c581-ef"
            type: dpdkvhostuser
    Port br-int
        Interface br-int
            type: internal
Bridge br-tun
    fail_mode: secure
    Port br-tun
        Interface br-tun
            type: internal
            error: "could not open network device br-tun (Invalid argument)"
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}
ovs_version: "2.4.0"
--------------------------------------------------------
ovs-ofctl dump-flows br-int#
--------------------------------------------------------
root@nfv-dpdk-devstack:/etc/neutron# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
cookie=0xaaa002bb2bcf827b, duration=2410.012s, table=0, n_packets=0,
n_bytes=0, idle_age=2410, priority=10,icmp6,in_port=43,icmp_type=136
actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2409.480s, table=0, n_packets=0,
n_bytes=0, idle_age=2409, priority=10,icmp6,in_port=44,icmp_type=136
actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2408.704s, table=0, n_packets=0,
n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=45,icmp_type=136
actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2408.155s, table=0, n_packets=0,
n_bytes=0, idle_age=2408, priority=10,icmp6,in_port=42,icmp_type=136
actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2409.858s, table=0, n_packets=0,
n_bytes=0, idle_age=2409, priority=10,arp,in_port=43 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2409.314s, table=0, n_packets=0,
n_bytes=0, idle_age=2409, priority=10,arp,in_port=44 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2408.564s, table=0, n_packets=0,
n_bytes=0, idle_age=2408, priority=10,arp,in_port=45 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2408.019s, table=0, n_packets=0,
n_bytes=0, idle_age=2408, priority=10,arp,in_port=42 actions=resubmit(,24)
cookie=0xaaa002bb2bcf827b, duration=2411.538s, table=0, n_packets=0,
n_bytes=0, idle_age=2411, priority=3,in_port=1,dl_vlan=346
actions=mod_vlan_vid:5,NORMAL
cookie=0xaaa002bb2bcf827b, duration=2415.038s, table=0, n_packets=0,
n_bytes=0, idle_age=2415, priority=2,in_port=1 actions=drop
cookie=0xaaa002bb2bcf827b, duration=2416.148s, table=0, n_packets=0,
n_bytes=0, idle_age=2416, priority=0 actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2416.059s, table=23, n_packets=0,
n_bytes=0, idle_age=2416, priority=0 actions=drop
cookie=0xaaa002bb2bcf827b, duration=2410.101s, table=24, n_packets=0,
n_bytes=0, idle_age=2410,
priority=2,icmp6,in_port=43,icmp_type=136,nd_target=fe80::f816:3eff:fe81:da61
actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2409.571s, table=24, n_packets=0,
n_bytes=0, idle_age=2409,
priority=2,icmp6,in_port=44,icmp_type=136,nd_target=fe80::f816:3eff:fe73:254
actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2408.775s, table=24, n_packets=0,
n_bytes=0, idle_age=2408,
priority=2,icmp6,in_port=45,icmp_type=136,nd_target=fe80::f816:3eff:fe88:5cc
actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2408.231s, table=24, n_packets=0,
n_bytes=0, idle_age=2408,
priority=2,icmp6,in_port=42,icmp_type=136,nd_target=fe80::f816:3eff:fe86:f5f7
actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2409.930s, table=24, n_packets=0,
n_bytes=0, idle_age=2409, priority=2,arp,in_port=43,arp_spa=20.20.20.14
actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2409.389s, table=24, n_packets=0,
n_bytes=0, idle_age=2409, priority=2,arp,in_port=44,arp_spa=20.20.20.16
actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2408.633s, table=24, n_packets=0,
n_bytes=0, idle_age=2408, priority=2,arp,in_port=45,arp_spa=20.20.20.17
actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2408.085s, table=24, n_packets=0,
n_bytes=0, idle_age=2408, priority=2,arp,in_port=42,arp_spa=20.20.20.13
actions=NORMAL
cookie=0xaaa002bb2bcf827b, duration=2415.974s, table=24, n_packets=0,
n_bytes=0, idle_age=2415, priority=0 actions=drop
root@nfv-dpdk-devstack:/etc/neutron#
--------------------------------------------------------
Any hint that helps us get past this inter-VM and DHCP communication issue
would be greatly appreciated.
Thanks & Regards
Abhijeet Karve
"Czesnowicz, Przemyslaw" ---01/04/2016 07:54:52 PM---You should be able to
clone networking-ovs-dpdk, switch to kilo branch, ÿand run python setup.py ins
From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnow...@intel.com>
To: Abhijeet Karve <abhijeet.ka...@tcs.com>
Cc: "d...@dpdk.org" <d...@dpdk.org>, "discuss@openvswitch.org"
<discuss@openvswitch.org>, "Gray, Mark D" <mark.d.g...@intel.com>
Date: 01/04/2016 07:54 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
Getting memory backing issues with qemu parameter passing
You should be able to clone networking-ovs-dpdk, switch to the kilo branch, and
run
python setup.py install
in the root of networking-ovs-dpdk; that should install the agent and mech driver.
Then you would need to enable the mech driver (ovsdpdk) on the controller in
/etc/neutron/plugins/ml2/ml2_conf.ini
and run the right agent on the computes (networking-ovs-dpdk-agent).
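Roughly, something like the following should do it (the mechanism_drivers line
and the agent's config-file options below are only an example; adjust them to
your deployment):
  # install the agent and mechanism driver (controller and computes):
  git clone https://github.com/openstack/networking-ovs-dpdk.git
  cd networking-ovs-dpdk
  git checkout stable/kilo
  python setup.py install

  # enable the driver on the controller in /etc/neutron/plugins/ml2/ml2_conf.ini:
  #   [ml2]
  #   mechanism_drivers = openvswitch,ovsdpdk

  # on each compute node, run the DPDK agent instead of neutron-openvswitch-agent:
  networking-ovs-dpdk-agent --config-file /etc/neutron/neutron.conf \
      --config-file /etc/neutron/plugins/ml2/ml2_conf.ini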
There should be pip packages of networking-ovs-dpdk available shortly;
I'll let you know when that happens.
Przemek
From: Abhijeet Karve [mailto:abhijeet.ka...@tcs.com]
Sent: Thursday, December 24, 2015 6:42 PM
To: Czesnowicz, Przemyslaw
Cc: d...@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved# Getting
memory backing issues with qemu parameter passing
Hi Przemek,
Thank you so much for your quick response.
The guide you suggested
(https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst)
covers OpenStack vhost-user installation with devstack.
Is there any reference for enabling the ovs-dpdk mechanism driver on the
OpenStack Ubuntu distribution we are using for our compute + controller node
setup?
We are facing the issues listed below with our current approach: setting up
OpenStack Kilo interactively, replacing OVS with DPDK-enabled OVS, creating an
instance in OpenStack, and then passing that instance ID to the QEMU command
line, which in turn passes the vhost-user sockets to the instance to enable the
DPDK libraries in it.
1. We created a flavor m1.hugepages which is backed by hugepage memory, but we
are unable to spawn an instance with this flavor; we get an error like "No
matching hugetlbfs for the number of hugepages assigned to the flavor."
(see the sketch after this list).
2. When we pass the socket info to instances via QEMU manually, the instances
created are not persistent.
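(For reference, our understanding is that a hugepage-backed flavor is normally
created with the hw:mem_page_size extra spec, roughly as below; the RAM, disk
and vCPU values are only examples, not taken from our setup:)
  nova flavor-create m1.hugepages auto 2048 20 2
  nova flavor-key m1.hugepages set hw:mem_page_size=large
  # "large" lets nova pick any available hugepage size; 2048 or 1048576
  # can be used instead to force 2 MB or 1 GB pages.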
Now, as you suggested, we are looking into enabling the ovsdpdk ML2 mechanism
driver and agent in our OpenStack Ubuntu distribution.
We would really appreciate any help or reference with an explanation.
We are using a compute + controller node setup, with the following software
platform on the compute node:
_____________
OpenStack: Kilo
Distribution: Ubuntu 14.04
OVS Version: 2.4.0
DPDK: 2.0.0
_____________
Thanks,
Abhijeet Karve
From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnow...@intel.com>
To: Abhijeet Karve <abhijeet.ka...@tcs.com>
Cc: "d...@dpdk.org" <d...@dpdk.org>, "discuss@openvswitch.org"
<discuss@openvswitch.org>, "Gray, Mark D" <mark.d.g...@intel.com>
Date: 12/17/2015 06:32 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
Successfully setup DPDK OVS with vhostuser
I haven't tried that approach and am not sure if it would work; it seems
clunky.
If you enable the ovsdpdk ML2 mechanism driver and agent, all of that (adding
ports to OVS with the right type, passing the sockets to QEMU) is done by
OpenStack.
Przemek
From: Abhijeet Karve [mailto:abhijeet.ka...@tcs.com]
Sent: Thursday, December 17, 2015 12:41 PM
To: Czesnowicz, Przemyslaw
Cc: d...@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
Successfully setup DPDK OVS with vhostuser
Hi Przemek,
Thank you so much for sharing the reference guide.
We would appreciate it if you could clear up one doubt.
At present we are setting up OpenStack Kilo interactively and then replacing
OVS with DPDK-enabled OVS.
Once that setup is done, we create an instance in OpenStack and pass its
instance ID to the QEMU command line, which in turn passes the vhost-user
sockets to the instance, enabling the DPDK libraries in it (a rough sketch
follows below).
Isn't this the correct way of integrating ovs-dpdk with OpenStack?
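(By passing the sockets manually we mean a QEMU invocation roughly along these
lines; the socket path, memory size, hugepage mount point and image name are
illustrative, not our exact command:)
  qemu-system-x86_64 -enable-kvm -m 2048 \
    -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge_2M,share=on \
    -numa node,memdev=mem -mem-prealloc \
    -chardev socket,id=char0,path=/usr/local/var/run/openvswitch/vhost-user-0 \
    -netdev type=vhost-user,id=net0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=net0 \
    -drive file=cirros-0.3.4-x86_64-disk.img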
Thanks & Regards
Abhijeet Karve
From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnow...@intel.com>
To: Abhijeet Karve <abhijeet.ka...@tcs.com>
Cc: "d...@dpdk.org" <d...@dpdk.org>, "discuss@openvswitch.org"
<discuss@openvswitch.org>, "Gray, Mark D" <mark.d.g...@intel.com>
Date: 12/17/2015 05:27 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
Successfully setup DPDK OVS with vhostuser
Hi Abhijeet,
For Kilo you need to use the ovsdpdk mechanism driver and a matching agent to
integrate ovs-dpdk with OpenStack.
The guide you are following only covers running ovs-dpdk, not how it should be
integrated with OpenStack.
Please follow this guide:
https://github.com/openstack/networking-ovs-dpdk/blob/stable/kilo/doc/source/getstarted/ubuntu.rst
Best regards
Przemek
From: Abhijeet Karve [mailto:abhijeet.ka...@tcs.com]
Sent: Wednesday, December 16, 2015 9:37 AM
To: Czesnowicz, Przemyslaw
Cc: d...@dpdk.org; discuss@openvswitch.org; Gray, Mark D
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
Successfully setup DPDK OVS with vhostuser
Hi Przemek,
We have configured the accelerated data path between a physical interface and
the VM using Open vSwitch netdev-dpdk with vhost-user support. A VM created
with this special data path and vhost library is what I am calling a DPDK
instance.
If we assign IPs manually to the newly created CirrOS VM instances, we are able
to get two VMs on the same compute node to communicate. Otherwise no IP is
assigned through DHCP, even though the DHCP agent runs on the compute node
itself.
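(Assigning the addresses manually inside the CirrOS guests is simply something
like the following; the subnet is illustrative:)
  sudo ip addr add 192.168.10.11/24 dev eth0
  sudo ip link set eth0 up
  ping -c 3 192.168.10.12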
Yes, it is a compute + controller node setup, and we are using the following
software platform on the compute node:
_____________
OpenStack: Kilo
Distribution: Ubuntu 14.04
OVS Version: 2.4.0
DPDK: 2.0.0
_____________
We are following the Intel guide:
https://software.intel.com/en-us/blogs/2015/06/09/building-vhost-user-for-ovs-today-using-dpdk-200
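(In that guide the DPDK bridge and vhost-user ports are created roughly as
follows, which matches the br0/dpdk0/vhost-user-* names in the output below; a
sketch, not our exact commands:)
  ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
  ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
  ovs-vsctl add-port br0 vhost-user-0 -- set Interface vhost-user-0 type=dpdkvhostuser
  ovs-vsctl add-port br0 vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser
  ovs-vsctl add-port br0 vhost-user-2 -- set Interface vhost-user-2 type=dpdkvhostuser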
When we run "ovs-vsctl show" on the compute node, it shows the following output:
_____________________________________________
ovs-vsctl show
c2ec29a5-992d-4875-8adc-1265c23e0304
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        fail_mode: secure
        Port "qvo0ae19a43-b6"
            tag: 2
            Interface "qvo0ae19a43-b6"
        Port br-int
            Interface br-int
                type: internal
        Port "qvo31c89856-a2"
            tag: 1
            Interface "qvo31c89856-a2"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "qvo97fef28a-ec"
            tag: 2
            Interface "qvo97fef28a-ec"
    Bridge br-dpdk
        Port br-dpdk
            Interface br-dpdk
                type: internal
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
        Port "vhost-user-2"
            Interface "vhost-user-2"
                type: dpdkvhostuser
        Port "vhost-user-0"
            Interface "vhost-user-0"
                type: dpdkvhostuser
        Port "vhost-user-1"
            Interface "vhost-user-1"
                type: dpdkvhostuser
    ovs_version: "2.4.0"
root@dpdk:~#
_____________________________________________
The OpenFlow entries on the bridges on the compute node are shown below:
_____________________________________________
root@dpdk:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=71796.741s, table=0, n_packets=519, n_bytes=33794,
idle_age=19982, hard_age=65534, priority=1,in_port=1 actions=resubmit(,2)
cookie=0x0, duration=71796.700s, table=0, n_packets=0, n_bytes=0,
idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.649s, table=2, n_packets=0, n_bytes=0,
idle_age=65534, hard_age=65534,
priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
cookie=0x0, duration=71796.610s, table=2, n_packets=519, n_bytes=33794,
idle_age=19982, hard_age=65534,
priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
cookie=0x0, duration=71794.631s, table=3, n_packets=0, n_bytes=0,
idle_age=65534, hard_age=65534, priority=1,tun_id=0x5c
actions=mod_vlan_vid:2,resubmit(,10)
cookie=0x0, duration=71794.316s, table=3, n_packets=0, n_bytes=0,
idle_age=65534, hard_age=65534, priority=1,tun_id=0x57
actions=mod_vlan_vid:1,resubmit(,10)
cookie=0x0, duration=71796.565s, table=3, n_packets=0, n_bytes=0,
idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.522s, table=4, n_packets=0, n_bytes=0,
idle_age=65534, hard_age=65534, priority=0 actions=drop
cookie=0x0, duration=71796.481s, table=10, n_packets=0, n_bytes=0,
idle_age=65534, hard_age=65534, priority=1
actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
cookie=0x0, duration=71796.439s, table=20, n_packets=0, n_bytes=0,
idle_age=65534, hard_age=65534, priority=0 actions=resubmit(,22)
cookie=0x0, duration=71796.398s, table=22, n_packets=519, n_bytes=33794,
idle_age=19982, hard_age=65534, priority=0 actions=drop
root@dpdk:~#
root@dpdk:~# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=71801.275s, table=0, n_packets=0, n_bytes=0,
idle_age=65534, hard_age=65534, priority=2,in_port=10 actions=drop
cookie=0x0, duration=71801.862s, table=0, n_packets=661, n_bytes=48912,
idle_age=19981, hard_age=65534, priority=1 actions=NORMAL
cookie=0x0, duration=71801.817s, table=23, n_packets=0, n_bytes=0,
idle_age=65534, hard_age=65534, priority=0 actions=drop
root@dpdk:~#
_____________________________________________
Further, we don't know what network changes (packet flow additions), if any,
are required for the instances to obtain IP addresses through DHCP.
We would really appreciate any clarity on how the DHCP flows should be
established.
Thanks & Regards
Abhijeet Karve
From: "Czesnowicz, Przemyslaw" <przemyslaw.czesnow...@intel.com>
To: Abhijeet Karve <abhijeet.ka...@tcs.com>, "Gray, Mark D"
<mark.d.g...@intel.com>
Cc: "d...@dpdk.org" <d...@dpdk.org>, "discuss@openvswitch.org"
<discuss@openvswitch.org>
Date: 12/15/2015 09:13 PM
Subject: RE: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
Successfully setup DPDK OVS with vhostuser
Hi Abhijeet,
If you answer the questions below, it will help me understand your problem.
What do you mean by a DPDK instance?
Are you able to communicate with other VMs on the same compute node?
Can you check whether the DHCP requests arrive on the controller node? (I'm
assuming this is at least a compute + controller setup.)
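(One rough way to check: on the node running the DHCP agent, find the qdhcp
namespace for your network and watch for DHCP traffic on its tap interface;
substitute your own network UUID and interface name.)
  ip netns list
  sudo ip netns exec qdhcp-<network-uuid> tcpdump -eni <tap-interface> port 67 or port 68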
Best regards
Przemek
> -----Original Message-----
> From: dev [mailto:dev-boun...@dpdk.org] On Behalf Of Abhijeet Karve
> Sent: Tuesday, December 15, 2015 5:56 AM
> To: Gray, Mark D
> Cc: d...@dpdk.org; discuss@openvswitch.org
> Subject: Re: [dpdk-dev] DPDK OVS on Ubuntu 14.04# Issue's Resolved#
> Successfully setup DPDK OVS with vhostuser
>
> Dear All,
>
> After setting up the system boot parameters as shown below, the issue is
> resolved now and we are able to successfully set up Open vSwitch netdev-dpdk
> with vhost-user support.
>
> __________________________________________________________
> _______________________________________________________
> Set up 2 sets of huge pages with different sizes, one for vhost and another
> for the guest VM.
>      Edit /etc/default/grub:
>          GRUB_CMDLINE_LINUX="iommu=pt intel_iommu=on hugepagesz=1G
> hugepages=10 hugepagesz=2M hugepages=4096"
>      # update-grub
>      - Mount the huge pages into different directories:
>        # sudo mount -t hugetlbfs nodev /mnt/huge_2M -o pagesize=2M
>        # sudo mount -t hugetlbfs nodev /mnt/huge_1G -o pagesize=1G
> __________________________________________________________
> _______________________________________________________
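> (To confirm the reservations took effect after reboot, a quick check such as
> the following can be used; a sketch:)
>   grep Huge /proc/meminfo
>   cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
>   cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages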
>
> At present we are facing an issue in testing a DPDK application on this
> setup. In our scenario, we have a DPDK instance launched on top of the
> OpenStack Kilo compute node, and it is not able to obtain a DHCP IP from the
> controller.
>
>
> Thanks & Regards
> Abhijeet Karve
>
>
[attachment "nova-scheduler.log" removed by Abhijeet Karve/AHD/TCS]
[attachment "nova-compute.log" removed by Abhijeet Karve/AHD/TCS]
[attachment "neutron-server.log" removed by Abhijeet Karve/AHD/TCS]