Hi Scott,

Thanks for the reply. You were totally right - this was not related to OvS or DPDK. I resolved the problem with the command below:
sudo virt-sysprep -a Fedora-Cloud-Base-23-20151030.x86_64.qcow2 --root-password password:xxx

Now I have another problem - of course :) - I created two VMs according to the Intel article (https://software.intel.com/en-us/articles/using-open-vswitch-with-dpdk-for-inter-vm-nfv-applications) and INSTALL.dpdk, but I still cannot ping one guest VM from the other, even though the related flow entries exist. My configuration is below. What am I missing? Should I be doing this a different way?

Thanks in advance - Volkan

=============================
sudo qemu-system-x86_64 -m 1024 -smp 4 -cpu host -enable-kvm -no-reboot -nographic -net none \
  -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost-user1 \
  -netdev type=vhost-user,id=mynet1,chardev=char1,vhostforce \
  -device virtio-net-pci,mac=00:00:00:00:00:01,netdev=mynet1 \
  -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on \
  -numa node,memdev=mem -mem-prealloc \
  -fda trusty-server-cloudimg-amd64-floppy -drive if=virtio,file=disk1.qcow2 -boot a

sudo qemu-system-x86_64 -m 1024 -smp 4 -cpu host -enable-kvm -no-reboot -nographic -net none \
  -chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost-user2 \
  -netdev type=vhost-user,id=mynet2,chardev=char2,vhostforce \
  -device virtio-net-pci,mac=00:00:00:00:00:02,netdev=mynet2 \
  -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on \
  -numa node,memdev=mem -mem-prealloc \
  -fda trusty-server-cloudimg-amd64-floppy -drive if=virtio,file=disk2.qcow2 -boot a
=============================
on VM1
=============================
ubuntu@ubuntu:~$ ifconfig -a eth0
eth0      Link encap:Ethernet  HWaddr 00:00:00:00:00:01
          inet addr:10.0.0.101  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::200:ff:fe00:1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:218 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:62892 (62.8 KB)
=============================
and on VM2
=============================
ubuntu@ubuntu:~$ ifconfig -a eth0
eth0      Link encap:Ethernet  HWaddr 00:00:00:00:00:02
          inet addr:10.0.0.102  Bcast:10.0.0.255  Mask:255.255.255.0
          inet6 addr: fe80::200:ff:fe00:2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:245 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:64338 (64.3 KB)
=============================
and on the host with OvS+DPDK
=============================
argela@argela-HP-Z800-Workstation:~$ sudo ovs-vsctl show
456e3026-6f52-49be-8dbd-d7aba47c790f
    Bridge "br0"
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
        Port "br0"
            Interface "br0"
                type: internal
        Port "vhost-user1"
            Interface "vhost-user1"
                type: dpdkvhostuser
        Port "vhost-user2"
            Interface "vhost-user2"
                type: dpdkvhostuser

argela@argela-HP-Z800-Workstation:~$ sudo ovs-ofctl dump-flows br0
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=3666.672s, table=0, n_packets=0, n_bytes=0, idle_age=3666, in_port=1 actions=output:2
 cookie=0x0, duration=3659.312s, table=0, n_packets=201, n_bytes=57078, idle_age=0, in_port=2 actions=output:1
=============================
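For reference, a bridge and ports like the ones above are typically created along the following lines (a sketch based on INSTALL.dpdk; the exact commands used for this setup are an assumption, not taken from this thread). The ovs-ofctl show step matters because the OpenFlow port numbers referenced by the flows (1 and 2) are assigned by OVS and do not necessarily correspond to vhost-user1 and vhost-user2:

=============================
# Sketch of a typical OVS+DPDK bridge setup per INSTALL.dpdk (assumed, not confirmed):
sudo ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
sudo ovs-vsctl add-port br0 dpdk0 -- set Interface dpdk0 type=dpdk
sudo ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser
sudo ovs-vsctl add-port br0 vhost-user2 -- set Interface vhost-user2 type=dpdkvhostuser

# List each interface with its OpenFlow port number, to verify which
# interfaces the flows "in_port=1 actions=output:2" etc. actually connect:
sudo ovs-ofctl show br0

# If, for example, vhost-user1 and vhost-user2 turned out to be ports 2 and 3
# (with dpdk0 as port 1), the VM-to-VM flows would need to be:
sudo ovs-ofctl add-flow br0 in_port=2,actions=output:3
sudo ovs-ofctl add-flow br0 in_port=3,actions=output:2
=============================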
________________________________________
From: Scott Lowe [scott.l...@scottlowe.org]
Sent: Monday, December 07, 2015 6:19 PM
To: Ali Volkan Atli
Cc: Traynor, Kevin; discuss@openvswitch.org; ashok.em...@intel.com
Subject: Re: [ovs-discuss] DPDK vhost-user socket problem for QEMU

Please see my response below.

> On Dec 7, 2015, at 6:43 AM, Ali Volkan Atli <volkan.a...@argela.com.tr> wrote:
>
> Hi
>
> First, thanks for the reply. The vhost-user port(s) can bind to the related
> socket now, but I have another problem. I followed the link
> (https://software.intel.com/en-us/articles/using-open-vswitch-with-dpdk-for-inter-vm-nfv-applications)
> and while booting a VM with the Fedora-Cloud-Base-23-20151030.x86_64.qcow2
> image via qemu, it gives me the error below. Even so, I can log in to the VM,
> but I cannot connect to it via ssh or reach the Internet from inside it, so I
> cannot transfer anything into the VM. By the way, I don't know whether this
> is related to OvS DPDK or not.
>
> <snip>
> ...
> Starting Initial cloud-init job (metadata service crawler)...
> [   66.669708] cloud-init[681]: Cloud-init v. 0.7.7 running 'init' at Mon, 07 Dec 2015 13:33:13 +0000. Up 65.97 seconds.
> [   66.670147] cloud-init[681]: ci-info: +++++++++++++++++++++++++++Net device info++++++++++++++++++++++++++++
> [   66.671295] cloud-init[681]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+
> [   66.671479] cloud-init[681]: ci-info: | Device |   Up  |  Address  |    Mask   | Scope |     Hw-Address    |
> [   66.671656] cloud-init[681]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+
> [   66.671835] cloud-init[681]: ci-info: |  lo:   |  True | 127.0.0.1 | 255.0.0.0 |   .   |         .         |
> [   66.672133] cloud-init[681]: ci-info: |  lo:   |  True |     .     |     .     |   d   |         .         |
> [   66.672409] cloud-init[681]: ci-info: | eth0:  |  True |     .     |     .     |   .   | 00:00:00:00:00:01 |
> [   66.672703] cloud-init[681]: ci-info: | eth0:  |  True |     .     |     .     |   d   | 00:00:00:00:00:01 |
> [   66.672968] cloud-init[681]: ci-info: | eth1:  | False |     .     |     .     |   .   | 52:54:00:12:34:56 |
> [   66.673271] cloud-init[681]: ci-info: +--------+-------+-----------+-----------+-------+-------------------+
> [   66.673535] cloud-init[681]: 2015-12-07 13:33:14,286 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [0/120s]: request error [('Connection aborted.', OSError(101, 'Network is unreachable'))]
> [   67.675085] cloud-init[681]: 2015-12-07 13:33:15,290 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [1/120s]: request error [('Connection aborted.', OSError(101, 'Network is unreachable'))]
> ...
> [  185.978295] cloud-init[681]: 2015-12-07 13:35:13,593 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: request error [('Connection aborted.', OSError(101, 'Network is unreachable'))]
> [  192.985262] cloud-init[681]: 2015-12-07 13:35:20,601 - DataSourceEc2.py[CRITICAL]: Giving up on md from ['http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 126 seconds
> [  192.996385] cloud-init[681]: 2015-12-07 13:35:20,608 - DataSourceCloudSigma.py[WARNING]: failed to get hypervisor product name via dmi data
> ...
> Fedora 23 (Cloud Edition)
> Kernel 4.2.3-300.fc23.x86_64 on an x86_64 (ttyS0)
>
> localhost login:

I certainly could be mistaken, but I think this is not related to OVS or DPDK. Most "cloud" images expect the presence of a metadata service in order to perform instance customization. In this case, you're booting a cloud image via QEMU, so there is no metadata service, and therefore no customization (password setting, SSH key injection, etc.) can occur.
The warnings you're seeing are all related to cloud-init, as it is unable to connect to a metadata source. Hope this helps!

--
Scott
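For anyone who lands on this thread: besides virt-sysprep, as used above, another common way to customize a cloud image when no metadata service is available is a local NoCloud seed for cloud-init. A minimal sketch, assuming the cloud-localds tool from the cloud-image-utils package; the password value is a placeholder:

=============================
# Minimal cloud-config that sets a password so console/ssh login works
# without a metadata service (values are placeholders):
cat > user-data <<'EOF'
#cloud-config
password: xxx
chpasswd: { expire: False }
ssh_pwauth: True
EOF

# Build a NoCloud seed image and attach it to the VM as an extra drive:
cloud-localds seed.img user-data
sudo qemu-system-x86_64 ... -drive if=virtio,file=disk1.qcow2 -drive if=virtio,file=seed.img
=============================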