Thanks Mark - I got the VM console working now.

Charlie

On Wed, Aug 3, 2016 at 11:06 AM, Kavanagh, Mark B <mark.b.kavan...@intel.com> wrote:

> >
> >Thanks Mark,
> >
> >That is a very good point.
> >
> >I just found out that I forgot to add "--socket-mem 1024,0" after increasing
> >the page count from 1 to 4.
> >
> >With "sudo ./ovs-vswitchd --dpdk -c 0x1 -n 4 --socket-mem 1024,0 -- unix:$DB_SOCK --pidfile --detach", the problem is gone.
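
The arithmetic behind that fix can be sketched as follows (a sketch using only numbers quoted in this thread): with four 1G pages, reserving 1024 MB for OVS-DPDK via --socket-mem leaves exactly the 3072 MB that qemu requests for the guest's backing file.

```shell
# Hugepage accounting sketch -- all numbers come from this thread.
page_mb=1024   # default_hugepagesz=1G on the kernel command line
pages=4        # hugepages=4 on the kernel command line
ovs_mb=1024    # ovs-vswitchd --socket-mem 1024,0 (NUMA node 0 only)
vm_mb=3072     # qemu -m 3072 backed by memory-backend-file

left_mb=$(( pages * page_mb - ovs_mb ))
echo "left for guest: ${left_mb} MB (guest needs ${vm_mb} MB)"
```

Without an explicit --socket-mem, the DPDK EAL of that era could reserve far more hugepage memory than expected, which is consistent with HugePages_Free showing 0 earlier in the thread.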
> >
> >Now I got:
> >
> >qemu-system-x86_64: -netdev type=vhost-user,id=net1,chardev=char1,vhostforce: chardev "char1" went up
> >qemu-system-x86_64: -netdev type=vhost-user,id=net2,chardev=char2,vhostforce: chardev "char2" went up
> >
> >Then the terminal freezes.
> >
> >How can I get into the console of the VM?
>
> QEMU usage is beyond the scope of the OVS mailing list, Charlie; having said
> that, you have a few options for accessing the guest :)
>
> Try adding one of the following options to your qemu command line:
>     -nographic     # disable graphical output and redirect serial I/O to the console
>     -vnc :display  # start a VNC server on 'display'; access the guest's console using vncviewer
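>
> A hedged sketch of how those options slot into a command line (the image
> name guest.qcow2 and the memory size are placeholders, not taken from this
> thread):
>
> ```shell
> # Hypothetical minimal invocations -- substitute your own image and options.
> # Serial console in the launching terminal:
> qemu-system-x86_64 -m 512 -hda guest.qcow2 -nographic
>
> # Or expose VNC display :14 and attach from another terminal:
> qemu-system-x86_64 -m 512 -hda guest.qcow2 -vnc :14 &
> vncviewer localhost:14
> ```
>
> Note that -nographic and -vnc are alternatives; passing both (as in the full
> command later in this thread) leaves the serial console on the terminal.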
>
> Hope this helps - Mark.
>
> >
> >Regards,
> >Charlie
> >
> >On Wed, Aug 3, 2016 at 9:24 AM, Kavanagh, Mark B <mark.b.kavan...@intel.com> wrote:
> >>
> >>Thanks Sugesh for your response.
> >>
> >>I have 4 1G hugepages allocated and the VM is requesting 3G memory.
> >>
> >>$ cat /proc/cmdline
> >>BOOT_IMAGE=/vmlinuz-4.5.7-200.fc23.x86_64 root=/dev/mapper/fedora--desktop-root ro rd.lvm.lv=fedora-desktop/root rd.lvm.lv=fedora-desktop/swap default_hugepagesz=1G hugepagesz=1G hugepages=4 rhgb quiet
> >>
> >>$ grep Huge /proc/meminfo
> >>AnonHugePages:         0 kB
> >>HugePages_Total:       4
> >>HugePages_Free:        0
> >>HugePages_Rsvd:        0
> >>HugePages_Surp:        0
> >>Hugepagesize:    1048576 kB
> >
> >Hi Charlie,
> >
> >Even though you have 4 pages mounted, none are available (see
> >HugePages_Free = 0, above).
> >
> >Have you tried unmounting and remounting the hugepages?
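> >
> >A remount sequence might look like the following sketch (run as root; the
> >/dev/hugepages mount point is assumed from the qemu command later in this
> >thread). Stale backing files left on the hugetlbfs mount by a previous
> >vswitchd/qemu run can also pin pages, so clearing those frees pages too.
> >
> >```shell
> ># Sketch only: remount hugetlbfs so previously pinned pages are released.
> >umount /dev/hugepages
> >mount -t hugetlbfs hugetlbfs /dev/hugepages
> >
> ># Confirm pages are free again before restarting ovs-vswitchd and qemu:
> >awk '/HugePages_Free/ {print $2}' /proc/meminfo
> >```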
> >
> >Thanks,
> >Mark
> >
> >>
> >>Regards,
> >>
> >>Charlie
> >>
> >>On Tue, Aug 2, 2016 at 5:23 PM, Chandran, Sugesh <sugesh.chand...@intel.com> wrote:
> >>
> >>
> >>Regards
> >>_Sugesh
> >>
> >>From: discuss [mailto:discuss-boun...@openvswitch.org] On Behalf Of Charlie Li
> >>Sent: Tuesday, August 2, 2016 4:28 PM
> >>To: discuss@openvswitch.org
> >>Subject: [ovs-discuss] unable to map backing store for hugepages: Cannot allocate memory
> >>
> >>Hi All,
> >>
> >>I am trying to use dpdkvhostuser ports to pass traffic to a VM.
> >>
> >>Here is my basic system configuration.
> >>
> >>Host and VM OS: Fedora server 23
> >>DPDK 2.2.0
> >>OVS 2.5.0
> >>QEMU 2.4.1
> >>
> >>When I tried to start the VM, it got the following error:
> >>
> >>qemu-system-x86_64: -object memory-backend-file,id=mem,size=3072M,mem-path=/dev/hugepages,share=on: unable to map backing store for hugepages: Cannot allocate memory
> >>
> >>I must have done something wrong.
> >>Any help is appreciated.
> >>
> >>Thanks,
> >>
> >>Charlie
> >>
> >>More details
> >>---------------------------------------------------------------------------------------------------------------
> >>
> >>[cli@cli-desktop ~]$ grep Huge /proc/meminfo
> >>AnonHugePages:         0 kB
> >>HugePages_Total:       4
> >>HugePages_Free:        0
> >>HugePages_Rsvd:        0
> >>HugePages_Surp:        0
> >>Hugepagesize:    1048576 kB
> >>
> >>[cli@cli-desktop utilities]$ sudo ./ovs-vsctl show
> >>4fcf27a6-edbd-4770-ab39-f440e3532bcc
> >>    Bridge "br0"
> >>        Port "vhost0"
> >>            Interface "vhost0"
> >>                type: dpdkvhostuser
> >>        Port "vhost1"
> >>            Interface "vhost1"
> >>                type: dpdkvhostuser
> >>        Port "br0"
> >>            Interface "br0"
> >>                type: internal
> >>        Port "dpdk1"
> >>            Interface "dpdk1"
> >>                type: dpdk
> >>        Port "dpdk0"
> >>            Interface "dpdk0"
> >>                type: dpdk
> >>
> >>[cli@cli-desktop utilities]$ sudo ./ovs-vsctl list-ports br0
> >>dpdk0
> >>dpdk1
> >>vhost0
> >>vhost1
> >>
> >>[cli@cli-desktop utilities]$ sudo ./ovs-ofctl dump-flows br0
> >>NXST_FLOW reply (xid=0x4):
> >> cookie=0x0, duration=61391.667s, table=0, n_packets=0, n_bytes=0, idle_age=61391, in_port=1 actions=output:3
> >> cookie=0x0, duration=61391.650s, table=0, n_packets=0, n_bytes=0, idle_age=61391, in_port=2 actions=output:4
> >> cookie=0x0, duration=61391.632s, table=0, n_packets=0, n_bytes=0, idle_age=61391, in_port=3 actions=output:1
> >> cookie=0x0, duration=61391.616s, table=0, n_packets=0, n_bytes=0, idle_age=61391, in_port=4 actions=output:2
> >>
> >>[cli@cli-desktop utilities]$ sudo qemu-system-x86_64 -m 3072 -cpu host \
> >>    -hda /home/cli/VM1/FC23.qcow2 -boot c -enable-kvm \
> >>    -pidfile /home/cli/VM1/vm1.pid \
> >>    -monitor unix:/home/cli/VM1/vm1monitor,server,nowait \
> >>    -name 'FC23-VM1' -net none -no-reboot \
> >>    -object memory-backend-file,id=mem,size=3072M,mem-path=/dev/hugepages,share=on \
> >>    -numa node,memdev=mem -mem-prealloc \
> >>    -chardev socket,id=char1,path=/usr/local/var/run/openvswitch/vhost0 \
> >>    -netdev type=vhost-user,id=net1,chardev=char1,vhostforce \
> >>    -device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,mrg_rxbuf=off \
> >>    -chardev socket,id=char2,path=/usr/local/var/run/openvswitch/vhost1 \
> >>    -netdev type=vhost-user,id=net2,chardev=char2,vhostforce \
> >>    -device virtio-net-pci,netdev=net2,mac=00:00:00:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off,mrg_rxbuf=off \
> >>    --nographic -vnc :14
> >>
> >>qemu-system-x86_64: -object memory-backend-file,id=mem,size=3072M,mem-path=/dev/hugepages,share=on: unable to map backing store for hugepages: Cannot allocate memory
> >>[Sugesh] Maybe you are out of hugepages? How much memory are you allocating for OVS-DPDK?
> >>
> >>
>
>
_______________________________________________
discuss mailing list
discuss@openvswitch.org
http://openvswitch.org/mailman/listinfo/discuss
