On 2015/2/6 13:54, Xu, Qian Q wrote:
> Haifeng,
> Are you using the latest dpdk branch with the vhost-user patches? I have never
> seen this issue.
> When does the vhost sample crash? When you start the VM, or when you run
> something in the VM? Is your QEMU version 2.2? What is your memory
> configuration? Could you give more details about your steps?
>
>
Hi, Xu
What does "sth" mean?
I am using DPDK at commit a09f3e4c50467512970519943d26d9c5753584e0 and QEMU at
tag v2.2.0.
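(For reproduction, the corresponding checkouts would be something like the
following; the directory layout is illustrative:)
cd dpdk && git checkout a09f3e4c50467512970519943d26d9c5753584e0
cd ../qemu && git checkout v2.2.0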
Here is my host information:
linux-mRFnwZ:/mnt/sdc/linhf/dpdk-vhost-user/dpdk # free
             total       used       free     shared    buffers     cached
Mem:      82450600   22555172   59895428          0    1506132    3205304
-/+ buffers/cache:   17843736   64606864
Swap:            0          0          0
linux-mRFnwZ:/mnt/sdc/linhf/dpdk-vhost-user/dpdk # cat /proc/meminfo |grep Huge
AnonHugePages: 20480 kB
HugePages_Total: 8192
HugePages_Free: 7052
HugePages_Rsvd: 396
HugePages_Surp: 0
Hugepagesize: 2048 kB
linux-mRFnwZ:/mnt/sdc/linhf/dpdk-vhost-user/dpdk # uname -a
Linux linux-mRFnwZ 3.0.93-0.8-default #1 SMP Tue Aug 27 08:44:18 UTC 2013
(70ed288) x86_64 x86_64 x86_64 GNU/Linux
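A quick sanity check on the hugepage budget (rough accounting, assuming 2 MB
pages throughout): vhost-switch is started with -m 2048, i.e. 1024 pages, and
the guest gets a 1024M hugepage-backed memdev, i.e. 512 pages, so the 7052
free pages shown above should be more than enough:
echo $(( 2048 / 2 ))                # pages for vhost-switch (-m 2048) -> 1024
echo $(( 1024 / 2 ))                # pages for the guest memdev (1024M) -> 512
grep HugePages_Free /proc/meminfo   # should stay well above zero after both start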
Here is my guest information:
SUSE 11 SP3 with kernel 3.0.76-0.8-default
Steps:
umount /dev/hugepages/
rmmod igb_uio
rmmod rte_kni
mount -t hugetlbfs nodev /dev/hugepages -o pagesize=2M
echo 8192 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
export RTE_SDK=/mnt/sdc/linhf/dpdk-vhost-user/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
modprobe uio
insmod ${RTE_SDK}/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
dpdk_nic_bind.py -b igb_uio 02:00.0 02:00.1
rmmod vhost_net
modprobe cuse
insmod ${RTE_SDK}/lib/librte_vhost/eventfd_link/eventfd_link.ko
${RTE_SDK}/examples/vhost/build/app/vhost-switch -c 0x300 -n 4 --huge-dir /dev/hugepages -m 2048 -- -p 0x1 --vm2vm 2 --mergeable 0 --zero-copy 0
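As a sanity check that the bind step above took effect (dpdk_nic_bind.py
provides a --status listing):
dpdk_nic_bind.py --status   # 02:00.0 and 02:00.1 should appear under the igb_uio driver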
qemu-system-x86_64 -name vm1 -enable-kvm -smp 2 -m 1024 \
-object memory-backend-file,id=mem,size=1024M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem \
-chardev socket,id=chr0,path=/mnt/sdc/linhf/dpdk-vhost-user/vhost-net \
-netdev type=vhost-user,id=net0,chardev=chr0 \
-device virtio-net-pci,netdev=net0,mac=00:00:00:00:00:01,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
-chardev socket,id=chr1,path=/mnt/sdc/linhf/dpdk-vhost-user/vhost-net \
-netdev type=vhost-user,id=net1,chardev=chr1 \
-device virtio-net-pci,netdev=net1,mac=00:00:00:00:00:02,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \
-drive file=/mnt/sdc/linhf/vm1.img -vnc :0
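Before starting QEMU it is worth confirming that the socket created by
vhost-switch is in place, since both -netdev entries above point at the same
path (illustrative check only):
ls -l /mnt/sdc/linhf/dpdk-vhost-user/vhost-net   # should exist as a unix domain socket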
qemu-system-x86_64: -netdev type=vhost-user,id=net0,chardev=chr0: chardev
"chr0" went up
qemu-system-x86_64: -netdev type=vhost-user,id=net1,chardev=chr1: chardev
"chr1" went up
EAL: Detected lcore 0 as core 0 on socket 1
EAL: Detected lcore 1 as core 1 on socket 1
EAL: Detected lcore 2 as core 9 on socket 1
EAL: Detected lcore 3 as core 10 on socket 1
EAL: Detected lcore 4 as core 0 on socket 0
EAL: Detected lcore 5 as core 1 on socket 0
EAL: Detected lcore 6 as core 9 on socket 0
EAL: Detected lcore 7 as core 10 on socket 0
EAL: Detected lcore 8 as core 0 on socket 1
EAL: Detected lcore 9 as core 1 on socket 1
EAL: Detected lcore 10 as core 9 on socket 1
EAL: Detected lcore 11 as core 10 on socket 1
EAL: Detected lcore 12 as core 0 on socket 0
EAL: Detected lcore 13 as core 1 on socket 0
EAL: Detected lcore 14 as core 9 on socket 0
EAL: Detected lcore 15 as core 10 on socket 0
EAL: Support maximum 128 logical core(s) by configuration.
EAL: Detected 16 lcore(s)
EAL: Setting up memory...
EAL: Ask a virtual area of 0x200000000 bytes
EAL: Virtual area found at 0x7f3b57400000 (size = 0x200000000)
EAL: Ask a virtual area of 0x200000 bytes
EAL: Virtual area found at 0x7f3b57000000 (size = 0x200000)
EAL: Ask a virtual area of 0x1ffe00000 bytes
EAL: Virtual area found at 0x7f3957000000 (size = 0x1ffe00000)
EAL: Requesting 1024 pages of size 2MB from socket 1
EAL: WARNING: clock_gettime cannot use CLOCK_MONOTONIC_RAW and HPET is not
available - clock timings may be less accurate.
EAL: TSC frequency is ~2400234 KHz
EAL: Master core 8 is ready (tid=5a596800)
EAL: Core 9 is ready (tid=58d34700)
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI memory mapped at 0x7f3b57200000
EAL: PCI memory mapped at 0x7f3b57280000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 17, SFP+: 5
PMD: eth_ixgbe_dev_init(): port 0 vendorID=0x8086 deviceID=0x10fb
EAL: PCI device 0000:02:00.1 on NUMA socket -1
EAL: probe driver: 8086:10fb rte_ixgbe_pmd
EAL: PCI memory mapped at 0x7f3b57284000
EAL: PCI memory mapped at 0x7f3b57304000
PMD: eth_ixgbe_dev_init(): MAC: 2, PHY: 1
PMD: eth_ixgbe_dev_init(): port 1 vendorID=0x8086 deviceID=0x10fb
pf queue num: 0, configured vmdq pool num: 64, each vmdq pool has 2 queues
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f3957aebbc0 hw_ring=0x7f3b57028580
dma_addr=0xedf428580
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=0.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst
size no less than 32.
... ...
PMD: ixgbe_dev_rx_queue_setup(): sw_ring=0x7f39579ca040 hw_ring=0x7f39a2eb3b80
dma_addr=0xf2b6b3b80
PMD: ixgbe_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
satisfied. Rx Burst Bulk Alloc function will be used on port=0, queue=127.
PMD: ixgbe_dev_rx_queue_setup(): Vector rx enabled, please make sure RX burst
size no less than 32.
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f39579c7f00 hw_ring=0x7f39a2ec3c00
dma_addr=0xf2b6c3c00
PMD: set_tx_function(): Using full-featured tx code path
PMD: set_tx_function(): - txq_flags = e01 [IXGBE_SIMPLE_FLAGS=f01]
PMD: set_tx_function(): - tx_rs_thresh = 32 [RTE_PMD_IXGBE_TX_MAX_BURST=32]
PMD: ixgbe_dev_tx_queue_setup(): sw_ring=0x7f39579c5dc0 hw_ring=0x7f39a2ed3c00
dma_addr=0xf2b6d3c00
PMD: set_tx_function(): Using full-featured tx code path
PMD: set_tx_function(): - txq_flags = e01 [IXGBE_SIMPLE_FLAGS=f01]
PMD: set_tx_function(): - tx_rs_thresh = 32 [RTE_PMD_IXGBE_TX_MAX_BURST=32]
VHOST_PORT: Max virtio devices supported: 64
VHOST_PORT: Port 0 MAC: 00 1b 21 69 f7 c8
VHOST_PORT: Skipping disabled port 1
VHOST_DATA: Procesing on Core 9 started
VHOST_CONFIG: socket created, fd:15
VHOST_CONFIG: bind to vhost-net
VHOST_CONFIG: new virtio connection is 16
VHOST_CONFIG: new device, handle is 0
VHOST_CONFIG: new virtio connection is 17
VHOST_CONFIG: new device, handle is 1
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:18
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:19
VHOST_CONFIG: read message VHOST_USER_SET_OWNER
VHOST_CONFIG: read message VHOST_USER_GET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:20
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:21
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:22
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:18
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
VHOST_CONFIG: mapped region 0 fd:19 to 0xffffffffffffffff sz:0xa0000 off:0x0
VHOST_CONFIG: mmap qemu guest failed.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
./run_dpdk_vhost.sh: line 19: 20796 Segmentation fault      ${RTE_SDK}/examples/vhost/build/app/vhost-switch -c 0x300 -n 4 --huge-dir /dev/hugepages -m 2048 -- -p 0x1 --vm2vm 2 --mergeable 0 --zero-copy 0
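One observation (a guess on my side, not yet verified in the code):
0xffffffffffffffff is MAP_FAILED, and the size of region 0, 0xa0000 (640 KiB,
the legacy low-memory region), is not a multiple of the 2 MB huge page size,
which mmap() of a hugetlbfs-backed fd generally requires. If vhost-switch then
dereferences the failed mapping, that would explain the segmentation fault.
The alignment is easy to check:
# 0xa0000 is not 2 MiB-aligned, so mmap() of the hugetlbfs fd with this
# exact length is expected to fail (speculative diagnosis):
printf '0x%x\n' $(( 0xa0000 % 0x200000 ))   # prints 0xa0000, non-zero -> unaligned
Rounding the mmap length up to the huge page size before mapping might be one
direction for a fix, but I have not tried it.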
>
> -----Original Message-----
> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
> Sent: Friday, February 06, 2015 12:02 PM
> To: Xu, Qian Q; Xie, Huawei
> Cc: lilijun; liuyongan at huawei.com; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there
> is no buffer
>
>
>
> On 2015/2/4 9:38, Xu, Qian Q wrote:
>> 4. Launch VM1 and VM2 with a virtio device. Note: you need to use a qemu
>> version > 2.1 to enable the vhost-user server feature; old qemu such as
>> 1.5/1.6 did not support it.
>> Below is my VM1 startup command, for your reference, similar for VM2.
>> /home/qemu-2.2.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 -cpu
>> host -enable-kvm -m 2048 -object
>> memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa
>> node,memdev=mem -mem-prealloc -smp 2 -drive file=/home/img/dpdk1-vm1.img
>> -chardev socket,id=char0,path=/home/dpdk-vhost/vhost-net -netdev
>> type=vhost-user,id=mynet1,chardev=char0,vhostforce -device
>> virtio-net-pci,mac=00:00:00:00:00:01, -nographic
>>
>> 5. Then in the VM, you can have the same operations as before, send packet
>> from virtio1 to virtio2.
>>
>> Please let me know if you have any questions or issues.
>
> Hi Xie & Xu,
>
> When I try to start the VM, vhost-switch crashes.
>
> VHOST_CONFIG: read message VHOST_USER_SET_MEM_TABLE
> VHOST_CONFIG: mapped region 0 fd:19 to 0xffffffffffffffff sz:0xa0000 off:0x0
> VHOST_CONFIG: mmap qemu guest failed.
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
> VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
> run_dpdk_vhost.sh: line 19: 1854 Segmentation fault      ${RTE_SDK}/examples/vhost/build/app/vhost-switch -c 0x300 -n 4 --huge-dir /dev/hugepages -m 2048 -- -p 0x1 --vm2vm 2 --mergeable 0 --zero-copy 0
>
>
>
--
Regards,
Haifeng