Hi Xie & Xu,

I found that the new code already tries to notify the guest after sending each packet, since commit 2bbb811. So this bug no longer exists.
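(For anyone else following the thread, here is a minimal, self-contained sketch of the notification rule, just to show the shape of the check. The avail_ring_hdr struct and the notify_guest() helper are illustrative names of my own, not DPDK's real definitions; the actual excerpt from virtio_dev_merge_rx follows right after.)

#include <stdint.h>
#include <sys/eventfd.h>

#define VRING_AVAIL_F_NO_INTERRUPT 1	/* same value as in the virtio spec */

/* Trimmed-down view of the avail ring header, for illustration only. */
struct avail_ring_hdr {
	uint16_t flags;	/* guest sets VRING_AVAIL_F_NO_INTERRUPT to suppress kicks */
	uint16_t idx;
};

/* Kick the guest through the queue's eventfd unless it asked us not to.
 * Since commit 2bbb811 this check runs once per delivered packet, so the
 * extra kick from the earlier patch should not be needed any more. */
static void notify_guest(const struct avail_ring_hdr *avail, int kickfd)
{
	if (!(avail->flags & VRING_AVAIL_F_NO_INTERRUPT))
		eventfd_write(kickfd, 1);
}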
static inline uint32_t __attribute__((always_inline))
virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
	struct rte_mbuf **pkts, uint32_t count)
{
	...
	...
	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
		...
		...
		/* Kick the guest if necessary. */
		if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))
			eventfd_write((int)vq->kickfd, 1);
	}
	return count;
}

thank you very much!

On 2015/1/30 16:20, Xu, Qian Q wrote:
> Haifeng
> Could you give more information so that we can reproduce your issue? Thanks.
> 1. What's your dpdk package, based on which branch, with Huawei's vhost-user's patches?
> 2. What's your step and command to launch vhost sample?
> 3. What is mz? Your internal tool? I can't yum install mz or download mz tool.
> 4. As to your test scenario, I understand it in this way: virtio1 in VM1, virtio2 in VM2, then let virtio1 send packages to virtio2, the problem is that after 3 hours, virtio2 can't receive packets, but virtio1 is still sending packets, am I right? So mz is like a packet generator to send packets, right?
>
> -----Original Message-----
> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Linhaifeng
> Sent: Thursday, January 29, 2015 9:51 PM
> To: Xie, Huawei; dev at dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
>
> On 2015/1/29 21:00, Xie, Huawei wrote:
>>
>>> -----Original Message-----
>>> From: Linhaifeng [mailto:haifeng.lin at huawei.com]
>>> Sent: Thursday, January 29, 2015 8:39 PM
>>> To: Xie, Huawei; dev at dpdk.org
>>> Subject: Re: [dpdk-dev] [PATCH] vhost: notify guest to fill buffer when there is no buffer
>>>
>>> On 2015/1/29 18:39, Xie, Huawei wrote:
>>>
>>>>> -	if (count == 0)
>>>>> +	/* If there is no buffers we should notify guest to fill.
>>>>> +	 * This is need when guest use virtio_net driver(not pmd).
>>>>> +	 */
>>>>> +	if (count == 0) {
>>>>> +		if (!(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT))
>>>>> +			eventfd_write((int)vq->kickfd, 1);
>>>>> 		return 0;
>>>>> +	}
>>>>
>>>> Haifeng:
>>>> Is it the root cause and is it protocol required?
>>>> Could you give a detailed description for that scenario?
>>>>
>>> I use mz to send data from one VM1 to VM2.The two VM use virtio-net driver.
>>> VM1 execute follow script:
>>> for((i=0;i<999999999;i++));
>>> do
>>> 	mz eth0 -t udp -A 1.1.1.1 -B 1.1.1.2 -a 00:00:00:00:00:01 -b 00:00:00:00:00:02 -c 10000000 -p 512
>>> 	sleep 4
>>> done
>>>
>>> VM2 execute follow command to watch:
>>> watch -d ifconfig
>>>
>>> After many hours VM2 stop to receive data.
>>>
>>> Could you test it ?
>>
>> We could try next week after I send the whole patch.
>> How many hours? Is it reproducible at your side? I inject packets through packet generator to guest for more than ten hours, haven't met issues.
>
> About three hours.
> What kind of driver you used in guest?virtio-net-pmd or virtio-net?
>
>> As I said in another mail sent to you, could you dump the status of vring if you still have the spot?
>
> How to dump the status of vring in guest?
>
>> Could you please also reply to that mail?
>
> Which mail?
>
>> For the patch, if we have no root cause, I prefer not to apply it, so that we don't send more interrupts than needed to guest to affect performance.
>
> I found that if we add this notify the performance is better(growth of 100kpps when use 64byte UDP packets)
>
>> People could temporarily apply this patch as a work around.
>>
>> Or anyone
>
> OK.I'm also not sure about this bug.I think i should do something to found the real reason.
>
>>> --
>>> Regards,
>>> Haifeng

--
Regards,
Haifeng