Yes, that is correct. Closing the socket on failure needs to be added.
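For reference, this is roughly how the error path would read with the close() folded in. It is only a sketch based on the snippet quoted below, not a tested patch:

    ret = fdset_add(&g_vhost_server.fdset, vserver->listenfd,
            vserver_new_vq_conn, NULL, vserver);
    if (ret < 0) {
        pthread_mutex_unlock(&g_vhost_server.server_mutex);
        RTE_LOG(ERR, VHOST_CONFIG,
            "failed to add listen fd %d to vhost server fdset\n",
            vserver->listenfd);
        /* Nothing references the listen socket after this point,
         * so close it before freeing the server struct to avoid
         * leaking the fd. */
        close(vserver->listenfd);
        free(vserver->path);
        free(vserver);
        return -1;
    }

Whether the close() goes before or after the free() calls does not matter functionally; keeping it next to the struct that owns the fd just makes the cleanup easier to follow.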
Regards,

Patrik

On 04/11/2016 11:34 AM, Christian Ehrhardt wrote:
> I like the approach as well to go for the fix for robustness first.
>
> I was accidentally able to find another testcase that hits the same root
> cause.
> Adding guests with 15 vhost_user based NICs each, while having rxq for
> openvswitch-dpdk set to 4 and multiqueue for the guest devices at 4,
> already breaks when adding the third such guest.
> That is way earlier than I would have expected the fd's to be
> exhausted, but it is still the same root cause, so just another test for
> the same issue.
>
> In prep for the wider review of the patch, a minor review question from me:
> In the section of rte_vhost_driver_register that now detects if there
> were issues, we might want to close the socket fd as well when bailing out.
> Otherwise we would just have another source of fd leaks, or would the fd
> be reused later on, even now that we have freed vserver->path and
> vserver itself?
>
>     ret = fdset_add(&g_vhost_server.fdset, vserver->listenfd,
>             vserver_new_vq_conn, NULL, vserver);
>     if (ret < 0) {
>         pthread_mutex_unlock(&g_vhost_server.server_mutex);
>         RTE_LOG(ERR, VHOST_CONFIG,
>             "failed to add listen fd %d to vhost server fdset\n",
>             vserver->listenfd);
>         free(vserver->path);
> +       close(vserver->listenfd);
>         free(vserver);
>         return -1;
>     }
>
>
> Christian Ehrhardt
> Software Engineer, Ubuntu Server
> Canonical Ltd
>
> On Mon, Apr 11, 2016 at 8:06 AM, Patrik Andersson R
> <patrik.r.andersson at ericsson.com
> <mailto:patrik.r.andersson at ericsson.com>> wrote:
>
>     I fully agree with this course of action.
>
>     Thank you,
>
>     Patrik
>
>
>     On 04/08/2016 08:47 AM, Xie, Huawei wrote:
>
>         On 4/7/2016 10:52 PM, Christian Ehrhardt wrote:
>
>             I totally agree that there is no deterministic rule for
>             what to expect.
>             The only rule is that #fd certainly always is > #vhost_user
>             devices.
>             In various setup variants I've crossed fd 1024 anywhere
>             between 475 and 970 vhost_user ports.
>
>             Once the discussion continues and we have an updated
>             version of the patch with some more agreement, I hope I can
>             help to test it.
>
>         Thanks. Let us first temporarily fix this problem for
>         robustness, then we consider whether to upgrade to (e)poll.
>         Will check the patch in detail later. Basically it should work,
>         but we need to check whether extra fixes are needed elsewhere.