On 5/10/2016 4:42 PM, Michael S. Tsirkin wrote:
> On Tue, May 10, 2016 at 08:07:00AM +0000, Xie, Huawei wrote:
>> On 5/10/2016 3:56 PM, Michael S. Tsirkin wrote:
>>> On Tue, May 10, 2016 at 07:24:10AM +0000, Xie, Huawei wrote:
>>>> On 5/10/2016 2:08 AM, Yuanhan Liu wrote:
>>>>> On Mon, May 09, 2016 at 04:47:02PM +0000, Xie, Huawei wrote:
>>>>>> On 5/7/2016 2:36 PM, Yuanhan Liu wrote:
>>>>>>> +static void *
>>>>>>> +vhost_user_client_reconnect(void *arg)
>>>>>>> +{
>>>>>>> +       struct reconnect_info *reconn = arg;
>>>>>>> +       int ret;
>>>>>>> +
>>>>>>> +       RTE_LOG(ERR, VHOST_CONFIG, "reconnecting...\n");
>>>>>>> +       while (1) {
>>>>>>> +               ret = connect(reconn->fd, (struct sockaddr *)&reconn->un,
>>>>>>> +                               sizeof(reconn->un));
>>>>>>> +               if (ret == 0)
>>>>>>> +                       break;
>>>>>>> +               sleep(1);
>>>>>>> +       }
>>>>>>> +
>>>>>>> +       vhost_user_add_connection(reconn->fd, reconn->vsocket);
>>>>>>> +       free(reconn);
>>>>>>> +
>>>>>>> +       return NULL;
>>>>>>> +}
>>>>>>> +
>>>>>> We could create hundreds of vhost-user ports in OVS. Without
>>>>>> established connections to QEMU, those ports are just inactive. This
>>>>>> works fine in server mode.
>>>>>> With client mode, do we need to create hundreds of vhost threads?
>>>>>> That would be too heavyweight.
>>>>>> How about we create only one thread and do the reconnections for all
>>>>>> the unconnected sockets?
>>>>> Yes, good point and good suggestion. Will do it in v2.
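
To make the suggestion concrete, here is a rough sketch of the
single-thread variant. The list and lock names are invented for
illustration; struct reconnect_info and vhost_user_add_connection are
taken from the patch above, and the thread would be started once at
init time:

#include <pthread.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

/* Illustrative only: one thread retries every pending connection once
 * a second, instead of spawning one thread per socket.
 */
struct reconn_entry {
    struct reconnect_info *info;
    struct reconn_entry   *next;
};

static struct reconn_entry *reconn_head;
static pthread_mutex_t reconn_mutex = PTHREAD_MUTEX_INITIALIZER;

static void *
vhost_user_client_reconnect_loop(void *arg)
{
    struct reconn_entry **pprev, *entry;

    (void)arg;
    while (1) {
        pthread_mutex_lock(&reconn_mutex);
        pprev = &reconn_head;
        while ((entry = *pprev) != NULL) {
            struct reconnect_info *reconn = entry->info;

            if (connect(reconn->fd,
                        (struct sockaddr *)&reconn->un,
                        sizeof(reconn->un)) == 0) {
                /* connected: hand over the fd and unlink the entry */
                vhost_user_add_connection(reconn->fd, reconn->vsocket);
                *pprev = entry->next;
                free(reconn);
                free(entry);
            } else {
                pprev = &entry->next;
            }
        }
        pthread_mutex_unlock(&reconn_mutex);
        sleep(1);
    }
    return NULL;
}
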
>>>> Hi Michael:
>>>> This reminds me of another, unrelated issue.
>>>> In OVS, we currently create one unix domain socket for each vhost port,
>>>> and the QEMU vhost proxy connects to that socket; that is what we use
>>>> to identify the connection. This works fine, but it is a workaround:
>>>> otherwise we would have no way to identify the connection.
>>>> Do you think this is an issue?
>> Let us say vhost creates one unix domain socket, with the path "sockpath",
>> and the virtio ports in two VMs both connect to the same socket with the
>> following command line:
>>     -chardev socket,id=char0,path=sockpath
>> How could vhost identify each connection?
> getpeername(2)?

getpeername() just returns the peer's address? Then it isn't useful here.
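
For what it's worth, on a unix domain socket the peer address is a
filesystem path rather than host/port, but it usually does not help
either: a client that never calls bind() -- the normal case for a QEMU
chardev, as far as I know -- shows up unnamed. A small illustrative
sketch (not vhost code):

#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Print the peer of a connection accepted on the server socket.  An
 * unbound client typically yields an unnamed peer: len covers only
 * sun_family and sun_path is empty, so two VMs look identical.
 */
static void
dump_peer(int connfd)
{
    struct sockaddr_un peer;
    socklen_t len = sizeof(peer);

    if (getpeername(connfd, (struct sockaddr *)&peer, &len) < 0)
        return;
    if (len <= sizeof(sa_family_t))
        printf("peer: unnamed (no path to identify it by)\n");
    else
        printf("peer path: %s\n", peer.sun_path);
}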

The typical scenario in my mind is:
we create an OVS port with the name "port1", and when we receive a
virtio connection carrying the ID "port1", we attach that virtio
interface to the OVS port "port1".
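
If such an ID were available, the OVS side would reduce to a name
lookup at connect time. A hypothetical sketch -- the struct and table
below are invented, not actual OVS code:

#include <string.h>

/* Map an ID string received on a new connection to the port created
 * under the same name.
 */
struct vhost_port {
    char name[64];   /* e.g. "port1" */
    int  connfd;     /* -1 until a VM connects */
};

static struct vhost_port ports[16];

static int
attach_by_id(const char *id, int connfd)
{
    unsigned int i;

    for (i = 0; i < 16; i++) {
        if (strcmp(ports[i].name, id) == 0) {
            ports[i].connfd = connfd;
            return 0;
        }
    }
    return -1;  /* no OVS port with this name */
}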


>
>
>> Workaround:
>> vhost creates two unix domain sockets, with the paths "sockpath1" and
>> "sockpath2" respectively,
>> and the virtio ports in the two VMs connect to "sockpath1" and
>> "sockpath2" respectively.
>>
>> If we had some name string from QEMU over vhost, as you mentioned, we
>> could create only one socket with the path "sockpath".
>> Users would ensure that the names are unique, just as they do today
>> with multiple sockets.
>>
> Seems rather fragile.

From the scenario above, it is enough. That is also how it works today
in the DPDK OVS implementation with multiple sockets.
Any other idea?
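
To make the two setups concrete (paths and names are illustrative):

Today, one socket per port:
    VM1: -chardev socket,id=char0,path=sockpath1
    VM2: -chardev socket,id=char0,path=sockpath2

With a name carried over vhost-user (hypothetical), one shared socket:
    VM1: -chardev socket,id=char0,path=sockpath   (announces "port1")
    VM2: -chardev socket,id=char0,path=sockpath   (announces "port2")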

>
>>> I'm sorry, I have trouble understanding what you wrote above.
>>> What is the issue you are trying to work around?
>>>
>>>> Do we have a plan to support identification in VHOST_USER_MESSAGE?
>>>> With identification, if vhost acts as the server, we would only need
>>>> to create one socket which receives multiple connections, and use the
>>>> ID in the message to identify each connection.
>>>>
>>>> /huawei
>>> Sending e.g. the -name string from QEMU over vhost might be useful
>>> for debugging, but I'm not sure it's a good idea to
>>> rely on it being unique.
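
For discussion's sake, such an identification message could follow the
shape of the existing vhost-user messages. Everything below is invented
for illustration; no such request exists in the spec today:

#include <stdint.h>

/* Hypothetical identification message, mirroring the usual
 * request/flags/size header of vhost-user messages.
 */
#define VHOST_USER_SET_ID 100    /* made-up request number */

struct vhost_user_id_msg {
    uint32_t request;    /* VHOST_USER_SET_ID */
    uint32_t flags;
    uint32_t size;       /* number of bytes used in id[] */
    char     id[256];    /* e.g. QEMU's -name, NUL-terminated */
};
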
>>>
>>>>> Thanks.
>>>>>
>>>>>   --yliu
>>>>>
