On Wed, 21 Jun 2017, Boris Ostrovsky wrote:
> >>> +
> >>> + mappass->reqcopy = *req;
> >>> + icsk = inet_csk(mappass->sock->sk);
> >>> + queue = &icsk->icsk_accept_queue;
> >>> + spin_lock(&queue->rskq_lock);
> >>> + data = queue->rskq_accept_head != NULL;
> >>> + spin_unlock(&queue->rskq_lock);
>
> >> What is the purpose of the queue lock here?
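(For context, the quoted check boils down to asking whether the listening
socket's accept queue is non-empty. Below is a minimal sketch of the same
test against a 4.12-era tree; reqsk_queue_empty() in
include/net/request_sock.h performs the rskq_accept_head NULL test without
taking rskq_lock, which is what the question above is getting at. The
wrapper function name is made up purely for illustration.)

#include <net/inet_connection_sock.h>
#include <net/request_sock.h>

/*
 * Illustrative only: the quoted snippet tests rskq_accept_head under
 * rskq_lock; reqsk_queue_empty() does the same NULL test on
 * rskq_accept_head without holding the lock.
 */
static bool listen_queue_has_data(struct sock *sk)
{
	return !reqsk_queue_empty(&inet_csk(sk)->icsk_accept_queue);
}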
On Tue, 20 Jun 2017, Boris Ostrovsky wrote:
> > @@ -499,6 +521,55 @@ static int pvcalls_back_accept(struct xenbus_device *dev,
> >  static int pvcalls_back_poll(struct xenbus_device *dev,
> >                               struct xen_pvcalls_request *req)
> >  {
> > + struct pvcalls_fedata *fedata;
> > + struct sockpass_mapping *mappass;
> > + st
Implement poll on passive sockets by requesting a delayed response with
mappass->reqcopy, and reply back when there is data on the passive
socket.

Poll on an active socket is unimplemented, as per the spec: the frontend
should just wait for events and check the indexes on the indexes page.

Only support one outstanding poll (or accept) request for every passive
socket at any given time.
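(As a rough illustration of the delayed-response flow described above, here
is a sketch only, not the actual driver code: the struct layout, the
copy_lock name, and the helper name are assumptions; reqcopy is the field
mentioned above, xen_pvcalls_request comes from the pvcalls interface header
added by this series, and the accept-queue test is written with
reqsk_queue_empty() rather than the explicit rskq_lock sequence quoted
earlier.)

#include <linux/net.h>
#include <linux/spinlock.h>
#include <net/inet_connection_sock.h>
#include <net/request_sock.h>
#include <xen/interface/io/pvcalls.h>

/* Assumed, minimal shape of the per-passive-socket state (illustrative). */
struct sockpass_mapping {
	spinlock_t copy_lock;                /* protects reqcopy */
	struct socket *sock;
	struct xen_pvcalls_request reqcopy;  /* saved POLL/ACCEPT request */
};

/*
 * Rough sketch (not the actual driver code): returns true if the caller
 * should reply to the POLL request now, false if the reply is deferred
 * until the socket's data_ready callback finds data and uses reqcopy.
 */
static bool pvcalls_back_poll_sketch(struct sockpass_mapping *mappass,
				     struct xen_pvcalls_request *req)
{
	unsigned long flags;
	bool data;

	spin_lock_irqsave(&mappass->copy_lock, flags);

	/* Save the request first so a later data_ready can answer it. */
	mappass->reqcopy = *req;
	data = !reqsk_queue_empty(&inet_csk(mappass->sock->sk)->icsk_accept_queue);

	spin_unlock_irqrestore(&mappass->copy_lock, flags);

	/*
	 * If a connection is already waiting, reply immediately (the real
	 * handler would also mark the saved copy as unused); otherwise the
	 * response stays outstanding until data arrives on the socket.
	 */
	return data;
}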