On Wed, Feb 8, 2017 at 2:00 AM, Razvan Cojocaru <rcojoc...@bitdefender.com>
wrote:

> It is currently possible for the guest to lock up when subscribing
> to synchronous vm_events if max_vcpus is larger than the
> number of available ring buffer slots. This patch no longer
> blocks already-paused VCPUs, fixing the issue for this use
> case.
>
> Signed-off-by: Razvan Cojocaru <rcojoc...@bitdefender.com>
> ---
>  xen/common/vm_event.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
> index 82ce8f1..2005a64 100644
> --- a/xen/common/vm_event.c
> +++ b/xen/common/vm_event.c
> @@ -316,7 +316,8 @@ void vm_event_put_request(struct domain *d,
>       * See the comments above wake_blocked() for more information
>       * on how this mechanism works to avoid waiting. */
>      avail_req = vm_event_ring_available(ved);
> -    if( current->domain == d && avail_req < d->max_vcpus )
> +    if( current->domain == d && avail_req < d->max_vcpus &&
> +        !atomic_read( &current->vm_event_pause_count ) )
>          vm_event_mark_and_pause(current, ved);
>

Hi Razvan,
I would also like this patch to include the change that unblocks the
vCPUs as soon as a slot opens up on the ring. What this patch does on its
own will not solve the problem if asynchronous events are in use.
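
Roughly what I have in mind, as a sketch only (the helper name and call
site below are illustrative assumptions, not the actual vm_event.c
interfaces -- the real change would hook into the existing wake_blocked()
path mentioned in the comment above):

    /*
     * Sketch only: when consuming a response frees a ring slot, wake
     * vCPUs that vm_event_mark_and_pause() blocked earlier, instead of
     * leaving them paused until the next request is posted.
     */
    static void vm_event_slot_freed(struct domain *d,
                                    struct vm_event_domain *ved)
    {
        /* A response was just consumed, so at least one slot is free. */
        if ( vm_event_ring_available(ved) > 0 )
            wake_blocked(d, ved);
    }

With something like that, a vCPU paused by vm_event_mark_and_pause() gets
unpaused as soon as there is room on the ring again, which also covers the
case where asynchronous events keep the ring full.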

Tamas
