Am 21.12.2017 um 15:29 schrieb Michael S. Tsirkin:
> Backends don't need to know what frontend requested a reset,
> and notifying them from virtio_error is messy because
> virtio_error itself might be invoked from backend.
>
> Let's just set the status directly.
>
> Cc: qemu-sta...@nongnu.org
> Reported-by: Ilya Maximets <i.maxim...@samsung.com>
> Signed-off-by: Michael S. Tsirkin <m...@redhat.com>
> ---
>  hw/virtio/virtio.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index ad564b0..d6002ee 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -2469,7 +2469,7 @@ void GCC_FMT_ATTR(2, 3) virtio_error(VirtIODevice *vdev, const char *fmt, ...)
>      va_end(ap);
>  
>      if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
> -        virtio_set_status(vdev, vdev->status | VIRTIO_CONFIG_S_NEEDS_RESET);
> +        vdev->status = vdev->status | VIRTIO_CONFIG_S_NEEDS_RESET;
>          virtio_notify_config(vdev);
>      }
>  


Is it possible that this patch introduces an I/O stall and a deadlock on a
drain all?

I have seen QEMU VMs being I/O stalled and deadlocking on a vm stop
command in blk_drain_all. This happened after a longer storage outage.
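
To make sure I am reading the change correctly: before the patch the
NEEDS_RESET bit went through virtio_set_status(), which also invokes the
device class' set_status hook, whereas the new code only sets the bit and
calls virtio_notify_config(), so the backend-side hook is skipped. Below
is a tiny standalone model of the two paths (this is not QEMU code; the
struct and function names are made up purely to illustrate my
understanding):

    #include <stdint.h>
    #include <stdio.h>

    #define NEEDS_RESET 0x40   /* stands in for VIRTIO_CONFIG_S_NEEDS_RESET */

    typedef struct Dev {
        uint8_t status;
        void (*set_status)(struct Dev *d, uint8_t val);  /* backend hook */
    } Dev;

    static void backend_set_status(Dev *d, uint8_t val)
    {
        /* the place where a backend could react to the reset request */
        printf("backend notified, status 0x%x\n", val);
    }

    /* old path: the status change is forwarded to the backend hook */
    static void set_status_via_hook(Dev *d, uint8_t val)
    {
        if (d->set_status) {
            d->set_status(d, val);
        }
        d->status = val;
    }

    int main(void)
    {
        Dev d = { .status = 0, .set_status = backend_set_status };

        /* roughly what the old code did */
        set_status_via_hook(&d, d.status | NEEDS_RESET);

        /* what the new code does: the bit is set, the hook never runs */
        d.status = d.status | NEEDS_RESET;

        printf("final status 0x%x\n", d.status);
        return 0;
    }

If that hook is what used to quiesce a backend on NEEDS_RESET, I could
imagine requests staying in flight and the drain never completing, but
that is pure speculation on my part.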


I am asking just theoretically, because I first saw this behaviour after
we backported this patch into our stable 2.9 branch.


Thank you,

Peter


