Hi,

I have been experimenting with the QEMU userspace NVMe driver on the s390x architecture, and I have noticed an issue when assigning multiple virtqueues and multiple iothreads to the block device. The driver works well with a single iothread, but with more than one iothread we can hit a case where two iothreads update the completion queue head doorbell register with the same value within microseconds of each other. As far as I understand, this is an invalid doorbell write per the NVMe spec (e.g. spec version 1.4, section 5.2.1 defines this as an invalid write), and it causes the NVMe device to stop posting completions. As far as I understand, this does not appear to be specific to the s390x architecture.

I would appreciate some guidance on whether there are known limitations of the userspace NVMe driver with multiple queues and multiple iothreads. Here is an example XML snippet I used to define the NVMe block device:

...

<disk type='nvme' device='disk'>
      <driver name='qemu' type='raw' queues='8' packed='on'>
            <iothreads>
                  <iothread id='1'/>
            </iothreads>
      </driver>
      <source type='pci' managed='yes' namespace='1'>
            <address domain='0x0004' bus='0x00' slot='0x00' function='0x0'/>
      </source>
      <target dev='vde' bus='virtio'/>
      <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0002'/>
</disk>
....

Appreciate any help on this!

Thanks
Farhan

