On Thu, Sep 09, 2021 at 06:00:36AM +0000, John Johnson wrote:

> > On Sep 7, 2021, at 10:24 AM, John Levon <[email protected]> wrote:
> > 
> > On Mon, Aug 16, 2021 at 09:42:42AM -0700, Elena Ufimtseva wrote:
> > 
> >> +int vfio_user_region_write(VFIODevice *vbasedev, uint32_t index,
> >> +                           uint64_t offset, uint32_t count, void *data)
> >> +{
> >> +    g_autofree VFIOUserRegionRW *msgp = NULL;
> >> +    int size = sizeof(*msgp) + count;
> >> +
> >> +    msgp = g_malloc0(size);
> >> +    vfio_user_request_msg(&msgp->hdr, VFIO_USER_REGION_WRITE, size,
> >> +                          VFIO_USER_NO_REPLY);
> > 
> > Mirroring https://github.com/oracle/qemu/issues/10 here for visibility:
> > 
> > Currently, vfio_user_region_write uses VFIO_USER_NO_REPLY unconditionally,
> > meaning essentially all writes are posted. But that shouldn't be the case:
> > for PCI config space, for example, it's expected that writes will wait for
> > an ack before the VCPU continues.
> 
>       I’m not sure following the PCI spec (mem writes posted, config & IO
> are not) completely solves the issue if the device uses sparse mmap.  A store
> that went over the socket can be passed by a load that goes directly to memory,
> which could break a driver that assumes a load completion means older stores
> to the same device have also completed.

Sure, but sparse mmaps are under the device's control - so wouldn't that be
something of a "don't do that" scenario?

regards
john
