On Thu, 29 Feb 2024 09:38:29 +0000
Peter Maydell <peter.mayd...@linaro.org> wrote:

> On Wed, 28 Feb 2024 at 19:07, Heinrich Schuchardt
> <heinrich.schucha...@canonical.com> wrote:
> >
> > On 28.02.24 19:39, Peter Maydell wrote:  
> > > The limitation to a page dates back to commit 6d16c2f88f2a in 2009,
> > > which was the first implementation of this function. I don't think
> > > there's a particular reason for that value beyond that it was
> > > probably a convenient value assumed likely to be "big enough".
> > >
> > > I think the idea with this bounce-buffer has always been that this
> > > isn't really a code path we expected to end up in very often --
> > > it's supposed to be for when devices are doing DMA, which they
> > > will typically be doing to memory (backed by host RAM), not
> > > devices (backed by MMIO and needing a bounce buffer). So the
> > > whole mechanism is a bit "last fallback to stop things breaking
> > > entirely".
> > >
> > > The address_space_map() API says that it's allowed to return
> > > a subset of the range you ask for, so if the virtio code doesn't
> > > cope with the minimum being set to TARGET_PAGE_SIZE then either
> > > we need to fix that virtio code or we need to change the API
> > > of this function. (But I think you will also get a reduced
> > > range if you try to use it across a boundary between normal
> > > host-memory-backed RAM and a device MemoryRegion.)  
> >
> > If a bounce buffer may only be used by one mapping at a time (via the
> > in_use flag), why do we allow only a single bounce buffer?
> >
> > Could address_space_map() allocate a new bounce buffer on every call and
> > address_space_unmap() deallocate it?
> >
> > Isn't the design with a single bounce buffer bound to fail with a
> > multi-threaded client, since collisions are to be expected?
> 
> Yeah, I don't suppose multi-threaded use was particularly expected.
> Again, this is really a "handle the case where the guest does
> something silly" setup, which is why there is only one bounce buffer.
> 
> Why is your guest ending up in the bounce-buffer path?

Happens for me with emulated CXL memory. I think the case I saw
was split descriptors in virtio, via the address space caches:
https://elixir.bootlin.com/qemu/latest/source/hw/virtio/virtio.c#L4043

One bounce buffer is in use for the outer loop, and another is needed for
the descriptors it points to.
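
To make that concrete, here is a minimal sketch, not the actual virtio
code: the only real API below is address_space_map()/address_space_unmap()
and MEMTXATTRS_UNSPECIFIED; the function and parameter names are made up
for illustration.

/* Two overlapping maps of MMIO-backed guest memory are enough to trip
 * the single static BounceBuffer in physmem.c, whose in_use flag only
 * allows one outstanding mapping at a time. */
#include "qemu/osdep.h"
#include "exec/memory.h"

static void sketch_double_map(AddressSpace *as,
                              hwaddr table_addr, hwaddr table_len,
                              hwaddr buf_addr, hwaddr buf_len)
{
    hwaddr tlen = table_len;
    void *table = address_space_map(as, table_addr, &tlen,
                                    false, MEMTXATTRS_UNSPECIFIED);
    /* tlen may legitimately come back smaller than requested; as Peter
     * notes above, the API is allowed to return a subset of the range. */

    hwaddr blen = buf_len;
    void *buf = address_space_map(as, buf_addr, &blen,
                                  true, MEMTXATTRS_UNSPECIFIED);
    /* While the first MMIO mapping is live, a second MMIO mapping needs
     * a second bounce buffer, so this call returns NULL when both
     * ranges are MMIO, e.g. emulated CXL memory. */

    if (buf) {
        address_space_unmap(as, buf, blen, true, blen);
    }
    if (table) {
        address_space_unmap(as, table, tlen, false, tlen);
    }
}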

Matthias' series makes this work fine.  I need to circle back and check
how big a cache this actually needs.  I'm carrying a silly size because of
a side effect of the address space bug here
https://lore.kernel.org/qemu-devel/20240215142817.1904-1-jonathan.came...@huawei.com/#t
and can probably set it to much less than my current 1GiB.
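
For reference, a rough sketch of the knob from Matthias' series (I'm
assuming the per-PCI-device property spelling here; the series is still
in flight, so the exact name may differ):

# hypothetical command-line fragment: raise the per-device bounce buffer
# limit; much less than 1GiB is probably enough
-device cxl-type3,x-max-bounce-buffer-size=1G,...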

Jonathan

> 
> -- PMM
> 