On Mon, Jul 15, 2019 at 06:43:12PM +1000, Alexey Kardashevskiy wrote:
> > e.g. if you have a DMA that supports 40-bit DMA addressing we could
> > always treat it as if supports 32-bit addressing, and I thought the
> > powerpc code does that,
>
> powerpc does that and this is what the patchset is changing as people
> complained that 2GB DMA window has bad effects on AMD GPUs (cannot
> allocate enough buffers) and 40/100Gbit devices (lower performance),
> I do not have the details handy.
Makes sense.  I'm just surprised about the complaints from the habanalabs
folks, which sounded like a 40-something bit DMA mask did not work at all
for them on power9, which did not fit my reading of the code.

> > as the DMA API now relies on that.
>
> Relies on what precisely? If a device cannot do full 64bit, then it has
> to be no more than just 32bit?

The fact that if, say, your iommu only supports a mode that returns up to
32-bit IOVAs and a driver sets a 48- or 64-bit mask, you still return
success instead of letting the driver handle the failure and set a 32-bit
mask in the fallback code.  As said, I think the powerpc code is fine
based on my reading of it.

> > Did I miss something and it explicitly rejected that (in which case I
> > didn't spot the fix in this series), or is this just an optimization
> > to handle these devices more optimally, in which case maybe the
> > changelog could be improved a bit.
>
> 4/4 did this essentially:

As long as the above is fine (which I think it is) just make it a little
more clear that this is a simple optimization, not a bug fix for DMA API
usage.