On 6/28/22 16:09, Michael Schmitz wrote:
On 29/06/22 09:50, Arnd Bergmann wrote:
On Tue, Jun 28, 2022 at 11:03 PM Michael Schmitz wrote:
On 28/06/22 19:03, Geert Uytterhoeven wrote:
The driver allocates bounce buffers using kmalloc if it hits an
unaligned data buffer - can such buffers still …

On 6/9/22 10:54, John Garry wrote:
ok, but do you have a system where the UFS host controller is behind an
IOMMU? I had the impression that UFS controllers would be mostly found
in embedded systems, and IOMMUs are not as common there.
Modern phones have an IOMMU. Below one can find an example …

On 6/9/22 01:00, John Garry wrote:
On 08/06/2022 22:07, Bart Van Assche wrote:
On 6/8/22 10:50, John Garry wrote:
Please note that this limit only applies if we have an IOMMU enabled
for the scsi host dma device. Otherwise we are limited by dma direct
or swiotlb max mapping size, as before
On 6/8/22 10:50, John Garry wrote:
Please note that this limit only applies if we have an IOMMU enabled for
the scsi host dma device. Otherwise we are limited by dma direct or
swiotlb max mapping size, as before.
SCSI host bus adapters that support 64-bit DMA may support much larger
transfer …

On 6/6/22 02:30, John Garry wrote:
+ if (dma_dev->dma_mask) {
+ shost->max_sectors = min_t(unsigned int, shost->max_sectors,
+ dma_opt_mapping_size(dma_dev) >> SECTOR_SHIFT);
+ }
Since IOVA_RANGE_CACHE_MAX_SIZE = 6, this limits max_sectors …

On 6/6/22 02:30, John Garry wrote:
+::
+
+ size_t
+ dma_opt_mapping_size(struct device *dev);
+
+Returns the maximum optimal size of a mapping for the device. Mapping large
+buffers may take longer so device drivers are advised to limit total DMA
+streaming mappings length to the returned …

On 6/6/22 02:30, John Garry via iommu wrote:
+unsigned long iova_rcache_range(void)
+{
+ return PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1);
+}
My understanding is that iova cache entries may be smaller than
IOVA_RANGE_CACHE_MAX_SIZE and hence that even if code that uses the DMA
mapping …

On 6/6/22 02:30, John Garry wrote:
As reported in [0], DMA mappings whose size exceeds the IOMMU IOVA caching
limit may see a big performance hit.
This series introduces a new DMA mapping API, dma_opt_mapping_size(), so
that drivers may know this limit when performance is a factor in the
mapping …

On Mon, 2019-04-08 at 09:23 -0600, Alex Williamson wrote:
> Loading modules is privileged:
>
> $ modprobe vfio-pci
> modprobe: ERROR: could not insert 'vfio_pci': Operation not permitted
>
> Granting a device to a user for device assignment purposes is also a
> privileged operation. Can you describe …

On Sun, 2019-04-07 at 17:31 -0600, Alex Williamson wrote:
> It's not possible to do what you want with this configuration. An IOMMU
> group represents the smallest set of devices that are isolated from
> other sets of devices and is also therefore the minimum granularity we
> can assign devices to …

On 4/7/19 2:06 PM, Alex Williamson wrote:
On Sun, 7 Apr 2019 12:10:38 -0700
Bart Van Assche wrote:
If I tell qemu to use PCI pass-through for a PCI adapter and next load the
lpfc driver for an lpfc adapter that has not been passed through to any VM
a kernel bug is hit. Do you perhaps know …

Hi Jiang,
If I tell qemu to use PCI pass-through for a PCI adapter and next load the
lpfc driver for an lpfc adapter that has not been passed through to any VM
a kernel bug is hit. Do you perhaps know whether it should be possible to
load a kernel driver in this scenario? If so, do you know what …
…t the attached patches? These three patches are a
split-up of the single patch at the start of this e-mail thread.
Thanks,
Bart.

From a6fe3a6db80f2bc359e049b72e13aa171fff6ffa Mon Sep 17 00:00:00 2001
From: Bart Van Assche
Date: Wed, 11 Jan 2017 13:31:42 -0800
Subject: [PATCH 1/3] treewide: Move dma_o…

On Wed, 2017-01-11 at 07:48 +0100, Greg Kroah-Hartman wrote:
> On Tue, Jan 10, 2017 at 04:56:41PM -0800, Bart Van Assche wrote:
> > Several RDMA drivers, e.g. drivers/infiniband/hw/qib, use the CPU to
> > transfer data between memory and PCIe adapter. Because of performance
> …

On Wed, 2017-01-11 at 07:46 +0100, Greg Kroah-Hartman wrote:
> On Tue, Jan 10, 2017 at 04:56:41PM -0800, Bart Van Assche wrote:
> > Several RDMA drivers, e.g. drivers/infiniband/hw/qib, use the CPU to
> > transfer data between memory and PCIe adapter. Because of performance
> …

…ops->/intel_dma_ops./' arch/ia64/kernel/pci-dma.c
sed -i -e 's/static const struct dma_map_ops sn_dma_ops/static struct dma_map_ops sn_dma_ops/' arch/ia64/sn/pci/pci_dma.c
Signed-off-by: Bart Van Assche
Reviewed-by: Christoph Hellwig
Cc: Aurelien Jacquiot
Cc: Catalin Marinas
Cc: …

…dma_map_ops pointer. Additionally, introduce the function
set_dma_ops() that will be used by a later patch in this series.
Signed-off-by: Bart Van Assche
Cc: Greg Kroah-Hartman
Cc: Aurelien Jacquiot
Cc: Catalin Marinas
Cc: Chris Zankel
Cc: David Howells
Cc: David S. Miller
Cc: Fenghua Yu
Cc: Geert …