On Fri, Mar 30, 2018 at 01:24:45PM +0530, Nipun Gupta wrote:
> With each bus implementing its own DMA configuration callback,
> there is no need for the bus to explicitly have force_dma in its
> global structure. This patch modifies the of_dma_configure API to
> accept an input parameter which specifies if
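For readers unfamiliar with the series, a minimal sketch of what a per-bus
dma_configure callback could look like once this decision is passed down as a
parameter; the callback name is hypothetical, and the three-argument
of_dma_configure() form reflects the API change described above.

#include <linux/device.h>
#include <linux/of_device.h>

/* Hypothetical bus callback: the bus, not a global force_dma flag, decides
 * whether DMA configuration should be forced for its devices. */
static int example_bus_dma_configure(struct device *dev)
{
        if (dev->of_node)
                return of_dma_configure(dev, dev->of_node, true);

        return 0;
}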
Currently GART writes one page entry at a time. It is more optimal to
aggregate the writes and flush the bus write buffer at the end; this gives
map/unmap a 10-40% performance boost (depending on the size of the mapping)
compared to flushing after each entry update.
Signed-off-by: Dmitry Osipenko
---
drivers/i
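For context, a minimal sketch of the batching idea described above, not the
actual driver code; gart_set_pte(), gart_flush(), GART_PAGE_SIZE and the
opaque gart_device are stand-ins.

#include <linux/types.h>

#define GART_PAGE_SIZE  4096            /* assumption for the sketch */

struct gart_device;                     /* opaque here */

/* Stand-in helpers: one writes a single page table entry, the other flushes
 * the bus write buffer. */
void gart_set_pte(struct gart_device *gart, unsigned long iova, phys_addr_t pa);
void gart_flush(struct gart_device *gart);

static int gart_map_batched(struct gart_device *gart, unsigned long iova,
                            const phys_addr_t *pages, size_t npages)
{
        size_t i;

        /* Write every page table entry first... */
        for (i = 0; i < npages; i++)
                gart_set_pte(gart, iova + i * GART_PAGE_SIZE, pages[i]);

        /* ...and flush only once at the end instead of after each entry. */
        gart_flush(gart);

        return 0;
}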
The GART driver has never been utilized upstream, but that should finally
change soon with the rework of Tegra's DRM driver. In general the GART driver
works fine, though there are a couple of things that could be improved.
Dmitry Osipenko (4):
iommu/tegra: gart: Add debugging facility
iommu/tegra:
It must return the number of unmapped bytes on success; returning 0 means
that unmapping failed, and as a result only one page gets unmapped.
Signed-off-by: Dmitry Osipenko
---
drivers/iommu/tegra-gart.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
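A sketch of the behaviour being fixed, using the same stand-in helpers as the
earlier sketch: the IOMMU core interprets a 0 return from the unmap callback
as failure and stops iterating, so only the first page of a multi-page range
would get unmapped.

static size_t gart_iommu_unmap_sketch(struct gart_device *gart,
                                      unsigned long iova, size_t bytes)
{
        gart_set_pte(gart, iova, 0);    /* clear the page entry */
        gart_flush(gart);

        return bytes;   /* was "return 0" before the fix */
}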
A page mapping could be overwritten by accident (a bug). We can catch this
case by checking the 'VALID' bit of the GART page entry prior to mapping a
page. Since this check introduces a small performance impact, it has to be
enabled explicitly using the GART driver's new 'debug' kernel module parameter.
Signed-off-by: Dmitry Osipenko
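A sketch of how such an opt-in check could be wired up; the helper names, the
parameter description and the VALID bit position are assumptions, not the
literal patch.

#include <linux/module.h>
#include <linux/device.h>
#include <linux/errno.h>

#define GART_ENTRY_VALID        (1UL << 31)     /* bit position is an assumption */

struct gart_device;                             /* opaque here */
unsigned long gart_read_pte(struct gart_device *gart, unsigned long iova);

static bool gart_debug;
module_param_named(debug, gart_debug, bool, 0644);
MODULE_PARM_DESC(debug, "Check page entries for accidental overwrites");

static int gart_check_entry(struct gart_device *gart, struct device *dev,
                            unsigned long iova)
{
        if (gart_debug && (gart_read_pte(gart, iova) & GART_ENTRY_VALID)) {
                dev_err(dev, "GART entry for %#lx is already valid\n", iova);
                return -EINVAL;
        }

        return 0;
}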
GART has a fixed aperture size, hence the number of pages is constant.
Signed-off-by: Dmitry Osipenko
---
drivers/iommu/tegra-gart.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/drivers/iommu/tegra-gart.c b/drivers/iommu/tegra-gart.c
index 89ec24c6952c..4a06
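For illustration only, the kind of change this implies: the page count becomes
a compile-time constant derived from the aperture size instead of a field
computed at probe time (the aperture size used here is an assumption, not the
real Tegra value).

#define GART_PAGE_SHIFT         12
#define GART_APERTURE_SIZE      (32 << 20)      /* assumed aperture size */
#define GART_PAGE_COUNT         (GART_APERTURE_SIZE >> GART_PAGE_SHIFT)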
The use of "correctly mapped" here is misleading, since it can give the
wrong expectation in the case that the memory *should* have been mapped
from the per-device pool, but doing so failed for other reasons.
Signed-off-by: Robin Murphy
---
drivers/base/dma-coherent.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
On Mon, Apr 09, 2018 at 06:59:08PM +0200, Jacopo Mondi wrote:
> I'm still a bit puzzled on what happens if dma_mmap_from_dev_coherent() fails.
> Does a dma_mmap_from_dev_coherent() failure guarantee in any way that the
> subsequent virt_to_page() isn't problematic, as it is today?
> Or is it the
>
Postpone the virt_to_page() translation for memory locations that are not
guaranteed to be backed by a struct page.
This patch fixes a specific issue on the SH architecture configured with the
SPARSEMEM memory model, when mapping buffers allocated with the memblock
APIs at system initialization time, and thus not backed by struct page.
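A sketch of the shape of such a fix, based on the description above rather
than the literal patch: the pfn is computed only on the fallback path, after
dma_mmap_from_dev_coherent() has had a chance to claim the buffer, so memory
without struct page backing never reaches virt_to_page(). Note that
dma_mmap_from_dev_coherent() returns 1 when the buffer belongs to the
per-device pool (with *ret carrying the real mmap result, success or error)
and 0 when it does not, in which case the generic path takes over.

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/mm.h>

static int sketch_common_mmap(struct device *dev, struct vm_area_struct *vma,
                              void *cpu_addr, size_t size)
{
        unsigned long user_count = vma_pages(vma);
        unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
        int ret = -ENXIO;

        if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
                return ret;     /* pool buffer: ret may still be an error */

        if (vma->vm_pgoff < count && user_count <= count - vma->vm_pgoff) {
                /*
                 * Postponed: only buffers not claimed by the per-device
                 * pool are translated with virt_to_page().
                 */
                unsigned long pfn = page_to_pfn(virt_to_page(cpu_addr));

                ret = remap_pfn_range(vma, vma->vm_start, pfn + vma->vm_pgoff,
                                      user_count << PAGE_SHIFT,
                                      vma->vm_page_prot);
        }

        return ret;
}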
On Mon, Apr 09, 2018 at 04:06:15PM +0300, Laurent Pinchart wrote:
> Hello,
>
> On Monday, 9 April 2018 14:11:22 EEST Robin Murphy wrote:
> > On 09/04/18 08:25, jacopo mondi wrote:
> > > Hi Robin, Laurent,
> > >
> > > a long time passed, sorry about this.
> > >
> > > On Wed, Nov 15, 2017 at
Hi Rich,
On Monday, 9 April 2018 18:11:13 EEST Rich Felker wrote:
> On Mon, Apr 09, 2018 at 04:06:15PM +0300, Laurent Pinchart wrote:
> > On Monday, 9 April 2018 14:11:22 EEST Robin Murphy wrote:
> >> On 09/04/18 08:25, jacopo mondi wrote:
> >>> Hi Robin, Laurent,
> >>>
> >>> a long time pas
Hello,
On Monday, 9 April 2018 14:11:22 EEST Robin Murphy wrote:
> On 09/04/18 08:25, jacopo mondi wrote:
> > Hi Robin, Laurent,
> >
> > a long time passed, sorry about this.
> >
> > On Wed, Nov 15, 2017 at 01:38:23PM +, Robin Murphy wrote:
> >> On 14/11/17 17:08, Jacopo Mondi wrote:
>
Hi Jacopo,
On 09/04/18 08:25, jacopo mondi wrote:
> Hi Robin, Laurent,
> a long time passed, sorry about this.
> On Wed, Nov 15, 2017 at 01:38:23PM +, Robin Murphy wrote:
> > On 14/11/17 17:08, Jacopo Mondi wrote:
> > > On SH4 architecture, with SPARSEMEM memory model, translating page to
> > > pfn hangs
Hello,
May we have
e89f5b370153 ("dma-mapping: Don't clear GFP_ZERO in dma_alloc_attrs")
backported to the 4.16 kernel, as it fixes:
57bf5a8 ("dma-mapping: clear harmful GFP_* flags in common code").
For more info about introduced problem see this thread:
http://lists.infradead.org/pipermail/linux-s
Hi all,
this patch fixes a regression in the x86 swiotlb conversion. This mostly
happened because swiotlb_dma_supported does the wrong thing (and did so for
a long time) and we switched x86 to use it.
There are a few other users of swiotlb_dma_supported that also look
rather broken, but I'll take
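For context, roughly what the swiotlb dma_supported implementation checks
(paraphrased, so treat the exact form as an assumption): the mask is accepted
only if it covers the end of the bounce-buffer pool, which rejects devices
with small masks even though dma_direct_alloc() could serve them from
ZONE_DMA.

#include <linux/dma-direct.h>
#include <linux/swiotlb.h>

/* io_tlb_end is swiotlb's bookkeeping for the end of the bounce pool. */
extern phys_addr_t io_tlb_end;

int swiotlb_dma_supported_sketch(struct device *hwdev, u64 mask)
{
        /* Only "supported" if the whole bounce pool fits under the mask. */
        return __phys_to_dma(hwdev, io_tlb_end - 1) <= mask;
}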
swiotlb_alloc calls dma_direct_alloc, which can satisfy lower-than-32-bit
DMA mask requests using GFP_DMA if the architecture supports it. Various
x86 drivers rely on that, so we need to support it. At the same time the
whole kernel expects a 32-bit DMA mask to just work, so the other magic
in sw
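A simplified sketch of the zone selection this refers to; the helper name and
the exact conditions are assumptions rather than the upstream code.

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

static gfp_t direct_alloc_gfp(struct device *dev, gfp_t gfp)
{
        if (IS_ENABLED(CONFIG_ZONE_DMA) &&
            dev->coherent_dma_mask < DMA_BIT_MASK(32))
                gfp |= GFP_DMA;         /* sub-32-bit masks need ZONE_DMA */
        else if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
                 dev->coherent_dma_mask < DMA_BIT_MASK(64))
                gfp |= GFP_DMA32;       /* plain 32-bit masks must just work */

        return gfp;
}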
Hi Robin, Laurent,
a long time passed, sorry about this.
On Wed, Nov 15, 2017 at 01:38:23PM +, Robin Murphy wrote:
> On 14/11/17 17:08, Jacopo Mondi wrote:
> >On SH4 architecture, with SPARSEMEM memory model, translating page to
> >pfn hangs the CPU. Post-pone translation to pfn after
> >d