d for single pages.
This patch tries to skip CMA allocations of single pages and lets
them go through the normal page allocator instead. This would save
resources in the CMA area for further CMA allocations.
Signed-off-by: Nicolin Chen
---
kernel/dma/direct.c | 8 ++--
1 file changed, 6 insertions(
Hi Robin,
Thanks for the comments.
On Thu, Nov 01, 2018 at 02:07:55PM +, Robin Murphy wrote:
> On 31/10/2018 20:03, Nicolin Chen wrote:
> > The addresses within a single page are always contiguous, so it's
> > not so necessary to allocate one single page from CMA area. Si
On Thu, Nov 01, 2018 at 07:32:39PM +, Robin Murphy wrote:
> > On Thu, Nov 01, 2018 at 02:07:55PM +, Robin Murphy wrote:
> > > On 31/10/2018 20:03, Nicolin Chen wrote:
> > > > The addresses within a single page are always contiguous, so it's
> > >
manual
memset after page/sg allocations, using the length of the scatterlist.
My test result of a 2.5MB allocation shows iommu_dma_alloc()
takes 46% less time, reduced from an average of 925 usec to 500 usec.
Signed-off-by: Nicolin Chen
---
drivers/iommu/dma-iommu.c | 18 ++
1 file
On Fri, Nov 02, 2018 at 04:54:07PM +, Robin Murphy wrote:
> On 01/11/2018 21:35, Nicolin Chen wrote:
> > The __GFP_ZERO will be passed down to the generic page allocation
> > routine which zeros everything page by page. This is safe to be a
> > generic way but no
On Fri, Nov 02, 2018 at 07:35:42AM +0100, Christoph Hellwig wrote:
> On Thu, Nov 01, 2018 at 02:07:55PM +, Robin Murphy wrote:
> > On 31/10/2018 20:03, Nicolin Chen wrote:
> >> The addresses within a single page are always contiguous, so it's
> >> not so neces
Hi Christoph,
On Sun, Nov 04, 2018 at 07:50:01AM -0800, Christoph Hellwig wrote:
> On Thu, Nov 01, 2018 at 02:35:00PM -0700, Nicolin Chen wrote:
> > The __GFP_ZERO will be passed down to the generic page allocation
> > routine which zeros everything page by page. This is safe to b
Hi Robin,
On Tue, Nov 06, 2018 at 06:27:39PM +, Robin Murphy wrote:
> > I re-ran the test to get some accuracy within the function and got:
> > 1) pages = __iommu_dma_alloc_pages(count, alloc_sizes >> PAGE_SHIFT, gfp);
> > // reduced from 422 usec to 56 usec == 366 usec less
> > 2) if (!(p
Robin? Christoph?
On Mon, Nov 05, 2018 at 02:40:50PM -0800, Nicolin Chen wrote:
> On Fri, Nov 02, 2018 at 07:35:42AM +0100, Christoph Hellwig wrote:
> > On Thu, Nov 01, 2018 at 02:07:55PM +, Robin Murphy wrote:
> > > On 31/10/2018 20:03, Nicolin Chen wrote:
> > >>
On Tue, Nov 20, 2018 at 10:20:10AM +0100, Christoph Hellwig wrote:
> On Mon, Nov 05, 2018 at 02:40:51PM -0800, Nicolin Chen wrote:
> > > > In general, this seems to make sense to me. It does represent a
> > > > theoretical change in behaviour fo
reduce CMA fragmentation resulting from trivial allocations.
Signed-off-by: Nicolin Chen
---
Robin/Christoph,
I have some personal priority to submit this patch. I understand
you might have other plans to clean up the code first. Would
it be possible for you to review and apply this one if it doesn'
Hi Christoph
On Mon, Feb 04, 2019 at 09:23:07AM +0100, Christoph Hellwig wrote:
> On Tue, Jan 15, 2019 at 01:51:40PM -0800, Nicolin Chen wrote:
> > The addresses within a single page are always contiguous, so it's
> > not so necessary to allocate one single page from CMA are
Hi Christoph,
On Wed, Feb 06, 2019 at 08:07:26AM +0100, Christoph Hellwig wrote:
> On Tue, Feb 05, 2019 at 03:05:30PM -0800, Nicolin Chen wrote:
> > > And my other concern is that this skips allocating from the per-device
> > > pool, which drivers might rely on.
> >
rmal pages unless the device
has its own CMA area. This would save resources from the CMA area
for more CMA allocations. It'd also reduce CMA fragmentation
resulting from trivial allocations.
Signed-off-by: Nicolin Chen
---
kernel/dma/contiguous.c | 22 +++---
1 file changed
igned-off-by: Nicolin Chen
---
Tony,
Would you please test and verify? Thanks!
kernel/dma/contiguous.c | 22 +++---
1 file changed, 3 insertions(+), 19 deletions(-)
diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
index 09074bd04793..b2a87905846d 100644
On Tue, Feb 26, 2019 at 11:35:44PM +, Robin Murphy wrote:
> On 2019-02-26 8:23 pm, Nicolin Chen wrote:
> > This reverts commit d222e42e88168fd67e6d131984b86477af1fc256.
> >
> > The original change breaks omap dss:
> > omapdss_dispc 58001000.dispc:
> >
This would save resources
from the CMA area for more CMA allocations. It'd also reduce
CMA fragmentation resulting from trivial allocations.
Also, it updates the API and its callers so as to pass gfp flags.
Signed-off-by: Nicolin Chen
---
arch/arm/mm/dma-mapping.c | 5 ++---
ar
This would save resources
from the CMA area for more CMA allocations. It'd also reduce
CMA fragmentation resulting from trivial allocations.
Also, it updates the API and its callers so as to pass gfp flags.
Signed-off-by: Nicolin Chen
---
Changelog
v1->v2:
* Removed one ';'
Hi Catalin,
Thank you for the review. And I realized that the free() path
is missing too.
On Tue, Mar 19, 2019 at 02:43:01PM +, Catalin Marinas wrote:
> On Tue, Mar 05, 2019 at 10:32:02AM -0800, Nicolin Chen wrote:
> > The addresses within a single page are always contiguous, so it's
Hi Catalin,
On Fri, Mar 22, 2019 at 10:57:13AM +, Catalin Marinas wrote:
> > > Do you have any numbers to back this up? You don't seem to address
> > > dma_direct_alloc() either but, as I said above, it's not trivial since
> > > some platforms expect certain physical range for DMA allocations.
On Mon, Mar 25, 2019 at 12:14:37PM +, Catalin Marinas wrote:
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index fcdb23e8d2fc..8955ba6f52fc 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -111,8 +111,7 @@ struct page *__dma_direct_alloc_pages(struct device *dev
allocations. Per Robin's suggestion, let's
stuff alloc_pages()/free_page() fallbacks to those callers before having
PATCH-5.
Nicolin Chen (5):
ARM: dma-mapping: Add fallback normal page allocations
dma-remap: Run alloc_pages() on failure
iommu: amd_iommu: Add fallback normal page a
The CMA allocation will skip allocations of single pages to save CMA
resources. This requires its callers to redirect those page allocations
to the normal area.
So this patch moves the alloc_pages() call to the fallback routines.
Signed-off-by: Nicolin Chen
---
kernel/dma/remap.c | 2 +-
1 file
The CMA allocation will skip allocations of single pages to save CMA
resources. This requires its callers to redirect those page allocations
to the normal area. So this patch adds fallback routines.
Signed-off-by: Nicolin Chen
---
arch/arm/mm/dma-mapping.c | 13 ++---
1 file changed, 10
rmal pages unless the device
has its own CMA area. This would save resources from the CMA area
for more CMA allocations. It'd also reduce CMA fragmentation
resulting from trivial allocations.
Signed-off-by: Nicolin Chen
---
kernel/dma/contiguous.c | 22 +++---
1 file changed
The CMA allocation will skip allocations of single pages to save CMA
resources. This requires its callers to redirect those page allocations
to the normal area. So this patch adds fallback routines.
Signed-off-by: Nicolin Chen
---
arch/arm64/mm/dma-mapping.c | 19 ---
1 file changed
alloc_pages() as its first round allocation.
This is in the reverse order of the other callers. So the alloc_pages()
added by this change becomes a second fallback, though it likely
won't succeed since alloc_pages() has already failed once.
Signed-off-by: Nicolin Chen
---
dri
On Tue, Mar 26, 2019 at 03:49:56PM -0700, Nicolin Chen wrote:
> @@ -116,7 +116,7 @@ int __init dma_atomic_pool_init(gfp_t gfp, pgprot_t prot)
> if (dev_get_cma_area(NULL))
> page = dma_alloc_from_contiguous(NULL,
alloc_pages() as its first round allocation.
This is in the reverse order of the other callers. So the alloc_pages()
added by this change becomes a second fallback, though it likely
won't succeed since alloc_pages() has already failed once.
Signed-off-by: Nicolin Chen
---
dri
The CMA allocation will skip allocations of single pages to save CMA
resources. This requires its callers to redirect those page allocations
to the normal area. So this patch adds fallback routines.
Signed-off-by: Nicolin Chen
---
arch/arm/mm/dma-mapping.c | 13 ++---
1 file changed, 10
The CMA allocation will skip allocations of single pages to save CMA
resources. This requires its callers to redirect those page allocations
to the normal area.
So this patch moves the alloc_pages() call to the fallback routines.
Signed-off-by: Nicolin Chen
---
Changelog
v1->v2:
* PATC
rmal pages unless the device
has its own CMA area. This would save resources from the CMA area
for more CMA allocations. It'd also reduce CMA fragmentation
resulting from trivial allocations.
Signed-off-by: Nicolin Chen
---
kernel/dma/contiguous.c | 22 +++---
1 file changed
allocations. Per Robin's suggestion, let's
stuff alloc_pages()/free_page() fallbacks to those callers before having
PATCH-5.
Changelog
v1->v2:
* PATCH-2: Initialized page pointer to NULL
Nicolin Chen (5):
ARM: dma-mapping: Add fallback normal page allocations
dma-remap: Run all
The CMA allocation will skip allocations of single pages to save CMA
resources. This requires its callers to redirect those page allocations
to the normal area. So this patch adds fallback routines.
Signed-off-by: Nicolin Chen
---
arch/arm64/mm/dma-mapping.c | 19 ---
1 file changed
On Wed, Mar 27, 2019 at 09:08:21AM +0100, Christoph Hellwig wrote:
> On Tue, Mar 26, 2019 at 04:01:26PM -0700, Nicolin Chen wrote:
> > This series of patches tries to save single pages from the CMA area by bypassing
> > all CMA single-page allocations and allocating normal pages inst
Hi all,
I recently ran a 4GB+ allocation test case with my downstream
older-version kernel, and found two possible bugs. I then checked
the mainline code, yet didn't find them fixed.
However, I am not 100% sure that they are real practical bugs
because I later figured out that my use case was
even safe to apply a 4GB boundary here, which was
added a decade ago to work for up-to-4GB mappings at that time.
This patch updates the default segment_boundary_mask by aligning
it with dma_mask.
Signed-off-by: Nicolin Chen
---
include/linux/dma-mapping.h | 2 +-
1 file changed, 1 insertion(+),
Hi Robin,
Thank you for the inputs.
On Mon, Mar 16, 2020 at 12:12:08PM +, Robin Murphy wrote:
> On 2020-03-14 12:00 am, Nicolin Chen wrote:
> > More and more drivers set dma_masks above DMA_BIT_MASK(32) while
> > only a handful of drivers call dma_set_seg_boundary(). This mean
On Mon, Mar 16, 2020 at 01:16:16PM +, Robin Murphy wrote:
> On 2020-03-16 12:46 pm, Christoph Hellwig wrote:
> > On Mon, Mar 16, 2020 at 12:12:08PM +, Robin Murphy wrote:
> > > On 2020-03-14 12:00 am, Nicolin Chen wrote:
> > > > More and more drivers set d
Hi Christoph,
On Mon, Mar 16, 2020 at 01:48:50PM +0100, Christoph Hellwig wrote:
> On Fri, Mar 13, 2020 at 05:00:07PM -0700, Nicolin Chen wrote:
> > @@ -736,7 +736,7 @@ static inline unsigned long dma_get_seg_boundary(struct
> > device *dev)
> > {
> > if (dev->
memory outside
the scatter list, which might lead to a random kernel panic
after DMA overwrites that faulty IOVA space.
So this patch sets the default segment_boundary_mask to ULONG_MAX.
Signed-off-by: Nicolin Chen
---
include/linux/dma-mapping.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
di
On Mon, Apr 06, 2020 at 02:48:13PM +0100, Robin Murphy wrote:
> On 2020-04-05 1:51 am, Nicolin Chen wrote:
> > The default segment_boundary_mask was set to DMA_BIT_MASK(32)
> > a decade ago by referencing SCSI/block subsystem, as a 32-bit
> > mask was good enough fo
ly limits those devices capable of 32+ bit addressing.
So this patch sets the default segment_boundary_mask to ULONG_MAX.
Signed-off-by: Nicolin Chen
---
Changelog:
v1->v2
* Followed Robin's comments to revise the commit message by
dropping one paragraph of not-entirely-true justificatio
Hi Robin/Christoph,
This v2 was sent a while ago. I know that we had a concern,
but can we reach a conclusion on whether to merge it or not?
Thanks!
Nic
On Mon, Apr 06, 2020 at 02:06:43PM -0700, Nicolin Chen wrote:
> The default segment_boundary_mask was set to DMA_BIT_MASK(32)
> a decade
'd have to dig deeper:
> >
> > commit dd3dcede9fa0a0b661ac1f24843f4a1b1317fdb6
> > Author: Nicolin Chen
> > Date: Wed May 29 17:54:25 2019 -0700
> >
> > dma-contiguous: fix !CONFIG_DMA_CMA version of dma_{alloc,free}_contiguous()
> yes CON
Hello Hillf,
On Mon, Aug 19, 2019 at 12:38:38AM +0200, Tobias Klausmann wrote:
>
> On 18.08.19 05:13, Hillf Danton wrote:
> > On Sat, 17 Aug 2019 00:42:48 +0200 Tobias Klausmann wrote:
> > > Hi Nicolin,
> > >
> > > On 17.08.19 00:25, Nicolin Chen wrote:
>
On Fri, Aug 23, 2019 at 09:49:46PM +0900, Masahiro Yamada wrote:
> On Tue, May 7, 2019 at 7:36 AM Nicolin Chen wrote:
> >
> > The addresses within a single page are always contiguous, so it's
> > not so necessary to always allocate one single page from CMA area.
>
-size" property is added to the DT bindings, this
patch reads it and applies it to va_size as the input virtual address width.
Signed-off-by: Nicolin Chen
---
drivers/iommu/arm-smmu.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/
This series of patches adds an optional DT property to allow an SoC to
specify how many bits are physically connected to its SMMU instance,
depending on the SoC design.
Nicolin Chen (2):
dt-bindings: arm-smmu: Add an optional "input-address-size" property
iommu/arm-smmu: Read optio
decision, this patch adds an optional
property to specify how many input bits are physically connected.
Signed-off-by: Nicolin Chen
---
Documentation/devicetree/bindings/iommu/arm,smmu.txt | 7 +++
1 file changed, 7 insertions(+)
diff --git a/Documentation/devicetree/bindings/iommu/arm
On Fri, Oct 11, 2019 at 10:16:28AM +0100, Robin Murphy wrote:
> On 2019-10-11 4:46 am, Nicolin Chen wrote:
> > This series of patches adds an optional DT property to allow an SoC to
> > specify how many bits are physically connected to its SMMU instance,
> > depending on the
When testing with ethernet downloading, an "EMEM address decode error"
happens due to a race condition between the map() and unmap() functions.
This patch adds a spin lock to protect the as->count and as->pts[pde]
references, since the functions might be called in atomic context.
Signed-off-
According to the Tegra X1 (Tegra210) TRM, the reset value of the xusb_hostr
field (bits [7:0]) should be 0x7a. So this patch simply corrects it.
Signed-off-by: Nicolin Chen
---
drivers/memory/tegra/tegra210.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/memory/tegra
references with the
macros defined with SMMU_PTE_SHIFT.
Signed-off-by: Nicolin Chen
---
drivers/iommu/tegra-smmu.c | 14 ++
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/drivers/iommu/tegra-smmu.c b/drivers/iommu/tegra-smmu.c
index 63a147b623e6..5594b47a88bf 100644
--- a
Hi all,
This series contains some small fixes for tegra-smmu, mainly
tested on Tegra210 with a downstream kernel. As we only enabled limited
clients for Tegra210 on the mainline tree, I am not sure how critical
these fixes are, so I am not CCing the stable tree.
Nicolin Chen (4):
memory: tegra: Correct
An IOVA might not always be 4KB aligned. So the tegra_smmu_iova_to_phys
function needs to add the lower 12-bit offset of the input iova.
Signed-off-by: Nicolin Chen
---
drivers/iommu/tegra-smmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/iommu/tegra-smmu.c b/drivers
Hi all,
According to the routine of iommu_dma_alloc(), it allocates an iova
and then calls iommu_map() to map the iova to the physical address of
newly allocated pages. However, in remoteproc_core.c, I see its code
call iommu_map() without a prior alloc_iova() or alloc_iova_fast().
Is it safe to do so
Thanks for the reply Robin.
On Wed, Apr 10, 2019 at 10:20:38AM +0100, Robin Murphy wrote:
> On 09/04/2019 23:47, Nicolin Chen wrote:
> > According to the routine of iommu_dma_alloc(), it allocates an iova
> > then does iommu_map() to map the iova to a physical address of new
>
Hi Christoph,
On Wed, Apr 24, 2019 at 05:06:38PM +0200, Christoph Hellwig wrote:
> On Tue, Mar 26, 2019 at 04:01:27PM -0700, Nicolin Chen wrote:
> > page = dma_alloc_from_contiguous(dev, count, order, gfp & __GFP_NOWARN);
> > + if (!page)
> > + page
On Wed, Apr 24, 2019 at 09:26:52PM +0200, Christoph Hellwig wrote:
> On Wed, Apr 24, 2019 at 11:33:11AM -0700, Nicolin Chen wrote:
> > I feel it's similar to my previous set, which did most of these
> > internally except the renaming part. But Catalin had a concern
> >
On Wed, Apr 24, 2019 at 05:06:38PM +0200, Christoph Hellwig wrote:
> > + if (!dma_release_from_contiguous(dev, page, count))
> > + __free_pages(page, get_order(size));
>
> Same for dma_release_from_contiguous - drop the _from, pass the
> actual size, and
"RFC/RFT".
Please check their commit messages for detail.
Nicolin Chen (2):
dma-contiguous: Simplify dma_*_from_contiguous() function calls
dma-contiguous: Use fallback alloc_pages for single pages
arch/arm/mm/dma-mapping.c | 14 +++-
arch/arm64/mm/dma-mapping.c | 11 ++
fail the check, might end up in the fallback path.
Suggested-by: Christoph Hellwig
Signed-off-by: Nicolin Chen
---
arch/arm/mm/dma-mapping.c | 14 -
arch/arm64/mm/dma-mapping.c | 11 +++
arch/xtensa/kernel/pci-dma.c | 19 +++-
drivers/iommu/amd_iommu.c | 20 -
l CMA area in case a
device does not have its own CMA area. This'd save resources from
the global CMA area for more CMA allocations, and also reduce CMA
fragmentation resulting from trivial allocations.
Signed-off-by: Nicolin Chen
---
kernel/dma/contiguous.c | 11 ++-
1 f
On Tue, Apr 30, 2019 at 05:18:33PM +0200, Christoph Hellwig wrote:
> On Tue, Apr 30, 2019 at 01:37:54PM +0100, Robin Murphy wrote:
> > On 30/04/2019 11:56, Christoph Hellwig wrote:
> >> So while I really, really like this cleanup it turns out it isn't
> >> actually safe for arm :( arm remaps the C
changelog.
Nicolin Chen (2):
dma-contiguous: Abstract dma_{alloc,free}_contiguous()
dma-contiguous: Use fallback alloc_pages for single pages
include/linux/dma-contiguous.h | 10 ++
kernel/dma/contiguous.c | 57 ++
kernel/dma/direct.c
_from_contiguous() might be
complicated, this patch just implements these two new functions in
kernel/dma/direct.c as an initial step.
Suggested-by: Christoph Hellwig
Signed-off-by: Nicolin Chen
---
Changelog
v1->v2:
* Added new functions beside the old ones so we can replace callers
l CMA area in case a
device does not have its own CMA area. This'd save resources from
the global CMA area for more CMA allocations, and also reduce CMA
fragmentation resulting from trivial allocations.
Signed-off-by: Nicolin Chen
---
kernel/dma/contiguous.c | 11 ++-
1 f
On Wed, May 08, 2019 at 02:52:54PM +0200, Christoph Hellwig wrote:
> modulo a trivial comment typo I found this looks fine to me. I plan
> to apply it with that fixed up around -rc2 time when I open the
> dma mapping tree opens for the the 5.3 merge window, unless someone
> finds an issue until th
On Thu, May 23, 2019 at 08:59:30PM -0600, dann frazier wrote:
> > > diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
> > > index b2a87905846d..21f39a6cb04f 100644
> > > --- a/kernel/dma/contiguous.c
> > > +++ b/kernel/dma/contiguous.c
> > > @@ -214,6 +214,54 @@ bool dma_release_from_c
l CMA area in case a
device does not have its own CMA area. This'd save resources from
the global CMA area for more CMA allocations, and also reduce CMA
fragmentation resulting from trivial allocations.
Signed-off-by: Nicolin Chen
---
kernel/dma/contiguous.c | 11 ++-
1 f
_from_contiguous() might be
complicated, this patch just implements these two new functions in
kernel/dma/direct.c as an initial step.
Suggested-by: Christoph Hellwig
Signed-off-by: Nicolin Chen
---
Changelog
v2->v3:
* Added missing "static inline" in header file to fix buil
changelog.
Nicolin Chen (2):
dma-contiguous: Abstract dma_{alloc,free}_contiguous()
dma-contiguous: Use fallback alloc_pages for single pages
include/linux/dma-contiguous.h | 11 +++
kernel/dma/contiguous.c | 57 ++
kernel/dma/direct.c
Hi Ira,
On Fri, May 24, 2019 at 09:16:19AM -0700, Ira Weiny wrote:
> On Thu, May 23, 2019 at 09:06:33PM -0700, Nicolin Chen wrote:
> > The addresses within a single page are always contiguous, so it's
> > not so necessary to always allocate one single page from CMA area.
>
Hi Nathan,
On Wed, May 29, 2019 at 11:35:46AM -0700, Nathan Chancellor wrote:
> This commit is causing boot failures in QEMU on x86_64 defconfig:
>
> https://travis-ci.com/ClangBuiltLinux/continuous-integration/jobs/203825363
>
> Attached is a bisect log and a boot log with GCC (just to show it
On Tue, May 28, 2019 at 08:04:24AM +0200, Christoph Hellwig wrote:
> Thanks,
>
> applied to dma-mapping for-next.
>
> Can you also send a conversion of drivers/iommu/dma-iommu.c to your
> new helpers against this tree?
>
> http://git.infradead.org/users/hch/dma-mapping.git/shortlog/refs/heads/fo
the rootfs from the below link:
https://github.com/ClangBuiltLinux/continuous-integration/raw/master/images/x86_64/rootfs.ext4
Fixes: fdaeec198ada ("dma-contiguous: add dma_{alloc,free}_contiguous() helpers")
Reported-by: Nathan Chancellor
Signed-off-by: Nicolin Chen
---
include/lin
This patch replaces dma_{alloc,release}_from_contiguous() with
dma_{alloc,free}_contiguous() to simplify those function calls.
Signed-off-by: Nicolin Chen
---
drivers/iommu/dma-iommu.c | 14 --
1 file changed, 4 insertions(+), 10 deletions(-)
diff --git a/drivers/iommu/dma-iommu.c
s patch just casts the cur_len in the calculation to a
size_t type to fix the overflow issue, as it's not necessary
to change the type of cur_len after all.
Fixes: 809eac54cdd6 ("iommu/dma: Implement scatterlist segment merging")
Cc: sta...@vger.kernel.org
Signed-off-by: Nicolin Chen
-
ch just casts the cur_len in the calculation to a
> > > size_t type to fix the overflow issue, as it's not necessary
> > > to change the type of cur_len after all.
> > >
> > > Fixes: 809eac54cdd6 ("iommu/dma: Implement scatterlist segment merging")
>
already assume that any single segment must be no longer than
> max_len to begin with, this can easily be addressed by reshuffling the
> comparison.
>
> Fixes: 809eac54cdd6 ("iommu/dma: Implement scatterlist segment merging")
> Reported-by: Nicolin Chen
> Signed-off-b
aw the conversation there. Sorry for not replying yet.
May we discuss there since there are full logs available?
Thanks
Nicolin
>
>
> On Fri, 24 May 2019 at 01:08, Nicolin Chen wrote:
> >
> > Both dma_alloc_from_contiguous() and dma_release_from_contiguous()
> > are
Sorry to ping this but it's been a while.
Robin, did you get a chance to resend your version?
Thanks
Nicolin
On Tue, Jul 02, 2019 at 02:04:01PM -0700, Nicolin Chen wrote:
> On Tue, Jul 02, 2019 at 11:40:02AM +0100, Robin Murphy wrote:
> > On reflection, I don't really thi
.
This patch adds a cma_align to take care of cma_alloc() and prevent
the align from being overwritten.
Fixes: fdaeec198ada ("dma-contiguous: add dma_{alloc,free}_contiguous() helpers")
Reported-by: Dafna Hirschfeld
Signed-off-by: Nicolin Chen
---
kernel/dma/contiguous.c | 9 +--
() was page
aligned before the right-shifting operation, while the new API
dma_free_contiguous() forgets to PAGE_ALIGN() the size.
So this patch simply adds it to prevent any corner case.
Fixes: fdaeec198ada ("dma-contiguous: add dma_{alloc,free}_contiguous() helpers")
Signed-off-
There are two obvious bugs in these two functions, so this series
has two patches to fix them.
Nicolin Chen (2):
dma-contiguous: do not overwrite align in dma_alloc_contiguous()
dma-contiguous: page-align the size in dma_free_contiguous()
kernel/dma/contiguous.c | 12 +++-
1 file changed, 7
On Thu, Jul 25, 2019 at 07:31:05PM +0200, Dafna Hirschfeld wrote:
> On Thu, 2019-07-25 at 09:50 -0700, Nicolin Chen wrote:
> > On Thu, Jul 25, 2019 at 01:06:42PM -0300, Ezequiel Garcia wrote:
> > > I can't find a way to forward-redirect from Gmail, so I'm Ccing Dafna
On Fri, Jul 26, 2019 at 08:28:49AM +0200, Christoph Hellwig wrote:
> On Thu, Jul 25, 2019 at 04:39:58PM -0700, Nicolin Chen wrote:
> > The dma_alloc_contiguous() limits align at CONFIG_CMA_ALIGNMENT for
> > cma_alloc() however it does not restore it for the fallback routine.
>
There are two obvious bugs in these two functions, so this series
has two patches to fix them.
Changelog
v1->v2:
* PATCH-1: Confine cma_align inside the if-condition.
* PATCH-1: Updated commit message to be precise for the corner case.
* PATCH-2: Added Reviewed-by from Christoph.
Nicolin Chen
() was page
aligned before the right-shifting operation, while the new API
dma_free_contiguous() forgets to PAGE_ALIGN() the size.
So this patch simply adds it to prevent any corner case.
Fixes: fdaeec198ada ("dma-contiguous: add dma_{alloc,free}_contiguous() helpers")
Signed-off-
CONFIG_CMA_ALIGNMENT.
This patch adds a cma_align to take care of cma_alloc() and prevent
the align from being overwritten.
Fixes: fdaeec198ada ("dma-contiguous: add dma_{alloc,free}_contiguous() helpers")
Reported-by: Dafna Hirschfeld
Signed-off-by: Nicolin Chen
---
kernel/dma/co
Hi Robin,
On Tue, Aug 06, 2019 at 04:49:01PM +0100, Robin Murphy wrote:
> Hi Joerg,
>
> On 06/08/2019 16:25, Joerg Roedel wrote:
> > Hi Robin,
> >
> > On Mon, Jul 29, 2019 at 05:46:00PM +0100, Robin Murphy wrote:
> > > Since scatterlist dimensions are all unsigned ints, in the relatively
> > > r
ses[0])
> + nvidia_smmu->bases[0] = smmu->base;
> +
> + return nvidia_smmu->bases[inst] + (page << smmu->pgshift);
> +}
Not critical -- just a nit: why not put the bases[0] in init()?
Everything else looks good to me:
Reviewed-by: Nicolin Chen
On Sun, Jun 28, 2020 at 07:28:38PM -0700, Krishna Reddy wrote:
> Add global/context fault hooks to allow NVIDIA SMMU implementation
> handle faults across multiple SMMUs.
>
> Signed-off-by: Krishna Reddy
> +static irqreturn_t nvidia_smmu_global_fault_inst(int irq,
> +
On Mon, Jun 29, 2020 at 10:49:31PM +, Krishna Reddy wrote:
> >> + if (!nvidia_smmu->bases[0])
> >> + nvidia_smmu->bases[0] = smmu->base;
> >> +
> >> + return nvidia_smmu->bases[inst] + (page << smmu->pgshift); }
>
> >Not critical -- just a nit: why not put the bases[0] in i
gt;
> Signed-off-by: Krishna Reddy
Reviewed-by: Nicolin Chen
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
On Mon, Jun 29, 2020 at 05:10:51PM -0700, Krishna Reddy wrote:
> Add global/context fault hooks to allow NVIDIA SMMU implementation
> handle faults across multiple SMMUs.
>
> Signed-off-by: Krishna Reddy
Reviewed-by: Nicolin Chen
lers and handle interrupts across the two ARM MMU-500s that
> are programmed identically.
>
> Signed-off-by: Krishna Reddy
Reviewed-by: Nicolin Chen
On Tue, Jul 07, 2020 at 10:00:13PM -0700, Krishna Reddy wrote:
> Move TLB timeout and spin count macros to header file to
> allow using the same from vendor specific implementations.
>
> Signed-off-by: Krishna Reddy
Reviewed-by:
mming the two ARM MMU-500s
> that must be programmed identically.
>
> The third ARM MMU-500 instance is supported by standard
> arm-smmu.c driver itself.
>
> Signed-off-by: Krishna Reddy
Reviewed-by: Nicolin Chen