On Tue, Nov 28, 2017 at 11:34:42AM -0800, Maran Wilson wrote:
> This PoC patch enables Qemu to use that same entry point for booting KVM
> guests.
Nice. I do a lot of -kernel boots in qemu/kvm for testing, and
speeding this up further would be great.
On Fri, Dec 08, 2017 at 02:24:24PM -0600, Bjorn Helgaas wrote:
> I'd rather change pcie_flr() so you could *always* call it, and it
> would return 0, -ENOTTY, or whatever, based on whether FLR is
> supported. Is that feasible?
>
> I don't like the "Can I do this? Ok, do this" style of interfaces.
On Wed, Dec 13, 2017 at 03:24:21PM -0600, Bjorn Helgaas wrote:
> Prior to a60a2b73ba69, we had
>
> int pcie_flr(struct pci_dev *dev, int probe);
>
> like all the other reset methods. AFAICT, the addition of
> pcie_has_flr() was to optimize the path slightly because when drivers
> call pcie_flr
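The two interface styles under discussion can be contrasted in a small toy sketch. Everything below is invented for illustration (the `fake_` names, the `supports_flr` flag); the real `pcie_flr()` / `pcie_has_flr()` code in the PCI core is more involved:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Toy device; the 'supports_flr' flag stands in for the real
 * capability check in the PCI core. */
struct fake_dev { bool supports_flr; };

/* "Can I do this? Ok, do this" style: a separate capability query
 * that callers must remember to pair with the actual reset. */
static bool fake_has_flr(const struct fake_dev *dev)
{
        return dev->supports_flr;
}

/* Single-call style with a probe argument, as used by the other
 * reset methods: always callable, returns -ENOTTY when FLR is
 * unsupported, 0 on success. */
static int fake_flr(struct fake_dev *dev, int probe)
{
        if (!dev->supports_flr)
                return -ENOTTY;
        if (probe)              /* only asking whether it would work */
                return 0;
        /* ... perform the actual function-level reset here ... */
        return 0;
}
```

The second style lets every caller use one function unconditionally and act on the return value, which is the interface Bjorn argues for above.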
Use the dma-noncoherent dev_is_dma_coherent helper instead of the
home-grown variant. Note that both are always initialized to the same
value in arch_setup_dma_ops.
Signed-off-by: Christoph Hellwig
Reviewed-by: Julien Grall
Reviewed-by: Stefano Stabellini
---
arch/arm/include/asm/dma
Hi Xen maintainers and friends,
please take a look at this series that cleans up the parts of swiotlb-xen
that deal with non-coherent caches.
Boris and Juergen, can you take a look at patch 8, which touches x86
as well?
Changes since v2:
- further dma_cache_maint improvements
- split the pre
There is no need to wrap the common version, just wire them up directly.
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
---
drivers/xen/swiotlb-xen.c | 29 ++---
1 file changed, 2 insertions(+), 27 deletions(-)
diff --git a/drivers/xen/swiotlb-xen.c b
Copy the arm64 code that uses the dma-direct/swiotlb helpers for DMA
on non-coherent devices.
Signed-off-by: Christoph Hellwig
---
arch/arm/include/asm/device.h| 3 -
arch/arm/include/asm/xen/page-coherent.h | 72 +---
arch/arm/mm/dma-mapping.c| 8
Share the duplicate arm/arm64 code in include/xen/arm/page-coherent.h.
Signed-off-by: Christoph Hellwig
---
arch/arm/include/asm/xen/page-coherent.h | 75
arch/arm64/include/asm/xen/page-coherent.h | 75
include/xen/arm/page-coherent.h
arm and arm64 can just use xen_swiotlb_dma_ops directly like x86, no
need for a pointer indirection.
Signed-off-by: Christoph Hellwig
Reviewed-by: Julien Grall
Reviewed-by: Stefano Stabellini
---
arch/arm/mm/dma-mapping.c| 3 ++-
arch/arm/xen/mm.c| 4
arch/arm64/mm/dma
x86 currently calls alloc_pages, but using dma-direct works as well
there, with the added benefit of using the CMA pool if available.
The biggest advantage is of course to remove a pointless bit of
architecture specific code.
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
Calculate the required operation in the caller, and pass it directly
instead of recalculating it for each page, and use simple arithmetic
to get from the physical address to Xen-page-size aligned chunks.
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
---
arch/arm/xen/mm.c
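The chunking arithmetic that commit message describes can be sketched in plain userspace C. The `XEN_PAGE_SIZE` value, the callback type, and the chunk-count return value are all assumptions for illustration, not the kernel code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Userspace sketch: the caller picks the cache operation once, and
 * the loop walks the physical range in Xen-page-sized pieces using
 * plain mask arithmetic. */
#define XEN_PAGE_SIZE 4096u

typedef void (*maint_op_t)(uintptr_t paddr, size_t len);

static unsigned int dma_cache_maint_sketch(uintptr_t paddr, size_t size,
                                           maint_op_t op)
{
        unsigned int chunks = 0;

        while (size) {
                /* bytes from paddr to the end of its Xen page */
                size_t len = XEN_PAGE_SIZE - (paddr & (XEN_PAGE_SIZE - 1));

                if (len > size)
                        len = size;
                if (op)
                        op(paddr, len); /* op was chosen by the caller */
                paddr += len;
                size -= len;
                chunks++;
        }
        return chunks;
}
```

A range starting mid-page and crossing one Xen page boundary yields two chunks; a page-aligned, page-sized range yields exactly one.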
These routines are only used by swiotlb-xen, which cannot be modular.
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
---
arch/arm/xen/mm.c | 2 --
arch/x86/xen/mmu_pv.c | 2 --
2 files changed, 4 deletions(-)
diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index
xen_dma_map_page uses a different and more complicated check for foreign
pages than the other three cache maintenance helpers. Switch it to the
simpler pfn_valid method as well, and document the scheme with a single
improved comment in xen_dma_map_page.
Signed-off-by: Christoph Hellwig
Reviewed
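The pfn_valid-style distinction between local and foreign pages can be modelled with a toy range check. The RAM window and the 4 KiB page size below are assumptions for the example; in the kernel, pfn_valid consults the real memory map:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy model: a pfn is "local" if it falls inside this domain's own
 * RAM window; anything else is treated as foreign (e.g. a page
 * grant-mapped from another Xen domain). */
#define RAM_START_PFN  0x80000u
#define RAM_END_PFN    0xc0000u

static bool sketch_pfn_valid(uint64_t pfn)
{
        return pfn >= RAM_START_PFN && pfn < RAM_END_PFN;
}

static bool sketch_page_is_foreign(uint64_t paddr)
{
        return !sketch_pfn_valid(paddr >> 12);  /* 4 KiB pages assumed */
}
```

Foreign pages then take the Xen-specific cache maintenance path, while local pages use the ordinary one.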
for the local cache
maintenance. The pfn_valid checks remain on the dma address as in
the old code, even if that looks a little funny.
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
---
arch/arm/xen/mm.c| 64 ++
arch/x86/include/asm
The only thing left of page-coherent.h is two functions implemented by
the architecture for non-coherent DMA support that are never called for
fully coherent architectures. Just move the prototypes for those to
swiotlb-xen.h instead.
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano
No need for a no-op wrapper.
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
---
drivers/xen/swiotlb-xen.c | 15 ---
1 file changed, 4 insertions(+), 11 deletions(-)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 95911ff9c11c
Now that the Xen special cases are gone nothing worth mentioning is
left in the arm64 file, so switch to use the
asm-generic version instead.
Signed-off-by: Christoph Hellwig
Acked-by: Will Deacon
Reviewed-by: Stefano Stabellini
---
arch/arm64/include/asm/Kbuild| 1 +
arch/arm64
On Fri, Aug 30, 2019 at 07:40:42PM -0700, Stefano Stabellini wrote:
> + Juergen, Boris
>
> On Fri, 30 Aug 2019, Christoph Hellwig wrote:
> > Can we take a step back and figure out what we want to do here?
> >
> > AFAICS this function allocates memory for the swiot
On Tue, Sep 03, 2019 at 06:25:54PM -0400, Boris Ostrovsky wrote:
> > If I am reading __dma_direct_alloc_pages() correctly there is a path
> > that will force us to use GFP_DMA32 and as Juergen pointed out in
> > another message that may not be desirable.
Yes, it will add GFP_DMA32. So I guess for
Hi Xen maintainers and friends,
please take a look at this series that cleans up the parts of swiotlb-xen
that deal with non-coherent caches.
Changes since v3:
- don't use dma_direct_alloc on x86
Changes since v2:
- further dma_cache_maint improvements
- split the previous patch 1 into 3 pat
Copy the arm64 code that uses the dma-direct/swiotlb helpers for DMA
on non-coherent devices.
Signed-off-by: Christoph Hellwig
---
arch/arm/include/asm/device.h| 3 -
arch/arm/include/asm/xen/page-coherent.h | 72 +---
arch/arm/mm/dma-mapping.c| 8
xen_dma_map_page uses a different and more complicated check for foreign
pages than the other three cache maintenance helpers. Switch it to the
simpler pfn_valid method as well, and document the scheme with a single
improved comment in xen_dma_map_page.
Signed-off-by: Christoph Hellwig
Reviewed
Use the dma-noncoherent dev_is_dma_coherent helper instead of the
home-grown variant. Note that both are always initialized to the same
value in arch_setup_dma_ops.
Signed-off-by: Christoph Hellwig
Reviewed-by: Julien Grall
Reviewed-by: Stefano Stabellini
---
arch/arm/include/asm/dma
There is no need to wrap the common version, just wire them up directly.
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
---
drivers/xen/swiotlb-xen.c | 29 ++---
1 file changed, 2 insertions(+), 27 deletions(-)
diff --git a/drivers/xen/swiotlb-xen.c b
Calculate the required operation in the caller, and pass it directly
instead of recalculating it for each page, and use simple arithmetic
to get from the physical address to Xen-page-size aligned chunks.
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
---
arch/arm/xen/mm.c
arm and arm64 can just use xen_swiotlb_dma_ops directly like x86, no
need for a pointer indirection.
Signed-off-by: Christoph Hellwig
Reviewed-by: Julien Grall
Reviewed-by: Stefano Stabellini
---
arch/arm/mm/dma-mapping.c| 3 ++-
arch/arm/xen/mm.c| 4
arch/arm64/mm/dma
Share the duplicate arm/arm64 code in include/xen/arm/page-coherent.h.
Signed-off-by: Christoph Hellwig
---
arch/arm/include/asm/xen/page-coherent.h | 75
arch/arm64/include/asm/xen/page-coherent.h | 75
include/xen/arm/page-coherent.h
for the local cache
maintenance. The pfn_valid checks remain on the dma address as in
the old code, even if that looks a little funny.
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
---
arch/arm/xen/mm.c| 64 +++-
arch/x86/include
Now that the Xen special cases are gone nothing worth mentioning is
left in the arm64 file, so switch to use the
asm-generic version instead.
Signed-off-by: Christoph Hellwig
Acked-by: Will Deacon
Reviewed-by: Stefano Stabellini
---
arch/arm64/include/asm/Kbuild| 1 +
arch/arm64
These routines are only used by swiotlb-xen, which cannot be modular.
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
---
arch/arm/xen/mm.c | 2 --
arch/x86/xen/mmu_pv.c | 2 --
2 files changed, 4 deletions(-)
diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index
No need for a no-op wrapper.
Signed-off-by: Christoph Hellwig
Reviewed-by: Stefano Stabellini
---
drivers/xen/swiotlb-xen.c | 15 ---
1 file changed, 4 insertions(+), 11 deletions(-)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index f81031f0c1c7
On Fri, Sep 06, 2019 at 09:52:12AM -0400, Boris Ostrovsky wrote:
> We need nop definitions of these two for x86.
>
> Everything builds now but that's probably because the calls are under
> 'if (!dev_is_dma_coherent(dev))' which is always false so compiler
> optimized is out. I don't think we shoul
Applied to the dma-mapping tree.
___
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
y Shevchenko (1):
dma-mapping: fix filename references
Christoph Hellwig (34):
unicore32: remove the unused pgprot_dmacoherent define
arm-nommu: remove the unused pgprot_dmacoherent define
dma-mapping: remove arch_dma_mmap_pgprot
dma-mapping: make dma_atomic_pool_init
Please don't add your private flag to page-flags.h. The whole point of
the private flag is that you can use it in any way you want without
touching the common code.
On Wed, Jan 16, 2019 at 07:30:02AM +0100, Gerd Hoffmann wrote:
> Hi,
>
> > + if (!dma_map_sg(dev->dev, xen_obj->sgt->sgl, xen_obj->sgt->nents,
> > + DMA_BIDIRECTIONAL)) {
> > + ret = -EFAULT;
> > + goto fail_free_sgt;
> > + }
>
> Hmm, so it seems the ar
Does this fix your problem?
diff --git a/arch/arm/include/asm/xen/page-coherent.h
b/arch/arm/include/asm/xen/page-coherent.h
index b3ef061d8b74..2c403e7c782d 100644
--- a/arch/arm/include/asm/xen/page-coherent.h
+++ b/arch/arm/include/asm/xen/page-coherent.h
@@ -1 +1,95 @@
+/* SPDX-License-Identi
On Wed, Jan 16, 2019 at 06:43:29AM +, Oleksandr Andrushchenko wrote:
> > This whole issue keeps getting more and more confusing.
> Well, I don't really do DMA here, but instead the buffers in
> question are shared with other Xen domain, so effectively it
> could be thought of some sort of DMA h
On Thu, Jan 17, 2019 at 11:43:49AM +, Julien Grall wrote:
> Looking at the change for arm64, you will always call dma-direct API. In
> previous Linux version, xen-swiotlb will call dev->archdata.dev_dma_ops (a
> copy of dev->dma_ops before setting Xen DMA ops) if not NULL. Does it mean
> we exp
well to fix this problem.
Fixes: 356da6d0cd ("dma-mapping: bypass indirect calls for dma-direct")
Reported-by: Julien Grall
Signed-off-by: Christoph Hellwig
---
arch/arm/include/asm/xen/page-coherent.h | 94 +
arch/arm64/include/asm/device.h| 3 -
[full quote deleted, please take a little more care when quoting]
On Fri, Jan 18, 2019 at 04:44:23PM -0800, Stefano Stabellini wrote:
> > #ifdef CONFIG_XEN
> > - if (xen_initial_domain()) {
> > - dev->archdata.dev_dma_ops = dev->dma_ops;
> > + if (xen_initial_domain())
> >
On Mon, Jan 21, 2019 at 03:56:29PM -0800, Stefano Stabellini wrote:
> > Where should we pick this up? I could pick it up through the dma-mapping
> > tree given that is where the problem is introduced, but the Xen or arm64
> > trees would also fit.
>
> I am happy for you to carry it in the dma-map
On Thu, Jan 31, 2019 at 01:44:15PM -0800, Stefano Stabellini wrote:
> The alternative would be to turn xenmem_reservation_scrub_page into a
> regular function (not a static inline)?
All that is a moot point until the currently out-of-tree module in
question gets submitted for inclusion anyway.
On Fri, Feb 01, 2019 at 08:38:43AM +, Oleksandr Andrushchenko wrote:
> On 2/1/19 10:27 AM, Christoph Hellwig wrote:
> > On Thu, Jan 31, 2019 at 01:44:15PM -0800, Stefano Stabellini wrote:
> >> The alternative would be to turn xenmem_reservation_scrub_page into a
> >&
Hi Xen maintainers and friends,
please take a look at this series that cleans up the parts of swiotlb-xen
that deal with non-coherent caches.
Use the dma-noncoherent dev_is_dma_coherent helper instead of the
home-grown variant.
Signed-off-by: Christoph Hellwig
---
arch/arm/include/asm/dma-mapping.h | 6 --
arch/arm/xen/mm.c| 12 ++--
arch/arm64/include/asm/dma-mapping.h | 9 -
3 files
Instead of taking apart the dma address in both callers do it inside
dma_cache_maint itself.
Signed-off-by: Christoph Hellwig
---
arch/arm/xen/mm.c | 10 ++
1 file changed, 6 insertions(+), 4 deletions(-)
diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index 90574d89d0d4
No need for a no-op wrapper.
Signed-off-by: Christoph Hellwig
---
drivers/xen/swiotlb-xen.c | 15 ---
1 file changed, 4 insertions(+), 11 deletions(-)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index c3c383033ae4..b6b9c4c1b397 100644
--- a/drivers/xen
Reuse the arm64 code that uses the dma-direct/swiotlb helpers for DMA
non-coherent devices.
Signed-off-by: Christoph Hellwig
---
arch/arm/Kconfig | 4 +
arch/arm/include/asm/device.h | 3 -
arch/arm/include/asm/xen/page-coherent.h | 93
arm and arm64 can just use xen_swiotlb_dma_ops directly like x86, no
need for a pointer indirection.
Signed-off-by: Christoph Hellwig
---
arch/arm/mm/dma-mapping.c| 3 ++-
arch/arm/xen/mm.c| 4
arch/arm64/mm/dma-mapping.c | 3 ++-
include/xen/arm/hypervisor.h | 2 --
4
xen_dma_map_page uses a different and more complicated check for
foreign pages than the other three cache maintenance helpers.
Switch it to the simpler pfn_valid method as well.
Signed-off-by: Christoph Hellwig
---
include/xen/page-coherent.h | 9 ++---
1 file changed, 2 insertions(+), 7
x86 currently calls alloc_pages, but using dma-direct works as well
there, with the added benefit of using the CMA pool if available.
The biggest advantage is of course to remove a pointless bit of
architecture specific code.
Signed-off-by: Christoph Hellwig
---
arch/x86/include/asm/xen/page
Merge the various page-coherent.h files into a single one that either
provides prototypes or stubs depending on the need for cache
maintenance. For extra benefit, also include it in the file
actually implementing the interfaces provided.
Signed-off-by: Christoph Hellwig
---
arch/arm/include/asm
These routines are only used by swiotlb-xen, which cannot be modular.
Signed-off-by: Christoph Hellwig
---
arch/arm/xen/mm.c | 2 --
arch/x86/xen/mmu_pv.c | 2 --
2 files changed, 4 deletions(-)
diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index 388a45002bad..a59980f1aa54 100644
the value returned from it. Instead we now have
Xen wrappers for the arch_sync_dma_for_{device,cpu} helpers that call
the special Xen versions of those routines for foreign pages.
Signed-off-by: Christoph Hellwig
---
arch/arm/xen/mm.c | 47 ++---
drivers/xen/swiotlb-xen.c
Now that the Xen special cases are gone nothing worth mentioning is
left in the arm64 file, so switch to use the
asm-generic version instead.
Signed-off-by: Christoph Hellwig
---
arch/arm64/include/asm/Kbuild| 1 +
arch/arm64/include/asm/dma-mapping.h | 22 --
arch
On Fri, Aug 16, 2019 at 02:37:58PM +0100, Robin Murphy wrote:
> On 16/08/2019 14:00, Christoph Hellwig wrote:
>> Instead of taking apart the dma address in both callers do it inside
>> dma_cache_maint itself.
>>
>> Signed-off-by: Christoph Hellwig
>>
On Fri, Aug 16, 2019 at 11:40:43PM +0100, Julien Grall wrote:
> I am not sure I agree with this rename. The implementation of the helpers
> are very Arm specific as this is assuming Dom0 is 1:1 mapped.
>
> This was necessary due to the lack of IOMMU on Arm platforms back then.
> But this is now a
On Mon, Aug 19, 2019 at 12:45:17PM +0100, Julien Grall wrote:
> On 8/16/19 2:00 PM, Christoph Hellwig wrote:
>> +static inline void xen_dma_map_page(struct device *hwdev, struct page *page,
>> + dma_addr_t dev_addr, unsigned long offset, size_t size,
>> + en
Use the dma-noncoherent dev_is_dma_coherent helper instead of the
home-grown variant. Note that both are always initialized to the same
value in arch_setup_dma_ops.
Signed-off-by: Christoph Hellwig
Reviewed-by: Julien Grall
---
arch/arm/include/asm/dma-mapping.h | 6 --
arch/arm/xen
Reuse the arm64 code that uses the dma-direct/swiotlb helpers for DMA
non-coherent devices.
Signed-off-by: Christoph Hellwig
---
arch/arm/include/asm/device.h | 3 -
arch/arm/include/asm/xen/page-coherent.h | 93 --
arch/arm/mm/dma-mapping.c
Hi Xen maintainers and friends,
please take a look at this series that cleans up the parts of swiotlb-xen
that deal with non-coherent caches.
Changes since v1:
- rewrite dma_cache_maint to be much simpler
- improve various comments and commit logs
- remove page-coherent.h entirely
arm and arm64 can just use xen_swiotlb_dma_ops directly like x86, no
need for a pointer indirection.
Signed-off-by: Christoph Hellwig
Reviewed-by: Julien Grall
---
arch/arm/mm/dma-mapping.c| 3 ++-
arch/arm/xen/mm.c| 4
arch/arm64/mm/dma-mapping.c | 3 ++-
include/xen/arm
Calculate the required operation in the caller, and pass it directly
instead of recalculating it for each page, and use simple arithmetic
to get from the physical address to Xen-page-size aligned chunks.
Signed-off-by: Christoph Hellwig
---
arch/arm/xen/mm.c | 62
The only thing left of page-coherent.h is two functions implemented by
the architecture for non-coherent DMA support that are never called for
fully coherent architectures. Just move the prototypes for those to
swiotlb-xen.h instead.
Signed-off-by: Christoph Hellwig
---
arch/arm/include/asm
for the local cache
maintenance. The pfn_valid checks remain on the dma address as in
the old code, even if that looks a little funny.
Signed-off-by: Christoph Hellwig
---
arch/arm/xen/mm.c| 64 ++
arch/x86/include/asm/xen/page-coherent.h | 11
x86 currently calls alloc_pages, but using dma-direct works as well
there, with the added benefit of using the CMA pool if available.
The biggest advantage is of course to remove a pointless bit of
architecture specific code.
Signed-off-by: Christoph Hellwig
---
arch/x86/include/asm/xen/page
Now that the Xen special cases are gone nothing worth mentioning is
left in the arm64 file, so switch to use the
asm-generic version instead.
Signed-off-by: Christoph Hellwig
Acked-by: Will Deacon
---
arch/arm64/include/asm/Kbuild| 1 +
arch/arm64/include/asm/dma-mapping.h | 22
xen_dma_map_page uses a different and more complicated check for foreign
pages than the other three cache maintenance helpers. Switch it to the
simpler pfn_valid method as well, and document the scheme with a single
improved comment in xen_dma_map_page.
Signed-off-by: Christoph Hellwig
These routines are only used by swiotlb-xen, which cannot be modular.
Signed-off-by: Christoph Hellwig
---
arch/arm/xen/mm.c | 2 --
arch/x86/xen/mmu_pv.c | 2 --
2 files changed, 4 deletions(-)
diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index 9b3a6c0ca681..b7d53415532b 100644
No need for a no-op wrapper.
Signed-off-by: Christoph Hellwig
---
drivers/xen/swiotlb-xen.c | 15 ---
1 file changed, 4 insertions(+), 11 deletions(-)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 95911ff9c11c..384304a77020 100644
--- a/drivers/xen
On Mon, Aug 26, 2019 at 07:00:44PM -0700, Stefano Stabellini wrote:
> On Fri, 16 Aug 2019, Christoph Hellwig wrote:
> > Hi Xen maintainers and friends,
> >
> > please take a look at this series that cleans up the parts of swiotlb-xen
> > that deal with non-coheren
And this was still buggy, I think; it really needs some real Xen/Arm
testing, which I can't do. Hopefully a better version is below:
--
From 5ad4b6e291dbb49f65480c9b769414931cbd485a Mon Sep 17 00:00:00 2001
From: Christoph Hellwig
Date: Wed, 24 Jul 2019 15:26:08 +0200
Subject: xen/arm: sim
Can we take a step back and figure out what we want to do here?
AFAICS this function allocates memory for the swiotlb-xen buffer,
and that means it must be <= 32-bit addressable to satisfy the DMA API
guarantees. That means we generally want to use GFP_DMA32 everywhere
that exists, but on systems
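The 32-bit addressability constraint mentioned above amounts to simple arithmetic. A minimal userspace sketch (the function name is invented; overflow of `paddr + size` is ignored for the example):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A bounce buffer usable under a 32-bit DMA mask must sit entirely
 * below the 4 GiB boundary, which is what GFP_DMA32 guarantees on
 * architectures that provide it. */
static bool fits_32bit_dma(uint64_t paddr, uint64_t size)
{
        return paddr + size <= ((uint64_t)1 << 32);
}
```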
On Thu, Feb 14, 2019 at 07:03:38AM +0100, Juergen Gross wrote:
> > The thing which is different between Xen PV guests and most others (all
> > others(?), now that Lguest and UML have been dropped) is that what Linux
> > thinks of as PFN $N isn't necessarily adjacent to PFN $N+1 in system
> > physic
On Fri, Feb 15, 2019 at 11:07:22AM -0500, Michael Labriola wrote:
> > > But the latter text seems to agree with that. So what is the actual
> > > problem that started this discussion?
> > >
> >
> > See https://lists.xen.org/archives/html/xen-devel/2019-02/threads.html#00818
>
> I believe the actu
On Tue, Feb 19, 2019 at 09:30:33PM -0800, ira.we...@intel.com wrote:
> From: Ira Weiny
>
> Resending these as I had only 1 minor comment which I believe we have covered
> in this series. I was anticipating these going through the mm tree as they
> depend on a cleanup patch there and the IB chang
On Tue, Mar 05, 2019 at 09:41:46AM +, Julien Grall wrote:
> On Xen, dma_addr_t will always be 64-bit while the phys_addr_t will depend
> on the MMU type. So we may have phys_addr_t smaller than dma_addr_t from
> the kernel point of view.
How can dma_addr_t on arm have value > 32-bit when phy
On Sat, Mar 09, 2019 at 09:37:32AM +0800, Ming Lei wrote:
> xen_biovec_phys_mergeable() only needs .bv_page of the 2nd bio bvec
> for checking if the two bvecs can be merged, so pass page to
> xen_biovec_phys_mergeable() directly.
Looks fine:
Reviewed-by: Christop
On Sat, Mar 09, 2019 at 09:37:33AM +0800, Ming Lei wrote:
> For normal filesystem IO, each page is added via blk_add_page(),
> in which bvec(page) merge has been handled already, and basically
> not possible to merge two adjacent bvecs in one bio.
>
> So not try to merge two adjacent bvecs in blk_
On Fri, Mar 08, 2019 at 05:25:57PM +, Julien Grall wrote:
> In the common case, Dom0 also contains the PV backend drivers. Those
> drivers may directly use the guest buffer in the DMA request (so a copy is
> avoided). To avoid using a bounce buffer too much, xen-swiotlb will find
> the host
-xen implementation
details.
Signed-off-by: Christoph Hellwig
---
drivers/xen/swiotlb-xen.c | 84 +--
1 file changed, 28 insertions(+), 56 deletions(-)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 97a55c225593..9a951504dc12 100644
Hi all,
below are a couple of cleanups for swiotlb-xen.c. They were done in
preparation of eventually using the dma-noncoherent.h cache flushing
hooks, but that final goal will need some major work to the arm32 code
first. Until then I think these patches might be better in mainline
than in my l
We can simply loop over the segments and map them, removing lots of
duplicate code.
Signed-off-by: Christoph Hellwig
---
drivers/xen/swiotlb-xen.c | 68 ++-
1 file changed, 10 insertions(+), 58 deletions(-)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers
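The single-loop structure that replaces the duplicated per-segment code might look like this toy model. The types and the identity map function are invented for the example; the real code operates on `struct scatterlist` via `for_each_sg`:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Minimal stand-in for a scatterlist entry. */
struct sketch_sg { uint64_t paddr; size_t len; uint64_t dma; };

static uint64_t sketch_map_one(uint64_t paddr, size_t len)
{
        (void)len;
        return paddr;           /* identity-mapped for the example */
}

/* One loop maps every segment; on a real error path the already
 * mapped entries would be unmapped before returning 0. */
static int sketch_map_sg(struct sketch_sg *sg, int nents)
{
        for (int i = 0; i < nents; i++)
                sg[i].dma = sketch_map_one(sg[i].paddr, sg[i].len);
        return nents;           /* number of entries mapped */
}
```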
Just drop two pointless _attrs prefixes to make the code a little
more grep-able.
Signed-off-by: Christoph Hellwig
---
drivers/xen/swiotlb-xen.c | 17 +++--
1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index
Refactor the code a bit to make further changes easier.
Signed-off-by: Christoph Hellwig
---
drivers/xen/swiotlb-xen.c | 31 ---
1 file changed, 16 insertions(+), 15 deletions(-)
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 9a951504dc12
On Thu, Nov 07, 2019 at 08:06:08PM +, Jason Gunthorpe wrote:
> >
> > enum mmu_range_notifier_event {
> > MMU_NOTIFY_RELEASE,
> > };
> >
> > ...assuming that we stay with "mmu_range_notifier" as a core name for this
> > whole thing.
> >
> > Also, it is best moved down to be next to the n
Looks good,
Reviewed-by: Christoph Hellwig
ifier_ops *ops)
Odd indentation - we usually do two tabs (my preference) or align after
the opening brace.
> + * This function must be paired with mmu_interval_notifier_insert(). It
> cannot be
line > 80 chars.
Otherwise this looks good and very similar to what I reviewed earlier:
Reviewed-by: Christoph Hellwig
Looks good,
Reviewed-by: Christoph Hellwig
Looks good:
Reviewed-by: Christoph Hellwig
On Fri, Nov 23, 2018 at 07:55:11AM +0100, Christoph Hellwig wrote:
> On Thu, Nov 22, 2018 at 09:55:25AM -0800, Linus Torvalds wrote:
> > No, the big immediate benefit of allowing "return -EINVAL" etc is
> > simply legibility and error avoidance.
>
> Well, I can
On Wed, Nov 28, 2018 at 11:19:15AM -0800, Linus Torvalds wrote:
> Let me just paste it back in here:
>
> "Which is what we ALREADY do for these exact reasons. If the DMA
> mappings means that you'd need to add one more page to that list of
> reserved pages, then so be it."
>
> So no, I'm not at
On Thu, Nov 29, 2018 at 09:44:05AM -0800, Linus Torvalds wrote:
> No. Really. If there's no iotlb, then you just mark that one page
> reserved. It simply doesn't get used. It doesn't mean you suddenly
> need a swiotlb.
Sure, we could just skip that page entirely based on dma_to_phys.
> But whatev
On Thu, Nov 29, 2018 at 10:53:32AM -0800, Linus Torvalds wrote:
> Most of the high-performance IO is already using SG lists anyway, no?
> Disk/networking/whatever.
Networking basically never uses S/G lists. Block I/O mostly uses them,
and graphics / media seems to have a fair amount of S/G uses, in
Arm already returns (~(dma_addr_t)0x0) on mapping failures, so we can
switch over to returning DMA_MAPPING_ERROR and let the core dma-mapping
code handle the rest.
Signed-off-by: Christoph Hellwig
---
arch/arm/common/dmabounce.c | 12 +++---
arch/arm/include/asm/dma-iommu.h | 2
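The all-ones sentinel pattern the series converges on can be modelled in plain C. The `dma_addr_t` typedef here is a userspace stand-in; in the kernel, `DMA_MAPPING_ERROR` is likewise defined as `~(dma_addr_t)0`:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;    /* stand-in for the kernel type */

/* All-ones is (almost) never a valid bus address, so every
 * implementation can share one generic error check instead of a
 * per-instance mapping_error method. */
#define DMA_MAPPING_ERROR  (~(dma_addr_t)0)

static bool sketch_dma_mapping_error(dma_addr_t addr)
{
        return addr == DMA_MAPPING_ERROR;
}
```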
m the dma_map* routines this value means
they will generally not point to actual memory.
Once the default value is added here we can start removing the
various mapping_error methods and just rely on this generic check.
Signed-off-by: Christoph Hellwig
---
include/linux/dma-mapping.h | 6 ++
1
Error reporting for the dma_map_single and dma_map_page operations is
currently a mess. Both APIs directly return the dma_addr_t to be used for
the DMA, with a magic error escape that is specific to the instance and
checked by another method provided. This has a few downsides:
- the error check
The CCIO iommu code already returns (~(dma_addr_t)0x0) on mapping
failures, so we can switch over to returning DMA_MAPPING_ERROR and let
the core dma-mapping code handle the rest.
Signed-off-by: Christoph Hellwig
---
drivers/parisc/ccio-dma.c | 10 +-
1 file changed, 1 insertion(+), 9