On 2020/10/22 11:54 AM, Liu, Yi L wrote:
Hi Jason,
> From: Jason Wang
> Sent: Thursday, October 22, 2020 10:56 AM
>
> [...]
> If you (Intel) don't plan to do vDPA, you should not prevent other vendors
> from implementing PASID-capable hardware through a non-VFIO subsystem/uAPI
> on top of your SIOV architecture. Isn't it?
yes, that's
On 2020/10/22 1:51 AM, Raj, Ashok wrote:
On Wed, Oct 21, 2020 at 08:48:29AM -0300, Jason Gunthorpe wrote:
On Tue, Oct 20, 2020 at 01:27:13PM -0700, Raj, Ashok wrote:
On Tue, Oct 20, 2020 at 05:14:03PM -0300, Jason Gunthorpe wrote:
On Tue, Oct 20, 2020 at 01:08:44PM -0700, Raj, Ashok wrote:
On
On Wed, Oct 21, 2020 at 08:32:18PM -0300, Jason Gunthorpe wrote:
> On Wed, Oct 21, 2020 at 01:03:15PM -0700, Raj, Ashok wrote:
>
> > I'm not sure why you tie in IDXD and VDPA here. How IDXD uses native
> > SVM is orthogonal to how we achieve mdev passthrough to guest and
> > vSVM.
>
> Everyone as
On Wed, Oct 21, 2020 at 01:03:15PM -0700, Raj, Ashok wrote:
> I'm not sure why you tie in IDXD and VDPA here. How IDXD uses native
> SVM is orthogonal to how we achieve mdev passthrough to guest and
> vSVM.
Everyone assumes that vIOMMU and SIOV (aka PASID) are going to be needed
on the VDPA side as
On 10/21/20 12:18 PM, Arvind Sankar wrote:
> On Wed, Oct 21, 2020 at 05:28:33PM +0200, Daniel Kiper wrote:
>> On Mon, Oct 19, 2020 at 01:18:22PM -0400, Arvind Sankar wrote:
>>> On Mon, Oct 19, 2020 at 04:51:53PM +0200, Daniel Kiper wrote:
>>>> On Fri, Oct 16, 2020 at 04:51:51PM -0400, Arvind Sankar wrote:
On Wed, Oct 21, 2020 at 03:24:42PM -0300, Jason Gunthorpe wrote:
>
> > Contrary to your argument, vDPA went with a half-blown, device-only
> > IOMMU user without considering existing abstractions like containers
>
> VDPA IOMMU was done *for Intel*, as the kind of half-architected thing
> you are
Hello,
syzbot found the following issue on:
HEAD commit: c4d6fe73 Merge tag 'xarray-5.9' of git://git.infradead.org..
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=14862ff050
kernel config: https://syzkaller.appspot.com/x/.config?x=7d790573d3e379c4
das
On Wed, Oct 21, 2020 at 08:48:29AM -0300, Jason Gunthorpe wrote:
> On Tue, Oct 20, 2020 at 01:27:13PM -0700, Raj, Ashok wrote:
> > On Tue, Oct 20, 2020 at 05:14:03PM -0300, Jason Gunthorpe wrote:
> > > On Tue, Oct 20, 2020 at 01:08:44PM -0700, Raj, Ashok wrote:
> > > > On Tue, Oct 20, 2020 at 04:55
Using two distinct DMA zones turned out to be problematic. Here's an
attempt to go back to a saner default.
I tested this on both an RPi4 and QEMU.
---
Changes since v3:
- Drop patch adding define in dma-mapping
- Address small review changes
- Update Ard's patch
- Add new patch removing example
On Wed, Oct 21, 2020 at 05:28:33PM +0200, Daniel Kiper wrote:
> On Mon, Oct 19, 2020 at 01:18:22PM -0400, Arvind Sankar wrote:
> > On Mon, Oct 19, 2020 at 04:51:53PM +0200, Daniel Kiper wrote:
> > > On Fri, Oct 16, 2020 at 04:51:51PM -0400, Arvind Sankar wrote:
> > > > On Thu, Oct 15, 2020 at 08:26
On Mon, Oct 19, 2020 at 01:18:22PM -0400, Arvind Sankar wrote:
> On Mon, Oct 19, 2020 at 04:51:53PM +0200, Daniel Kiper wrote:
> > On Fri, Oct 16, 2020 at 04:51:51PM -0400, Arvind Sankar wrote:
> > > On Thu, Oct 15, 2020 at 08:26:54PM +0200, Daniel Kiper wrote:
> > > >
> > > > I am discussing with
On Mon, Sep 28, 2020 at 02:38:34PM -0700, Jacob Pan wrote:
> Users of an ioasid_set may not keep track of all the IOASIDs allocated
> under the set. When collective actions are needed on each IOASID, it
> is useful to iterate over all the IOASIDs within the set. For example,
> when the ioasid_set
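A minimal sketch of what such an iteration helper could look like, assuming the set tracks its members in an XArray named xa; the helper name and that field are illustrative assumptions, not the series' final API:

#include <linux/ioasid.h>
#include <linux/xarray.h>

/* Illustrative only: assumes struct ioasid_set carries an XArray "xa"
 * holding every IOASID allocated under the set. */
static void example_ioasid_set_for_each(struct ioasid_set *set,
                                        void (*fn)(ioasid_t id, void *data),
                                        void *data)
{
        unsigned long index;
        void *entry;

        xa_for_each(&set->xa, index, entry)
                fn((ioasid_t)index, data);
}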
We can't really list every setup in common code. On top of that, the
examples are unlikely to stay accurate for long, as things change in the
arch trees independently of this comment.
Suggested-by: Christoph Hellwig
Signed-off-by: Nicolas Saenz Julienne
---
include/linux/mmzone.h | 20
1
On Mon, Sep 28, 2020 at 02:38:35PM -0700, Jacob Pan wrote:
> There can be multiple users of an IOASID, and each user could have hardware
> contexts associated with it. In order to align their lifecycles,
> reference counting is introduced in this patch. It is expected that when
> an IOASID is being f
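A minimal sketch of the get/put pairing that description implies; the example_ names and the embedded refcount_t field are assumptions for illustration:

#include <linux/ioasid.h>
#include <linux/refcount.h>

/* Illustrative stand-in for the per-IOASID bookkeeping record. */
struct example_ioasid_data {
        ioasid_t id;
        refcount_t refs;        /* one reference per user/hardware context */
};

static void example_ioasid_get(struct example_ioasid_data *data)
{
        refcount_inc(&data->refs);
}

/* Returns true when the last reference is dropped and the IOASID can
 * actually be freed. */
static bool example_ioasid_put(struct example_ioasid_data *data)
{
        return refcount_dec_and_test(&data->refs);
}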
On Mon, Sep 28, 2020 at 02:38:30PM -0700, Jacob Pan wrote:
> IOASID private data can be cleared by ioasid_attach_data() with a NULL
> data pointer. A common use case is for a caller to free the data
> afterward. ioasid_attach_data() calls synchronize_rcu() before returning
> such that the freed data can be
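A minimal sketch of the caller pattern being described, using the existing ioasid_attach_data(ioasid, data) prototype; the example_ structure and function are made up:

#include <linux/ioasid.h>
#include <linux/slab.h>

struct example_priv {
        void *cookie;           /* hypothetical per-IOASID private data */
};

static void example_clear_and_free(ioasid_t ioasid, struct example_priv *priv)
{
        /* Detach first; per the description above, ioasid_attach_data()
         * does synchronize_rcu() before returning, so no RCU reader can
         * still be dereferencing @priv once we reach the kfree(). */
        ioasid_attach_data(ioasid, NULL);
        kfree(priv);
}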
On Wed, Oct 21, 2020 at 10:51:46AM -0700, Raj, Ashok wrote:
> > If they didn't plan to use it, bit of a strawman argument, right?
>
> This doesn't need to continue like the debates :-) Pun intended :-)
>
> I don't think it makes any sense to have an abstract strawman argument
> design discussion
On 2020-10-19 12:30, Chao Hao wrote:
> The MTK_IOMMU driver currently writes one page entry and does a TLB flush
> at a time. It would be more optimal to aggregate the writes and flush the
> bus buffer at the end.
That's exactly what iommu_iotlb_gather_add_page() is meant to achieve.
Rather than jumping straight
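For reference, a sketch of the pattern being pointed at here, with exampledrv_* standing in for the driver's own page-table and TLB helpers:

#include <linux/iommu.h>

/* Hypothetical driver helpers, declared only to keep the sketch complete. */
static size_t exampledrv_clear_ptes(struct iommu_domain *d, unsigned long iova,
                                    size_t size);
static void exampledrv_flush_range(struct iommu_domain *d, unsigned long iova,
                                   size_t size);

static size_t exampledrv_unmap(struct iommu_domain *domain, unsigned long iova,
                               size_t size, struct iommu_iotlb_gather *gather)
{
        /* Record the range instead of flushing right away; the core merges
         * contiguous pages and only forces a sync when it cannot. */
        iommu_iotlb_gather_add_page(domain, gather, iova, size);
        return exampledrv_clear_ptes(domain, iova, size);
}

static void exampledrv_iotlb_sync(struct iommu_domain *domain,
                                  struct iommu_iotlb_gather *gather)
{
        /* One hardware flush for the whole accumulated range. */
        exampledrv_flush_range(domain, gather->start,
                               gather->end - gather->start + 1);
}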
On Tue, Oct 20, 2020 at 01:27:13PM -0700, Raj, Ashok wrote:
> On Tue, Oct 20, 2020 at 05:14:03PM -0300, Jason Gunthorpe wrote:
> > On Tue, Oct 20, 2020 at 01:08:44PM -0700, Raj, Ashok wrote:
> > > On Tue, Oct 20, 2020 at 04:55:57PM -0300, Jason Gunthorpe wrote:
> > > > On Tue, Oct 20, 2020 at 12:51
On Mon, Sep 28, 2020 at 02:38:33PM -0700, Jacob Pan wrote:
> Each ioasid_set is given a quota during allocation. As system
> administrators balance resources among VMs, we shall support adjusting the
> quota at runtime. The new quota cannot be less than the number of
> outstanding IOASIDs already allocat
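A minimal sketch of the check implied above; the field names quota/nr_ioasids and the function are assumptions modelled on the description, not the series' final API:

#include <linux/ioasid.h>
#include <linux/errno.h>

static int example_ioasid_set_adjust_quota(struct ioasid_set *set,
                                           unsigned int new_quota)
{
        /* Refuse to shrink below what the set has already handed out. */
        if (new_quota < set->nr_ioasids)
                return -EINVAL;

        set->quota = new_quota;
        return 0;
}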
On Mon, Sep 28, 2020 at 02:38:32PM -0700, Jacob Pan wrote:
> ioasid_set was introduced as an arbitrary token that is shared by a
> group of IOASIDs. For example, two IOASIDs allocated via the same
> ioasid_set pointer belong to the same set.
>
> For guest SVA usages, system-wide IOASID resources n
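The grouping can be seen in the existing allocator interface; a minimal usage sketch, where the 1..1024 range and the pr_warn are illustrative:

#include <linux/ioasid.h>
#include <linux/printk.h>

/* Two allocations made against the same set token land in the same group. */
static void example_alloc_pair(struct ioasid_set *set)
{
        ioasid_t a = ioasid_alloc(set, 1, 1024, NULL);
        ioasid_t b = ioasid_alloc(set, 1, 1024, NULL);

        if (a == INVALID_IOASID || b == INVALID_IOASID)
                pr_warn("example: IOASID allocation failed\n");
}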
On Mon, Sep 28, 2020 at 02:38:31PM -0700, Jacob Pan wrote:
> IOASID is a system-wide resource whose capacity can vary across systems.
> The default capacity is 20 bits, as defined in the PCIe specification.
> This patch adds a function to allow adjusting system IOASID capacity.
> For VT-d this is se
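A minimal sketch of what such an adjustment could look like; the helper name, the global, and the min() clamp are assumptions based on the description:

#include <linux/ioasid.h>
#include <linux/kernel.h>

#define EXAMPLE_PCI_PASID_BITS  20      /* PASID width from the PCIe spec */

static ioasid_t example_ioasid_capacity = 1U << EXAMPLE_PCI_PASID_BITS;

/* e.g. VT-d would call this with whatever its IOMMU hardware supports. */
void example_ioasid_install_capacity(ioasid_t total)
{
        example_ioasid_capacity = min(example_ioasid_capacity, total);
}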
zone_dma_bits's initialization happens earlier than it's actually
needed, in arm64_memblock_init(). So move it into the more suitable
zone_sizes_init().
Signed-off-by: Nicolas Saenz Julienne
---
arch/arm64/mm/init.c | 7 ++-
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/arch/
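Roughly, the end state being described looks like the sketch below; this is not the literal diff, and it assumes the constants already used by the arm64 code (ARM64_ZONE_DMA_BITS is 30, i.e. a 1 GB ZONE_DMA):

/* arch/arm64/mm/init.c -- rough shape of the result, not the actual patch */
static void __init zone_sizes_init(unsigned long min, unsigned long max)
{
        unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};

#ifdef CONFIG_ZONE_DMA
        /* previously done in arm64_memblock_init() */
        zone_dma_bits = ARM64_ZONE_DMA_BITS;
        arm64_dma_phys_limit = max_zone_phys(zone_dma_bits);
        max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
#endif
#ifdef CONFIG_ZONE_DMA32
        max_zone_pfns[ZONE_DMA32] = PFN_DOWN(max_zone_phys(32));
#endif
        max_zone_pfns[ZONE_NORMAL] = max;

        free_area_init(max_zone_pfns);
}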
Introduce of_dma_get_max_cpu_address(), which provides the highest CPU
physical address addressable by all DMA masters in the system. It's
especially useful for setting memory zone sizes at early boot time.
Signed-off-by: Nicolas Saenz Julienne
---
Changes since v3:
- use u64 with cpu_end
Cha
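A usage sketch, assuming the prototype is phys_addr_t of_dma_get_max_cpu_address(struct device_node *np) and that passing NULL means "consider the whole tree"; the zone_dma_bits derivation below is illustrative:

#include <linux/of.h>
#include <linux/bitops.h>       /* fls64() */
#include <linux/kernel.h>       /* min() */

static unsigned int __init example_dt_zone_dma_bits(void)
{
        phys_addr_t limit = of_dma_get_max_cpu_address(NULL);

        /* Size ZONE_DMA to the most constrained DMA master, but never
         * beyond the traditional 32-bit ceiling. */
        return min(32U, (unsigned int)fls64(limit));
}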
Introduce a test for of_dma_get_max_cpu_address(); it uses the same DT
data as the rest of the dma-ranges unit tests.
Signed-off-by: Nicolas Saenz Julienne
---
Changes since v3:
- Remove HAS_DMA guards
drivers/of/unittest.c | 18 ++
1 file changed, 18 insertions(+)
diff --git a/d
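A sketch of what such a test could look like; the node path reuses the existing dma-ranges test data, and EXAMPLE_EXPECTED_MAX is a placeholder that would have to match whatever that DT fragment actually describes:

#include <linux/of.h>
#include <linux/printk.h>

#define EXAMPLE_EXPECTED_MAX    ((phys_addr_t)0x4fffffff)   /* placeholder */

static void __init example_test_dma_get_max_cpu_address(void)
{
        struct device_node *np;
        phys_addr_t cpu_addr;

        np = of_find_node_by_path("/testcase-data/address-tests");
        if (!np) {
                pr_err("missing testcase data\n");
                return;
        }

        cpu_addr = of_dma_get_max_cpu_address(np);
        if (cpu_addr != EXAMPLE_EXPECTED_MAX)
                pr_err("wrong max CPU address %pa\n", &cpu_addr);
        of_node_put(np);
}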
crashkernel might reserve memory located in ZONE_DMA. We plan to delay
ZONE_DMA's initialization until after the devicetree has been unflattened
and ACPI's boot tables have been initialized, so move the crashkernel
reservation later in the boot process, specifically into mem_init(). This
is the last place crashkernel will be able to reserve the
From: Ard Biesheuvel
We recently introduced a 1 GB sized ZONE_DMA to cater for platforms
incorporating masters that can address less than 32 bits of DMA, in
particular the Raspberry Pi 4, which has 4 or 8 GB of DRAM, but has
peripherals that can only address up to 1 GB (and its PCIe host
bridge can only access the bott