PASID table memory allocation can fail under memory pressure. Limit the
PASID table size to 1MiB, because the current 8MiB contiguous physical
memory allocation can be hard to come by. Without a PASID table, the
device can continue to work, with only shared virtual memory impacted.
So, let's g
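The sizing policy described in this entry can be modeled as a small clamp. The constants, per-entry size, and function name below are assumptions for illustration only, not the actual intel-iommu code:

```c
#include <stddef.h>

/* Illustrative model of the policy above: cap the PASID table at 1 MiB
 * so the allocation never asks for 8 MiB of contiguous physical memory.
 * PASID_ENTRY_SIZE and the function name are assumed for this sketch. */
#define PASID_TABLE_MAX_SIZE    (1u << 20)      /* 1 MiB cap */
#define PASID_ENTRY_SIZE        8u              /* assumed bytes per entry */

static size_t pasid_table_size(size_t requested_pasids)
{
        size_t size = requested_pasids * PASID_ENTRY_SIZE;

        /* Clamp the table: the device simply sees fewer usable PASIDs,
         * so only shared virtual memory beyond the cap is impacted. */
        if (size > PASID_TABLE_MAX_SIZE)
                size = PASID_TABLE_MAX_SIZE;
        return size;
}
```

With this clamp a request for 2^20 PASIDs (8 MiB worth of entries) is silently reduced to the 1 MiB cap instead of failing the whole allocation.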
Hi Jean,
On 2018/8/31 21:34, Jean-Philippe Brucker wrote:
On 27/08/18 09:06, Xu Zaibo wrote:
+struct vfio_iommu_type1_bind_process {
+ __u32 flags;
+#define VFIO_IOMMU_BIND_PID (1 << 0)
+ __u32 pasid;
As I am doing some work on the SVA patch set, I just consider why the
use
Hi Christoph,
On Mon, Aug 27, 2018 at 04:50:27PM +0200, Christoph Hellwig wrote:
> Subject: [RFC] merge dma_direct_ops and dma_noncoherent_ops
>
> While most architectures are either always or never dma coherent for a
> given build, the arm, arm64, mips and soon arc architectures can have
> diffe
Hi Christoph,
On Mon, Aug 27, 2018 at 04:50:29PM +0200, Christoph Hellwig wrote:
> Various architectures support both coherent and non-coherent dma on
> a per-device basis. Move the dma_noncoherent flag from mips the
> mips archdata field to struct device proper to prepare the
> infrastructure fo
This addresses a v4.19-rc1 regression in the PL111 DRM driver
in drivers/gpu/pl111/*
The driver uses the CMA KMS helpers and will thus at some
point call down to dma_alloc_attrs() to allocate a chunk
of contiguous DMA memory for the framebuffer.
It appears that in v4.18, it was OK that this (and o
Hi Jean-Philippe,
On 08/31/2018 03:20 PM, Jean-Philippe Brucker wrote:
> On 23/08/18 13:17, Eric Auger wrote:
>> if (ste->s1_cfg) {
>> - BUG_ON(ste_live);
>
> Scary! :) The current code assumes that it can make modifications to the
> STE in any order and enable translation after a
Hi Jean-Philippe,
On 08/31/2018 03:17 PM, Jean-Philippe Brucker wrote:
> On 23/08/18 13:17, Eric Auger wrote:
>> +/**
>> + * Translation cache invalidation information, contains generic IOMMU
>> + * data which can be parsed based on model ID by model specific drivers.
>> + * Since the invalidation
Hi Jean-Philippe,
On 08/31/2018 03:11 PM, Jean-Philippe Brucker wrote:
> Hi Eric,
>
> On 23/08/18 16:25, Auger Eric wrote:
>>> +int iommu_bind_guest_stage(struct iommu_domain *domain, struct device *dev,
>>> + struct iommu_guest_stage_config *cfg)
>
> About the name change f
Hi Zaibo,
On 27/08/18 09:06, Xu Zaibo wrote:
>> +struct vfio_iommu_type1_bind_process {
>> + __u32 flags;
>> +#define VFIO_IOMMU_BIND_PID (1 << 0)
>> + __u32 pasid;
> As I am doing some work on the SVA patch set, I just consider why the
> user space needs this pasid.
> Maybe, is
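The structure under discussion can be written out as a compilable sketch. It comes from an RFC patch, not a released UAPI header, so this layout is illustrative: userspace sets flags, and the kernel writes back the allocated pasid, which userspace can then program into device work descriptors (which is one answer to why userspace needs it).

```c
#include <stdint.h>

/* Sketch of the quoted RFC structure; only the fields visible in the
 * quoted diff are reproduced, and the layout is an assumption. */
struct vfio_iommu_type1_bind_process {
        uint32_t flags;
#define VFIO_IOMMU_BIND_PID     (1 << 0)
        uint32_t pasid;         /* written back by the kernel on bind */
};
```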
On 23/08/18 13:17, Eric Auger wrote:
> if (ste->s1_cfg) {
> - BUG_ON(ste_live);
Scary! :) The current code assumes that it can make modifications to the
STE in any order and enable translation after a sync. So far I haven't
been able to find anything that violates this rule in th
On 23/08/18 13:17, Eric Auger wrote:
> +/**
> + * Translation cache invalidation information, contains generic IOMMU
> + * data which can be parsed based on model ID by model specific drivers.
> + * Since the invalidation of second level page tables are included in the
> + * unmap operation, this i
Hi Eric,
On 23/08/18 16:25, Auger Eric wrote:
>> +int iommu_bind_guest_stage(struct iommu_domain *domain, struct device *dev,
>> + struct iommu_guest_stage_config *cfg)
About the name change from iommu_bind_pasid_table: is the intent to
reuse this API for SMMUv2, which suppo
Hi Joerg/Robin,
Can you please let me know when these patches will be applied to the tree?
Is there anything else pending from my side?
Thanks,
Nipun
> -----Original Message-----
> From: Nipun Gupta
> Sent: Monday, July 9, 2018 4:48 PM
> To: robin.mur...@arm.com; will.dea...@arm.com; robh...@k
On Fri, Aug 31, 2018 at 10:26:14AM +0200, Linus Walleij wrote:
> This addresses a v4.19-rc1 regression in the PL111 DRM driver
> in drivers/gpu/pl111/*
>
> The driver uses the CMA KMS helpers and will thus at some
> point call down to dma_alloc_attrs() to allocate a chunk
> of contiguous DMA memory
Hi Rob,
On 8/30/2018 6:13 AM, Rob Herring wrote:
On Wed, Aug 29, 2018 at 6:23 AM Vivek Gautam
wrote:
On Wed, Aug 29, 2018 at 2:05 PM Vivek Gautam
wrote:
Hi Rob,
On 8/29/2018 2:04 AM, Rob Herring wrote:
On Mon, Aug 27, 2018 at 04:25:50PM +0530, Vivek Gautam wrote:
Add bindings doc for Qc
This addresses a v4.19-rc1 regression in the PL111 DRM driver
in drivers/gpu/pl111/*
The driver uses the CMA KMS helpers and will thus at some
point call down to dma_alloc_attrs() to allocate a chunk
of contiguous DMA memory for the framebuffer.
It appears that in v4.18, it was OK that this (and o
For the kdump kernel, when SME is enabled, the ACPI and DMI tables need
to be remapped without the memory encryption mask. So we have to
strengthen the logic in early_memremap_pgprot_adjust(), which gives us
an opportunity to adjust the memory encryption mask.
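The adjustment being described reduces to a small predicate. The function and parameter names below are stand-ins for the real arch/x86 state, and the model only covers the kdump case this patch is about:

```c
#include <stdbool.h>

/* Simplified model of the early_memremap_pgprot_adjust() decision for a
 * kdump kernel with SME: ACPI/DMI tables inherited from the first
 * kernel's firmware are not encrypted, so they must be mapped without
 * the encryption mask. Both inputs are stand-ins, not real kernel APIs. */
static bool map_with_sme_mask(bool sme_active, bool old_firmware_table)
{
        if (sme_active && old_firmware_table)
                return false;   /* ACPI/DMI from the old kernel: unencrypted */
        return sme_active;      /* other memory keeps the mask when SME is on */
}
```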
Signed-off-by: Lianbo Jian
When SME is enabled in the first kernel, we allocate unencrypted pages
for kdump so that the kdump kernel can be booted, as with kexec.
Signed-off-by: Lianbo Jiang
---
kernel/kexec_core.c | 12
1 file changed, 12 insertions(+)
diff --git a/kernel/kexec_core.c b/kernel/kexec_
In the kdump kernel, we need to dump the old memory into the vmcore
file. If SME was enabled in the first kernel, we have to remap the old
memory with the memory encryption mask, so that it is automatically
decrypted when we read from DRAM.
For SME kdump, there are two cases that are not supported:
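The remapping rule for old memory can be illustrated with a toy pgprot helper. The mask bit position and all names here are assumptions for the sketch, not the real sme_me_mask plumbing:

```c
#include <stdbool.h>

/* Toy model of remapping old kernel memory for /proc/vmcore: memory the
 * first kernel encrypted must be mapped WITH the encryption mask so the
 * hardware decrypts it on read; plain memory must be mapped without it.
 * The bit position is an assumption for this sketch. */
#define SME_MASK        (1ul << 47)

static unsigned long oldmem_prot(unsigned long prot, bool was_encrypted)
{
        return was_encrypted ? (prot | SME_MASK) : (prot & ~SME_MASK);
}
```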
In kdump kernel, it will copy the device table of IOMMU from the old device
table, which is encrypted when SME is enabled in the first kernel. So we
have to remap the old device table with the memory encryption mask.
Signed-off-by: Lianbo Jiang
---
drivers/iommu/amd_iommu_init.c | 14 +++
When SME is enabled on an AMD machine, the memory is encrypted in the
first kernel. In this case, SME also needs to be enabled in the kdump
kernel, and we have to remap the old memory with the memory encryption
mask.
Signed-off-by: Lianbo Jiang
---
arch/x86/include/asm/io.h | 3 +++
arch/x86/mm/iorema
When SME is enabled on an AMD machine, we also need to support kdump.
Because the memory is encrypted in the first kernel, we remap the old
memory into the kdump kernel for dumping data, and SME must also be
enabled in the kdump kernel; otherwise the old memory cannot be
decrypted.
For the kdump, it i