On 12/4/2021 2:59 AM, Michael Kelley (LINUX) wrote:
+
+/*
+ * hv_map_memory - map memory to extra space in the AMD SEV-SNP Isolation VM.
+ */
+void *hv_map_memory(void *addr, unsigned long size)
+{
+ unsigned long *pfns = kcalloc(size / HV_HYP_PAGE_SIZE,
This should be just PAGE_SIZE, as t
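For reference, a minimal sketch of the function with the allocation (and the loop it feeds) sized in guest PAGE_SIZE units, as the review suggests. It assumes the rest of the posted patch; vmap_pfn(), vmalloc_to_pfn() and ms_hyperv.shared_gpa_boundary are existing mainline interfaces, and the body here is illustrative rather than the final upstream code.

void *hv_map_memory(void *addr, unsigned long size)
{
	unsigned long *pfns = kcalloc(size / PAGE_SIZE,
				      sizeof(unsigned long), GFP_KERNEL);
	void *vaddr;
	int i;

	if (!pfns)
		return NULL;

	/* Build the pfn list for the shared alias above shared_gpa_boundary. */
	for (i = 0; i < size / PAGE_SIZE; i++)
		pfns[i] = vmalloc_to_pfn(addr + i * PAGE_SIZE) +
			  (ms_hyperv.shared_gpa_boundary >> PAGE_SHIFT);

	/* Map the shared alias into the kernel virtual address space. */
	vaddr = vmap_pfn(pfns, size / PAGE_SIZE, PAGE_KERNEL_IO);
	kfree(pfns);

	return vaddr;
}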
On 12/4/2021 3:17 AM, Michael Kelley (LINUX) wrote:
+static void __init hyperv_iommu_swiotlb_init(void)
+{
+ unsigned long hyperv_io_tlb_size;
+ void *hyperv_io_tlb_start;
+
+ /*
+ * Allocate the Hyper-V swiotlb bounce buffer early during boot
+ * to reserve a large contiguous region.
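A sketch of how the truncated function plausibly continues, assuming the 5.16-era interfaces used elsewhere in this series (memblock_alloc(), swiotlb_size_or_default(), swiotlb_init_with_tbl()); the exact error handling in the posted patch may differ.

static void __init hyperv_iommu_swiotlb_init(void)
{
	unsigned long hyperv_io_tlb_size;
	void *hyperv_io_tlb_start;

	/*
	 * Allocate the bounce buffer early so a large contiguous
	 * region can still be reserved from memblock.
	 */
	hyperv_io_tlb_size = swiotlb_size_or_default();
	hyperv_io_tlb_start = memblock_alloc(hyperv_io_tlb_size, PAGE_SIZE);
	if (!hyperv_io_tlb_start) {
		pr_warn("Fail to allocate Hyper-V swiotlb buffer.\n");
		return;
	}

	/* Hand the buffer to the swiotlb core (nslabs are IO_TLB_SIZE units). */
	swiotlb_init_with_tbl(hyperv_io_tlb_start,
			      hyperv_io_tlb_size >> IO_TLB_SHIFT, true);
}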
On 12/4/2021 4:06 AM, Tom Lendacky wrote:
Hi Tom:
Thanks for testing. Could you help test the following
patch and check whether it fixes the issue?
The patch is mangled. Is the only difference where
set_memory_decrypted() is called?
I de-mangled the patch. No more stack traces
On Fri, 03 Dec 2021 14:40:24 +0800, Yong Wu wrote:
> If a platform's larb supports gals, some larbs will have one more
> "gals" clock while the others still only need the "apb"/"smi" clocks;
> then minItems is 2 and maxItems is 3.
>
> Fixes: 27bb0e42855a ("dt-bindings: memory: mediat
On 12/3/21 1:11 PM, Tom Lendacky wrote:
On 12/3/21 5:20 AM, Tianyu Lan wrote:
On 12/2/2021 10:42 PM, Tom Lendacky wrote:
On 12/1/21 10:02 AM, Tianyu Lan wrote:
From: Tianyu Lan
In an Isolation VM with AMD SEV, the bounce buffer needs to be accessed via
an extra address space above shared_gpa_boundary (e.g. the 39-bit
address line) reported by the Hyper-V CPUID ISOLATION_CONFIG leaf.
From: Tianyu Lan Sent: Wednesday, December 1, 2021 8:03 AM
>
> A Hyper-V Isolation VM requires bounce buffer support to copy
> data from/to encrypted memory, so enable swiotlb force
> mode to use the swiotlb bounce buffer for DMA transactions.
>
> In an Isolation VM with AMD SEV, the bounce buffer needs
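The mechanism the commit message describes is small; a sketch of it, assuming the swiotlb_force/SWIOTLB_FORCE interface that existed at the time (later kernels replaced it with flags to swiotlb_init()) and the existing hv_is_isolation_supported() helper:

	if (hv_is_isolation_supported()) {
		/*
		 * Force all DMA through the swiotlb bounce buffer so device
		 * data is staged in memory the host is allowed to see.
		 */
		swiotlb_force = SWIOTLB_FORCE;
	}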
On 12/3/21 5:20 AM, Tianyu Lan wrote:
On 12/2/2021 10:42 PM, Tom Lendacky wrote:
On 12/1/21 10:02 AM, Tianyu Lan wrote:
From: Tianyu Lan
In an Isolation VM with AMD SEV, the bounce buffer needs to be accessed via
an extra address space above shared_gpa_boundary (e.g. the 39-bit
address line) reported by the Hyper-V CPUID ISOLATION_CONFIG leaf.
From: Tianyu Lan Sent: Wednesday, December 1, 2021 8:03 AM
>
> In an Isolation VM, all memory shared with the host needs to be marked
> visible to the host via an hvcall. vmbus_establish_gpadl() has already
> done this for the netvsc rx/tx ring buffers. The page buffers used by
> vmbus_sendpacket_pagebuffer() still need
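For context, the generic way a guest makes a buffer host-visible is set_memory_decrypted(), which the Hyper-V code wires up to the visibility hvcall. The helper below is a sketch with an invented name (hv_demo_share_with_host); the series itself ends up bouncing these per-packet page buffers through swiotlb rather than flipping their visibility individually.

/* Invented helper name; shows the generic call only. */
static int hv_demo_share_with_host(void *buf, unsigned long size)
{
	/* numpages is in PAGE_SIZE units; PFN_UP() rounds the length up. */
	return set_memory_decrypted((unsigned long)buf, PFN_UP(size));
}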
On 12/3/21 11:03, Robin Murphy wrote:
> On 2021-12-03 15:47, Ross Philipson wrote:
>> On 12/2/21 12:26, Robin Murphy wrote:
>>> On 2021-08-27 14:28, Ross Philipson wrote:
>>> [...]
+IOMMU Configuration
+---
+
+When doing a Secure Launch, the IOMMU should always be enabled and the drivers loaded.
On Fri, Dec 03, 2021 at 04:07:58PM +0100, Thomas Gleixner wrote:
> Jason,
>
> On Thu, Dec 02 2021 at 20:37, Jason Gunthorpe wrote:
> > On Thu, Dec 02, 2021 at 11:31:11PM +0100, Thomas Gleixner wrote:
> >> >> Of course we can store them in pci_dev.dev.msi.data.store. Either with a
> >> >> dedicated xarray or by partitioning the xarray space. Both have their pros and cons.
On 2021-12-03 15:47, Ross Philipson wrote:
On 12/2/21 12:26, Robin Murphy wrote:
On 2021-08-27 14:28, Ross Philipson wrote:
[...]
+IOMMU Configuration
+---
+
+When doing a Secure Launch, the IOMMU should always be enabled and
+the drivers loaded. However, IOMMU passthrough mode should never be used.
On 12/2/21 12:26, Robin Murphy wrote:
> On 2021-08-27 14:28, Ross Philipson wrote:
> [...]
>> +IOMMU Configuration
>> +---
>> +
>> +When doing a Secure Launch, the IOMMU should always be enabled and
>> +the drivers loaded. However, IOMMU passthrough mode should never be used. Thi
Jason,
On Thu, Dec 02 2021 at 20:37, Jason Gunthorpe wrote:
> On Thu, Dec 02, 2021 at 11:31:11PM +0100, Thomas Gleixner wrote:
>> >> Of course we can store them in pci_dev.dev.msi.data.store. Either with a
>> >> dedicated xarray or by partitioning the xarray space. Both have their
>> >> pros and cons.
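As a toy illustration of the data structure being discussed (not code from the thread): an xarray keyed by MSI index gives the kind of sparse lookup that either a dedicated xarray or a partitioned index space would provide. Names with a _demo suffix are invented.

#include <linux/xarray.h>
#include <linux/msi.h>

struct msi_store_demo {
	struct xarray descs;		/* MSI index -> struct msi_desc * */
};

static void msi_store_demo_init(struct msi_store_demo *s)
{
	xa_init(&s->descs);
}

static int msi_store_demo_add(struct msi_store_demo *s, unsigned long index,
			      struct msi_desc *desc)
{
	/* xa_store() returns the old entry or an xa_err()-encoded pointer. */
	return xa_err(xa_store(&s->descs, index, desc, GFP_KERNEL));
}

static struct msi_desc *msi_store_demo_lookup(struct msi_store_demo *s,
					      unsigned long index)
{
	return xa_load(&s->descs, index);
}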
Hi Eric,
This series brings the IOMMU part of HW nested paging support
in the SMMUv3.
The SMMUv3 driver is adapted to support 2 nested stages.
The IOMMU API is extended to convey the guest stage 1
configuration and the hook is implemented in the SMMUv3 driver.
This allows the guest to own the
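Purely as an illustration of what "conveying the guest stage 1 configuration" means (all names below are invented for this sketch; the real uAPI structures and hooks are the ones defined in the series): the host driver receives the guest's stage-1 page-table base and format, and installs them in a nested stream table entry while stage 2 remains host-owned.

/* Invented names, for illustration only. */
struct guest_stage1_cfg_demo {
	u64	s1_pgtbl_gpa;	/* guest PA of the stage-1 table root */
	u32	s1_fmt;		/* stage-1 / context-descriptor format */
	u32	pasid_bits;	/* width of the guest PASID (SSID) space */
};

static int demo_attach_guest_stage1(struct iommu_domain *domain,
				    const struct guest_stage1_cfg_demo *cfg)
{
	/*
	 * A real driver would validate cfg against hardware capabilities,
	 * then program a nested STE: stage 1 walks the guest-owned tables,
	 * stage 2 stays under host control for isolation.
	 */
	return 0;
}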
On 2021-11-25 07:35, Tomasz Figa wrote:
Hi Robin,
On Tue, Nov 23, 2021 at 8:59 PM Robin Murphy wrote:
On 2021-11-23 11:21, Hsin-Yi Wang wrote:
The default IO_TLB_SEGSIZE (128 slabs) may not be enough for some use cases.
This series adds support for customizing io_tlb_segsize for each
restricted-dma
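A back-of-the-envelope illustration of why the default matters: with IO_TLB_SEGSIZE slots of IO_TLB_SIZE (2 KiB) each, the largest buffer a single swiotlb mapping can bounce is 256 KiB, which some restricted-dma users exceed. The constants below mirror <linux/swiotlb.h>; the program itself is only a worked calculation.

#include <stdio.h>

#define IO_TLB_SHIFT	11			/* one slot is 2 KiB */
#define IO_TLB_SIZE	(1UL << IO_TLB_SHIFT)
#define IO_TLB_SEGSIZE	128			/* default slots per segment */

int main(void)
{
	unsigned long max_mapping = IO_TLB_SEGSIZE * IO_TLB_SIZE;

	/* 128 * 2 KiB = 256 KiB: the largest single bounce-buffer mapping. */
	printf("max single swiotlb mapping: %lu KiB\n", max_mapping >> 10);
	return 0;
}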
Hi, Eric
On 2021/10/27 6:44 PM, Eric Auger wrote:
This series brings the IOMMU part of HW nested paging support
in the SMMUv3.
The SMMUv3 driver is adapted to support 2 nested stages.
The IOMMU API is extended to convey the guest stage 1
configuration and the hook is implemented in the SMMUv3
On 12/2/2021 10:43 PM, Wei Liu wrote:
On Wed, Dec 01, 2021 at 11:02:54AM -0500, Tianyu Lan wrote:
[...]
diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
index 46df59aeaa06..30fd0600b008 100644
--- a/arch/x86/xen/pci-swiotlb-xen.c
+++ b/arch/x86/xen/pci-swiotlb-xen.c
@
On 12/2/2021 10:39 PM, Wei Liu wrote:
+static bool hyperv_cc_platform_has(enum cc_attr attr)
+{
+#ifdef CONFIG_HYPERV
+ if (attr == CC_ATTR_GUEST_MEM_ENCRYPT)
+ return true;
+ else
+ return false;
This can be simplified as
return attr == CC_ATTR_GUEST_MEM_ENCRYPT;
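Applied to the function above, the suggestion reduces it to a one-liner (the CONFIG_HYPERV=n handling is elided here for brevity):

static bool hyperv_cc_platform_has(enum cc_attr attr)
{
	return attr == CC_ATTR_GUEST_MEM_ENCRYPT;
}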
On 12/2/2021 10:42 PM, Tom Lendacky wrote:
On 12/1/21 10:02 AM, Tianyu Lan wrote:
From: Tianyu Lan
In an Isolation VM with AMD SEV, the bounce buffer needs to be accessed via
an extra address space above shared_gpa_boundary (e.g. the 39-bit
address line) reported by the Hyper-V CPUID ISOLATION_CONFIG leaf.
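A small sketch of what "accessed via an extra address space" amounts to, assuming the ms_hyperv.shared_gpa_boundary field populated from the ISOLATION_CONFIG CPUID leaf (the helper name is invented for illustration):

/* Invented helper name; shows the address arithmetic only. */
static inline phys_addr_t hv_shared_alias_demo(phys_addr_t paddr)
{
	/* The host-visible alias of a page sits above shared_gpa_boundary. */
	return paddr + ms_hyperv.shared_gpa_boundary;
}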