PCI pass-thru devices in a Hyper-V VM are represented as a VMBus
device and as a PCI device. The coherence of the VMbus device is
set based on the VMbus node in ACPI, but the PCI device has no
ACPI node and defaults to not hardware coherent. This results
in extra software coherence management overhead.
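A rough sketch of the direction the series takes: read the coherence attribute
that ACPI's _CCA gives the top-level VMbus device and apply it to the child
device that has no ACPI node. The names hv_inherit_coherence() and
dev_set_dma_coherent() are illustrative placeholders, not the actual patch.

    #include <linux/acpi.h>
    #include <linux/dma-map-ops.h>

    /* Sketch only: copy the VMbus ACPI node's _CCA-derived coherence to a
     * child (synthetic or pass-thru PCI) device that has no ACPI node. */
    static void hv_inherit_coherence(struct acpi_device *vmbus_acpi_dev,
                                     struct device *child)
    {
            enum dev_dma_attr attr = acpi_get_dma_attr(vmbus_acpi_dev);

            /* dev_set_dma_coherent() is the hypothetical wrapper sketched
             * under the dma-map-ops.h patch below. */
            dev_set_dma_coherent(child, attr == DEV_DMA_COHERENT);
    }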
VMbus synthetic devices are not represented in the ACPI DSDT -- only
the top level VMbus device is represented. As a result, on ARM64
coherence information in the _CCA method is not specified for
synthetic devices, so they default to not hardware coherent.
Drivers for some of these synthetic devices
Hyper-V VMs have VMbus synthetic devices and PCI pass-thru devices that are
added dynamically via the VMbus protocol and are not represented in the ACPI
DSDT. Only the top level VMbus node exists in the DSDT. As such, on ARM64
these devices don't pick up coherence information and default to not hardware
coherent.
Export acpi_get_dma_attr() so that it can be used by the Hyper-V
VMbus driver, which may be built as a module. The related function
acpi_dma_configure_id() is already exported.
Signed-off-by: Michael Kelley
---
drivers/acpi/scan.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/acpi/
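The diffstat suggests a one-line addition; a sketch of what that likely looks
like (whether the actual patch uses EXPORT_SYMBOL or EXPORT_SYMBOL_GPL is an
assumption here):

    /* drivers/acpi/scan.c, immediately after the function body */
    EXPORT_SYMBOL_GPL(acpi_get_dma_attr);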
Add a wrapper function to set dma_coherent, avoiding the need for
complex #ifdef's when setting it in architecture independent code.
Signed-off-by: Michael Kelley
---
include/linux/dma-map-ops.h | 9 +
1 file changed, 9 insertions(+)
diff --git a/include/linux/dma-map-ops.h b/include/li
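A minimal sketch of such a wrapper, assuming it mirrors the existing
dev_is_dma_coherent() helper in include/linux/dma-map-ops.h; the name
dev_set_dma_coherent() is illustrative and may not match the actual patch.

    #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
        defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU) || \
        defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL)
    static inline void dev_set_dma_coherent(struct device *dev, bool coherent)
    {
            dev->dma_coherent = coherent;
    }
    #else
    static inline void dev_set_dma_coherent(struct device *dev, bool coherent)
    {
            /* No per-device flag on these architectures; nothing to set. */
    }
    #endif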
[Resend to fix an email address typo for Bjorn Helgaas]
Hyper-V VMs have VMbus synthetic devices and PCI pass-thru devices that are
added dynamically via the VMbus protocol and are not represented in the ACPI
DSDT. Only the top level VMbus node exists in the DSDT. As such, on ARM64
these devices
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM
>
> In Isolation VM, all shared memory with host needs to mark visible
> to host via hvcall. vmbus_establish_gpadl() has already done it for
> netvsc rx/tx ring buffer. The page buffer used by
> vmbus_sendpacket_pagebuffer() still needs
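A sketch of the approach being reviewed, modeled on the netvsc_dma_map()
fragment quoted later in this thread: DMA-map each hv_page_buffer element so
that swiotlb bounces the data through memory the host can access. Variable
names follow that fragment; error unwinding is trimmed.

    for (i = 0; i < page_count; i++) {
            void *va = phys_to_virt((pb[i].pfn << HV_HYP_PAGE_SHIFT) +
                                    pb[i].offset);
            dma_addr_t dma = dma_map_single(&hv_dev->device, va,
                                            pb[i].len, DMA_TO_DEVICE);

            if (dma_mapping_error(&hv_dev->device, dma))
                    return -ENOMEM;

            /* Point the host at the bounce-buffer copy instead. */
            pb[i].pfn = dma >> HV_HYP_PAGE_SHIFT;
            pb[i].offset = offset_in_page(dma);
    }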
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM
>
> In Isolation VM, all shared memory with host needs to mark visible
> to host via hvcall. vmbus_establish_gpadl() has already done it for
> storvsc rx/tx ring buffer. The page buffer used by
> vmbus_sendpacket_mpb_desc() still needs to
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM
>
> hyperv Isolation VM requires bounce buffer support to copy
> data from/to encrypted memory and so enable swiotlb force
> mode to use swiotlb bounce buffer for DMA transaction.
>
> In Isolation VM with AMD SEV, the bounce buffer needs
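For reference, with the swiotlb interfaces of the 5.14-era kernels this series
targets, forcing bounce buffering is roughly a one-liner during early boot (a
sketch, not the patch itself):

    /* Route all DMA through the (shared/decrypted) swiotlb bounce buffer
     * when running in a Hyper-V Isolation VM. */
    if (hv_is_isolation_supported())
            swiotlb_force = SWIOTLB_FORCE;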
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM
>
> In Isolation VM with AMD SEV, bounce buffer needs to be accessed via
> extra address space which is above shared_gpa_boundary
> (E.G 39 bit address line) reported by Hyper-V CPUID ISOLATION_CONFIG.
> The access physical address will b
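In other words, the address handed to the device is the guest physical address
plus the shared GPA boundary; roughly (paddr is a placeholder for the page's
guest physical address):

    /* Shared (decrypted) alias of a page in an SEV-SNP Isolation VM: the
     * same offset repeated above shared_gpa_boundary, e.g. above bit 39. */
    phys_addr_t shared_pa = paddr + ms_hyperv.shared_gpa_boundary;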
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM
>
> The monitor pages in the CHANNELMSG_INITIATE_CONTACT msg are shared
> with host in Isolation VM and so it's necessary to use hvcall to set
> them visible to host. In Isolation VM with AMD SEV SNP, the access
> address should be in the
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM
>
> Hyperv provides GHCB protocol to write Synthetic Interrupt
> Controller MSR registers in Isolation VM with AMD SEV SNP
> and these registers are emulated by hypervisor directly.
> Hyperv requires to write SINTx MSR registers twice. First
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM
>
> Mark vmbus ring buffer visible with set_memory_decrypted() when
> establish gpadl handle.
>
> Signed-off-by: Tianyu Lan
> ---
> Change since v4
> * Change gpadl handle in netvsc and uio driver from u32 to
> struct vmbu
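The core of the change as described is a decrypt/encrypt pair around GPADL
setup and teardown; a sketch, assuming kbuffer/size name the ring-buffer
mapping being offered to the host:

    /* Before offering the GPADL: share (decrypt) the pages with the host. */
    ret = set_memory_decrypted((unsigned long)kbuffer, HVPFN_UP(size));
    if (ret)
            return ret;

    /* ...and make them private again when the GPADL is torn down. */
    set_memory_encrypted((unsigned long)kbuffer, HVPFN_UP(size));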
From: Tianyu Lan Sent: Thursday, September 2, 2021 6:36 AM
>
> On 9/2/2021 8:23 AM, Michael Kelley wrote:
> >> + } else {
> >> + pages_wraparound = kcalloc(page_cnt * 2 - 1,
> >> + sizeof(struct page *),
> >> + GFP_
From: Christoph Hellwig Sent: Thursday, September 2, 2021 1:00 AM
>
> On Tue, Aug 31, 2021 at 05:16:19PM +, Michael Kelley wrote:
> > As a quick overview, I think there are four places where the
> > shared_gpa_boundary must be applied to adjust the guest physical
> > address that is used. Ea
From: Michael Kelley Sent: Wednesday, September 1, 2021 7:34 PM
[snip]
> > +int netvsc_dma_map(struct hv_device *hv_dev,
> > + struct hv_netvsc_packet *packet,
> > + struct hv_page_buffer *pb)
> > +{
> > + u32 page_count = packet->cp_partial ?
> > + packet
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> Hyperv provides GHCB protocol to write Synthetic Interrupt
> Controller MSR registers in Isolation VM with AMD SEV SNP
> and these registers are emulated by hypervisor directly.
> Hyperv requires to write SINTx MSR registers twice. First
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> In Isolation VM, all shared memory with host needs to mark visible
> to host via hvcall. vmbus_establish_gpadl() has already done it for
> netvsc rx/tx ring buffer. The page buffer used by
> vmbus_sendpacket_pagebuffer() still needs to
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
Per previous comment, the Subject line tag should be "scsi: storvsc: "
> In Isolation VM, all shared memory with host needs to mark visible
> to host via hvcall. vmbus_establish_gpadl() has already done it for
> storvsc rx/tx ring buffer
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> hyperv Isolation VM requires bounce buffer support to copy
> data from/to encrypted memory and so enable swiotlb force
> mode to use swiotlb bounce buffer for DMA transaction.
>
> In Isolation VM with AMD SEV, the bounce buffer needs to
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
Subject tag should be "Drivers: hv: vmbus: "
> VMbus ring buffer are shared with host and it's need to
> be accessed via extra address space of Isolation VM with
> AMD SNP support. This patch is to map the ring buffer
> address in extra
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
Subject line tag should be "Drivers: hv: vmbus:"
> The monitor pages in the CHANNELMSG_INITIATE_CONTACT msg are shared
> with host in Isolation VM and so it's necessary to use hvcall to set
> them visible to host. In Isolation VM with AM
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
Subject line tag should probably be "x86/hyperv:" since the majority
of the code added is under arch/x86.
> hyperv provides ghcb hvcall to handle VMBus
> HVCALL_SIGNAL_EVENT and HVCALL_POST_MESSAGE
> msg in SNP Isolation VM. Add such sup
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> Mark vmbus ring buffer visible with set_memory_decrypted() when
> establish gpadl handle.
>
> Signed-off-by: Tianyu Lan
> ---
> Change since v3:
> * Change vmbus_teardown_gpadl() parameter and put gpadl handle,
> buffer a
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> Add new hvcall guest address host visibility support to mark
> memory visible to host. Call it inside set_memory_decrypted
> /encrypted(). Add HYPERVISOR feature check in the
> hv_is_isolation_supported() to optimize in non-virtualizatio
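A sketch of the hook point that description implies, inside the x86
set_memory_decrypted()/encrypted() path; hv_set_mem_host_visibility() is a
placeholder name for the new hvcall wrapper:

    /* In __set_memory_enc_dec(): on a Hyper-V Isolation VM, flip host
     * visibility via hypercall rather than relying on the C-bit alone. */
    if (hv_is_isolation_supported())
            return hv_set_mem_host_visibility(addr, numpages, !enc);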
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> Hyper-V exposes shared memory boundary via cpuid
> HYPERV_CPUID_ISOLATION_CONFIG and store it in the
> shared_gpa_boundary of ms_hyperv struct. This prepares
> to share memory with host for SNP guest.
>
> Signed-off-by: Tianyu Lan
> --
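A sketch of what reading that CPUID leaf looks like; the EBX bit layout is an
assumption based on the commit text, not a quote of the patch:

    u32 eax, ebx, ecx, edx;

    cpuid(HYPERV_CPUID_ISOLATION_CONFIG, &eax, &ebx, &ecx, &edx);
    /* Assume the low bits of EBX give the boundary position in bits,
     * e.g. 39 => a shared_gpa_boundary at 1 << 39. */
    ms_hyperv.shared_gpa_boundary = BIT_ULL(ebx & 0x3f);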
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM
>
> Hyperv exposes GHCB page via SEV ES GHCB MSR for SNP guest
> to communicate with hypervisor. Map GHCB page for all
> cpus to read/write MSR register and submit hvcall request
> via ghcb page.
>
> Signed-off-by: Tianyu Lan
> ---
> Change
From: Christoph Hellwig Sent: Monday, August 30, 2021 5:01 AM
>
> Sorry for the delayed answer, but I look at the vmap_pfn usage in the
> previous version and tried to come up with a better version. This
> mostly untested branch:
>
> http://git.infradead.org/users/hch/misc.git/shortlog/refs/hea
From: Tianyu Lan Sent: Friday, August 20, 2021 11:04 AM
>
> On 8/21/2021 12:08 AM, Michael Kelley wrote:
> }
> >>> The whole approach here is to do dma remapping on each individual page
> >>> of the I/O buffer. But wouldn't it be possible to use dma_map_sg() to map
> >>> each scatt
From: Tianyu Lan Sent: Friday, August 20, 2021 8:20 AM
>
> On 8/20/2021 2:17 AM, Michael Kelley wrote:
> > From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
> >
> > I'm not clear on why payload->range.offset needs to be set again.
> > Even after the dma mapping is done, doesn't the offset i
From: h...@lst.de Sent: Thursday, August 19, 2021 9:33 PM
>
> On Thu, Aug 19, 2021 at 06:17:40PM +, Michael Kelley wrote:
> > >
> > > @@ -1824,6 +1848,13 @@ static int storvsc_queuecommand(struct Scsi_Host
> > > *host, struct scsi_cmnd *scmnd)
> > > payload->range.len = length;
> >
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
>
Subject line tag should be "scsi: storvsc:"
> In Isolation VM, all shared memory with host needs to mark visible
> to host via hvcall. vmbus_establish_gpadl() has already done it for
> storvsc rx/tx ring buffer. The page buffer used by vm
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
>
The Subject line tag should be "hv_netvsc:".
> In Isolation VM, all shared memory with host needs to mark visible
> to host via hvcall. vmbus_establish_gpadl() has already done it for
> netvsc rx/tx ring buffer. The page buffer used by vm
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
>
> Hyper-V Isolation VM requires bounce buffer support to copy
> data from/to encrypted memory and so enable swiotlb force
> mode to use swiotlb bounce buffer for DMA transaction.
>
> In Isolation VM with AMD SEV, the bounce buffer needs to
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
>
> VMbus ring buffer are shared with host and it's need to
s/it's need/it needs/
> be accessed via extra address space of Isolation VM with
> SNP support. This patch is to map the ring buffer
> address in extra address space via ioremap().
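A sketch of that mapping step, reusing ms_hyperv.shared_gpa_boundary from
earlier in the series; ring_phys/ring_size are placeholders, and later rounds
of review steer this toward vmap_pfn() instead of ioremap():

    /* Map the ring buffer through its shared alias above the GPA boundary
     * so both guest and host see the same decrypted pages. */
    void __iomem *ring_va = ioremap_cache(ring_phys +
                                          ms_hyperv.shared_gpa_boundary,
                                          ring_size);
    if (!ring_va)
            return -ENOMEM;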
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
>
> Hyper-V provides two kinds of Isolation VMs. VBS(Virtualization-based
> security) and AMD SEV-SNP unenlightened Isolation VMs. This patchset
> is to add support for these Isolation VM support in Linux.
>
A general comment about this series
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
>
> The monitor pages in the CHANNELMSG_INITIATE_CONTACT msg are shared
> with host in Isolation VM and so it's necessary to use hvcall to set
> them visible to host. In Isolation VM with AMD SEV SNP, the access
> address should be in the ext
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
>
> Hyper-V provides ghcb hvcall to handle VMBus
> HVCALL_SIGNAL_EVENT and HVCALL_POST_MESSAGE
> msg in SNP Isolation VM. Add such support.
>
> Signed-off-by: Tianyu Lan
> ---
> arch/x86/hyperv/ivm.c | 43
From: Michael Kelley Sent: Friday, August 13, 2021 12:31 PM
> To: Tianyu Lan ; KY Srinivasan ; Haiyang Zhang ;
> Stephen Hemminger ; wei@kernel.org; Dexuan Cui ;
> t...@linutronix.de; mi...@redhat.com; b...@alien8.de; x...@kernel.org;
> h...@zytor.com; dave.han...@linux.intel.com; l.
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
> Subject: [PATCH V3 05/13] HV: Add Write/Read MSR registers via ghcb page
See previous comments about tag in the Subject line.
> Hyper-V provides GHCB protocol to write Synthetic Interrupt
> Controller MSR registers in Isolation VM with AMD
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
> Subject: [PATCH V3 04/13] HV: Mark vmbus ring buffer visible to host in
> Isolation VM
>
Use tag "Drivers: hv: vmbus:" in the Subject line.
> Mark vmbus ring buffer visible with set_memory_decrypted() when
> establish gpadl handle.
>
>
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
[snip]
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index ad8a5c586a35..1e4a0882820a 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -29,6 +29,8 @@
> #include
> #includ
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
> Subject: [PATCH V3 03/13] x86/HV: Add new hvcall guest address host
> visibility support
Use "x86/hyperv:" tag in the Subject line.
>
> From: Tianyu Lan
>
> Add new hvcall guest address host visibility support to mark
> memory visible
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
> Subject: [PATCH V3 02/13] x86/HV: Initialize shared memory boundary in the
> Isolation VM.
As with Patch 1, use the "x86/hyperv:" tag in the Subject line.
>
> From: Tianyu Lan
>
> Hyper-V exposes shared memory boundary via cpuid
> HYPE
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM
> Subject: [PATCH V3 01/13] x86/HV: Initialize GHCB page in Isolation VM
The subject line tag on patches under arch/x86/hyperv is generally
"x86/hyperv:".
There's some variation in the spelling of "hyperv", but let's go with the all
lowercase
From: Wei Liu Sent: Wednesday, February 3, 2021 7:05 AM
>
> Just like MSI/MSI-X, IO-APIC interrupts are remapped by Microsoft
> Hypervisor when Linux runs as the root partition. Implement an IRQ
> domain to handle mapping and unmapping of IO-APIC interrupts.
>
> Signed-off-by: Wei Liu
> ---
> v
From: Wei Liu Sent: Wednesday, February 3, 2021 4:47 AM
>
> On Wed, Jan 27, 2021 at 05:47:08AM +, Michael Kelley wrote:
> > From: Wei Liu Sent: Wednesday, January 20, 2021 4:01 AM
> > >
> > > Just like MSI/MSI-X, IO-APIC interrupts are remapped by Microsoft
> > > Hypervisor when Linux runs a
From: Wei Liu Sent: Wednesday, January 20, 2021 4:01 AM
>
> Just like MSI/MSI-X, IO-APIC interrupts are remapped by Microsoft
> Hypervisor when Linux runs as the root partition. Implement an IRQ
> domain to handle mapping and unmapping of IO-APIC interrupts.
>
> Signed-off-by: Wei Liu
> ---
>
From: Wei Liu Sent: Wednesday, January 20, 2021 4:01 AM
>
> The IOMMU code needs more work. We're sure for now the IRQ remapping
> hooks are not applicable when Linux is the root partition.
>
> Signed-off-by: Wei Liu
> Acked-by: Joerg Roedel
> Reviewed-by: Vitaly Kuznetsov
> ---
> drivers/io
From: Christoph Hellwig Sent: Wednesday, April 8, 2020 4:59 AM
>
> The pgprot argument to __vmalloc is always PROT_KERNEL now, so remove
> it.
>
> Signed-off-by: Christoph Hellwig
> ---
> arch/x86/hyperv/hv_init.c | 3 +--
> arch/x86/include/asm/kvm_host.h| 3 +--
> arch
From: Christoph Hellwig Sent: Wednesday, April 8, 2020 4:59 AM
>
> Use the designated helper for allocating executable kernel memory, and
> remove the now unused PAGE_KERNEL_RX define.
>
> Signed-off-by: Christoph Hellwig
> ---
> arch/x86/hyperv/hv_init.c| 2 +-
> arch/x86/include/
From: Wei Hu Sent: Tuesday, October 22, 2019 4:11 AM
>
> On Hyper-V, Generation 1 VMs can directly use VM's physical memory for
> their framebuffers. This can improve the efficiency of framebuffer and
> overall performance for VM. The physical memory assigned to framebuffer
> must be contiguous.
From: Boqun Feng Sent: Wednesday, October 16, 2019 5:57 PM
>
> Currently hyperv-iommu is implemented in a x86 specific way, for
> example, apic is used. So make the HYPERV_IOMMU Kconfig depend on X86
> as a preparation for enabling HyperV on architecture other than x86.
>
> Cc: Lan Tianyu
> Cc
From: lantianyu1...@gmail.com Sent: Wednesday, February 27, 2019 6:54 AM
>
> On the bare metal, enabling X2APIC mode requires interrupt remapping
> function which helps to deliver irq to cpu with 32-bit APIC ID.
> Hyper-V doesn't provide interrupt remapping function so far and Hyper-V
> MSI prot
From: Tianyu Lan Sent: Wednesday, February 27, 2019 6:54 AM
>
> Hyper-V doesn't provide irq remapping for IO-APIC. To enable x2apic,
> set x2apic destination mode to physical mode when x2apic is available
> and Hyper-V IOMMU driver makes sure cpus assigned with IO-APIC irqs have
> 8-bit APIC id.
From: Tianyu Lan Sent: Friday, February 22, 2019 4:12 AM
>
> On the bare metal, enabling X2APIC mode requires interrupt remapping
> function which helps to deliver irq to cpu with 32-bit APIC ID.
> Hyper-V doesn't provide interrupt remapping function so far and Hyper-V
> MSI protocol already sup
From: tianyu@microsoft.com Sent: Friday, February 22, 2019 4:12 AM
>
> On the bare metal, enabling X2APIC mode requires interrupt remapping
> function which helps to deliver irq to cpu with 32-bit APIC ID.
> Hyper-V doesn't provide interrupt remapping function so far and Hyper-V
> MSI protoc
From: lantianyu1...@gmail.com Sent: Monday, February 11, 2019 6:20 AM
> + /*
> + * Hyper-V doesn't provide irq remapping function for
> + * IO-APIC and so IO-APIC only accepts 8-bit APIC ID.
> + * Cpu's APIC ID is read from ACPI MADT table and APIC IDs
> + * in the MADT ta
From: lantianyu1...@gmail.com Sent: Monday, February 11, 2019 6:20 AM
>
> Hyper-V doesn't provide irq remapping for IO-APIC. To enable x2apic,
> set x2apic destination mode to physical mode when x2apic is available
> and Hyper-V IOMMU driver makes sure cpus assigned with IO-APIC irqs have
> 8-bi
From: lantianyu1...@gmail.com Sent: Saturday, February 2, 2019 5:15 AM
>
> Hyper-V doesn't provide irq remapping for IO-APIC. To enable x2apic,
> set x2apic destination mode to physical mode when x2apic is available
> and Hyper-V IOMMU driver makes sure cpus assigned with IO-APIC irqs have
> 8-
From: lantianyu1...@gmail.com Sent: Saturday, February 2, 2019 5:15 AM
I have a couple more comments
>
> +config HYPERV_IOMMU
> +        bool "Hyper-V IRQ Remapping Support"
> +        depends on HYPERV
> +        select IOMMU_API
> +        help
> +          Hyper-V stub IOMMU driver provides IRQ Rem
From: lantianyu1...@gmail.com Sent: Saturday, February 2, 2019 5:15 AM
>
> +/*
> + * According 82093AA IO-APIC spec , IO APIC has a 24-entry Interrupt
> + * Redirection Table.
> + */
> +#define IOAPIC_REMAPPING_ENTRY 24
The other unstated assumption here is that Hyper-V guest VMs
have only a single IO-APIC.