[PATCH v3 2/2] PCI: hv: Propagate coherence from VMbus device to PCI device

2022-03-24 Thread Michael Kelley via iommu
PCI pass-thru devices in a Hyper-V VM are represented as a VMbus device and as a PCI device. The coherence of the VMbus device is set based on the VMbus node in ACPI, but the PCI device has no ACPI node and defaults to not hardware coherent. This results in extra software coherence management ove
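
The gist reduces to copying one bit of state between the two struct devices. A minimal sketch of the idea, assuming the setter proposed in the earlier 4-patch version of this series (the hv_pci_* helper name here is invented, not taken from the patch):

    #include <linux/hyperv.h>
    #include <linux/pci.h>

    /* Sketch only: mirror the coherence that patch 1/2 put on the VMbus
     * device onto a device on the pass-thru PCI bus, so the DMA API stops
     * doing needless cache maintenance on ARM64. struct device::dma_coherent
     * exists only when the arch selects an ARCH_HAS_SYNC_DMA_FOR_* option,
     * as ARM64 does. dev_set_dma_coherent() is the wrapper sketched under
     * the dma-mapping entry further down this page. */
    static void hv_pci_copy_coherence(struct hv_device *hdev,
                                      struct pci_dev *pdev)
    {
            dev_set_dma_coherent(&pdev->dev, hdev->device.dma_coherent);
    }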

[PATCH v3 1/2] Drivers: hv: vmbus: Propagate VMbus coherence to each VMbus device

2022-03-24 Thread Michael Kelley via iommu
VMbus synthetic devices are not represented in the ACPI DSDT -- only the top level VMbus device is represented. As a result, on ARM64 coherence information in the _CCA method is not specified for synthetic devices, so they default to not hardware coherent. Drivers for some of these synthetic device
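
A minimal sketch of the mechanism (vmbus_* helper names invented; acpi_get_dma_attr() and DEV_DMA_COHERENT are the real ACPI-side API, and dev_set_dma_coherent() stands in for the setter proposed earlier in the series):

    #include <linux/acpi.h>
    #include <linux/hyperv.h>
    #include <linux/property.h>

    /* Set once while probing the top-level VMbus ACPI node, whose _CCA
     * is what acpi_get_dma_attr() consults on ARM64. */
    static bool vmbus_coherent;

    static void vmbus_note_coherence(struct acpi_device *adev)
    {
            vmbus_coherent = (acpi_get_dma_attr(adev) == DEV_DMA_COHERENT);
    }

    /* Applied to every synthetic child device as it is registered. */
    static void vmbus_apply_coherence(struct hv_device *child)
    {
            dev_set_dma_coherent(&child->device, vmbus_coherent);
    }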

[PATCH v3 0/2] Fix coherence for VMbus and PCI pass-thru devices in Hyper-V VM

2022-03-24 Thread Michael Kelley via iommu
Hyper-V VMs have VMbus synthetic devices and PCI pass-thru devices that are added dynamically via the VMbus protocol and are not represented in the ACPI DSDT. Only the top level VMbus node exists in the DSDT. As such, on ARM64 these devices don't pick up coherence information and default to not

[PATCH v2 2/2] PCI: hv: Propagate coherence from VMbus device to PCI device

2022-03-23 Thread Michael Kelley via iommu
PCI pass-thru devices in a Hyper-V VM are represented as a VMbus device and as a PCI device. The coherence of the VMbus device is set based on the VMbus node in ACPI, but the PCI device has no ACPI node and defaults to not hardware coherent. This results in extra software coherence management ove

[PATCH v2 1/2] Drivers: hv: vmbus: Propagate VMbus coherence to each VMbus device

2022-03-23 Thread Michael Kelley via iommu
VMbus synthetic devices are not represented in the ACPI DSDT -- only the top level VMbus device is represented. As a result, on ARM64 coherence information in the _CCA method is not specified for synthetic devices, so they default to not hardware coherent. Drivers for some of these synthetic device

[PATCH v2 0/2] Fix coherence for VMbus and PCI pass-thru devices in Hyper-V VM

2022-03-23 Thread Michael Kelley via iommu
Hyper-V VMs have VMbus synthetic devices and PCI pass-thru devices that are added dynamically via the VMbus protocol and are not represented in the ACPI DSDT. Only the top level VMbus node exists in the DSDT. As such, on ARM64 these devices don't pick up coherence information and default to not

[PATCH 3/4 RESEND] Drivers: hv: vmbus: Propagate VMbus coherence to each VMbus device

2022-03-17 Thread Michael Kelley via iommu
VMbus synthetic devices are not represented in the ACPI DSDT -- only the top level VMbus device is represented. As a result, on ARM64 coherence information in the _CCA method is not specified for synthetic devices, so they default to not hardware coherent. Drivers for some of these synthetic device

[PATCH 4/4 RESEND] PCI: hv: Propagate coherence from VMbus device to PCI device

2022-03-17 Thread Michael Kelley via iommu
PCI pass-thru devices in a Hyper-V VM are represented as a VMbus device and as a PCI device. The coherence of the VMbus device is set based on the VMbus node in ACPI, but the PCI device has no ACPI node and defaults to not hardware coherent. This results in extra software coherence management ove

[PATCH 1/4 RESEND] ACPI: scan: Export acpi_get_dma_attr()

2022-03-17 Thread Michael Kelley via iommu
Export acpi_get_dma_attr() so that it can be used by the Hyper-V VMbus driver, which may be built as a module. The related function acpi_dma_configure_id() is already exported. Signed-off-by: Michael Kelley --- drivers/acpi/scan.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/acpi/
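
Given the diffstat (1 file changed, 1 insertion), the change is plausibly just the export line below; whether the GPL-only variant was used is not visible in the preview:

    /* drivers/acpi/scan.c -- placed after the definition of
     * acpi_get_dma_attr(); GPL-only export assumed here: */
    EXPORT_SYMBOL_GPL(acpi_get_dma_attr);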

[PATCH 2/4 RESEND] dma-mapping: Add wrapper function to set dma_coherent

2022-03-17 Thread Michael Kelley via iommu
Add a wrapper function to set dma_coherent, avoiding the need for complex #ifdefs when setting it in architecture-independent code. Signed-off-by: Michael Kelley --- include/linux/dma-map-ops.h | 9 + 1 file changed, 9 insertions(+) diff --git a/include/linux/dma-map-ops.h b/include/li
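
A sketch of the shape such a wrapper takes; the real name and config guard are cut off in the preview, and struct device::dma_coherent only exists when the architecture selects one of the ARCH_HAS_SYNC_DMA_FOR_* options, hence the empty stub:

    #include <linux/device.h>

    #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
        defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU) || \
        defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL)
    static inline void dev_set_dma_coherent(struct device *dev, bool coherent)
    {
            dev->dma_coherent = coherent;
    }
    #else
    static inline void dev_set_dma_coherent(struct device *dev, bool coherent)
    {
    }
    #endif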

[PATCH 0/4 RESEND] Fix coherence for VMbus and PCI pass-thru devices in Hyper-V VM

2022-03-17 Thread Michael Kelley via iommu
[Resend to fix an email address typo for Bjorn Helgaas] Hyper-V VMs have VMbus synthetic devices and PCI pass-thru devices that are added dynamically via the VMbus protocol and are not represented in the ACPI DSDT. Only the top level VMbus node exists in the DSDT. As such, on ARM64 these devices

[PATCH 2/4] dma-mapping: Add wrapper function to set dma_coherent

2022-03-17 Thread Michael Kelley via iommu
Add a wrapper function to set dma_coherent, avoiding the need for complex #ifdefs when setting it in architecture-independent code. Signed-off-by: Michael Kelley --- include/linux/dma-map-ops.h | 9 + 1 file changed, 9 insertions(+) diff --git a/include/linux/dma-map-ops.h b/include/li

[PATCH 4/4] PCI: hv: Propagate coherence from VMbus device to PCI device

2022-03-17 Thread Michael Kelley via iommu
PCI pass-thru devices in a Hyper-V VM are represented as a VMbus device and as a PCI device. The coherence of the VMbus device is set based on the VMbus node in ACPI, but the PCI device has no ACPI node and defaults to not hardware coherent. This results in extra software coherence management ove

[PATCH 1/4] ACPI: scan: Export acpi_get_dma_attr()

2022-03-17 Thread Michael Kelley via iommu
Export acpi_get_dma_attr() so that it can be used by the Hyper-V VMbus driver, which may be built as a module. The related function acpi_dma_configure_id() is already exported. Signed-off-by: Michael Kelley --- drivers/acpi/scan.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/acpi/

[PATCH 0/4] Fix coherence for VMbus and PCI pass-thru devices in Hyper-V VM

2022-03-17 Thread Michael Kelley via iommu
Hyper-V VMs have VMbus synthetic devices and PCI pass-thru devices that are added dynamically via the VMbus protocol and are not represented in the ACPI DSDT. Only the top level VMbus node exists in the DSDT. As such, on ARM64 these devices don't pick up coherence information and default to not

[PATCH 3/4] Drivers: hv: vmbus: Propagate VMbus coherence to each VMbus device

2022-03-17 Thread Michael Kelley via iommu
VMbus synthetic devices are not represented in the ACPI DSDT -- only the top level VMbus device is represented. As a result, on ARM64 coherence information in the _CCA method is not specified for synthetic devices, so they default to not hardware coherent. Drivers for some of these synthetic device

RE: [PATCH V5 12/12] net: netvsc: Add Isolation VM support for netvsc driver

2021-09-15 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM > > In Isolation VM, all shared memory with host needs to mark visible > to host via hvcall. vmbus_establish_gpadl() has already done it for > netvsc rx/tx ring buffer. The page buffer used by vmbus_sendpacket_ > pagebuffer() still nee
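
A condensed sketch of the netvsc_dma_map() pattern under review, with the signature adapted from the quoted patch and the body simplified (direction and error unwinding are approximations):

    #include <linux/dma-mapping.h>
    #include <linux/hyperv.h>
    #include <linux/io.h>

    /* Bounce each page buffer through swiotlb with dma_map_single() so
     * the data lands in host-visible memory, then rewrite the PFN/offset
     * that will travel over VMbus to point at the bounce page. */
    static int netvsc_dma_map_sketch(struct hv_device *hv_dev,
                                     struct hv_page_buffer *pb, u32 count)
    {
            u32 i;

            for (i = 0; i < count; i++) {
                    void *va = phys_to_virt(((u64)pb[i].pfn << HV_HYP_PAGE_SHIFT)
                                            + pb[i].offset);
                    dma_addr_t dma = dma_map_single(&hv_dev->device, va,
                                                    pb[i].len, DMA_TO_DEVICE);

                    if (dma_mapping_error(&hv_dev->device, dma))
                            return -ENOMEM;

                    pb[i].pfn = dma >> HV_HYP_PAGE_SHIFT;
                    pb[i].offset = dma & ~HV_HYP_PAGE_MASK;
            }
            return 0;
    }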

RE: [PATCH V5 11/12] scsi: storvsc: Add Isolation VM support for storvsc driver

2021-09-15 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM > > In Isolation VM, all shared memory with host needs to mark visible > to host via hvcall. vmbus_establish_gpadl() has already done it for > storvsc rx/tx ring buffer. The page buffer used by vmbus_sendpacket_ > mpb_desc() still needs t

RE: [PATCH V5 10/12] hyperv/IOMMU: Enable swiotlb bounce buffer for Isolation VM

2021-09-15 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM > > hyperv Isolation VM requires bounce buffer support to copy > data from/to encrypted memory and so enable swiotlb force > mode to use swiotlb bounce buffer for DMA transaction. > > In Isolation VM with AMD SEV, the bounce buffer needs
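
The enabling step itself is small; a sketch against the 2021-era swiotlb interface, which has since been reworked upstream:

    #include <asm/mshyperv.h>
    #include <linux/swiotlb.h>

    /* Force all DMA through the (decrypted) swiotlb bounce buffer when
     * running in an Isolation VM. */
    static void __init hv_swiotlb_setup(void)
    {
            if (hv_is_isolation_supported())
                    swiotlb_force = SWIOTLB_FORCE;
    }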

RE: [PATCH V5 09/12] x86/Swiotlb: Add Swiotlb bounce buffer remap function for HV IVM

2021-09-15 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM > > In Isolation VM with AMD SEV, bounce buffer needs to be accessed via > extra address space which is above shared_gpa_boundary > (E.G 39 bit address line) reported by Hyper-V CPUID ISOLATION_CONFIG. > The access physical address will b
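
A sketch of the address adjustment being described, using the ms_hyperv.shared_gpa_boundary field this series adds:

    #include <asm/mshyperv.h>
    #include <linux/io.h>

    /* In an SEV Isolation VM the host-visible alias of a guest physical
     * page sits above shared_gpa_boundary (e.g. bit 39), so the bounce
     * buffer must be accessed through a remap of that alias rather than
     * through the direct map. */
    static void *hv_map_shared(phys_addr_t pa, size_t size)
    {
            return memremap(pa + ms_hyperv.shared_gpa_boundary, size,
                            MEMREMAP_WB);
    }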

RE: [PATCH V5 07/12] Drivers: hv: vmbus: Add SNP support for VMbus channel initiate message

2021-09-15 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM > > The monitor pages in the CHANNELMSG_INITIATE_CONTACT msg are shared > with host in Isolation VM and so it's necessary to use hvcall to set > them visible to host. In Isolation VM with AMD SEV SNP, the access > address should be in the

RE: [PATCH V5 05/12] x86/hyperv: Add Write/Read MSR registers via ghcb page

2021-09-15 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM > > Hyperv provides GHCB protocol to write Synthetic Interrupt > Controller MSR registers in Isolation VM with AMD SEV SNP > and these registers are emulated by hypervisor directly. > Hyperv requires to write SINTx MSR registers twice. Fi
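
A sketch of the double write being described; hv_ghcb_msr_write() matches the helper this patch adds, though the surrounding flow here is approximate:

    #include <asm/mshyperv.h>
    #include <asm/msr.h>

    static void hv_sint_msr_write(u64 msr, u64 value)
    {
            hv_ghcb_msr_write(msr, value);  /* reaches the hypervisor */
            wrmsrl(msr, value);             /* reaches the paravisor
                                               that emulates the MSR  */
    }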

RE: [PATCH V5 04/12] Drivers: hv: vmbus: Mark vmbus ring buffer visible to host in Isolation VM

2021-09-15 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Tuesday, September 14, 2021 6:39 AM > > Mark vmbus ring buffer visible with set_memory_decrypted() when > establish gpadl handle. > > Signed-off-by: Tianyu Lan > --- > Change since v4 > * Change gpadl handle in netvsc and uio driver from u32 to > struct vmbu
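
A sketch of the two halves of that lifecycle (function names invented; set_memory_decrypted()/set_memory_encrypted() are the real primitives):

    #include <asm/set_memory.h>
    #include <linux/pfn.h>

    static int hv_ring_make_host_visible(void *kbuffer, u32 size)
    {
            /* decrypt before the GPADL is offered to the host */
            return set_memory_decrypted((unsigned long)kbuffer, PFN_UP(size));
    }

    static int hv_ring_make_private(void *kbuffer, u32 size)
    {
            /* re-encrypt on GPADL teardown */
            return set_memory_encrypted((unsigned long)kbuffer, PFN_UP(size));
    }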

RE: [PATCH V4 08/13] hyperv/vmbus: Initialize VMbus ring buffer for Isolation VM

2021-09-02 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Thursday, September 2, 2021 6:36 AM > > On 9/2/2021 8:23 AM, Michael Kelley wrote: > >> + } else { > >> + pages_wraparound = kcalloc(page_cnt * 2 - 1, > >> + sizeof(struct page *), > >> + GFP_

RE: [PATCH V4 00/13] x86/Hyper-V: Add Hyper-V Isolation VM support

2021-09-02 Thread Michael Kelley via iommu
From: Christoph Hellwig Sent: Thursday, September 2, 2021 1:00 AM > > On Tue, Aug 31, 2021 at 05:16:19PM +, Michael Kelley wrote: > > As a quick overview, I think there are four places where the > > shared_gpa_boundary must be applied to adjust the guest physical > > address that is used. Ea

RE: [PATCH V4 12/13] hv_netvsc: Add Isolation VM support for netvsc driver

2021-09-01 Thread Michael Kelley via iommu
From: Michael Kelley Sent: Wednesday, September 1, 2021 7:34 PM [snip] > > +int netvsc_dma_map(struct hv_device *hv_dev, > > + struct hv_netvsc_packet *packet, > > + struct hv_page_buffer *pb) > > +{ > > + u32 page_count = packet->cp_partial ? > > + packet

RE: [PATCH V4 05/13] hyperv: Add Write/Read MSR registers via ghcb page

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > > Hyperv provides GHCB protocol to write Synthetic Interrupt > Controller MSR registers in Isolation VM with AMD SEV SNP > and these registers are emulated by hypervisor directly. > Hyperv requires to write SINTx MSR registers twice. First

RE: [PATCH V4 12/13] hv_netvsc: Add Isolation VM support for netvsc driver

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > > In Isolation VM, all shared memory with host needs to mark visible > to host via hvcall. vmbus_establish_gpadl() has already done it for > netvsc rx/tx ring buffer. The page buffer used by vmbus_sendpacket_ > pagebuffer() still needs to

RE: [PATCH V4 13/13] hv_storvsc: Add Isolation VM support for storvsc driver

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > Per previous comment, the Subject line tag should be "scsi: storvsc: " > In Isolation VM, all shared memory with host needs to mark visible > to host via hvcall. vmbus_establish_gpadl() has already done it for > storvsc rx/tx ring buffer

RE: [PATCH V4 11/13] hyperv/IOMMU: Enable swiotlb bounce buffer for Isolation VM

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > > hyperv Isolation VM requires bounce buffer support to copy > data from/to encrypted memory and so enable swiotlb force > mode to use swiotlb bounce buffer for DMA transaction. > > In Isolation VM with AMD SEV, the bounce buffer needs to

RE: [PATCH V4 08/13] hyperv/vmbus: Initialize VMbus ring buffer for Isolation VM

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > Subject tag should be "Drivers: hv: vmbus: " > VMbus ring buffer are shared with host and it's need to > be accessed via extra address space of Isolation VM with > AMD SNP support. This patch is to map the ring buffer > address in extra

RE: [PATCH V4 07/13] hyperv/Vmbus: Add SNP support for VMbus channel initiate message

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > Subject line tag should be "Drivers: hv: vmbus:" > The monitor pages in the CHANNELMSG_INITIATE_CONTACT msg are shared > with host in Isolation VM and so it's necessary to use hvcall to set > them visible to host. In Isolation VM with AM

RE: [PATCH V4 06/13] hyperv: Add ghcb hvcall support for SNP VM

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > Subject line tag should probably be "x86/hyperv:" since the majority of the code added is under arch/x86. > hyperv provides ghcb hvcall to handle VMBus > HVCALL_SIGNAL_EVENT and HVCALL_POST_MESSAGE > msg in SNP Isolation VM. Add such sup

RE: [PATCH V4 04/13] hyperv: Mark vmbus ring buffer visible to host in Isolation VM

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > > Mark vmbus ring buffer visible with set_memory_decrypted() when > establish gpadl handle. > > Signed-off-by: Tianyu Lan > --- > Change since v3: >* Change vmbus_teardown_gpadl() parameter and put gpadl handle, >buffer a

RE: [PATCH V4 03/13] x86/hyperv: Add new hvcall guest address host visibility support

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > > Add new hvcall guest address host visibility support to mark > memory visible to host. Call it inside set_memory_decrypted > /encrypted(). Add HYPERVISOR feature check in the > hv_is_isolation_supported() to optimize in non-virtualizatio

RE: [PATCH V4 02/13] x86/hyperv: Initialize shared memory boundary in the Isolation VM.

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > > Hyper-V exposes shared memory boundary via cpuid > HYPERV_CPUID_ISOLATION_CONFIG and store it in the > shared_gpa_boundary of ms_hyperv struct. This prepares > to share memory with host for SNP guest. > > Signed-off-by: Tianyu Lan > --
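
A sketch close to the quoted description, written with the ms_hyperv_info field names that eventually landed (treat the details as approximate):

    #include <asm/hyperv-tlfs.h>
    #include <asm/mshyperv.h>
    #include <linux/bits.h>

    static void __init hv_get_isolation_config(void)
    {
            ms_hyperv.isolation_config_a =
                    cpuid_eax(HYPERV_CPUID_ISOLATION_CONFIG);
            ms_hyperv.isolation_config_b =
                    cpuid_ebx(HYPERV_CPUID_ISOLATION_CONFIG);

            /* the boundary is reported as a bit position, e.g. bit 39 */
            if (ms_hyperv.shared_gpa_boundary_active)
                    ms_hyperv.shared_gpa_boundary =
                            BIT_ULL(ms_hyperv.shared_gpa_boundary_bits);
    }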

RE: [PATCH V4 01/13] x86/hyperv: Initialize GHCB page in Isolation VM

2021-09-01 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 27, 2021 10:21 AM > > Hyperv exposes GHCB page via SEV ES GHCB MSR for SNP guest > to communicate with hypervisor. Map GHCB page for all > cpus to read/write MSR register and submit hvcall request > via ghcb page. > > Signed-off-by: Tianyu Lan > --- > Chagn

RE: [PATCH V4 00/13] x86/Hyper-V: Add Hyper-V Isolation VM support

2021-08-31 Thread Michael Kelley via iommu
From: Christoph Hellwig Sent: Monday, August 30, 2021 5:01 AM > > Sorry for the delayed answer, but I look at the vmap_pfn usage in the > previous version and tried to come up with a better version. This > mostly untested branch: > > http://git.infradead.org/users/hch/misc.git/shortlog/refs/hea

RE: [PATCH V3 13/13] HV/Storvsc: Add Isolation VM support for storvsc driver

2021-08-20 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 20, 2021 11:04 AM > > On 8/21/2021 12:08 AM, Michael Kelley wrote: > } > >>> The whole approach here is to do dma remapping on each individual page > >>> of the I/O buffer. But wouldn't it be possible to use dma_map_sg() to map > >>> each scatt

RE: [PATCH V3 13/13] HV/Storvsc: Add Isolation VM support for storvsc driver

2021-08-20 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, August 20, 2021 8:20 AM > > On 8/20/2021 2:17 AM, Michael Kelley wrote: > > From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM > > > > I'm not clear on why payload->range.offset needs to be set again. > > Even after the dma mapping is done, doesn't the offset i

RE: [PATCH V3 13/13] HV/Storvsc: Add Isolation VM support for storvsc driver

2021-08-20 Thread Michael Kelley via iommu
From: h...@lst.de Sent: Thursday, August 19, 2021 9:33 PM > > On Thu, Aug 19, 2021 at 06:17:40PM +, Michael Kelley wrote: > > > > > > @@ -1824,6 +1848,13 @@ static int storvsc_queuecommand(struct Scsi_Host > > > *host, struct scsi_cmnd *scmnd) > > > payload->range.len = length; > >

RE: [PATCH V3 13/13] HV/Storvsc: Add Isolation VM support for storvsc driver

2021-08-19 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM > Subject line tag should be "scsi: storvsc:" > In Isolation VM, all shared memory with host needs to mark visible > to host via hvcall. vmbus_establish_gpadl() has already done it for > storvsc rx/tx ring buffer. The page buffer used by vm

RE: [PATCH V3 12/13] HV/Netvsc: Add Isolation VM support for netvsc driver

2021-08-19 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM > The Subject line tag should be "hv_netvsc:". > In Isolation VM, all shared memory with host needs to mark visible > to host via hvcall. vmbus_establish_gpadl() has already done it for > netvsc rx/tx ring buffer. The page buffer used by vm

RE: [PATCH V3 11/13] HV/IOMMU: Enable swiotlb bounce buffer for Isolation VM

2021-08-19 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM > > Hyper-V Isolation VM requires bounce buffer support to copy > data from/to encrypted memory and so enable swiotlb force > mode to use swiotlb bounce buffer for DMA transaction. > > In Isolation VM with AMD SEV, the bounce buffer needs to

RE: [PATCH V3 08/13] HV/Vmbus: Initialize VMbus ring buffer for Isolation VM

2021-08-16 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM > > VMbus ring buffer are shared with host and it's need to s/it's need/it needs/ > be accessed via extra address space of Isolation VM with > SNP support. This patch is to map the ring buffer > address in extra address space via ioremap().

RE: [PATCH V3 00/13] x86/Hyper-V: Add Hyper-V Isolation VM support

2021-08-16 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM > > Hyper-V provides two kinds of Isolation VMs. VBS(Virtualization-based > security) and AMD SEV-SNP unenlightened Isolation VMs. This patchset > is to add support for these Isolation VM support in Linux. > A general comment about this ser

RE: [PATCH V3 07/13] HV/Vmbus: Add SNP support for VMbus channel initiate message

2021-08-13 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM > > The monitor pages in the CHANNELMSG_INITIATE_CONTACT msg are shared > with host in Isolation VM and so it's necessary to use hvcall to set > them visible to host. In Isolation VM with AMD SEV SNP, the access > address should be in the ext

RE: [PATCH V3 06/13] HV: Add ghcb hvcall support for SNP VM

2021-08-13 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM > > Hyper-V provides ghcb hvcall to handle VMBus > HVCALL_SIGNAL_EVENT and HVCALL_POST_MESSAGE > msg in SNP Isolation VM. Add such support. > > Signed-off-by: Tianyu Lan > --- > arch/x86/hyperv/ivm.c | 43

RE: [PATCH V3 05/13] HV: Add Write/Read MSR registers via ghcb page

2021-08-13 Thread Michael Kelley via iommu
From: Michael Kelley Sent: Friday, August 13, 2021 12:31 PM > To: Tianyu Lan ; KY Srinivasan ; > Haiyang Zhang ; > Stephen Hemminger ; wei@kernel.org; Dexuan Cui > ; > t...@linutronix.de; mi...@redhat.com; b...@alien8.de; x...@kernel.org; > h...@zytor.com; dave.han...@linux.intel.com; > l.

RE: [PATCH V3 05/13] HV: Add Write/Read MSR registers via ghcb page

2021-08-13 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM > Subject: [PATCH V3 05/13] HV: Add Write/Read MSR registers via ghcb page See previous comments about tag in the Subject line. > Hyper-V provides GHCB protocol to write Synthetic Interrupt > Controller MSR registers in Isolation VM with AMD

RE: [PATCH V3 04/13] HV: Mark vmbus ring buffer visible to host in Isolation VM

2021-08-12 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM > Subject: [PATCH V3 04/13] HV: Mark vmbus ring buffer visible to host in > Isolation VM > Use tag "Drivers: hv: vmbus:" in the Subject line. > Mark vmbus ring buffer visible with set_memory_decrypted() when > establish gpadl handle. > >

RE: [PATCH V3 03/13] x86/HV: Add new hvcall guest address host visibility support

2021-08-12 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM [snip] > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c > index ad8a5c586a35..1e4a0882820a 100644 > --- a/arch/x86/mm/pat/set_memory.c > +++ b/arch/x86/mm/pat/set_memory.c > @@ -29,6 +29,8 @@ > #include > #includ

RE: [PATCH V3 03/13] x86/HV: Add new hvcall guest address host visibility support

2021-08-12 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM > Subject: [PATCH V3 03/13] x86/HV: Add new hvcall guest address host > visibility support Use "x86/hyperv:" tag in the Subject line. > > From: Tianyu Lan > > Add new hvcall guest address host visibility support to mark > memory visible

RE: [PATCH V3 02/13] x86/HV: Initialize shared memory boundary in the Isolation VM.

2021-08-12 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM > Subject: [PATCH V3 02/13] x86/HV: Initialize shared memory boundary in the > Isolation VM. As with Patch 1, use the "x86/hyperv:" tag in the Subject line. > > From: Tianyu Lan > > Hyper-V exposes shared memory boundary via cpuid > HYPE

RE: [PATCH V3 01/13] x86/HV: Initialize GHCB page in Isolation VM

2021-08-12 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Monday, August 9, 2021 10:56 AM > Subject: [PATCH V3 01/13] x86/HV: Initialize GHCB page in Isolation VM The subject line tag on patches under arch/x86/hyperv is generally "x86/hyperv:". There's some variation in the spelling of "hyperv", but let's go with the all lowercas

RE: [PATCH v6 16/16] iommu/hyperv: setup an IO-APIC IRQ remapping domain for root partition

2021-02-04 Thread Michael Kelley via iommu
From: Wei Liu Sent: Wednesday, February 3, 2021 7:05 AM > > Just like MSI/MSI-X, IO-APIC interrupts are remapped by Microsoft > Hypervisor when Linux runs as the root partition. Implement an IRQ > domain to handle mapping and unmapping of IO-APIC interrupts. > > Signed-off-by: Wei Liu > --- > v
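
A skeleton of the shape such a remapping domain takes; the real driver additionally asks the hypervisor to program the remapped IO-APIC entry, which is elided here:

    #include <linux/irqdomain.h>

    static int hv_ioapic_ir_alloc(struct irq_domain *d, unsigned int virq,
                                  unsigned int nr_irqs, void *arg)
    {
            /* vectors still come from the parent x86 vector domain */
            return irq_domain_alloc_irqs_parent(d, virq, nr_irqs, arg);
    }

    static void hv_ioapic_ir_free(struct irq_domain *d, unsigned int virq,
                                  unsigned int nr_irqs)
    {
            irq_domain_free_irqs_common(d, virq, nr_irqs);
    }

    static const struct irq_domain_ops hv_ioapic_ir_ops = {
            .alloc = hv_ioapic_ir_alloc,
            .free  = hv_ioapic_ir_free,
    };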

RE: [PATCH v5 16/16] iommu/hyperv: setup an IO-APIC IRQ remapping domain for root partition

2021-02-04 Thread Michael Kelley via iommu
From: Wei Liu Sent: Wednesday, February 3, 2021 4:47 AM > > On Wed, Jan 27, 2021 at 05:47:08AM +, Michael Kelley wrote: > > From: Wei Liu Sent: Wednesday, January 20, 2021 4:01 AM > > > > > > Just like MSI/MSI-X, IO-APIC interrupts are remapped by Microsoft > > > Hypervisor when Linux runs a

RE: [PATCH v5 16/16] iommu/hyperv: setup an IO-APIC IRQ remapping domain for root partition

2021-01-26 Thread Michael Kelley via iommu
From: Wei Liu Sent: Wednesday, January 20, 2021 4:01 AM > > Just like MSI/MSI-X, IO-APIC interrupts are remapped by Microsoft > Hypervisor when Linux runs as the root partition. Implement an IRQ > domain to handle mapping and unmapping of IO-APIC interrupts. > > Signed-off-by: Wei Liu > --- >

RE: [PATCH v5 04/16] iommu/hyperv: don't setup IRQ remapping when running as root

2021-01-25 Thread Michael Kelley via iommu
From: Wei Liu Sent: Wednesday, January 20, 2021 4:01 AM > > The IOMMU code needs more work. We're sure for now the IRQ remapping > hooks are not applicable when Linux is the root partition. > > Signed-off-by: Wei Liu > Acked-by: Joerg Roedel > Reviewed-by: Vitaly Kuznetsov > --- > drivers/io

RE: [PATCH 20/28] mm: remove the pgprot argument to __vmalloc

2020-04-10 Thread Michael Kelley via iommu
From: Christoph Hellwig Sent: Wednesday, April 8, 2020 4:59 AM > > The pgprot argument to __vmalloc is always PAGE_KERNEL now, so remove > it. > > Signed-off-by: Christoph Hellwig > --- > arch/x86/hyperv/hv_init.c | 3 +-- > arch/x86/include/asm/kvm_host.h| 3 +-- > arch

RE: [PATCH 01/28] x86/hyperv: use vmalloc_exec for the hypercall page

2020-04-10 Thread Michael Kelley via iommu
From: Christoph Hellwig Sent: Wednesday, April 8, 2020 4:59 AM > > Use the designated helper for allocating executable kernel memory, and > remove the now unused PAGE_KERNEL_RX define. > > Signed-off-by: Christoph Hellwig > --- > arch/x86/hyperv/hv_init.c| 2 +- > arch/x86/include/
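
The implied call-site change in hv_init.c is likely along these lines; the preview truncates before the hunk, so treat this as a reconstruction:

    /* before: pgprot passed explicitly at the call site */
    hv_hypercall_pg = __vmalloc(PAGE_SIZE, GFP_KERNEL, PAGE_KERNEL_RX);

    /* after: the helper picks an executable pgprot, and the now-unused
     * PAGE_KERNEL_RX define can be dropped */
    hv_hypercall_pg = vmalloc_exec(PAGE_SIZE);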

RE: [PATCH] video: hyperv: hyperv_fb: Use physical memory for fb on HyperV Gen 1 VMs.

2019-11-01 Thread Michael Kelley via iommu
From: Wei Hu Sent: Tuesday, October 22, 2019 4:11 AM > > On Hyper-V, Generation 1 VMs can directly use VM's physical memory for > their framebuffers. This can improve the efficiency of framebuffer and > overall performance for VM. The physical memory assigned to framebuffer > must be contiguous.

RE: [PATCH] drivers: iommu: hyperv: Make HYPERV_IOMMU only available on x86

2019-10-17 Thread Michael Kelley via iommu
From: Boqun Feng Sent: Wednesday, October 16, 2019 5:57 PM > > Currently hyperv-iommu is implemented in a x86 specific way, for > example, apic is used. So make the HYPERV_IOMMU Kconfig depend on X86 > as a preparation for enabling HyperV on architecture other than x86. > > Cc: Lan Tianyu > Cc

RE: [PATCH V6 2/3] IOMMU/Hyper-V: Add Hyper-V stub IOMMU driver

2019-02-27 Thread Michael Kelley via iommu
From: lantianyu1...@gmail.com Sent: Wednesday, February 27, 2019 6:54 AM > > On the bare metal, enabling X2APIC mode requires interrupt remapping > function which helps to deliver irq to cpu with 32-bit APIC ID. > Hyper-V doesn't provide interrupt remapping function so far and Hyper-V > MSI prot

RE: [PATCH V6 1/3] x86/Hyper-V: Set x2apic destination mode to physical when x2apic is available

2019-02-27 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Wednesday, February 27, 2019 6:54 AM > > Hyper-V doesn't provide irq remapping for IO-APIC. To enable x2apic, > set x2apic destination mode to physical mode when x2apic is available > and Hyper-V IOMMU driver makes sure cpus assigned with IO-APIC irqs have > 8-bit APIC id.
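
A sketch matching what eventually landed in ms_hyperv_init_platform() (approximate):

    #include <asm/apic.h>

    /* With no IO-APIC interrupt remapping, force physical destination
     * mode so IO-APIC irqs keep working with 8-bit APIC IDs while
     * x2apic is enabled. */
    #if IS_ENABLED(CONFIG_HYPERV_IOMMU)
            if (x2apic_supported())
                    x2apic_phys = 1;
    #endif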

RE: [PATCH V5 0/3] x86/Hyper-V/IOMMU: Add Hyper-V IOMMU driver to support x2apic mode

2019-02-25 Thread Michael Kelley via iommu
From: Tianyu Lan Sent: Friday, February 22, 2019 4:12 AM > > On the bare metal, enabling X2APIC mode requires interrupt remapping > function which helps to deliver irq to cpu with 32-bit APIC ID. > Hyper-V doesn't provide interrupt remapping function so far and Hyper-V > MSI protocol already sup

RE: [PATCH V5 2/3] HYPERV/IOMMU: Add Hyper-V stub IOMMU driver

2019-02-22 Thread Michael Kelley via iommu
From: tianyu@microsoft.com Sent: Friday, February 22, 2019 4:12 AM > > On the bare metal, enabling X2APIC mode requires interrupt remapping > function which helps to deliver irq to cpu with 32-bit APIC ID. > Hyper-V doesn't provide interrupt remapping function so far and Hyper-V > MSI protoc

RE: [PATCH V4 2/3] HYPERV/IOMMU: Add Hyper-V stub IOMMU driver

2019-02-21 Thread Michael Kelley via iommu
From: lantianyu1...@gmail.com Sent: Monday, February 11, 2019 6:20 AM > + /* > + * Hyper-V doesn't provide irq remapping function for > + * IO-APIC and so IO-APIC only accepts 8-bit APIC ID. > + * Cpu's APIC ID is read from ACPI MADT table and APIC IDs > + * in the MADT ta

RE: [PATCH V4 1/3] x86/Hyper-V: Set x2apic destination mode to physical when x2apic is available

2019-02-21 Thread Michael Kelley via iommu
From: lantianyu1...@gmail.com Sent: Monday, February 11, 2019 6:20 AM > > Hyper-V doesn't provide irq remapping for IO-APIC. To enable x2apic, > set x2apic destination mode to physical mode when x2apic is available > and Hyper-V IOMMU driver makes sure cpus assigned with IO-APIC irqs have > 8-bi

RE: [PATCH V2 1/3] x86/Hyper-V: Set x2apic destination mode to physical when x2apic is available

2019-02-03 Thread Michael Kelley via iommu
From: lantianyu1...@gmail.com Sent: Saturday, February 2, 2019 5:15 AM > > Hyper-V doesn't provide irq remapping for IO-APIC. To enable x2apic, > set x2apic destination mode to physical mode when x2apic is available > and Hyper-V IOMMU driver makes sure cpus assigned with IO-APIC irqs have > 8-

RE: [PATCH V2 2/3] HYPERV/IOMMU: Add Hyper-V stub IOMMU driver

2019-02-03 Thread Michael Kelley via iommu
From: lantianyu1...@gmail.com Sent: Saturday, February 2, 2019 5:15 AM I have a couple more comments > > +config HYPERV_IOMMU > + bool "Hyper-V IRQ Remapping Support" > + depends on HYPERV > + select IOMMU_API > + help > + Hyper-V stub IOMMU driver provides IRQ Rem

RE: [PATCH V2 2/3] HYPERV/IOMMU: Add Hyper-V stub IOMMU driver

2019-02-03 Thread Michael Kelley via iommu
From: lantianyu1...@gmail.com Sent: Saturday, February 2, 2019 5:15 AM > > +/* > + * According 82093AA IO-APIC spec , IO APIC has a 24-entry Interrupt > + * Redirection Table. > + */ > +#define IOAPIC_REMAPPING_ENTRY 24 The other unstated assumption here is that Hyper-V guest VMs have only a si