On Thu, Apr 15, 2021 at 8:52 AM Ashish Kalra wrote:
>
> From: Ashish Kalra
>
> The series adds support for AMD SEV guest live migration commands. To protect
> the confidentiality of SEV protected guest memory while in transit, we need to
> use the SEV commands defined in the SEV API spec [1].
>
>
On Tue, Apr 13, 2021 at 4:47 AM Ashish Kalra wrote:
>
> On Mon, Apr 12, 2021 at 07:25:03PM -0700, Steve Rutherford wrote:
> > On Mon, Apr 12, 2021 at 6:48 PM Ashish Kalra wrote:
> > >
> > > On Mon, Apr 12, 2021 at 06:23:32PM -0700, Steve Rutherford wrote:
> >
On Mon, Apr 12, 2021 at 6:48 PM Ashish Kalra wrote:
>
> On Mon, Apr 12, 2021 at 06:23:32PM -0700, Steve Rutherford wrote:
> > On Mon, Apr 12, 2021 at 5:22 PM Steve Rutherford
> > wrote:
> > >
> > > On Mon, Apr 12, 2021 at 12:48 PM Ashish Kalra
> > &
On Mon, Apr 12, 2021 at 5:22 PM Steve Rutherford wrote:
>
> On Mon, Apr 12, 2021 at 12:48 PM Ashish Kalra wrote:
> >
> > From: Ashish Kalra
> >
> > Reset the host's shared pages list related to kernel
> > specific page encryption status settings bef
* If not booted using EFI, enable Live migration support.
> +*/
> + if (!efi_enabled(EFI_BOOT))
> + wrmsrl(MSR_KVM_SEV_LIVE_MIGRATION,
> + KVM_SEV_LIVE_MIGRATION_ENABLED);
> + } else {
> + pr_info("KVM enable live migration feature
> unsupported\n");
I might be misunderstanding this, but I'm not sure this log message is
correct: isn't the intention that the late initcall will be the one to
check whether this should be enabled later in this case?

I have a similar question above about the log message after the
"!efi_enabled(EFI_RUNTIME_SERVICES)" check: shouldn't that avoid logging
when !efi_enabled(EFI_BOOT), since the wrmsrl call has already been made
here?
> + }
> +}
> +
> void __init mem_encrypt_free_decrypted_mem(void)
> {
> unsigned long vaddr, vaddr_end, npages;
> --
> 2.17.1
>
Other than these:
Reviewed-by: Steve Rutherford
On Mon, Apr 12, 2021 at 12:48 PM Ashish Kalra wrote:
>
> From: Ashish Kalra
>
> Reset the host's shared pages list related to kernel
> specific page encryption status settings before we load a
> new kernel by kexec. We cannot reset the complete
> shared pages list here as we need to retain the
>
h.complete_userspace_io = complete_hypercall_exit;
> + return 0;
> + }
> default:
> ret = -KVM_ENOSYS;
> break;
> diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h
> index 8b86609849b9..847b83b75dc8 100644
> --- a/include/uapi/linux/kvm_para.h
> +++ b/include/uapi/linux/kvm_para.h
> @@ -29,6 +29,7 @@
> #define KVM_HC_CLOCK_PAIRING 9
> #define KVM_HC_SEND_IPI 10
> #define KVM_HC_SCHED_YIELD 11
> +#define KVM_HC_PAGE_ENC_STATUS 12
>
> /*
> * hypercalls use architecture specific
> --
> 2.17.1
>
Reviewed-by: Steve Rutherford
Paolo Bonzini
> Cc: Joerg Roedel
> Cc: Borislav Petkov
> Cc: Tom Lendacky
> Cc: x...@kernel.org
> Cc: k...@vger.kernel.org
> Cc: linux-kernel@vger.kernel.org
> Reviewed-by: Steve Rutherford
> Signed-off-by: Brijesh Singh
> Signed-off-by: Ashish Kalra
> ---
> .
pat/set_memory.c
> index 16f878c26667..3576b583ac65 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -27,6 +27,7 @@
> #include
> #include
> #include
> +#include
>
> #include "../mm_internal.h"
>
> @@ -2012,6 +2013,12 @@ static int __set_memory_enc_dec(unsigned long addr,
> int numpages, bool enc)
> */
> cpa_flush(&cpa, 0);
>
> + /* Notify hypervisor that a given memory range is mapped encrypted
> +* or decrypted. The hypervisor will use this information during the
> +* VM migration.
> +*/
> + page_encryption_changed(addr, numpages, enc);
> +
> return ret;
> }
>
> --
> 2.17.1
>
Reviewed-by: Steve Rutherford
r = -EINVAL;
> goto out;
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 29c25e641a0c..3a656d43fc6c 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1759,6 +1759,15 @@ struct kvm_sev_receive_start {
> __u32 session_len;
> };
>
> +struct kvm_sev_receive_update_data {
> + __u64 hdr_uaddr;
> + __u32 hdr_len;
> + __u64 guest_uaddr;
> + __u32 guest_len;
> + __u64 trans_uaddr;
> + __u32 trans_len;
> +};
> +
> #define KVM_DEV_ASSIGN_ENABLE_IOMMU (1 << 0)
> #define KVM_DEV_ASSIGN_PCI_2_3 (1 << 1)
> #define KVM_DEV_ASSIGN_MASK_INTX (1 << 2)
> --
> 2.17.1
>
Reviewed-by: Steve Rutherford
eak;
> default:
> r = -EINVAL;
> goto out;
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index d45af34c31be..29c25e641a0c 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1750,6 +1750,15 @@ struct kvm_sev_send_update_data {
> __u32 trans_len;
> };
>
> +struct kvm_sev_receive_start {
> + __u32 handle;
> + __u32 policy;
> + __u64 pdh_uaddr;
> + __u32 pdh_len;
> + __u64 session_uaddr;
> + __u32 session_len;
> +};
> +
> #define KVM_DEV_ASSIGN_ENABLE_IOMMU (1 << 0)
> #define KVM_DEV_ASSIGN_PCI_2_3 (1 << 1)
> #define KVM_DEV_ASSIGN_MASK_INTX (1 << 2)
> --
> 2.17.1
>
Reviewed-by: Steve Rutherford
M GUIDs */
> #define DELLEMC_EFI_RCI2_TABLE_GUID EFI_GUID(0x2d9f28a2, 0xa886, 0x456a, 0x97, 0xa8, 0xf1, 0x1e, 0xf2, 0x4f, 0xf4, 0x55)
> +#define MEM_ENCRYPT_GUID EFI_GUID(0x0cf29b71, 0x9e51, 0x433a, 0xa3, 0xb7, 0x81, 0xf3, 0xab, 0x16, 0xb8, 0x75)
>
; KVM_FEATURE_POLL_CONTROL) |
> (1 << KVM_FEATURE_PV_SCHED_YIELD) |
> -(1 << KVM_FEATURE_ASYNC_PF_INT);
> +(1 << KVM_FEATURE_ASYNC_PF_INT) |
> +(1 << KVM_FEATURE_SEV_LIVE_MIGRATION);
>
> if (sched_info_on())
> entry->eax |= (1 << KVM_FEATURE_STEAL_TIME);
> --
> 2.17.1
>
Reviewed-by: Steve Rutherford
r = sev_send_update_data(kvm, &sev_cmd);
> break;
> + case KVM_SEV_SEND_FINISH:
> + r = sev_send_finish(kvm, &sev_cmd);
> + break;
> default:
> r = -EINVAL;
> goto out;
> --
> 2.17.1
>
Reviewed-by: Steve Rutherford
next-20210409]
> [cannot apply to crypto/master]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch]
>
> url:
> https://github
After completion of SEND_START, but before SEND_FINISH, the source VMM can
issue the SEND_CANCEL command to stop a migration. This is necessary so
that a cancelled migration can restart with a new target later.
Reviewed-by: Nathan Tempelman
Reviewed-by: Brijesh Singh
Signed-off-by: Steve
On Fri, Apr 9, 2021 at 1:14 AM Paolo Bonzini wrote:
>
> On 09/04/21 03:18, James Bottomley wrote:
> > If you want to share ASIDs you have to share the firmware that the
> > running VM has been attested to. Once the VM moves from LAUNCH to
> > RUNNING, the PSP won't allow the VMM to inject any mor
On Thu, Apr 8, 2021 at 3:27 PM Brijesh Singh wrote:
>
>
> On 4/1/21 8:44 PM, Steve Rutherford wrote:
> > After completion of SEND_START, but before SEND_FINISH, the source VMM can
> > issue the SEND_CANCEL command to stop a migration. This is necessary so
> > tha
On Thu, Apr 8, 2021 at 2:15 PM James Bottomley wrote:
>
> On Thu, 2021-04-08 at 12:48 -0700, Steve Rutherford wrote:
> > On Thu, Apr 8, 2021 at 10:43 AM James Bottomley
> > wrote:
> > > On Fri, 2021-04-02 at 16:20 +0200, Paolo Bonzini wrote:
> > > >
On Thu, Apr 8, 2021 at 10:43 AM James Bottomley wrote:
>
> On Fri, 2021-04-02 at 16:20 +0200, Paolo Bonzini wrote:
> > On 02/04/21 13:58, Ashish Kalra wrote:
> > > Hi Nathan,
> > >
> > > Will you be posting a corresponding Qemu patch for this ?
> >
> > Hi Ashish,
> >
> > as far as I know IBM is wo
On Tue, Apr 6, 2021 at 9:08 AM Ashish Kalra wrote:
>
> On Tue, Apr 06, 2021 at 03:48:20PM +, Sean Christopherson wrote:
> > On Mon, Apr 05, 2021, Ashish Kalra wrote:
> > > From: Ashish Kalra
> >
> > ...
> >
> > > diff --git a/arch/x86/include/asm/kvm_host.h
> > > b/arch/x86/include/asm/kvm_h
On Tue, Apr 6, 2021 at 7:00 AM Ashish Kalra wrote:
>
> Hello Paolo,
>
> On Tue, Apr 06, 2021 at 03:47:59PM +0200, Paolo Bonzini wrote:
> > On 06/04/21 15:26, Ashish Kalra wrote:
> > > > It's a little unintuitive to see KVM_MSR_RET_FILTERED here, since
> > > > userspace can make this happen on its
On Mon, Apr 5, 2021 at 7:20 AM Ashish Kalra wrote:
>
> From: Ashish Kalra
>
> The series adds support for AMD SEV guest live migration commands. To protect
> the confidentiality of SEV protected guest memory while in transit, we need to
> use the SEV commands defined in the SEV API spec [1].
>
>
On Mon, Apr 5, 2021 at 7:30 AM Ashish Kalra wrote:
>
> From: Ashish Kalra
>
> Add new KVM_FEATURE_SEV_LIVE_MIGRATION feature for guest to check
> for host-side support for SEV live migration. Also add a new custom
> MSR_KVM_SEV_LIVE_MIGRATION for guest to enable the SEV live migration
> feature.
rs,
> + .page_enc_status_hc = NULL,
>
> .msr_filter_changed = vmx_msr_filter_changed,
> .complete_emulated_msr = kvm_complete_insn_gp,
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index f7d12fca397b..ef5c77d59651 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -8273,6 +8273,18 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
> kvm_sched_yield(vcpu->kvm, a0);
> ret = 0;
> break;
> + case KVM_HC_PAGE_ENC_STATUS: {
> + int r;
> +
> + ret = -KVM_ENOSYS;
> + if (kvm_x86_ops.page_enc_status_hc) {
> + r = kvm_x86_ops.page_enc_status_hc(vcpu, a0, a1, a2);
> + if (r >= 0)
> + return r;
> + ret = r;
Style nit: Why not just set ret, and return ret if ret >= 0?
This looks good. I just had a few nitpicks.
Reviewed-by: Steve Rutherford
On Mon, Apr 5, 2021 at 8:17 AM Peter Gonda wrote:
>
> Could this patch set include support for the SEND_CANCEL command?
>
That's separate from this patchset. I sent up an implementation last week.
After completion of SEND_START, but before SEND_FINISH, the source VMM can
issue the SEND_CANCEL command to stop a migration. This is necessary so
that a cancelled migration can restart with a new target later.
Signed-off-by: Steve Rutherford
---
.../virt/kvm/amd-memory-encryption.rst
On Fri, Mar 19, 2021 at 11:00 AM Ashish Kalra wrote:
>
> On Thu, Mar 11, 2021 at 12:48:07PM -0800, Steve Rutherford wrote:
> > On Thu, Mar 11, 2021 at 10:15 AM Ashish Kalra wrote:
> > >
> > > On Wed, Mar 03, 2021 at 06:54:41PM +, Will Deacon wrote:
> > >
son wrote:
> > > > On Fri, Feb 26, 2021, Ashish Kalra wrote:
> > > > > On Thu, Feb 25, 2021 at 02:59:27PM -0800, Steve Rutherford wrote:
> > > > > > On Thu, Feb 25, 2021 at 12:20 PM Ashish Kalra
> > > > > > wrote:
> > >
On Tue, Mar 9, 2021 at 7:42 PM Kalra, Ashish wrote:
>
>
>
> > On Mar 9, 2021, at 3:22 AM, Steve Rutherford wrote:
> >
> > On Mon, Mar 8, 2021 at 1:11 PM Brijesh Singh wrote:
> >>
> >>
> >>> On 3/8/21 1:51 PM, Sean Christopherson wrote:
>>>
> >>> Moving the non-KVM x86 folks to bcc, I don't think they care about KVM details
> >>> at this
> >>> point.
> >>>
> >>> On Fri, Feb 26, 2021, Ashish Kalra wrote:
> >>>> On Thu, Feb 25, 2021 at 02:59:27PM -
cc, I don't think they care about KVM details
> > > at this
> > > point.
> > >
> > > On Fri, Feb 26, 2021, Ashish Kalra wrote:
> > > > On Thu, Feb 25, 2021 at 02:59:27PM -0800, Steve Rutherford wrote:
> > > > > On Thu, Feb 25, 2021 at 12:
On Thu, Feb 25, 2021 at 2:59 PM Steve Rutherford wrote:
>
> On Thu, Feb 25, 2021 at 12:20 PM Ashish Kalra wrote:
> >
> > On Wed, Feb 24, 2021 at 10:22:33AM -0800, Sean Christopherson wrote:
> > > On Wed, Feb 24, 2021, Ashish Kalra wrote:
> > > > # S
On Thu, Feb 25, 2021 at 12:20 PM Ashish Kalra wrote:
>
> On Wed, Feb 24, 2021 at 10:22:33AM -0800, Sean Christopherson wrote:
> > On Wed, Feb 24, 2021, Ashish Kalra wrote:
> > > # Samples: 19K of event 'kvm:kvm_hypercall'
> > > # Event count (approx.): 19573
> > > #
> > > # Overhead Command
On Thu, Feb 25, 2021 at 6:57 AM Tom Lendacky wrote:
> >> +int svm_vm_copy_asid_to(struct kvm *kvm, unsigned int mirror_kvm_fd)
> >> +{
> >> + struct file *mirror_kvm_file;
> >> + struct kvm *mirror_kvm;
> >> + struct kvm_sev_info *mirror_kvm_sev;
> >> + unsigned int asid;
>
On Wed, Feb 24, 2021 at 9:37 AM Sean Christopherson wrote:
> > + unsigned int asid;
> > + int ret;
> > +
> > + if (!sev_guest(kvm))
> > + return -ENOTTY;
> > +
> > + mutex_lock(&kvm->lock);
> > +
> > + /* Mirrors of mirrors should work, but let's not get silly */
>
On Wed, Feb 24, 2021 at 1:00 AM Nathan Tempelman wrote:
>
> @@ -1186,6 +1195,10 @@ int svm_register_enc_region(struct kvm *kvm,
> if (!sev_guest(kvm))
> return -ENOTTY;
>
> + /* If kvm is mirroring encryption context it isn't responsible for it
> */
> + if (is_
On Wed, Feb 10, 2021 at 2:01 PM Steve Rutherford wrote:
>
> Hi Ashish,
>
> On Wed, Feb 10, 2021 at 12:37 PM Ashish Kalra wrote:
> >
> > Hello Steve,
> >
> > We can remove the implicit enabling of this live migration feature
> > from svm_vcpu_after_set_cp
Hi Ashish,
On Wed, Feb 10, 2021 at 12:37 PM Ashish Kalra wrote:
>
> Hello Steve,
>
> We can remove the implicit enabling of this live migration feature
> from svm_vcpu_after_set_cpuid() callback invoked afer KVM_SET_CPUID2
> ioctl, and let this feature flag be controlled by the userspace
> VMM/qe
> > >
> > > Continued response to your queries, especially related to userspace
> > > control of SEV live migration feature :
> > >
> > > On Fri, Feb 05, 2021 at 06:54:21PM -0800, Steve Rutherford wrote:
> > > > On Thu, Feb 4, 2021 at 7:08 PM A
On Thu, Feb 4, 2021 at 7:08 PM Ashish Kalra wrote:
>
> Hello Steve,
>
> On Thu, Feb 04, 2021 at 04:56:35PM -0800, Steve Rutherford wrote:
> > On Wed, Feb 3, 2021 at 4:39 PM Ashish Kalra wrote:
> > >
> > > From: Ashish Kalra
> > >
> > > Add n
On Wed, Feb 3, 2021 at 4:38 PM Ashish Kalra wrote:
>
> From: Brijesh Singh
>
> This hypercall is used by the SEV guest to notify a change in the page
> encryption status to the hypervisor. The hypercall should be invoked
> only when the encryption attribute is changed from encrypted -> decrypted
On Wed, Feb 3, 2021 at 4:39 PM Ashish Kalra wrote:
>
> From: Ashish Kalra
>
> Add new KVM_FEATURE_SEV_LIVE_MIGRATION feature for guest to check
> for host-side support for SEV live migration. Also add a new custom
> MSR_KVM_SEV_LIVE_MIGRATION for guest to enable the SEV live migration
> feature.
Forgot to ask this: is there an intention to support SEND_CANCEL in a
follow up patch?
On Tue, Dec 8, 2020 at 2:03 PM Ashish Kalra wrote:
>
> From: Ashish Kalra
>
> The series add support for AMD SEV guest live migration commands. To protect
> the
> confidentiality of an SEV protected guest me
Thu, Jan 07, 2021 at 09:26:25AM -0800, Sean Christopherson wrote:
> > > On Thu, Jan 07, 2021, Ashish Kalra wrote:
> > > > Hello Steve,
> > > >
> > > > On Wed, Jan 06, 2021 at 05:01:33PM -0800, Steve Rutherford wrote:
> > > > > Avoidi
On Thu, Jan 7, 2021 at 4:48 PM Ashish Kalra wrote:
>
> > On Thu, Jan 07, 2021 at 01:34:14AM +, Ashish Kalra wrote:
> > > Hello Steve,
> > >
> > > My thoughts here ...
> > >
> > > On Wed, Jan 06, 2021 at 05:01:33PM -0800, Steve Rutherford w
; >
> > > On Dec 18, 2020, at 1:40 PM, Dr. David Alan Gilbert
> > > wrote:
> > >
> > > * Ashish Kalra (ashish.ka...@amd.com) wrote:
> > > On Fri, Dec 11, 2020 at 10:55:42PM +, Ashish Kalra wrote:
> > > Hello All,
> > >
>
On Mon, Dec 7, 2020 at 12:42 PM Sean Christopherson wrote:
>
> On Sun, Dec 06, 2020, Paolo Bonzini wrote:
> > On 03/12/20 01:34, Sean Christopherson wrote:
> > > On Tue, Dec 01, 2020, Ashish Kalra wrote:
> > > > From: Brijesh Singh
> > > >
> > > > KVM hypercall framework relies on alternative fra
Are these likely to get merged into 5.9?
On Wed, Jun 3, 2020 at 3:14 PM Ashish Kalra wrote:
>
> Hello Steve,
>
> On Mon, Jun 01, 2020 at 01:02:23PM -0700, Steve Rutherford wrote:
> > On Mon, May 18, 2020 at 12:07 PM Ashish Kalra wrote:
> > >
> > > Hello
On Mon, May 18, 2020 at 12:07 PM Ashish Kalra wrote:
>
> Hello All,
>
> Any other feedback, review or comments on this patch-set ?
>
> Thanks,
> Ashish
>
> On Tue, May 05, 2020 at 09:13:49PM +, Ashish Kalra wrote:
> > From: Ashish Kalra
> >
> > The series add support for AMD SEV guest live mi
On Tue, May 5, 2020 at 2:21 PM Ashish Kalra wrote:
>
> From: Ashish Kalra
>
> Reset the host's page encryption bitmap related to kernel
> specific page encryption status settings before we load a
> new kernel by kexec. We cannot reset the complete
> page encryption bitmap here as we need to retai
y when the guest is booting, for the
> +* incoming VM(s) it is implied.
> +*/
> + sev_update_migration_flags(kvm, KVM_SEV_LIVE_MIGRATION_ENABLED);
> +
> bitmap_copy(sev->page_enc_bmap + BIT_WORD(gfn_start), bitmap,
> (gfn_end - gfn_start));
>
> --
> 2.17.1
>
Reviewed-by: Steve Rutherford
kvm_register_clock("primary cpu clock");
> pvclock_set_pvti_cpu0_va(hv_clock_boot);
> --
> 2.17.1
>
Reviewed-by: Steve Rutherford
> typedef struct {
> efi_guid_t guid;
> --
> 2.17.1
>
Have you gotten this GUID upstreamed into edk2?
Reviewed-by: Steve Rutherford
int npages,
> unsigned long sz = npages << PAGE_SHIFT;
> unsigned long vaddr_end, vaddr_next;
>
> + if (!sev_live_migration_enabled())
> + return;
> +
> vaddr_end = vaddr + sz;
>
> for (; vaddr < vaddr_end; vaddr = vaddr_next) {
> @@ -374,6 +379,12 @@ int __init early_set_memory_encrypted(unsigned long
> vaddr, unsigned long size)
> return early_set_memory_enc_dec(vaddr, size, true);
> }
>
> +void __init early_set_mem_enc_dec_hypercall(unsigned long vaddr, int npages,
> + bool enc)
> +{
> + set_memory_enc_dec_hypercall(vaddr, npages, enc);
> +}
> +
> /*
> * SME and SEV are very similar but they are not the same, so there are
> * times that the kernel will need to distinguish between SME and SEV. The
> --
> 2.17.1
>
Reviewed-by: Steve Rutherford
On Tue, May 5, 2020 at 2:18 PM Ashish Kalra wrote:
>
> From: Ashish Kalra
>
> Add support for static allocation of the unified Page encryption bitmap by
> extending kvm_arch_commit_memory_region() callack to add svm specific x86_ops
> which can read the userspace provided memory region/memslots a
vice fd */
> unsigned long pages_locked; /* Number of pages locked */
> struct list_head regions_list; /* List of registered regions */
> + bool live_migration_enabled;
> unsigned long *page_enc_bmap;
> unsigned long page_enc_bmap_size;
> };
> @@ -494,5 +495,6 @@ int svm_unregister_enc_region(struct kvm *kvm,
> void pre_sev_run(struct vcpu_svm *svm, int cpu);
> int __init sev_hardware_setup(void);
> void sev_hardware_teardown(void);
> +void sev_update_migration_flags(struct kvm *kvm, u64 data);
>
> #endif
> --
> 2.17.1
>
Reviewed-by: Steve Rutherford
f (kvm_x86_ops.set_page_enc_bitmap)
> + r = kvm_x86_ops.set_page_enc_bitmap(kvm, &bitmap);
> + break;
> + }
> default:
> r = -ENOTTY;
> }
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index af62f2afaa5d..2798b17484d0 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -1529,6 +1529,7 @@ struct kvm_pv_cmd {
> #define KVM_S390_PV_COMMAND _IOWR(KVMIO, 0xc5, struct kvm_pv_cmd)
>
> #define KVM_GET_PAGE_ENC_BITMAP _IOW(KVMIO, 0xc6, struct kvm_page_enc_bitmap)
> +#define KVM_SET_PAGE_ENC_BITMAP _IOW(KVMIO, 0xc7, struct kvm_page_enc_bitmap)
>
> /* Secure Encrypted Virtualization command */
> enum sev_cmd_id {
> --
> 2.17.1
>
Otherwise, this looks good to me. Thanks for merging the ioctls together.
Reviewed-by: Steve Rutherford
on (SEV)"
> : "Secure Memory Encryption (SME)");
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 59eca6a94ce7..9aaf1b6f5a1b 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -27,6 +27,7 @@
> #include
> #include
> #include
> +#include
>
> #include "../mm_internal.h"
>
> @@ -2003,6 +2004,12 @@ static int __set_memory_enc_dec(unsigned long addr,
> int numpages, bool enc)
> */
> cpa_flush(&cpa, 0);
>
> + /* Notify hypervisor that a given memory range is mapped encrypted
> +* or decrypted. The hypervisor will use this information during the
> +* VM migration.
> +*/
> + page_encryption_changed(addr, numpages, enc);
> +
> return ret;
> }
>
> --
> 2.17.1
>
Reviewed-by: Steve Rutherford
> kvm_sched_yield(vcpu->kvm, a0);
> ret = 0;
> break;
> + case KVM_HC_PAGE_ENC_STATUS:
> + ret = -KVM_ENOSYS;
> + if (kvm_x86_ops.page_enc_status_hc)
> + ret = kvm_x86_ops.page_enc_status_hc(vcpu->kvm,
> + a0, a1, a2);
> + break;
> default:
> ret = -KVM_ENOSYS;
> break;
> diff --git a/include/uapi/linux/kvm_para.h b/include/uapi/linux/kvm_para.h
> index 8b86609849b9..847b83b75dc8 100644
> --- a/include/uapi/linux/kvm_para.h
> +++ b/include/uapi/linux/kvm_para.h
> @@ -29,6 +29,7 @@
> #define KVM_HC_CLOCK_PAIRING 9
> #define KVM_HC_SEND_IPI 10
> #define KVM_HC_SCHED_YIELD 11
> +#define KVM_HC_PAGE_ENC_STATUS 12
>
> /*
> * hypercalls use architecture specific
> --
> 2.17.1
>
Reviewed-by: Steve Rutherford
On Tue, May 5, 2020 at 2:17 PM Ashish Kalra wrote:
>
> From: Brijesh Singh
>
> The ioctl can be used to retrieve page encryption bitmap for a given
> gfn range.
>
> Return the correct bitmap as per the number of pages being requested
> by the user. Ensure that we only copy bmap->num_pages bytes i
On Mon, Nov 27, 2017 at 3:58 AM, Paolo Bonzini wrote:
> On 26/11/2017 17:41, Filippo Sironi wrote:
>> ... that the guest should see.
>> Guest operating systems may check the microcode version to decide whether
>> to disable certain features that are known to be buggy up to certain
>> microcode ver
On Thu, Nov 16, 2017 at 6:41 AM, Tom Lendacky wrote:
> On 11/16/2017 4:02 AM, Borislav Petkov wrote:
>>
>> On Wed, Nov 15, 2017 at 03:57:13PM -0800, Steve Rutherford wrote:
>>>
>>> One piece that seems missing here is the handling of the vmm
>>> commun
One piece that seems missing here is the handling of the vmm
communication exception. What's the plan for non-automatic exits? In
particular, what's the plan for emulated devices that are currently
accessed through MMIO (e.g. the IOAPIC)?
Maybe I'm getting ahead of myself: What's the testing story
I'm not that familiar with the kernel's workqueues, but this seems
like the classic "callback outlives the memory it references"
use-after-free, where the process_srcu callback is outliving struct
kvm (which contains the srcu_struct). If that's right, then calling
srcu_barrier (which should wait fo
This issue seems generic to level triggered interrupts as well as RTC
interrupts. It looks like KVM hacks around the issue with level
triggered interrupts by clearing the remote IRR when an IRQ is
reconfigured. Seems like an (admittedly lossy) way to handle this
issue with the RTC-IRQ would be to f
On Thu, Aug 13, 2015 at 09:31:48AM +0200, Paolo Bonzini wrote:
Pinging this thread.
Should I put together a patch to make split irqchip work properly with the old
TMR behavior?
>
>
> On 13/08/2015 08:35, Zhang, Yang Z wrote:
> >> You may be right. It is safe if no future hardware plans to use
On Thu, Jul 30, 2015 at 11:26:28PM +, Zhang, Yang Z wrote:
> Paolo Bonzini wrote on 2015-07-29:
> > Do not compute TMR in advance. Instead, set the TMR just before the
> > interrupt is accepted into the IRR. This limits the coupling between
> > IOAPIC and LAPIC.
> >
>
> Uh.., it back to ori
On Wed, Jul 29, 2015 at 03:28:57PM +0200, Paolo Bonzini wrote:
> The PIT is only created if irqchip_in_kernel returns true, so the
> check is superfluous.
I poked around. Looks to me like the existence of an IOAPIC is not
checked on the creation of the in-kernel PIT. Userspace might limit itself to
no notifiers active; however, the IOAPIC does not have to
> do anything special for these interrupts anyway.
>
> This again limits the interactions between the IOAPIC and the LAPIC,
> making it easier to move the former to userspace.
>
> Inspired by a patch from Steve Rutherf
On Wed, Jul 29, 2015 at 03:37:34PM +0200, Paolo Bonzini wrote:
> Do not compute TMR in advance. Instead, set the TMR just before the interrupt
> is accepted into the IRR. This limits the coupling between IOAPIC and LAPIC.
>
> Signed-off-by: Paolo Bonzini
> ---
> arch/x86/kvm/ioapic.c | 9 ++--
On Wed, Jul 29, 2015 at 03:28:58PM +0200, Paolo Bonzini wrote:
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 2d62229aac26..23e47a0b054b 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -3626,30 +3626,25 @@ long kvm_arch_vm_ioctl(struct file *filp,
>