x86/PV: make PMU MSR handling consistent

So far, accesses to Intel MSRs on an AMD system fall through to the
default case, while accesses to AMD MSRs on an Intel system bail (in
the RDMSR case without updating EAX and EDX). Make the "AMD MSRs on
Intel" case match the "Intel MSRs on AMD" one.

Signed-off-by: Jan Beulich <jbeul...@suse.com>

--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2909,8 +2909,8 @@ static int emulate_privileged_op(struct
 
                     if ( vpmu_do_wrmsr(regs->ecx, msr_content, 0) )
                         goto fail;
+                    break;
                 }
-                break;
             }
             /*FALLTHROUGH*/
 
@@ -3045,8 +3045,8 @@ static int emulate_privileged_op(struct
 
                     regs->eax = (uint32_t)val;
                     regs->edx = (uint32_t)(val >> 32);
+                    break;
                 }
-                break;
             }
             /*FALLTHROUGH*/
 
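For illustration, here is a minimal, self-contained sketch of the
control-flow pattern the patch establishes (hypothetical helper names
and a single example MSR, not the actual traps.c code): with the break
moved inside the vendor check, a PMU MSR access on the non-matching
vendor now falls through to the default case instead of being consumed
without any handling.

#include <stdio.h>

enum vendor { VENDOR_INTEL, VENDOR_AMD };

/* Illustrative stand-ins for the real vPMU hook and the generic path. */
static void vpmu_wrmsr(unsigned int msr)    { printf("  vPMU handles %#x\n", msr); }
static void default_wrmsr(unsigned int msr) { printf("  default handling for %#x\n", msr); }

static void emulate_wrmsr(enum vendor cpu, unsigned int msr)
{
    switch ( msr )
    {
    case 0x186: /* IA32_PERFEVTSEL0, used as an example Intel PMU MSR */
        if ( cpu == VENDOR_INTEL )
        {
            vpmu_wrmsr(msr);
            break;      /* consumed only when the vendor matches */
        }
        /* wrong-vendor access: fall through to the generic handling */
        /*FALLTHROUGH*/
    default:
        default_wrmsr(msr);
        break;
    }
}

int main(void)
{
    puts("Intel PMU MSR, Intel CPU:");
    emulate_wrmsr(VENDOR_INTEL, 0x186);
    puts("Intel PMU MSR, AMD CPU:");
    emulate_wrmsr(VENDOR_AMD, 0x186);   /* now reaches the default case */
    return 0;
}

Running the sketch shows the Intel-CPU access going to the vPMU path
and the AMD-CPU access reaching the default path; the patch makes this
symmetric for both vendors' PMU MSR ranges in both the WRMSR and RDMSR
hunks above.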