Author: bdragon
Date: Tue Feb  4 20:40:45 2020
New Revision: 357529
URL: https://svnweb.freebsd.org/changeset/base/357529

Log:
  [PowerPC] Fix VSX context handling
  
  In r356767, memcpy/memmove/bcopy optimizations were added to libc to
  improve performance.
  
  This exposed an existing kernel issue in VSX handling: the PSL_VSX flag was
  not being excluded from the psl_userstatic set, which meant that any thread
  that used these routines (and therefore VSX) and then called swapcontext(3)
  would get an EINVAL error (a minimal reproducer is sketched below).
  
  Fixing this exposed a second issue: in r344123, the FPU was forced off in
  set_mcontext(), but that change neglected to ensure VSX was turned off at
  the same time.
  
  While here, add some code comments to explain what's going on.
  
  Reviewed by:  jhibbits, luporl (earlier rev), pkubaj (earlier rev)
  Sponsored by: Tag1 Consulting, Inc.
  Differential Revision:        https://reviews.freebsd.org/D23497
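
For illustration only (not part of the commit), here is a minimal userland
sketch of the failure mode described above. It assumes a VSX-capable CPU
where the r356767 libc takes the VSX-accelerated memcpy path, so the first
large copy leaves PSL_VSX set in the thread's MSR while the earlier saved
context still has it clear; on a pre-r357529 kernel the later swapcontext(3)
call could then fail with EINVAL.

#include <err.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <ucontext.h>

static ucontext_t main_ctx, alt_ctx;
static char alt_stack[64 * 1024];
static char src[4096], dst[4096];

static void
alt_func(void)
{
    /* Runs on alt_ctx; returning resumes main_ctx via uc_link. */
    printf("running on the alternate context\n");
}

int
main(void)
{
    if (getcontext(&alt_ctx) == -1)
        err(1, "getcontext");
    alt_ctx.uc_stack.ss_sp = alt_stack;
    alt_ctx.uc_stack.ss_size = sizeof(alt_stack);
    alt_ctx.uc_link = &main_ctx;
    makecontext(&alt_ctx, alt_func, 0);

    /*
     * A large copy so libc may choose the VSX-accelerated path; the first
     * VSX instruction traps and the kernel turns PSL_VSX on for this
     * thread.  alt_ctx was saved before that, with PSL_VSX still clear.
     */
    memcpy(dst, src, sizeof(dst));

    /*
     * Before this fix, set_mcontext() compared the two MSR images with
     * PSL_VSX wrongly treated as a bit that must never differ, so this
     * call could fail with EINVAL.
     */
    if (swapcontext(&main_ctx, &alt_ctx) == -1)
        fprintf(stderr, "swapcontext: %s\n", strerror(errno));
    else
        printf("swapcontext OK\n");
    return (0);
}

With this revision applied, the program should print "swapcontext OK" even
after the VSX-optimized copy has run.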

Modified:
  head/sys/powerpc/aim/aim_machdep.c
  head/sys/powerpc/booke/booke_machdep.c
  head/sys/powerpc/powerpc/exec_machdep.c

Modified: head/sys/powerpc/aim/aim_machdep.c
==============================================================================
--- head/sys/powerpc/aim/aim_machdep.c  Tue Feb  4 20:33:01 2020        (r357528)
+++ head/sys/powerpc/aim/aim_machdep.c  Tue Feb  4 20:40:45 2020        (r357529)
@@ -251,8 +251,25 @@ aim_cpu_init(vm_offset_t toc)
        psl_userset32 = psl_userset & ~PSL_SF;
 #endif
 
-       /* Bits that users aren't allowed to change */
-       psl_userstatic = ~(PSL_VEC | PSL_FP | PSL_FE0 | PSL_FE1);
+       /*
+        * Zeroed bits in this variable signify that the value of the bit
+        * in its position is allowed to vary between userspace contexts.
+        *
+        * All other bits are required to be identical for every userspace
+        * context. The actual *value* of the bit is determined by
+        * psl_userset and/or psl_userset32, and is not allowed to change.
+        *
+        * Remember to update this set when implementing support for
+        * *conditionally* enabling a processor facility. Failing to do
+        * this will cause swapcontext() in userspace to break when a
+        * process uses a conditionally-enabled facility.
+        *
+        * When *unconditionally* implementing support for a processor
+        * facility, update psl_userset / psl_userset32 instead.
+        *
+        * See the access control check in set_mcontext().
+        */
+       psl_userstatic = ~(PSL_VSX | PSL_VEC | PSL_FP | PSL_FE0 | PSL_FE1);
        /*
         * Mask bits from the SRR1 that aren't really the MSR:
         * Bits 1-4, 10-15 (ppc32), 33-36, 42-47 (ppc64)
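
As a rough illustration of the invariant the new comment describes, the
standalone sketch below shows how a psl_userstatic-style mask separates
per-context facility bits from bits that must stay identical in every
userspace context. This is not kernel code, and the PSL_* values are
placeholders rather than the real definitions from machine/psl.h.

#include <stdint.h>
#include <stdio.h>

#define PSL_FP   (1u << 0)   /* facility bits allowed to vary per context */
#define PSL_VEC  (1u << 1)
#define PSL_VSX  (1u << 2)
#define PSL_FE0  (1u << 3)
#define PSL_FE1  (1u << 4)
#define PSL_PRIV (1u << 31)  /* stands in for bits users may not change */

int
main(void)
{
    uint32_t psl_userstatic =
        ~(PSL_VSX | PSL_VEC | PSL_FP | PSL_FE0 | PSL_FE1);
    uint32_t cur_msr = PSL_FP | PSL_VSX;  /* thread has used FP and VSX */
    uint32_t new_msr = PSL_FP;            /* saved context: only FP on */

    /* Only facility bits differ, so the contexts are compatible. */
    printf("allowed: %d\n",
        (new_msr & psl_userstatic) == (cur_msr & psl_userstatic));

    /*
     * A bit outside the cleared set changed: reject, as the check in
     * set_mcontext() does with EINVAL.
     */
    new_msr |= PSL_PRIV;
    printf("allowed: %d\n",
        (new_msr & psl_userstatic) == (cur_msr & psl_userstatic));
    return (0);
}

Expected output is "allowed: 1" followed by "allowed: 0": the first pair of
MSR images differs only in facility bits, while the second flips a static
bit, which is exactly the case the kernel rejects.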

Modified: head/sys/powerpc/booke/booke_machdep.c
==============================================================================
--- head/sys/powerpc/booke/booke_machdep.c      Tue Feb  4 20:33:01 2020        (r357528)
+++ head/sys/powerpc/booke/booke_machdep.c      Tue Feb  4 20:40:45 2020        (r357529)
@@ -222,6 +222,24 @@ booke_cpu_init(void)
 #ifdef __powerpc64__
        psl_userset32 = psl_userset & ~PSL_CM;
 #endif
+       /*
+        * Zeroed bits in this variable signify that the value of the bit
+        * in its position is allowed to vary between userspace contexts.
+        *
+        * All other bits are required to be identical for every userspace
+        * context. The actual *value* of the bit is determined by
+        * psl_userset and/or psl_userset32, and is not allowed to change.
+        *
+        * Remember to update this set when implementing support for
+        * *conditionally* enabling a processor facility. Failing to do
+        * this will cause swapcontext() in userspace to break when a
+        * process uses a conditionally-enabled facility.
+        *
+        * When *unconditionally* implementing support for a processor
+        * facility, update psl_userset / psl_userset32 instead.
+        *
+        * See the access control check in set_mcontext().
+        */
        psl_userstatic = ~(PSL_VEC | PSL_FP | PSL_FE0 | PSL_FE1);
 
        pmap_mmu_install(MMU_TYPE_BOOKE, BUS_PROBE_GENERIC);

Modified: head/sys/powerpc/powerpc/exec_machdep.c
==============================================================================
--- head/sys/powerpc/powerpc/exec_machdep.c     Tue Feb  4 20:33:01 2020        (r357528)
+++ head/sys/powerpc/powerpc/exec_machdep.c     Tue Feb  4 20:40:45 2020        (r357529)
@@ -463,7 +463,17 @@ set_mcontext(struct thread *td, mcontext_t *mcp)
                return (EINVAL);
 
        /*
-        * Don't let the user set privileged MSR bits
+        * Don't let the user change privileged MSR bits.
+        *
+        * psl_userstatic is used here to mask off any bits that can
+        * legitimately vary between user contexts (floating point
+        * exception control and any facilities that use the
+        * "enable on first use" pattern).
+        *
+        * All other bits are required to match psl_userset(32).
+        *
+        * Remember to update the platform cpu_init code when implementing
+        * support for a new conditional facility!
         */
        if ((mcp->mc_srr1 & psl_userstatic) != (tf->srr1 & psl_userstatic)) {
                return (EINVAL);
@@ -480,9 +490,19 @@ set_mcontext(struct thread *td, mcontext_t *mcp)
        else
                tf->fixreg[2] = tls;
 
-       /* Disable FPU */
-       tf->srr1 &= ~PSL_FP;
-       pcb->pcb_flags &= ~PCB_FPU;
+       /*
+        * Force the FPU back off to ensure the new context will not bypass
+        * the enable_fpu() setup code accidentally.
+        *
+        * This prevents an issue where a process that uses floating point
+        * inside a signal handler could end up in a state where the MSR
+        * did not match pcb_flags.
+        *
+        * Additionally, ensure VSX is disabled, as it is illegal to
+        * leave it turned on when FP or VEC are off.
+        */
+       tf->srr1 &= ~(PSL_FP | PSL_VSX);
+       pcb->pcb_flags &= ~(PCB_FPU | PCB_VSX);
 
        if (mcp->mc_flags & _MC_FP_VALID) {
                /* enable_fpu() will happen lazily on a fault */
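
To make the intent of this hunk concrete, here is a simplified, self-contained
sketch of the "enable on first use" pattern it protects. It is illustrative
only: the struct, function names, and bit values are stand-ins, not the actual
kernel structures or the real enable_fpu() implementation.

#include <stdint.h>
#include <stdio.h>

#define PSL_FP   (1u << 0)   /* placeholder bit values, not the real MSR layout */
#define PSL_VSX  (1u << 1)
#define PCB_FPU  (1u << 0)
#define PCB_VSX  (1u << 1)

struct fake_thread {
    uint32_t srr1;       /* MSR image restored on return to userspace */
    uint32_t pcb_flags;  /* what the kernel believes is enabled/saved */
};

/* Stand-in for enable_fpu(), normally run from the FP-unavailable trap. */
static void
fake_enable_fpu(struct fake_thread *td)
{
    /* Restore the saved register state here, then mark the facility live. */
    td->pcb_flags |= PCB_FPU;
    td->srr1 |= PSL_FP;
}

/* Stand-in for the tail of set_mcontext(): keep srr1 and pcb_flags in sync. */
static void
fake_install_context(struct fake_thread *td)
{
    td->srr1 &= ~(PSL_FP | PSL_VSX);
    td->pcb_flags &= ~(PCB_FPU | PCB_VSX);
}

int
main(void)
{
    struct fake_thread td = { .srr1 = PSL_FP | PSL_VSX,
                              .pcb_flags = PCB_FPU | PCB_VSX };

    fake_install_context(&td);
    printf("after set_mcontext: srr1=%#x pcb_flags=%#x\n",
        (unsigned)td.srr1, (unsigned)td.pcb_flags);

    fake_enable_fpu(&td);  /* first FP instruction traps and re-enables FP */
    printf("after first use:   srr1=%#x pcb_flags=%#x\n",
        (unsigned)td.srr1, (unsigned)td.pcb_flags);
    return (0);
}

The point of the change is the install step: clearing PSL_VSX together with
PSL_FP keeps the MSR image consistent with pcb_flags, so the
facility-unavailable trap path remains the single place that turns the
facility back on.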