The following pulls the trigger, defaulting --param vect-force-slp to 1.
I know of no missing features, but expect possible minor testsuite and
optimization quality fallout.

Bootstrapped and tested on x86_64-unknown-linux-gnu.  I'll amend
PR116578 with the list of FAILs this causes (my baseline is outdated,
so I need to reproduce it first).

        * params.opt (vect-force-slp): Default to 1.
---
 gcc/params.opt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/gcc/params.opt b/gcc/params.opt
index 7c572774df2..10a2b089279 100644
--- a/gcc/params.opt
+++ b/gcc/params.opt
@@ -1207,7 +1207,7 @@ Common Joined UInteger Var(param_vect_induction_float) Init(1) IntegerRange(0, 1
 Enable loop vectorization of floating point inductions.
 
 -param=vect-force-slp=
-Common Joined UInteger Var(param_vect_force_slp) Init(0) IntegerRange(0, 1) Param Optimization
+Common Joined UInteger Var(param_vect_force_slp) Init(1) IntegerRange(0, 1) Param Optimization
 Force the use of SLP when vectorizing, fail if not possible.
 
 -param=vrp-block-limit=
-- 
2.43.0
