Tamar Christina <tamar.christ...@arm.com> writes:
> Hi All,
>
> This RFC is trying to address the following inefficiency when vectorizing
> conditional statements with SVE.
>
> Consider the case
>
> void f10(double * restrict z, double * restrict w, double * restrict x,
>          double * restrict y, int n)
> {
>     for (int i = 0; i < n; i++) {
>         z[i] = (w[i] > 0) ? x[i] + w[i] : y[i] - w[i];
>     }
> }
>
> For which we currently generate at -O3:
>
> f10:
>         cmp     w4, 0
>         ble     .L1
>         mov     x5, 0
>         whilelo p1.d, wzr, w4
>         ptrue   p3.b, all
> .L3:
>         ld1d    z1.d, p1/z, [x1, x5, lsl 3]
>         fcmgt   p2.d, p1/z, z1.d, #0.0
>         fcmgt   p0.d, p3/z, z1.d, #0.0
>         ld1d    z2.d, p2/z, [x2, x5, lsl 3]
>         bic     p0.b, p3/z, p1.b, p0.b
>         ld1d    z0.d, p0/z, [x3, x5, lsl 3]
>         fsub    z0.d, p0/m, z0.d, z1.d
>         movprfx z0.d, p2/m, z1.d
>         fadd    z0.d, p2/m, z0.d, z2.d
>         st1d    z0.d, p1, [x0, x5, lsl 3]
>         incd    x5
>         whilelo p1.d, w5, w4
>         b.any   .L3
> .L1:
>         ret
>
> Notice that the condition for the else branch duplicates the same predicate
> as the then branch and then uses BIC to negate the result.
>
> The reason for this is that during instruction generation in the vectorizer
> we emit
>
>   mask__41.11_66 = vect__4.10_64 > vect_cst__65;
>   vec_mask_and_69 = mask__41.11_66 & loop_mask_63;
>   vec_mask_and_71 = mask__41.11_66 & loop_mask_63;
>   mask__43.16_73 = ~mask__41.11_66;
>   vec_mask_and_76 = mask__43.16_73 & loop_mask_63;
>   vec_mask_and_78 = mask__43.16_73 & loop_mask_63;
>
> which ultimately gets optimized to
>
>   mask__41.11_66 = vect__4.10_64 > { 0.0, ... };
>   vec_mask_and_69 = loop_mask_63 & mask__41.11_66;
>   mask__43.16_73 = ~mask__41.11_66;
>   vec_mask_and_76 = loop_mask_63 & mask__43.16_73;
>
> Notice how the negate is on the operation and not the predicate resulting
> from the operation.  When this is expanded, it turns into RTL where the
> negate is on the compare directly.  This means the RTL is different from the
> one without the negate, and so CSE is unable to recognize that they are
> essentially the same operation.
>
> To fix this, my patch changes it so you negate the mask rather than the
> operation:
>
>   mask__41.13_55 = vect__4.12_53 > { 0.0, ... };
>   vec_mask_and_58 = loop_mask_52 & mask__41.13_55;
>   vec_mask_op_67 = ~vec_mask_and_58;
>   vec_mask_and_65 = loop_mask_52 & vec_mask_op_67;
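The transformation itself looks sound: for each lane, negating the
already-masked predicate selects the same active lanes as masking the
negated compare, since m & ~(m & c) == m & ~c.  A minimal standalone
check of that identity (illustrative only, not part of the patch):

  /* Verify the predicate identity M & ~(M & C) == M & ~C for
     single-bit lane values M (governing mask) and C (compare result).
     This is what makes negating the masked predicate equivalent to
     masking the negated compare.  */
  #include <assert.h>

  int
  main (void)
  {
    for (int m = 0; m <= 1; m++)
      for (int c = 0; c <= 1; c++)
        assert ((m & ~(m & c)) == (m & ~c));
    return 0;
  }
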
But to me this looks like a pessimisation in gimple terms.  We've
increased the length of the critical path: vec_mask_and_65 now needs a
chain of 4 operations instead of 3.

We also need to be careful not to pessimise the case in which the
comparison is an integer one.  At the moment we'll generate opposed
conditions, which is the intended behaviour:

.L3:
        ld1d    z1.d, p0/z, [x1, x5, lsl 3]
        cmpgt   p2.d, p0/z, z1.d, #0
        movprfx z2, z1
        scvtf   z2.d, p3/m, z1.d
        cmple   p1.d, p0/z, z1.d, #0
        ld1d    z0.d, p2/z, [x2, x5, lsl 3]
        ld1d    z1.d, p1/z, [x3, x5, lsl 3]
        fadd    z0.d, p2/m, z0.d, z2.d
        movprfx z0.d, p1/m, z1.d
        fsub    z0.d, p1/m, z0.d, z2.d
        st1d    z0.d, p0, [x0, x5, lsl 3]
        add     x5, x5, x6
        whilelo p0.d, w5, w4
        b.any   .L3

Could we handle the fcmp case using a 3->2 define_split instead: convert

  (set res (and (not (fcmp X Y)) Z))

->

  (set res (fcmp X Y))
  (set res (and (not res) Z))

?
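A rough, untested sketch of the shape such a split might take, for the
double case only; the operand predicates, modes, and the use of a plain
comparison_operator are placeholders rather than the real aarch64-sve.md
definitions, which go through predicated-compare unspecs:

  ;; Sketch only: match "(and (not (fcmp ...)) Z)" and split it into
  ;; the bare compare followed by a BIC-style and-not on the result.
  (define_split
    [(set (match_operand:VNx2BI 0 "register_operand")
          (and:VNx2BI
            (not:VNx2BI (match_operator:VNx2BI 1 "comparison_operator"
                          [(match_operand 2) (match_operand 3)]))
            (match_operand:VNx2BI 4 "register_operand")))]
    "TARGET_SVE"
    [(set (match_dup 0) (match_op_dup 1 [(match_dup 2) (match_dup 3)]))
     (set (match_dup 0) (and:VNx2BI (not:VNx2BI (match_dup 0))
                                    (match_dup 4)))])

Thanks,
Richard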