https://gcc.gnu.org/bugzilla/show_bug.cgi?id=115517

--- Comment #6 from Hongtao Liu <liuhongt at gcc dot gnu.org> ---
(In reply to rguent...@suse.de from comment #5)
> On Tue, 18 Jun 2024, liuhongt at gcc dot gnu.org wrote:
> 
> > https://gcc.gnu.org/bugzilla/show_bug.cgi?id=115517
> > 
> > --- Comment #4 from Hongtao Liu <liuhongt at gcc dot gnu.org> ---
> > (In reply to rguent...@suse.de from comment #3)
> > > On Tue, 18 Jun 2024, liuhongt at gcc dot gnu.org wrote:
> > > 
> > > > https://gcc.gnu.org/bugzilla/show_bug.cgi?id=115517
> > > > 
> > > > --- Comment #2 from Hongtao Liu <liuhongt at gcc dot gnu.org> ---
> > > > (In reply to Richard Biener from comment #1)
> > > > > Btw, I had opened PR115490 with my results for this already.  Some
> > > > > mitigation should come from optimizing ISEL expansion to vcond_mask
> > > > > and I'd start with looking at some of the fallout from that side
> > > > > (note that might require the backend to reject not natively
> > > > > implemented vec_cmp via its operand 1 predicate)
> > > > 
> > > > w/o AVX512, vector integer comparison only supports EQ/GT; the
> > > > rtx_cost of other comparisons is derived from those (i.e. GTU is
> > > > emulated with us_minus + eq + negating the vector mask).
> > > > If we restrict the predicate of operand 1, would the middle-end
> > > > reject vectorization (or lower it to a scalar version)?
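
For reference, the us_minus + eq + mask-negation trick looks roughly like
this in SSE2 intrinsics (the helper name below is only illustrative, not
taken from the patch):

  #include <emmintrin.h>

  /* a >u b  <=>  saturating (a - b) != 0, per byte element.  */
  static __m128i
  gtu_epu8_sse2 (__m128i a, __m128i b)
  {
    __m128i diff = _mm_subs_epu8 (a, b);                        /* us_minus */
    __m128i eq0  = _mm_cmpeq_epi8 (diff, _mm_setzero_si128 ()); /* eq */
    return _mm_xor_si128 (eq0, _mm_set1_epi8 (-1));             /* negate mask */
  }
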
> > > 
> > > Richard suggests that we implement the "obvious" transforms like
> > > inversion in the middle-end but if for example unsigned compares
> > > are not supported the us_minus + eq + negative trick isn't on
> > > that list.
> > > 
> > > The main reason to restrict vec_cmp would be to avoid
> > > a <= b ? c : d going with an unsupported vec_cmp but instead
> > > do a > b ? d : c - the alternative is trying to fix this
> > > on the RTL side via combine.  I understand the non-native
> > 
> > Yes, I have a patch which can fix most regressions via pattern matching
> > in combine.
> > Still there is one situation that is difficult to deal with, mainly the
> > optimization w/o SSE4.1.  Because pblendvb/blendvps/blendvpd only exist
> > under SSE4.1, without it the vcond_mask takes 3 instructions
> > (pand, pandn, por) to simulate, and combine only matches up to 4
> > instructions, which makes it currently impossible to use combine to
> > recover those optimizations in vcond{,u,eq}, i.e. min/max.
> > With SSE4.1 and above, there is basically no regression anymore.
> 
> Maybe it's possible to use a define_insn_and_split for blends w/o SSE 4.1?
> That would allow combine matching the high-level blend operation and
> we'd only lower it afterwards?  The question is what we lose in
> combinations of/into the lowered pand/pandn/por of course.
I'd rather live with those regressions since they only exist below SSE4.1.
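
For reference, the difference described above looks like this in intrinsics
(illustrative helpers, not GCC code): without SSE4.1 the select has to be
open-coded with pand/pandn/por, while SSE4.1 provides a single pblendvb.

  #include <smmintrin.h>  /* SSE4.1; the pre-SSE4.1 variant needs only SSE2 */

  /* mask ? c : d, open-coded: pand + pandn + por (3 instructions).  */
  static __m128i
  select_sse2 (__m128i mask, __m128i c, __m128i d)
  {
    return _mm_or_si128 (_mm_and_si128 (mask, c),
                         _mm_andnot_si128 (mask, d));
  }

  /* mask ? c : d with SSE4.1: a single pblendvb.  */
  static __m128i
  select_sse41 (__m128i mask, __m128i c, __m128i d)
  {
    return _mm_blendv_epi8 (d, c, mask);
  }
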
> 
> Maybe it's possible to catch the higher-level optimization (min/max)
> on the GIMPLE level instead?
For the integer part, I believe the optimization is already done at the
GIMPLE level.
For the floating point part, x86 {max,min}{ps,pd} is not IEEE-conformant;
it's an exact match of the cond_expr a < b ? a : b (taking -0.0 and NaN
into account).
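
Concretely (an illustrative sketch, not GCC code; the scalar function just
spells out the cond_expr):

  #include <emmintrin.h>

  /* minps is not an IEEE-754 minimum, but element-wise it behaves exactly
     like the cond_expr below: if either operand is NaN, or for the pair
     (-0.0, +0.0), the comparison is false and the second operand is
     returned.  */
  static float
  min_cond_expr (float a, float b)
  {
    return a < b ? a : b;
  }

  static __m128
  min_ps (__m128 a, __m128 b)
  {
    return _mm_min_ps (a, b);
  }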
