On Tue, Sep 26, 2023 at 08:29:16AM +0000, Tamar Christina via Gcc wrote:
> Hi,
>
> I tried to find you two on Sunday but couldn't locate you. Thanks for the
> presentation!
Yes, sadly we could not attend on Sunday because we wanted to be back for
Monday.

> > > We had very interesting discussions during our presentation with Paul
> > > on the support of complex numbers in gcc at the Cauldron.
> > >
> > > Thank you all for your participation!
> > >
> > > Here is a small summary from our viewpoint:
> > >
> > > - Replace CONCAT with a backend defined internal representation in RTL
> > >   --> No particular problems
> > >
> > > - Allow backends to write patterns for operations on complex modes
> > >   --> No particular problems
> > >
> > > - Conditional lowering depending on whether a pattern exists or not
> > >   --> Concerns when the vectorization of split complex operations
> > >       performs better than unvectorized unified complex operations
> > >
> > > - Centralize complex lowering in cplxlower
> > >   --> No particular problems if it doesn't prevent IEEE compliance and
> > >       optimizations (like constant folding)
> > >
> > > - Vectorization of complex operations
> > >   --> 2 representations (interleaved and separated real/imag): cannot
> > >       impose one if some machines prefer the other
> > >   --> Complex are composite modes; the vectorizer assumes that the
> > >       inner mode is scalar to do some optimizations (which ones?)
> > >   --> Mixed split/unified complex operations cannot be vectorized
> > >       easily
> > >   --> Assuming that the inner representation of complex vectors is
> > >       left to target backends, the vectorizer doesn't know it, which
> > >       prevents some optimizations (which ones?)
> > > - Explicit vectors of complex
> > >   --> Cplxlower cannot lower it, and moving veclower before cplxlower
> > >       is a bad idea as it prevents some optimizations
> > >   --> Teaching cplxlower how to deal with vectors of complex seems to
> > >       be a reasonable alternative
> > >   --> Concerns about ABI or indexing if the internal representation is
> > >       left to the backend and differs from the representation in
> > >       memory
> > >
> > > - Impact of the current SLP pattern matching of complex operations
> > >   --> Only with -ffast-math
> > >   --> It can match user-defined operations (not C99) that can be
> > >       simplified with a complex instruction
> > >   --> Dedicated opcode and real vector type chosen VS standard opcode
> > >       and complex mode in our implementation
> > >   --> Need to preserve SLP pattern matching as too many applications
> > >       redefine complex and bypass the C99 standard
> > >   --> So need to harmonize with our implementation
> > >
> > > - Support of the pure imaginary type (_Imaginary)
> > >   --> Still not supported by gcc (and llvm), nor in our
> > >       implementation. Issues come from the fact that an imaginary is
> > >       not a complex with real part set to 0
> > >   --> The same issue arises with complex multiplication by a real
> > >       (which is split in the frontend, and our implementation hasn't
> > >       changed it yet)
> > >   --> Idea: add an attribute to the Tree complex type which specifies
> > >       pure real / pure imaginary / full complex?
> > >
> > > - Fast pattern for IEEE compliant emulated operations
> > >   --> Not enough time to discuss it
> > >
> > > Don't hesitate to add something or bring more precision if you want.
> > >
> > > As I said at the end of the presentation, we have written a paper
> > > which explains our implementation in detail.
> > > You can find it on the wiki page of the Cauldron
> > > (https://gcc.gnu.org/wiki/cauldron2023talks?action=AttachFile&do=view&target=Exposing+Complex+Numbers+to+Target+Back-ends+%28paper%29.pdf).
> >
> > Thanks for the detailed presentation at the Cauldron.
> >
> > My personal summary is that I'm less convinced delaying lowering is the
> > way to go.
>
> I personally like the delayed lowering for scalar because it allows us to
> properly reassociate as a unit. That is to say, it's easier to detect
> a * b * c when they are still complex ops. And the late lowering will
> allow better codegen than today.
>
> However I think we should *unconditionally* not lower them, even in
> situations such as a * b * imag(b). This situation can happen through
> late optimizations anyway, so it has to be dealt with regardless, and I
> don't think it should punt.
>
> I think you can then conditionally lower if the target does *not*
> implement the optab, i.e. for AArch64 the complex mode wouldn't be
> useful.

Indeed, our current approach in the vectorizer works only if the complex
scalar patterns exist as well, and I agree that it would be better if the
absence of either scalar or vector patterns did not prevent any
optimizations.

Keeping everything unified until after the vectorizer and the SLP passes,
and lowering after that, might work. But we would have to try, and see
whether we run into any problems with -ffast-math and/or IEEE 754
compliance.

In particular, in order not to lower imag(b), we could promote it to
__complex_expr__ (0, b, IMAGINARY). At the cost of adding a field to
__complex_expr__, that would help with floating-point compliance and be a
step towards the support of _Imaginary.

> > I do think that if targets implement complex optabs we should use them,
> > but eventually re-discovering complex operations from lowered form is
> > going to be more useful.
> > That's because, as you said, use of _Complex is limited and people
> > invent their own representation. SLP vectorization can discover some
> > ops already, with the limiting factor being that we don't specifically
> > search for only complex operations (plus we expose the result as vector
> > operations, requiring target support for the vector ops rather than
> > [SD]Cmode operations).
>
> I don't think the two are mutually exclusive. I do think we should form
> complex instructions from scalar ops as well, because we can generate
> better expansions.
>
> Today we only expand efficiently when the COMPLEX_EXPR node is still
> there and bitfield expansion then knows that the entire value will be
> written. So rediscovery will help there.
>
> I also think that if we don't lower early, as you mention, we should
> lower the complex operations in the vectorizer. I don't think having the
> complex mode as vectors is useful. This can be easily done by using the
> scalar vect pattern. It'll have to handle all arithmetic ops though, but
> for those where the target has an optab we can form it early, which would
> have the SLP one skip it later.

Our main motivation to introduce complex modes for vectors was not to
duplicate common standard pattern names (SPN) with alternatives like
"cmul" and such. It is not a hard requirement on our part. We just thought
that it would be cleaner.

Paul & Sylvain

> This also means vec_lower doesn't have issues anymore.
>
> Cheers,
> Tamar
>
> > There's the gimple-isel.cc or the widen-mul pass that perform
> > instruction selection, which could be enhanced to discover scalar
> > [SD]Cmode operations.
> >
> > Richard.
> > > Sylvain