On Fri, Aug 22, 2014 at 02:35:02PM -0700, Ian Romanick wrote:
> On 08/22/2014 02:17 PM, Tom Stellard wrote:
> > On Fri, Aug 22, 2014 at 02:10:03PM -0700, Ian Romanick wrote:
> >> On 08/20/2014 11:58 AM, Tom Stellard wrote:
> >>> On Wed, Aug 20, 2014 at 11:13:13AM -0700, Kenneth Graunke wrote:
> >>>> On Wednesday, August 20, 2014 06:41:08 PM Michel Dänzer wrote:
> >>>>> On 20.08.2014 00:04, Connor Abbott wrote:
> >>>>>> On Mon, Aug 18, 2014 at 8:52 PM, Michel Dänzer <mic...@daenzer.net> 
> >>>>>> wrote:
> >>>>>>> On 19.08.2014 01:28, Connor Abbott wrote:
> >>>>>>>> On Mon, Aug 18, 2014 at 4:32 AM, Michel Dänzer <mic...@daenzer.net> 
> >>>>>>>> wrote:
> >>>>>>>>> On 16.08.2014 09:12, Connor Abbott wrote:
> >>>>>>>>>> I know what you might be thinking right now. "Wait, *another* IR?
> >>>>>>>>>> Don't we already have like 5 of those, not counting all the
> >>>>>>>>>> driver-specific ones? Isn't this stuff complicated enough already?"
> >>>>>>>>>> Well, there are some pretty good reasons to start afresh (again...).
> >>>>>>>>>> In the years we've been using GLSL IR, we've come to realize that,
> >>>>>>>>>> in fact, it's not what we want *at all* to do optimizations on.
> >>>>>>>>>
> >>>>>>>>> Did you evaluate using LLVM IR instead of inventing yet another one?
> >>>>>>>>
> >>>>>>>> Yes. See
> >>>>>>>>
> >>>>>>>> http://lists.freedesktop.org/archives/mesa-dev/2014-February/053502.html
> >>>>>>>>
> >>>>>>>> and
> >>>>>>>>
> >>>>>>>> http://lists.freedesktop.org/archives/mesa-dev/2014-February/053522.html
> >>>>>>>
> >>>>>>> I know Ian can't deal with LLVM for some reason. I was wondering if
> >>>>>>> *you* evaluated it, and if so, why you rejected it.
> >>>>>
> >>>>> First of all, thank you for sharing more specific information than
> >>>>> 'table-flipping rage'.
> >>>>>
> >>>>>
> >>>>>> * LLVM is on a different release schedule (6 months vs. 3 months), has
> >>>>>> a different review process, etc., which means that to add support for
> >>>>>> new functionality that involves shaders, we now have to submit patches
> >>>>>> to two separate projects, and then 2 months later when we ship Mesa it
> >>>>>> turns out that nobody can actually use the new feature because it
> >>>>>> depends upon an unreleased version of LLVM that won't be released for
> >>>>>> another 3 months and then packaged by distros even later...
> >>>>>
> >>>>> This has indeed been frustrating at times, but it's better now for
> >>>>> backend changes since Tom has been making LLVM point releases.
> >>>>
> >>>> Yeah - absolutely.
> >>>>
> >>>>> As for the GLSL frontend, I agree with Tom that it shouldn't require
> >>>>> that much direct interaction with the LLVM project.
> >>>>>
> >>>>>
> >>>>>> we've already had problems where distros refused to ship newer Mesa
> >>>>>> releases because radeon depended on a version of LLVM newer than the
> >>>>>> one they were shipping, [...]
> >>>>>
> >>>>> That's news to me, can you be more specific?
> >>>>>
> >>>>> That sounds like basically a distro issue though, since different LLVM
> >>>>> versions can be installed in parallel (and the one used by default
> >>>>> doesn't have to be the newest one). And it even works if another part of
> >>>>> the same process uses a different version of LLVM.
> >>>>
> >>>> Yes, one can argue that it's a distribution issue - but it's an 
> >>>> extremely painful problem for distributions.
> >>>>
> >>>> For example, Debian was stuck on Mesa 9.2.2 for 4 months (2013-12-08 to 
> >>>> 2014-03-22), and I was told this was because of LLVM versioning changes 
> >>>> in the other drivers (primarily radeon, I believe, but probably also 
> >>>> llvmpipe).
> >>>>
> >>>> Mesa 9.2.2 hung the GPU every 5-10 minutes on Sandybridge, and we fixed 
> >>>> that in Mesa 9.2.3.  But we couldn't get people to actually ship it, and 
> >>>> had to field tons of bug reports from upset users for several months.
> >>>>
> >>>> Gentoo has also had trouble updating for similar reasons; Matt (the
> >>>> Gentoo Mesa package maintainer) can probably comment more.
> >>>>
> >>>> I've also heard stories from friends of mine who use radeonsi that they
> >>>> couldn't get new GL features or compiler fixes unless they upgraded both
> >>>> Mesa /and/ LLVM, and that LLVM was usually either not released or not
> >>>> available in their distribution for a few months.
> >>>>
> >>>> Those are the sorts of things I'd like to avoid.  The compiler is easily 
> >>>> the most crucial part of a modern graphics stack; splitting it out into 
> >>>> a separate repository and project seems like a nightmare for people who 
> >>>> care about getting new drivers released and shipped in distributions in 
> >>>> a timely fashion.
> >>>>
> >>>> Or, looking at it the other way: today, everything you need as an Intel 
> >>>> or (AFAIK) Nouveau 3D user is nicely contained within Mesa.  Our 
> >>>> community has complete control over when we do those releases.  New 
> >>>> important bug fixes, performance improvements, or features?  Ship a new 
> >>>> Mesa, and you're done.  That's a really nice feature I'd hate to lose.
> >>>>
> >>>
> >>> It has been a challenge to match versions of LLVM and Mesa for radeonsi,
> >>> but as Michel mentioned, this has been made easier now that we are doing
> >>> LLVM point releases.
> >>>
> >>> However, as I mentioned before, if we were using LLVM IR as a common IR,
> >>> it is unlikely that there would be any new features in Mesa that would
> >>> depend on changes in LLVM.  The only things we would need to modify LLVM
> >>> for would be:
> >>> - Extending the C API
> >>> - Bug fixes for optimization passes
> >>> - Optimization pass improvements
> >>>
> >>> And remember all these changes would be for improving common code that
> >>> is shared between drivers.  All of the important compiler features would
> >>> still go into the driver specific backends, which for most drivers are a
> >>> part of Mesa.
> >>>
> >>> Even for a big new feature like geometry shaders you wouldn't need
> >>> to make any modifications to LLVM to add it to the GLSL compiler in
> >>> Mesa.  The only reason radeonsi has such a hard dependency on LLVM
> >>> is because the entire compiler is part of LLVM.
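
(To make the C-API point above concrete: in that scheme, the common Mesa code
would only ever touch LLVM through the stable llvm-c binding.  Here is a rough,
untested sketch of what a driver-agnostic optimization run could look like --
the pass selection is purely illustrative, the module is assumed to come from
a GLSL-to-LLVM-IR translator that doesn't exist yet, and these legacy
pass-manager bindings are the ones from the LLVM 3.x-era C API:

    /* Hypothetical: run a few generic LLVM passes over a shader module
     * using only the C API; everything backend-specific stays in the
     * driver. */
    #include <llvm-c/Core.h>
    #include <llvm-c/Transforms/Scalar.h>

    static void optimize_shader_module(LLVMModuleRef mod)
    {
        LLVMPassManagerRef pm = LLVMCreatePassManager();

        LLVMAddPromoteMemoryToRegisterPass(pm); /* mem2reg */
        LLVMAddInstructionCombiningPass(pm);    /* instcombine */
        LLVMAddGVNPass(pm);                     /* value numbering */
        LLVMAddCFGSimplificationPass(pm);       /* simplifycfg */

        LLVMRunPassManager(pm, mod);
        LLVMDisposePassManager(pm);
    }

New GL features like geometry shaders would then show up as new code on the
Mesa side of that boundary, not as LLVM patches.)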
> >>
> >> Speaking of new shader stages... how would LLVM handle the 'precise'
> >> keyword in tessellation shaders?  I can envision ways to handle this in
> >> an IR that we control, but it's less obvious how we would handle it in
> >> LLVM or another external compiler package.
> >>
> > 
> > What exactly does precise do? LLVM has fast-math flags you can apply to
> > instructions and also some global flags.
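
(For reference, the per-instruction flags are set through something like the
following with the C++ IRBuilder -- a rough, untested sketch, where 'relaxed'
stands for whatever "not tagged precise" ends up meaning:

    // Emit a * b + c; only relaxed expressions get value-changing
    // optimizations enabled via per-instruction fast-math flags.
    #include "llvm/IR/IRBuilder.h"

    llvm::Value *emit_mul_add(llvm::IRBuilder<> &bld, llvm::Value *a,
                              llvm::Value *b, llvm::Value *c, bool relaxed)
    {
        llvm::FastMathFlags fmf;   // default: all flags clear (strict FP)
        if (relaxed) {
            fmf.setNoNaNs();
            fmf.setNoInfs();
            fmf.setNoSignedZeros();
            fmf.setAllowReciprocal();
        }
        bld.setFastMathFlags(fmf); // applied to the FP instructions below

        return bld.CreateFAdd(bld.CreateFMul(a, b), c);
    }

The global knobs, e.g. unsafe-fp-math in TargetOptions, sit next to that.)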
> 
> Through various means, you can tag a calculation (and its expression tree)
> so that the same calculation in different shaders will produce the exact
> same result.  This generally limits a lot of optimizations on such
> calculations.  For example:
> 
>     // shader A
>     precise float o = a * b + c;
> 
>     // shader B
>     float x = a * b;
>     precise float o = a * b + c;
> 
> If shader A generates a MAD for the expression, shader B must also
> generate a MAD... even if that means being less efficient.  This is also
> why ARB_gpu_shader5 adds the fma function: many fma instructions have
> extra precision vs MUL+ADD.
> 
> This is often used in tessellation shaders so that neighboring patches
> with different materials won't have cracks between them.

Is it correct to say that precise disables fast-math optimizations for the
tagged expression?
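
To put that in code form, my (possibly wrong) mental model is a lowering like
the sketch below -- untested, helper names made up: the precise expression is
emitted with empty fast-math flags, and an explicit GLSL fma() maps straight
to the llvm.fma intrinsic so every shader that uses it gets the same fused
result.

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/Intrinsics.h"
    #include "llvm/IR/Module.h"

    // GLSL fma(a, b, c) -> llvm.fma: always fused, consistently.
    llvm::Value *emit_fma(llvm::IRBuilder<> &bld, llvm::Module *m,
                          llvm::Value *a, llvm::Value *b, llvm::Value *c)
    {
        llvm::Function *fma =
            llvm::Intrinsic::getDeclaration(m, llvm::Intrinsic::fma,
                                            a->getType());
        return bld.CreateCall(fma, {a, b, c});
    }

    // precise a * b + c: strict FP, no fast-math flags.  Whether the backend
    // is still allowed to contract this into a MAD is exactly the question.
    llvm::Value *emit_precise_mul_add(llvm::IRBuilder<> &bld, llvm::Value *a,
                                      llvm::Value *b, llvm::Value *c)
    {
        bld.setFastMathFlags(llvm::FastMathFlags());
        return bld.CreateFAdd(bld.CreateFMul(a, b), c);
    }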

-Tom