On 26/08/14 18:59, Matt Turner wrote:
On Tue, Aug 26, 2014 at 9:00 AM, Jose Fonseca <jfons...@vmware.com> wrote:
If LLVM were a useless piece of junk, we wouldn't have any trouble adding
it as a dependency, as we would be its sole user. But precisely because
LLVM is successful in so many use cases, and several packages therefore
depend on it, we shouldn't depend on it, so we can avoid dealing with the
oh-so-hard dependency issue!? I find that ridiculous: it's precisely
because LLVM is potentially that good that it makes sense for
us/distros/everybody to put up with the dependency issues it may bring,
and worth considering.
It sounds like there are enough people in the Mesa community who are
familiar with LLVM and interested in using it in the GLSL compiler that
someone would be willing to start working on it. Hopefully that's the
case.
I tried going through the LLVM language-frontend tutorial on llvm.org and
only got as far as chapter 4 (the first two chapters don't use LLVM)
before the code failed to compile with LLVM 3.4 on my system (and I
couldn't figure out why). I then found this [1] tiny example using the C
API (not updated for two years) and expected it wouldn't work either, but
was happily surprised that it compiled and worked fine. I see that the C
API is already used in radeonsi and gallivm in Mesa.
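For reference, a minimal C API program in the same spirit (my own sketch,
not the code from [1]) looks roughly like this, and builds against LLVM
3.4:

#include <stdio.h>
#include <llvm-c/Core.h>
#include <llvm-c/Analysis.h>

int main(void)
{
   /* A module holding a single function: i32 sum(i32 a, i32 b) */
   LLVMModuleRef mod = LLVMModuleCreateWithName("sum_module");

   LLVMTypeRef params[2] = { LLVMInt32Type(), LLVMInt32Type() };
   LLVMTypeRef fn_type = LLVMFunctionType(LLVMInt32Type(), params, 2, 0);
   LLVMValueRef sum = LLVMAddFunction(mod, "sum", fn_type);

   /* entry: return a + b */
   LLVMBasicBlockRef entry = LLVMAppendBasicBlock(sum, "entry");
   LLVMBuilderRef builder = LLVMCreateBuilder();
   LLVMPositionBuilderAtEnd(builder, entry);
   LLVMValueRef tmp = LLVMBuildAdd(builder, LLVMGetParam(sum, 0),
                                   LLVMGetParam(sum, 1), "tmp");
   LLVMBuildRet(builder, tmp);

   /* Sanity-check the IR and dump it for inspection. */
   char *error = NULL;
   LLVMVerifyModule(mod, LLVMAbortProcessAction, &error);
   LLVMDisposeMessage(error);
   LLVMDumpModule(mod);

   LLVMDisposeBuilder(builder);
   LLVMDisposeModule(mod);
   return 0;
}

Build with something like `llvm-config --cflags --ldflags --libs core
analysis`.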
Here's what I think would be compelling: using the stable C API, translate
from GLSL IR into LLVM IR, call LLVM's optimizations, and give backends
the option to consume LLVM IR. From there we can evaluate just how
significant the improvement from LLVM's optimizer is. At least two people
have written GLSL IR -> LLVM translators in the past -- Luca Barbieri
(whatever happened to him?!) and Vincent Lejeune (Cc'd). Their code is at
[2] and [3]. I think this plan would also fit nicely with existing LLVM
backends, potentially avoiding a trip through TGSI. I think this is
strictly superior to other ideas like throwing out the GLSL frontend and
translating LLVM IR back "up" to the higher-level GLSL IR.
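For concreteness, the "call LLVM's optimizations" step through the C API
could look something like the sketch below (the pass selection is
illustrative, not a recommendation):

#include <llvm-c/Core.h>
#include <llvm-c/Transforms/Scalar.h>

/* Run a few standard scalar optimizations on a module produced by a
 * hypothetical GLSL IR -> LLVM IR translator. */
static void
optimize_module(LLVMModuleRef mod)
{
   LLVMPassManagerRef pm = LLVMCreatePassManager();

   LLVMAddPromoteMemoryToRegisterPass(pm); /* mem2reg: build SSA form */
   LLVMAddInstructionCombiningPass(pm);    /* peephole combining */
   LLVMAddReassociatePass(pm);
   LLVMAddGVNPass(pm);                     /* redundancy elimination */
   LLVMAddCFGSimplificationPass(pm);

   LLVMRunPassManager(pm, mod);
   LLVMDisposePassManager(pm);
}

A backend that opts in would then consume the optimized module directly;
the others would translate out of it.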
So, maybe some people experienced with LLVM from VMware and AMD are
interested in taking the lead?
[1] https://github.com/wickedchicken/llvm-c-example
[2] http://cgit.freedesktop.org/~anholt/mesa/log/?h=llvm-4 (Eric's branch
based on Luca's code)
[3] http://cgit.freedesktop.org/~vlj/mesa/ (one of the glsl-to-llvm*
branches)
It happens that the optimization quality of the incoming GLSL code is
for the most part irrelevant for VMware drivers (*). So I'm afraid I
don't see us at VMware staffing any sort of GLSL -> LLVM initiative in
the near future.
So, even though it seems to me that Mesa will unavoidably gravitate
towards more widespread use of LLVM eventually, I'm unable to help provide
more critical mass myself.
Which is why I don't oppose NIR. Even though it seems a less direct path
than overcoming the hurdles of using LLVM now, NIR's SSA nature seems a
step forward nevertheless. And of course, whoever does the job gets to
decide how it's done.
But in terms of taking a lead towards GLSL -> LLVM, LunarG seems to be
quite ahead with their GlassyMesa/LunarGlass proof of concept, which is
BSD-licensed:
http://lunarg.com/glassymesa/
http://testing.lunarg.com/wp-content/uploads/2014/06/GlassyMesaSlides-08Jun2014.pdf
http://lists.freedesktop.org/archives/mesa-dev/2014-June/061140.html
They use the Glslang frontend instead of the glsl2 frontend, but it's
probably not hard to retrofit it to consume glsl2 IR if that makes things
easier. So maybe somebody could look into leveraging some of this work.
I believe that a key challenge with LLVM is extending LLVM IR to represent
graphics shaders effectively, but it looks like this has been explored
quite deeply per http://www.lunarglass.org/documentation, in particular
http://www.lunarglass.org/documentation/LunarGLASSSpecification1.2.pdf
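To give a flavor of that approach: LunarGLASS models graphics operations
as LLVM intrinsics, which at the C API level amounts to building calls to
declared functions. A minimal sketch (the name and signature below are
made up for illustration; see the specification above for the real
intrinsics):

#include <llvm-c/Core.h>

/* Sketch: model a texture sample as a call to a named "intrinsic".
 * "gla.fTextureSample" and its signature are illustrative only. */
static LLVMValueRef
build_tex_sample(LLVMModuleRef mod, LLVMBuilderRef b,
                 LLVMValueRef sampler_idx, LLVMValueRef coord)
{
   LLVMTypeRef v4f32 = LLVMVectorType(LLVMFloatType(), 4);
   LLVMTypeRef args[2] = { LLVMInt32Type(), v4f32 };
   LLVMTypeRef fn_ty = LLVMFunctionType(v4f32, args, 2, 0);

   LLVMValueRef fn = LLVMGetNamedFunction(mod, "gla.fTextureSample");
   if (!fn)
      fn = LLVMAddFunction(mod, "gla.fTextureSample", fn_ty);

   LLVMValueRef call_args[2] = { sampler_idx, coord };
   return LLVMBuildCall(b, fn, call_args, 2, "texel");
}

The hard design questions are then which operations become intrinsics and
how shader-specific information survives the optimizer.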
Jose
(*) Because on all the drivers we care about, GLSL shaders end up being
compiled and optimized again after we are done with them:
- in the VMware svga driver, the host graphics drivers will compile and
optimize them;
- in llvmpipe, we pass the shaders to LLVM, which will optimize them
before generating native machine code.
That is, as long as the GLSL compiler doesn't obfuscate/pessimize the
shaders, and information is preserved relatively unmolested, we're good.
I think we'll care more about the IR if/when we tackle compute on the
drivers mentioned above, as TGSI will probably be inadequate to represent
the incoming compute kernels. So I expect we'll want to consume LLVM IR
directly, and I also suppose we'll eventually want to drop TGSI and
consume LLVM IR alone.