On 05/22/2017 07:15 PM, Timothy Arceri wrote:
On 23/05/17 10:44, Marek Olšák wrote:
On Tue, May 23, 2017 at 12:07 AM, Timothy Arceri
<tarc...@itsqueeze.com> wrote:
On 23/05/17 05:02, Marek Olšák wrote:

On Sun, May 21, 2017 at 12:48 PM, Nicolai Hähnle <nhaeh...@gmail.com>
wrote:

Hi all,

I've been looking into ARB_gl_spirv for radeonsi. I don't fancy
re-inventing the ~8k LOC of src/compiler/spirv, and there's already a
perfectly fine SPIR-V -> NIR -> LLVM compiler pipeline in radv, so I
looked into re-using that.

It's not entirely straightforward because radeonsi and radv use
different "ABIs" for their shaders, i.e. prolog/epilog shader parts,
different user SGPR allocations, descriptor loads work differently
(obviously), and so on.

Still, it's possible to separate the ABI from the meat of the
NIR -> LLVM translation. So here goes...


The Step-by-Step Plan
=====================

1. Add an optional GLSL-to-NIR path (controlled by R600_DEBUG=nir) for
very simple VS-PS pipelines.

2. Add GL_ARB_gl_spirv support to Mesa and test it on simple VS-PS
pipelines.

3. Fill in all the rest:
3a. GL 4.x shader extensions (SSBOs, images, atomics, ...)
3b. Geometry and tessellation shaders
3c. Compute shaders
3d. Tests

I've started with step 1 and got basic GLSL 1.30-level vertex shaders
working via NIR. The code is here:
https://cgit.freedesktop.org/~nh/mesa/log/?h=nir

The basic approach is to introduce `struct ac_shader_abi' to capture
the differences between radeonsi and radv. In the end, the entry point
for NIR -> LLVM translation will simply be:

     void ac_nir_translate(struct ac_llvm_context *ac,
                           struct ac_shader_abi *abi,
                           struct nir_shader *nir);

Setting up the LLVM function with its parameters is still considered
part of the driver.


This sounds good.



Questions
=========

1. How do we get good test coverage?
------------------------------------
A natural candidate would be to add a SPIR-V execution mode for the
piglit shader_runner. That is, use build scripts to extract shaders
from shader_test files and feed them through glslang to get spv files,
and then load those from shader_runner if a `-spirv' flag is passed on
the command line.

This immediately runs into the difficulty that GL_ARB_gl_spirv wants
SSO linking semantics, and I'm pretty sure the majority of shader_test
files don't support that -- if only because they don't set a location
on the fragment shader color output.

Some ideas:
1. Add a GL_MESA_spirv_link_by_name extension
2. Have glslang add the locations for us (probably difficult because
   glslang seems to be focused on one shader stage at a time.)
3. Hack something together in the shader_test-to-spv build scripts via
   regular expressions (and now we have two problems? :-) )
4. Other ideas?
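For what it's worth, the GL side of a `-spirv' mode is small. A sketch
of what loading a pre-built .spv blob through GL_ARB_gl_spirv could
look like in shader_runner (error handling and the actual piglit
plumbing omitted):

     #include <epoxy/gl.h>  /* or whichever GL loader piglit uses */

     /* Sketch: load a pre-compiled SPIR-V module via ARB_gl_spirv.
      * Unlike glShaderSource/glCompileShader, the module is handed
      * over as a binary and then specialized by entry point name. */
     static GLuint
     load_spirv_shader(GLenum stage, const void *spirv, GLsizei size)
     {
        GLuint shader = glCreateShader(stage);

        glShaderBinary(1, &shader, GL_SHADER_BINARY_FORMAT_SPIR_V_ARB,
                       spirv, size);
        /* No specialization constants in this sketch. */
        glSpecializeShaderARB(shader, "main", 0, NULL, NULL);

        GLint ok;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
        return ok ? shader : 0;
     }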


We have plenty of GLSL SSO shader tests in shader-db, but we can only
compile-test them.

Initially I think we can convert a few shader tests to SSO manually
and use those.



2. What's the Gallium interface?
--------------------------------
Specifically, does it pass SPIR-V or NIR?

I'm leaning towards NIR, because then specialization, mapping of
uniform locations, atomics, etc. can be done entirely in st/mesa.

On the other hand, Pierre Moreau's work passes SPIR-V directly. On the
third hand, it wouldn't be the first time that clover does things
differently.
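For reference, if NIR is chosen, most of the Gallium plumbing exists
already via PIPE_SHADER_IR_NIR, which freedreno/vc4 use today. A rough
sketch, with illustrative local names, of how st/mesa would hand a
driver NIR instead of TGSI tokens:

     #include "pipe/p_context.h"
     #include "pipe/p_state.h"

     /* Rough sketch: create a VS from either NIR or TGSI through the
      * existing pipe_shader_state interface. */
     static void *
     create_vs(struct pipe_context *pipe, void *nir,
               const struct tgsi_token *tokens)
     {
        struct pipe_shader_state state = {0};

        if (nir) {
           state.type = PIPE_SHADER_IR_NIR;
           state.ir.nir = nir;      /* driver takes ownership */
        } else {
           state.type = PIPE_SHADER_IR_TGSI;
           state.tokens = tokens;
        }
        return pipe->create_vs_state(pipe, &state);
     }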


If you passed SPIR-V to radeonsi and let radeonsi do SPIR-V -> NIR ->
LLVM, you wouldn't need the serialization capability in NIR. You can
just use SPIR-V as the shader binary and the major NIR disadvantage is
gone. Also, you won't have to touch GLSL-to-NIR, and the radeonsi
shader cache will continue working as-is.

However, I don't know how much GL awareness is required for doing
SPIR-V -> NIR in radeonsi. Additional GL-specific information might
have to be added to SPIR-V by st/mesa for the conversion to be doable.
You probably know better.

st/mesa or core Mesa just needs to fill gl_program, gl_shader, and
gl_shader_program by parsing SPIR-V and not relying on NIR. I don't
know how feasible that is, but it seems to be the only thing needed in
shared code.

That also answers the NIR vs TGSI debate for the shader cache. The
answer is: Neither.


Just to list some downsides to this approach: not switching the GLSL
path to also use NIR has the following negatives:

1. We don't get to leverage the large GL test suites and app ecosystem
for testing the NIR -> LLVM pass, both during development and
afterwards.

2. Jason has already said it best, so to quote his reply:
"There have been a variety of different discussions over the last few
years about compiler design choices but we've lacked the ability to get
any good apples-to-apples comparisons.  This may provide some
opportunities to do so."

3. The GLSL IR opts are both slow and not always optimal (possibly
transforming the code into something that's harder to optimise later),
but due to uniform/varying optimisation requirements, some
optimisations are required *before* we can do validation. With NIR we
have an opportunity to do these optimisations in NIR, either by
building a NIR-based linker for the final linking stage
(uniform/varying validation/location assignment) or by a little bit of
back and forth of information between NIR and GLSL IR. This is
something that can't really be done with LLVM/Gallium. I was working
towards this while at Collabora.

4. We don't get to drop the glsl_to_tgsi pass, which is not the most
maintenance-friendly piece of code. Also, currently 10% of CPU time is
spent in the slow TGSI optimisations during start-up of Deus Ex, which
equals around 50 seconds on my machine. Most of this optimisation is
clean-up simply due to how sloppy the glsl_to_tgsi pass is.

5. It's probably arguable, but using GLSL -> NIR should result in more
shared code paths, both between radeonsi/radv and with the drivers for
other GPUs: anv/freedreno/vc4.

Anyway just a few things to think about.

Using GLSL -> NIR for radeonsi won't really change the GLSL linker
situation, because there are 12 other drivers consuming only TGSI.

Ignoring the software drivers, Nouveau is the only one in active
development though, right?

What are you getting at with that question? The VMware svga driver is actively being developed.

-Brian


I guess it's OK to switch only radeonsi to NIR if it improves compile
times, but we also have the shader cache, so I don't know if it's worth
it just for the faster compilation that takes place only on the first
run. It's very hard to justify the massive development effort here.


Rob seemed to think wiring up geom/tess support for glsl_to_nir should
be straightforward.

IMO it would be interesting to be able to play around with the various
NIR optimisation passes in conjunction with LLVM and shader-db; it
could be useful for comparisons and identifying weaknesses in both
compilers.

Anyway, there is value in either approach; I just thought I'd throw
some counterpoints out there :)