> Structured flow control is the issue. What's the solution for getting
> that back out of LLVM?
Well, my idea is the following. First, break any self-loop edges, then do an SCC decomposition of the CFG, and output the DAG of the SCCs in topological order. SCCs of size 1 are basic blocks, so just emit them. SCCs of size >1 become loops. Choose the basic block with the most in-edges from outside the SCC, and make that the loop header. Now, make all the edges from outside to inside the SCC, and those from inside to the header, pass through a new basic block after setting an appropriate flag, which this new basic block uses to dispatch them to their "old destinations". All in-edges now go to this single node, which makes the CFG reducible. Then, do a similar thing with the out-edges, so that there is a single "after" node for the loop. After that, recurse on the graph formed by the SCC minus the original header, plus the in-edge dispatch node, which becomes part of the loop.

This gives you a CFG which is composed only of loops and conditional forward gotos. Now, process each loop body, which is an acyclic graph after collapsing nested loops. Every time a node has two successors, that's an "if". On each branch, you put those basic blocks that are dominated by the branch out-edge. Now, if these blocks have out-edges to a single node, that's the "endif", and you just recursively process the branches. Otherwise, "cut" those edges by introducing a new dispatch node with a flag.

A further issue is switch statements. These can be treated as successive ifs, but the way they are split affects the conciseness of the result. The idea here is to split the graph into biconnected components, and see if any articulation point separates some of the switch cases from the others. If so, you generate an if to separate those cases, and recurse. Post-dominators might also work for this, but it seems biconnected components are required to properly handle multiple exits due to breaks and returns.

After all this, you rerun all LLVM optimization passes with all parts that could destroy this structure disabled. Then, you convert it to GLSL IR or whatever you want.

I have code that does this, and it seems to work on the tests I did, but it's not yet in a state where it would be mergeable in LLVM upstream. In particular, it's hard to get nice clean code and avoid quadratic algorithms.
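To make the loop step a bit more concrete, here is a rough C++ sketch of the dispatch-node construction on a toy adjacency-list CFG. This is not my actual pass: Graph, makeLoopDispatch and the block numbering are made up for illustration, the SCC is assumed to have been computed already (e.g. with Tarjan's algorithm), and the per-edge flag that the dispatch block would switch on is elided.

#include <cstdio>
#include <map>
#include <set>
#include <vector>

struct Graph {
    std::map<int, std::vector<int>> succ;  // succ[n] = successors of block n
};

// Rewire one SCC so that every edge entering it, and every edge from inside
// it back to the header, goes through a single new "dispatch" block.
int makeLoopDispatch(Graph &g, const std::set<int> &scc, int dispatch) {
    // 1. Pick the loop header: the block with the most in-edges from
    //    outside the SCC.
    std::map<int, int> externalIn;
    for (auto &[n, outs] : g.succ)
        for (int s : outs)
            if (scc.count(s) && !scc.count(n))
                ++externalIn[s];
    int header = -1, best = -1;
    for (int n : scc) {
        int c = externalIn.count(n) ? externalIn[n] : 0;
        if (c > best) { best = c; header = n; }
    }

    // 2. Redirect entry edges and back edges to the header so that they
    //    target the dispatch block instead.  Conceptually each rewired edge
    //    also sets a flag, and the dispatch block switches on it to jump to
    //    the edge's old destination (flag handling is left out here).
    std::set<int> oldTargets;
    for (auto &[n, outs] : g.succ)
        for (int &s : outs) {
            bool entryEdge = scc.count(s) && !scc.count(n);
            bool backEdge  = scc.count(n) && s == header;
            if (entryEdge || backEdge) {
                oldTargets.insert(s);
                s = dispatch;
            }
        }
    g.succ[dispatch].assign(oldTargets.begin(), oldTargets.end());
    return header;
}

int main() {
    // Irreducible two-entry loop: 0 -> {1, 2}, 1 -> 2, 2 -> {1, 3}.
    Graph g;
    g.succ = {{0, {1, 2}}, {1, {2}}, {2, {1, 3}}, {3, {}}};
    int header = makeLoopDispatch(g, /*scc=*/{1, 2}, /*dispatch=*/4);
    for (auto &[n, outs] : g.succ) {
        std::printf("%d ->", n);
        for (int s : outs) std::printf(" %d", s);
        std::printf("\n");
    }
    std::printf("header = %d, single loop entry is now block 4\n", header);
}

After the rewiring, the dispatch block is the only way into the loop, which is what makes the result reducible; the real pass of course also has to materialize the flag stores and the switch inside the dispatch block.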
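Similarly, a rough sketch of the "if"/"endif" test on an acyclic body. Here "dominated by the branch out-edge" is computed as "unreachable from the region entry once that one edge is deleted", which is equivalent for a single-entry region; reachable, edgeDominated and findEndif are again invented names for illustration, not anything that exists in LLVM.

#include <cstdio>
#include <map>
#include <set>
#include <vector>

using Graph = std::map<int, std::vector<int>>;   // successor lists

// Blocks reachable from 'entry', optionally ignoring one edge skipFrom->skipTo.
static std::set<int> reachable(const Graph &g, int entry,
                               int skipFrom = -1, int skipTo = -1) {
    std::set<int> seen;
    std::vector<int> work{entry};
    while (!work.empty()) {
        int n = work.back();
        work.pop_back();
        if (!seen.insert(n).second) continue;
        auto it = g.find(n);
        if (it == g.end()) continue;
        for (int s : it->second)
            if (!(n == skipFrom && s == skipTo))
                work.push_back(s);
    }
    return seen;
}

// Blocks dominated by the edge b->t: every path from the entry to them goes
// through that edge, i.e. they become unreachable once the edge is deleted.
static std::set<int> edgeDominated(const Graph &g, int entry, int b, int t) {
    std::set<int> whole = reachable(g, entry);
    std::set<int> without = reachable(g, entry, b, t);
    std::set<int> res;
    for (int n : whole)
        if (!without.count(n)) res.insert(n);
    return res;
}

// For a two-successor block b, return the single join ("endif") block if every
// edge leaving the two branch regions targets one node, else -1, meaning the
// extra exits have to be cut with a dispatch node and a flag.
int findEndif(const Graph &g, int entry, int b) {
    int t = g.at(b)[0], f = g.at(b)[1];
    std::set<int> region = edgeDominated(g, entry, b, t);
    std::set<int> other  = edgeDominated(g, entry, b, f);
    region.insert(other.begin(), other.end());

    std::set<int> exits;
    for (int n : region)
        if (g.count(n))
            for (int s : g.at(n))
                if (!region.count(s)) exits.insert(s);
    // A branch can be empty (b jumps straight to the join); count that edge
    // as leaving the region too.
    if (!region.count(t)) exits.insert(t);
    if (!region.count(f)) exits.insert(f);
    return exits.size() == 1 ? *exits.begin() : -1;
}

int main() {
    // Plain diamond: both branches of block 0 rejoin at block 3.
    Graph g = {{0, {1, 2}}, {1, {3}}, {2, {3}}, {3, {}}};
    std::printf("%d\n", findEndif(g, /*entry=*/0, /*branch=*/0));  // prints 3

    // One branch also breaks out to block 4, which is reachable after the
    // join as well, so there is no single endif and an edge must be cut.
    Graph h = {{0, {1, 2}}, {1, {3}}, {2, {3, 4}}, {3, {4}}, {4, {}}};
    std::printf("%d\n", findEndif(h, 0, 0));                       // prints -1
}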