On 2014-06-25, 10:02 AM, Richard Biener wrote:
On Wed, Jun 25, 2014 at 4:00 PM, Vladimir Makarov <vmaka...@redhat.com> wrote:
On 2014-06-25, 5:32 AM, Renato Golin wrote:

On 25 June 2014 10:26, Bingfeng Mei <b...@broadcom.com> wrote:

Why is GCC code size so much bigger than LLVM's? Does -Ofast do more
unrolling on GCC? Increasing code size doesn't seem to help performance
(164.gzip & 197.parser). Are there comparisons for O2? I guess that is
more useful for typical mobile/embedded programmers.


Hi Bingfeng,

My analysis wasn't as thorough as Vladimir's, but I found that GCC
wasn't eliminating some large blocks of dead code or inlining as much
as LLVM was.


   That might be a consequence of the difference in aliasing I wrote about.
I looked at the code LLVM and GCC generated for an interpreter and also saw
bigger code from GCC.

   The interpreter executes a sequence of bytecodes; each bytecode checks the
types of its variables (small structures in memory) and sets up the values and
types of the result variables.  GCC was worse at propagating the variable type
info (e.g. int) through the bytecode sequence where that would be possible and
at removing the unnecessary code (the cases where other types, e.g. fp, are
processed).  LLVM was more successful with this task.
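
   Roughly the kind of pattern I mean, as a minimal made-up sketch (the names,
the struct layout, and the bytecode are invented here, not taken from the
actual interpreter):

    enum vtype { T_INT, T_FP };

    struct value {                     /* small variable descriptor in memory */
        enum vtype type;
        union { long i; double f; } u;
    };

    /* One bytecode: add two variables, set type and value of the result.  */
    static void op_add(struct value *dst, const struct value *a,
                       const struct value *b)
    {
        if (a->type == T_INT && b->type == T_INT) {
            dst->type = T_INT;
            dst->u.i = a->u.i + b->u.i;
        } else {                       /* fp case */
            double x = a->type == T_INT ? (double) a->u.i : a->u.f;
            double y = b->type == T_INT ? (double) b->u.i : b->u.f;
            dst->type = T_FP;
            dst->u.f = x + y;
        }
    }

    void sequence(struct value *v)
    {
        v[0].type = T_INT;  v[0].u.i = 1;
        v[1].type = T_INT;  v[1].u.i = 2;

        /* Straight-line bytecode sequence: after the first op the compiler
           could know v[2].type == T_INT, so the fp case of the second op
           (once op_add is inlined) is unnecessary code and could go away.  */
        op_add(&v[2], &v[0], &v[1]);
        op_add(&v[3], &v[2], &v[1]);
    }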

Maybe LLVM is just too aggressive here (but is lucky to not miscompile
this case).


Maybe. But in this case LLVM did the right thing. The variables were addressed
through a restrict pointer. Some (temporary) variables were reused, and the
control-flow-insensitive aliasing in GCC (as far as I know that is what GCC
has right now, although I may be wrong because I have never looked at the
aliasing code) cannot deal with this well.
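
For illustration, something along these lines (a schematic made-up fragment
with invented names; the real code is larger and reuses the temporary slots in
more involved ways):

    struct value { int type; union { long i; double f; } u; };  /* 0 = int, 1 = fp */

    long step(struct value *restrict vars, struct value *other)
    {
        /* The result of one bytecode goes into a temporary slot.  */
        vars[0].type = 0;
        vars[0].u.i = 42;

        /* Because 'vars' is restrict-qualified, this store cannot touch
           vars[0], so vars[0].type is still known to be 0 below.  */
        other->type = 1;
        other->u.f = 2.5;

        long r;
        if (vars[0].type == 0)
            r = vars[0].u.i;            /* int case: what should remain */
        else
            r = (long) vars[0].u.f;     /* fp case: dead if restrict is used */

        /* Later the same temporary slot is reused for an fp value; the point
           above is that a flow-insensitive treatment of the slot makes it
           harder to keep this use apart from the earlier one.  */
        vars[0].type = 1;
        vars[0].u.f = 3.5;

        return r;
    }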

My overall impression from several years of benchmarking GCC and LLVM is that
LLVM is more buggy, even on the major x86/x86-64 platform. Also, I found on
this code that GCC is much better at dead store elimination: it removed a lot
of dead stores, while LLVM did not remove any.
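
A trivial example of the kind of dead stores in question (again an invented
fragment, not the benchmark code):

    struct value { int type; long i; };

    long two_steps(struct value *restrict v)
    {
        /* Result of the first bytecode ...  */
        v->type = 1;
        v->i = 42;

        /* ... overwritten by the next bytecode before anything reads it,
           so both stores above are dead and can be eliminated.  */
        v->type = 0;
        v->i = 7;

        return v->i;
    }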
