Robert Dewar wrote:
Vladimir Makarov wrote:
Although that is not quite the right argument against your point, the
vectorization example is a good one. ICC vectorizes many more loops
than gcc does, and vectorized loops are much bigger than their
non-vectorized variants. So faster code does not mean smaller code in
general. There are many optimizations that make code bigger and
faster: function versioning (based on argument values), aggressive
inlining, modulo scheduling, vectorization, loop unrolling, loop
versioning, loop tiling, etc. So even if both compilers do the same
optimizations, whichever compiler is more successful at them will
generate bigger and faster code.
Sure, we can all find such examples, but if you take a large program
(say hundreds of thousands of lines), you will find that the speed
vs. size relation holds pretty well.
Definitely not for the Intel compiler on modern x86_64 processors
(although it is most probably true for some other processors, like ARM).
ICC really generates much bigger code than GCC, even with subtarget
versioning taken out of the picture. The closest ICC analog of gcc -O3
on x86_64 would be -O3 -xT; -xT means generating code for only one
subtarget, Core2.
I tried to compile the biggest one-file program I have (about 500K
lines). ICC crashed on it because 8GB of memory was not enough, whereas
gcc handles it fine in 2GB. So I had to check SPEC2006 instead. On
average, the code size increase for ICC over all of SPEC2006 was 34%.
But because you mentioned programs with hundreds of thousands of lines,
I am also giving numbers for some individual programs from SPEC2006:
           lines   ICC code size increase
  gromacs  400K    23%
  tonto    125K    29%
  wrf      115K    44%
  gobmk    197K    -2%
I got the impression long ago that ICC is good mostly for FP programs
(for integer benchmarks gcc frequently generates better code), but if
icc stays on its current course, gcc will always be the better system
compiler.