Yes, GCC is bigger and slower and for several architectures generates bigger, slower code with every release, though saying so won't make you very popular on this list! :)
One theory is that there are now so many different optimization passes (and, worse, clever case-specific hacks hidden in the backends) that the interactions between the lot of them are now chaotic. Selecting optimization flags by hand is no longer humanly possible.

There is a project to untangle the mess: Grigori Fursin's MILEPOST GCC at http://ctuning.org/wiki/index.php/CTools:MilepostGCC - an AI-based attempt to automatically select combinations of GCC optimization flags according to their measured effectiveness and a profile of your source code's characteristics. The idea is fairly repulsive but effective - it reports major speed gains, of the order of twice as fast compared to the standard "fastest" -O options, and there is a Google Summer of Code 2009 project based on this work.

It seems to me that much over-hacked software lives a life cycle much like the human one: infancy, adolescence, adulthood, middle age (spot the spread!) and ultimately old age and senility, exhibiting characteristics at each stage akin to the mental faculties of a person.

If you're serious about speed, you could try MILEPOST GCC, or try the current up-and-coming "adolescent" open source compiler, LLVM at llvm.org
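To make the flag-selection problem concrete, here is a toy sketch of the brute-force version of what MILEPOST automates. This is NOT how MILEPOST works (it uses machine learning over program features rather than exhaustive search, precisely because the real flag space is far too large to enumerate); the flag names and the `bench.c` benchmark are just illustrative assumptions.

```python
import itertools
import subprocess
import time

# A tiny, hypothetical flag pool. GCC actually has hundreds of -f options,
# which is why exhaustive search like this does not scale and MILEPOST
# resorts to machine learning instead.
FLAGS = ["-funroll-loops", "-fomit-frame-pointer", "-ftree-vectorize"]

def search(measure, flags=FLAGS):
    """Try every subset of `flags`, keeping the combination that `measure`
    reports as fastest. `measure` maps a tuple of flags to a runtime in
    seconds."""
    best_combo, best_time = (), float("inf")
    for r in range(len(flags) + 1):
        for combo in itertools.combinations(flags, r):
            t = measure(combo)
            if t < best_time:
                best_combo, best_time = combo, t
    return best_combo, best_time

def measure_with_gcc(combo, src="bench.c"):
    """One possible measurement function: compile `src` with the candidate
    flags and time the resulting binary. (Sketch only: assumes gcc is
    installed and that bench.c is a representative benchmark.)"""
    subprocess.run(["gcc", "-O2", *combo, src, "-o", "bench"], check=True)
    start = time.perf_counter()
    subprocess.run(["./bench"], check=True)
    return time.perf_counter() - start
```

With three flags this loop runs eight compile-and-measure cycles; with the hundreds of flags GCC really has, the subsets number 2^N, which is the combinatorial wall MILEPOST's learned model is meant to climb over.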