Most people aren't waiting on the compilation of single files. When they are, it is usually because a single compilation unit requires parsing and compiling too many unchanging files, in which case the primary concern is avoiding that redundant work in the first place.
The common case is that people just don't use the -j feature of make, because

 1) they don't know about it,
 2) their IDE doesn't know about it,
 3) they got burned by bad Makefiles, or
 4) it's just too much typing.

Making single compilations more complex through threading seems wrong. Right now, each compilation invokes the compiler driver (gcc), which invokes the front end and then the assembler. All of these processes need to be initialized, need to communicate, need to clean up, etc. While one might argue for "gcc -pipe" to get more parallelism, I'd guess we win more by writing object files directly to disk, like virtually every other compiler on the planet.

Just compiling

    int main() { puts ("Hello, world!"); return 0; }

takes 342 system calls on my Linux box, most of them related to creating processes, repeated dynamic linking and other initialization stuff, and reading and writing temporary files for communication. For every instruction processed, we call printf to produce nicely formatted output with decimal operands, which later gets parsed again back into binary format (see the sketch in the P.S. below). Ideally, we'd do just one read of the source and one write of the object; then we'd need far fewer than 100 system calls for the entire compilation.

Most of my compilations (on Linux, at least) already use close to 100% of the CPU. Adding more overhead for threading and communication/synchronization can only hurt.

-Geert
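
P.S. To make the print-then-re-parse point concrete, here is a rough sketch of the two paths for a single "movl $N, %eax". The function names and the flat output buffer are made up for illustration; this is not how GCC's output machinery is actually structured.

    #include <stdio.h>
    #include <string.h>

    /* Illustrative sketch only -- not GCC's actual interfaces.  */

    /* Today, roughly: for each instruction the compiler formats text...  */
    void emit_as_text (FILE *asm_file, int value)
    {
      fprintf (asm_file, "\tmovl\t$%d, %%eax\n", value);
      /* ...and the assembler, running as a separate process, has to parse
         that decimal text back into the five bytes it encodes.  */
    }

    /* Writing the object directly would skip the formatting, the
       temporary file, and the re-parsing entirely.  */
    void emit_as_bytes (unsigned char *obj_buf, int value)
    {
      obj_buf[0] = 0xb8;                  /* movl $imm32, %eax       */
      memcpy (obj_buf + 1, &value, 4);    /* little-endian immediate */
    }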