http://gcc.gnu.org/bugzilla/show_bug.cgi?id=51386
--- Comment #3 from Hans-Peter Nilsson <hp at gcc dot gnu.org> 2011-12-02 11:07:20 UTC ---
(In reply to comment #1)
> Hans-Peter, can it be a memory issue? The recent changes imply that more
> memory is used by these data structures, and that is largely unavoidable,

Unavoidable, really?

> thus if there is nothing wrong algorithmically

But if there's no algorithmic effect and it's just using more memory, then
what's the improvement? Ah ok, looking at the changes I see, maybe... is the
fix for empty buckets perhaps exposing load_factor.cc as an odd case not
worth optimizing? Oh, I see floating-point changes; has the patch perhaps
increased the number of floating-point computations very much? That would be
bad for soft-float targets.

> and the slow down is due to more memory
> being used, there isn't much we can do, besides tweaking the test for
> simulators, of course.

Memory access cost is practically free in a simulator, so I'd expect to see
effects the other way round, i.e. non-simulator setups (where cache misses
really hurt) would suffer more than (non-cache) simulator setups. :)  Ok,
half-joking: an increased number of instructions would mean getting just the
same effect.

At least the simulation finally ended, taking:

/tmp/hpautotest-gcc1/cris-elf/pre/bin/cris-elf-run load_factor.exe  6766.54s
user 0.61s system 84% cpu 2:13:52.65 total

Yes, two hours and close to fourteen minutes, instead of less than ten
minutes. So, if this comes down to just tweaking the test-case, I'd suggest
trivially reducing the loop counts for simulator targets by a factor twenty
times larger than the expected scaled-timeout ratio, taken to one significant
figure, i.e. 20*6800/600 ~= 200.
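For concreteness, something along the lines of the usual testsuite idiom is
what I have in mind; the macro name and the counts below are made up for
illustration, not what load_factor.cc actually uses:

// Sketch only: scale the loop count down when running on a simulator.
// ITERATIONS and the numbers are hypothetical.
// { dg-options "-DITERATIONS=100" { target simulator } }
#ifndef ITERATIONS
#define ITERATIONS 20000  // full loop count for real hardware
#endif

#include <unordered_set>

int main()
{
  std::unordered_set<int> s;
  for (int i = 0; i != ITERATIONS; ++i)
    s.insert(i);  // each insert exercises the load-factor machinery
}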
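And back to the floating-point angle above: a minimal sketch, assuming the
rehash policy compares the element count against bucket count times max load
factor in float on every insertion (illustrative names, not the actual
_Hashtable code):

#include <cstddef>

// Illustrative only: if something like this runs per insert, a soft-float
// target pays libcalls for the conversions, the multiply and the compare
// every time.
inline bool
needs_rehash(std::size_t n_elems, std::size_t n_buckets, float max_load_factor)
{
  return static_cast<float>(n_elems)
         > static_cast<float>(n_buckets) * max_load_factor;
}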