On Mon, Dec 20, 2004 at 11:10:53PM +0100, Sven Luther wrote:
> Actually, as i recall, the 64bit code should be slower, since all pointers are
> now 64bit, and thus you have to transfer double amount of code from the ram
> and so on.
AIUI, 64-bit powerpc code is generally only slightly larger than 32-bit
powerpc code. Like, say, 1%. (I'd be interested in actual numbers,
including differences in the sizes of individual sections in a binary.)

Since the memory bandwidth of processors greatly exceeds the bandwidth
actually consumed by typical code (i.e., non-altivec and not optimized
by hand), I don't see the extra size of pointers contributing to a
memory bottleneck. One exception is if you are copying around massive
arrays of pointers (or, to a lesser extent, structures containing
pointers), which, although not uncommon, is probably not an interesting
optimization case[1].

However, for a given cache size, any data containing pointers will take
up a larger chunk of cache. 64-bit processors generally have larger
caches to compensate for this, but with the same cache size one would
expect 32-bit code manipulating lots of pointers to have a higher hit
ratio, and thus to be faster (rough sketch appended at the end of this
mail). Such gains would be algorithm-dependent, hard to measure, and
small (except in corner cases).

I can really only think of two cases where 64-bit code could be faster
(not that it _would_ be in practice) -- 1) arithmetic on 64-bit types,
and 2) optimized versions of strlen().

All in all, I'd consider it a wash, and would not be too concerned about
whether the system I was running was 32-bit native or 64-bit native. It
would bother me (slightly), however, if there were two versions of
libraries competing for cache/RAM/disk space.

dave...
-- 
[1] One might consider significant copying of large arrays a bug in the
    program itself.
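
To put a rough number on the cache-footprint point above, here is a
minimal sketch of my own (illustrative only, not measured on real
hardware; "list_node" is just a made-up example type, and exact sizes
depend on the compiler and ABI padding rules):

/* Minimal sketch (illustrative only): a pointer-heavy node roughly
 * doubles in size under a 64-bit ABI, so fewer nodes fit in the same
 * amount of data cache. */
#include <stdio.h>

struct list_node {
    struct list_node *next;   /* 4 bytes on 32-bit, 8 bytes on 64-bit */
    struct list_node *prev;   /* 4 bytes on 32-bit, 8 bytes on 64-bit */
    int key;                  /* 4 bytes on both */
};

int main(void)
{
    /* Typically 12 bytes on 32-bit powerpc, 24 bytes (with tail
     * padding) on 64-bit powerpc -- so a given cache holds roughly
     * half as many nodes, which is where the hit-ratio difference
     * in pointer-chasing code would come from. */
    printf("sizeof(struct list_node) = %lu bytes\n",
           (unsigned long)sizeof(struct list_node));
    return 0;
}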