On Sat, Jul 19, 2008 at 06:53:41PM -0400, Robert Dewar wrote:
> Andi Kleen wrote:
> >"Yair Lifshitz" <[EMAIL PROTECTED]> writes:
> >>Basically, as long as the application is in the 32G range (2^32*2^3),
> >
> >Seems like a strange assumption. If the application can use 32GB
> >what stops it from using 40GB or 64GB? Systems with that much
> >memory are readily available these days.
>
> Well the idea would be specifically for applications that can live
> in 32 GB. If you compile in this mode, obviously you can't play
> this trick, and you have to be sure that malloc cooperates in
> limiting memory to 32G. On a system with allocate on use, you
> could perhaps allocate 32GB in a chunk, and then use an allocator
> within this 32GB which deals with compressed pointers.
An alternative would be to design the program to use integers in the role of pointers, and to allocate objects of a given type in pools that are contiguous in virtual address space. Then, instead of a pointer to class Foo, you have an unsigned 32-bit integer, and the true address of the object is Foo_pool_base + offset; the scaling by sizeof(Foo) happens automatically when the compiler indexes the pool base by the offset. The application could then scale beyond 32GB in many cases (especially if different kinds of objects are in different pools), since each pool can hold up to 2^32 objects of its own type. The conversion from integers to pointers for dereferencing can be done in inline functions, and real pointers can still be used where the cost is small. The application then remains portable C or C++, and no compiler modification is necessary.
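To make the idea concrete, here is a rough sketch of what this could look like in C++. The names (Foo_ref, deref, alloc_Foo) and the pool size are purely illustrative, not from any existing code, and the bump allocator omits overflow checking and freeing:

    #include <cstdint>
    #include <cstdlib>

    struct Foo { int x, y; };

    // One contiguous virtual-address range per type.
    static Foo *Foo_pool_base;      // set once at startup
    static uint32_t Foo_pool_next;  // bump-allocation cursor

    typedef uint32_t Foo_ref;       // 32-bit "pointer" to a Foo

    inline Foo *deref(Foo_ref r) {
        // Indexing scales r by sizeof(Foo) automatically.
        return Foo_pool_base + r;
    }

    inline Foo_ref alloc_Foo() {
        return Foo_pool_next++;     // no overflow check in this sketch
    }

    int main() {
        // Reserve the pool up front (a small pool here for illustration;
        // with allocate-on-use, pages are only committed when touched).
        Foo_pool_base = static_cast<Foo *>(malloc(sizeof(Foo) * (1u << 20)));
        Foo_ref r = alloc_Foo();
        deref(r)->x = 42;
        return 0;
    }

Since each pool is indexed by a full 32-bit offset, a single pool of Foo objects can span 2^32 * sizeof(Foo) bytes, which is why the scheme can scale past 32GB once different types live in different pools.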