On Mon, 14 Feb 2011, Joe Buck wrote:

> On Mon, Feb 14, 2011 at 05:57:13PM -0800, Paul Koning wrote:
> > It seems that this proposal would benefit programs that need more than
> > 2 GB but less than 4 GB, and for some reason really don't want 64 bit
> > pointers.
> >
> > This seems like a microscopically small market segment.  I can't see
> > any sense in such an effort.
>
> I remember the RHEL hugemem patch being a big deal for lots of their
> customers, so a process could address the full 4GB instead of only 3GB
> on a 32-bit machine.  If I recall correctly, upstream didn't want it
> (get a 64-bit machine!) but lots of paying customers clamored for it.
>
> (I personally don't have an opinion on whether it's worth bothering
> with).
As I've been warning recently in the context of the "operator new[]" overflow-checks discussion, even if your process is addressing 4GB in such circumstances, it can't safely use single objects of 2GB or more, and it's a security problem when malloc/calloc/etc. allow such objects to be created.  See PR 45779.  (There could well be issues with pointer comparisons as well as with pointer differences, although there at least it's possible to be consistent if you don't allow objects to wrap around both in the middle and at the end of the address space.)

-- 
Joseph S. Myers
jos...@codesourcery.com