The rule of thumb comes from UNIX days (BSD and even before that with AT&T UNIX). In order to be completely sure you would be able to swap out a program when memory became full, UNIX allocated a page of swap for every page of virtual memory a program occupied. So if vi required 256K to run, there was 256K of swap space allocated to it. The 2-to-1 ratio came from the observation that a busy UNIX time-sharing system with lots of users spent most of its time with half the users doing something that required CPU/memory resources and the other half thinking, so you could afford to overcommit memory by a factor of two.

These days, Linux uses a less straightforward, but in some ways more effective, algorithm for deciding when and where to swap. That, combined with the availability of large, cheap memory, lots of bloated programs to fill it up, and users who expect instant response, has made the old rule of thumb obsolete. But old habits die hard, and you often hear the old rules of thumb quoted without thinking about where they came from.
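If you want to see what the kernel is actually doing on a modern box, the relevant knobs live under /proc/sys/vm. Something like this (the values you get back are tunables, not recommendations -- just a quick way to poke at it):

  # how strongly the kernel prefers swapping over dropping page cache (0-100)
  cat /proc/sys/vm/swappiness

  # overcommit policy: 0 = heuristic, 1 = always allow, 2 = strict accounting
  cat /proc/sys/vm/overcommit_memory

  # what's actually sitting in swap right now
  swapon -s
  free -m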

On my own systems, I make swap huge (10 GB or more for 1 GB RAM -- Disk is cheap!) so I can mount /tmp on a tmpfs filesystem.
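For what it's worth, the /tmp-on-tmpfs part is a single line in /etc/fstab (the size= cap here is just an example -- tmpfs pages only spill into swap when memory gets tight):

  tmpfs   /tmp   tmpfs   defaults,size=2g   0  0

followed by "mount /tmp" or a reboot to pick it up.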

Rick


On Jan 22, 2008, at 8:08 PM, David Brodbeck wrote:


On Jan 21, 2008, at 5:45 PM, Ron Johnson wrote:
The old "you need 2x RAM for swap" rule is hard to forget.

I never really understood the rationale for that rule. It seems like a system with more RAM would need less swap, not more. In particular, it always seemed to me like it'd be a bit silly to use 8 gigs of disk for swap on a system with 4 gigs of RAM. Can someone explain the reasons for the 2x rule? Is there a performance boost?

