2014/09/11 0:31 "Bzzzz" <lazyvi...@gmx.com>:
>
> On Wed, 10 Sep 2014 10:08:31 -0500
> John Hasler <jhas...@newsguy.com> wrote:
>
> > That has been obsolete for at least a decade and may never have
> > applied to Linux.  IIRC it had to do with specific characteristics
> > of BSD kernels.
>
> IIRC it was 1.5xRAM.
>
> Today, the only "obligation" is to have as much swap as RAM
> if you plan to hibernate; otherwise, swap isn't even
> mandatory.

The rule was: big enough to hold a full dump of RAM if you ever have to
dump the whole of it, but not so big that finding a swapped-out page takes
a whole time slice.
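For the full-RAM-dump half of that rule, the sizes are easy to check on a
running Linux box. A minimal sketch, assuming a standard /proc/meminfo
(present on any modern Linux kernel):

```shell
#!/bin/sh
# Compare total RAM and total swap (both reported in kB by the kernel).
# Hibernation writes a RAM image to swap, so swap >= RAM is the safe case.
ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)

echo "RAM:  ${ram_kb} kB"
echo "swap: ${swap_kb} kB"

if [ "$swap_kb" -ge "$ram_kb" ]; then
    echo "swap >= RAM: a full-RAM image can fit in swap"
else
    echo "swap < RAM: hibernation to swap may not fit"
fi
```

(The comparison is deliberately pessimistic; in practice the hibernation
image is usually compressed, so somewhat less swap can work.)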

And, yes, fifteen years ago it still applied to Linux.

There are a variety of swap strategies. MSWindows tends to keep some of the
working set swapped out, aiming to eliminate the hysteresis when going from
all-in-RAM to using swap. (Or some such attempt to help the user not notice
that real hardware and software have limits.) Mac OS swaps to files,
assuming it can keep enough of the file system defragmented to be able to
allocate page file blocks without fragmentation, and it actually sort of
works. BTW, on a Mac with less than 2G of RAM, swap partitions 3 to 5 times
RAM size made sense on systems being used for heavy video and audio work.

Swapping in Linux tends to depend on the distro, and debian, as I
understand it, has (until this last year or so) had a variety of tools to
change and tune the swapping strategy. The default, however, has been a
simple strategy which, until we started seeing common RAM sizes over 2G,
gave the old rule meaning. It's still not really wrong, although we are
seeing RAM sizes that make hibernating not nearly as simple as just taking
a snapshot of RAM.
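The main knob the kernel exposes for that strategy is "swappiness", which
biases how eagerly anonymous pages get swapped out versus page cache
getting dropped. A minimal sketch of inspecting and tuning it, assuming a
standard /proc and the usual sysctl interface (the value 10 below is just
an illustrative choice, not a recommendation):

```shell
#!/bin/sh
# Read the current swappiness: historically 0-100, and up to 200 on
# newer kernels. Higher values make the kernel swap more aggressively.
cat /proc/sys/vm/swappiness

# Change it for the running system (needs root):
#   sysctl vm.swappiness=10

# Make it persistent across reboots by adding a line to /etc/sysctl.conf
# (or a file under /etc/sysctl.d/):
#   vm.swappiness = 10
```

Debian's default has long been 100... er, 60, which matches the old
all-purpose behavior; desktop users with lots of RAM often turn it down.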

Anyway, I'm inclined to lean towards the idea that the problems that
started this thread are derived from hardware issues. Maybe.

Joel Rees

Computer memory is just fancy paper,
CPUs just fancy pens.
All is a stream of text
flowing from the past into the future.
