>> >> Then why not have a really big swap file?  If swap is useful as a
>> >> second layer of caching behind RAM, why doesn't everyone with some
>> >> extra hard drive space have a 100GB swap file?
>> >>
>> > You've not understood what I said, I think.  Swap is not useful as
>> > filesystem cache.  Swap is as efficient (probably a little less)
>> > than the files on the disk.  It's RAM that's efficient as
>> > filesystem cache.
>> >
>> > Where swap comes in is the kernel can swap out pages from "stale"
>> > processes, and reclaim the RAM as filesystem cache.
>>
>> That all makes perfect sense, but if a small swap is good and a large
>> swap is not any better, I'm missing something.  Maybe the pages from
>> stale processes never total more than a small amount?  I don't see
>> how that could be.
>
> Because you're (likely) never going to be using 100GB of memory at one
> time for all your processes, let alone "idle" processes, so what's the
> point of allocating all that swap?
>
> Continuing the analogy, it's like getting a stadium-sized attic that's
> 100x bigger than the house you're building it on just to store a
> Christmas tree and a few other items.
>
> Here's another way of looking at it.  The kernel wants to use *all* your
> RAM.  RAM is fast (compared to disk).  But it wants to use the RAM for
> stuff that's actually needed most at the present time. So say you have
> 4G RAM.  You're only using maybe 1.5G memory for applications.  So the
> kernel is going to try to use the remaining 2.5G for cache when/if it
> needs to.  But let's say you're hitting the disk a lot because you're
> compiling something, then the kernel might decide it would like to cache
> more files than the 2.5G.  So it sees you have 300M of paged-in process
> memory that hasn't been used in a long while.  A better use of RAM may
> be to swap out those 300M and use it for more filesystem cache, causing
> your compilation to run faster.  But if you have a 100G swap file and
> only 300M of "idle" pages then all that extra swap isn't going to be of
> any use.  Similarly, you don't want to swap out all of the 1.5G RAM
> because some of it is actually being actively used (e.g. by the
> compiler).
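
You can watch this accounting happen on a Linux box via /proc/meminfo.
Here's a small sketch (a hypothetical helper, not from this thread) that
parses it and prints the figures being discussed above: total RAM, free
RAM, how much the kernel is currently using as filesystem cache, and how
much swap is actually in use:

```python
import os

def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of values in kB."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            info[key] = int(fields[0])  # first field is the value in kB
    return info

if __name__ == "__main__" and os.path.exists("/proc/meminfo"):
    with open("/proc/meminfo") as f:
        mem = parse_meminfo(f.read())
    for key in ("MemTotal", "MemFree", "Cached", "SwapTotal", "SwapFree"):
        print(f"{key}: {mem.get(key, 0)} kB")
    # Swap actually occupied by swapped-out "idle" pages:
    print(f"SwapUsed: {mem.get('SwapTotal', 0) - mem.get('SwapFree', 0)} kB")
```

On the 4G example above you'd expect to see Cached grow toward the
unused ~2.5G during a big compile, and SwapUsed stay small (a few
hundred MB of idle pages) no matter how large SwapTotal is.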

That all makes perfect sense.  So the reason a swap larger than maybe
1GB is not usually implemented is because idle processes don't
normally have more than a few hundred MB of pages in memory?

Wouldn't a sufficiently large swap (100GB, for example) completely
prevent out-of-memory conditions and the oom-killer?

- Grant
