Hi All! 
Sticking to Linux or NT systems, whatever you've heard is absolutely
incorrect.
The efficiency of any system goes down when it has to swap in and out
very frequently. Whenever a system fills up all its physical memory, it
starts using swap (an area on the harddisk) as extra physical memory.
This process is completely transparent to the applications running on
the system. Now, comparing the data throughput of physical memory
against that of a harddisk shows a HUGE loss of efficiency and speed
whenever data flows through swap. The reason is obvious: a harddisk is
a mechanical device, and its speed cannot compare to that of a purely
semiconductor device like RAM (physical memory). De-jargoned: RAM is
much faster than swap (space reserved on the harddisk for use as extra
memory).
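
If you want to watch this on a Linux box, here's a minimal sketch that
parses /proc/meminfo (Linux-specific; it assumes the usual
MemTotal/MemFree/SwapTotal/SwapFree fields, with values in kB):

#!/usr/bin/env python
# Minimal sketch: report RAM and swap usage on Linux by parsing
# /proc/meminfo. Assumes the kernel exposes MemTotal, MemFree,
# SwapTotal and SwapFree; values in that file are in kB.

def meminfo():
    info = {}
    for line in open("/proc/meminfo"):
        parts = line.split()
        if len(parts) >= 2 and parts[0].endswith(":") and parts[1].isdigit():
            info[parts[0][:-1]] = int(parts[1])  # value in kB
    return info

m = meminfo()
print("RAM : %d MB total, %d MB free" % (m["MemTotal"] // 1024, m["MemFree"] // 1024))
print("Swap: %d MB total, %d MB free" % (m["SwapTotal"] // 1024, m["SwapFree"] // 1024))
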
On UNIX, its clones, and the Windows NT platforms, this swap space is
reserved and pre-allocated on the harddisk: *nix systems use dedicated
swap partitions, and NT uses a variable-size pagefile. I'm sure you've
come across swap partitions under Linux and the minimum and maximum
pagefile sizes under Windows NT (a sketch for inspecting the swap areas
on Linux follows below). Windows 9x systems, on the other hand, never
reserve space for swapping/paging; they allocate harddisk space for use
as memory whenever, and in whatever amount, it is required. Severe
slowdowns arise when these systems cannot allocate the swap area they
need because the harddisk has run out of free space. Trying to act
intelligently, they start pushing unused programs out of physical
memory, and even out of the existing swap, to make room for the
applications that need memory urgently. Those applications are usually
the ones in the foreground, the ones with higher priorities, or daemons
and other system-critical processes. All this results in increased
harddisk activity, which instantly slows the system down.
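
On Linux you can see exactly which swap areas have been reserved and
how heavily they are used. A quick sketch, assuming the standard
/proc/swaps layout (Filename, Type, Size, Used, Priority, sizes in kB):

#!/usr/bin/env python
# Sketch: list the swap areas (partitions or files) a Linux system has
# activated, from /proc/swaps. Assumes the standard column layout:
# Filename, Type, Size(kB), Used(kB), Priority.

for line in open("/proc/swaps").readlines()[1:]:  # skip the header row
    name, swtype, size_kb, used_kb = line.split()[:4]
    print("%s (%s): %d MB total, %d MB used"
          % (name, swtype, int(size_kb) // 1024, int(used_kb) // 1024))
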
The better operating systems, *nix and NT, are also not spared when
their swap or pagefile spaces fill up. They simply have a better chance
of staying out of trouble, because they reserved space on their storage
devices in advance and are therefore unaffected by a completely full
harddisk.
So all this suggests that you should always supply a system with ample
swap space, pagefile size, or free harddisk space. 
No formulae exist to calculate the ideal amount of free space. It all
depends on the operating system and the primary role of the machine. A
Windows 9x system used for playing heavy games should not have less
than 512MB of harddisk space free if it has 128MB of physical memory. A
Windows NT system for the same purpose would need its pagefile set to a
minimum of 300MB and a maximum of 500MB. A Linux workstation with
enough RAM can do without any swap space at all.
You should always think in MBs, not percentages. A 95% full 40GB
harddisk still leaves 2000MB free; that much space can handle almost
anything. A 95% full 10GB harddisk leaves only 500MB, which might be
enough for some workloads but cannot handle everything.
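
To check the free space in absolute MB rather than as a percentage,
something like this does the job on any *nix (os.statvfs() is POSIX;
the path below is only an example):

#!/usr/bin/env python
# Sketch: report free space on a partition in absolute MB instead of a
# percentage. os.statvfs() is POSIX; "/" below is just an example path.
import os

path = "/"  # the partition (mount point) to check
st = os.statvfs(path)
free_mb = st.f_bavail * st.f_frsize // (1024 * 1024)
total_mb = st.f_blocks * st.f_frsize // (1024 * 1024)
print("%s: %d MB free of %d MB (%d%% full)"
      % (path, free_mb, total_mb, 100 - 100 * free_mb // total_mb))
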
If you are asking for performance, then you should certainly go in for
more physical memory (RAM). Nothing can beat that.

--Dhruv

> On Wed, 2003-02-12 at 11:09, surinder makkar wrote: 
> Hi folks,
> 
> Just a little question. I have heard that when a
> partition on a hard disk is about to get full, say if
> 95% is full , the efficiency of the system goes down. 
> 
> Can you tell me how much percentage space should be
> left free on a partition beyond which the system
> performance starts getting down and what is the basis
> of your conclusion. Is there any formula. 
> 
> Also are there any URLs which are explaining this
> thing. That would be greatly appreciated
> 
> Thanks in Advance
> 

