On Sat, 28 Feb 2015 01:22:15 +1100, Chris Angelico wrote:
> If you're trying to use the pagefile/swapfile as if it's more memory ("I
> have 256MB of memory, but 10GB of swap space, so that's 10GB of
> memory!"), then yes, these performance considerations are huge. But
> suppose you need to run a program that's larger than your available RAM.
> On MS-DOS, sometimes you'd need to work with program overlays (a concept
> borrowed from older systems, but ones that I never worked on, so I'm
> going back no further than DOS here). You get a *massive* complexity hit
> the instant you start using them, whether your program would have been
> able to fit into memory on some systems or not. Just making it possible
> to have only part of your code in memory places demands on your code
> that you, the programmer, have to think about. With virtual memory,
> though, you just write your code as if it's all in memory, and some of
> it may, at some times, be on disk. Less code to debug = less time spent
> debugging. The performance question is largely immaterial (you'll be
> using the disk either way), but the savings on complexity are
> tremendous. And then when you do find yourself running on a system with
> enough RAM? No code changes needed, and full performance. That's where
> virtual memory shines.
> ChrisA
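The "write your code as if it's all in memory" point above can be seen from user space with the stdlib's mmap module: you address a file as though it were one big byte array, and the kernel pages in only the parts you actually touch. A minimal sketch (the file size and offset are arbitrary choices for illustration):

```python
# Demand paging from user space: mmap a file larger than we ever read,
# then touch a single byte. Only the page around that byte is faulted
# in; the rest of the file never leaves disk.
import mmap
import os
import tempfile

# Scratch file standing in for a "huge" data set.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.truncate(16 * 1024 * 1024)  # 16 MiB, none of it read yet
    path = f.name

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:
        mm[8 * 1024 * 1024] = 0x41   # write one byte mid-file
        print(mm[8 * 1024 * 1024])   # 65

os.unlink(path)
```

The program's logic is identical whether the file fits in RAM or not, which is exactly the complexity saving ChrisA describes.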
I think there is a case for bringing back the overlay file, or at least
loading larger programs in sections, pulling in routines only as they are
required. That could speed up the start time of many large applications.
LibreOffice, for example: I rarely need the mail merge function, the word
count, and many other features, which could be added into the running
application on demand rather than all at once. Obviously, with large
memory and virtual memory there is no need to un-install them once
loaded.

--
Ralph's Observation: It is a mistake to let any mechanical object
realise that you are in a hurry.
--
https://mail.python.org/mailman/listinfo/python-list
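For what it's worth, Python itself already supports this kind of on-demand loading via importlib.util.LazyLoader: the module object exists immediately, but its code only runs on first attribute access. A sketch, using the stdlib's json module as a stand-in for a rarely used feature like mail merge:

```python
# Lazy (on-demand) module loading with the stdlib's LazyLoader.
# The module's code is not executed until an attribute is first used.
import importlib.util
import sys

def lazy_import(name):
    """Return a module that is actually loaded only on first use."""
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)  # registers the module; execution deferred
    return module

json = lazy_import("json")           # nothing executed yet
print(json.dumps({"loaded": True}))  # first use triggers the real import
```

An application could register all its feature modules this way at startup and pay the import cost only for the features the user actually invokes.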