Brian wrote:
> Hi Rbt,
>
> To give an example of processing a lot of data, I used Python to read
> and process every word in a single text file that contained the entire
> King James Bible version. It processed it within about one second --
> split the words, etc. Worked quite well.
>
> Hope this helps,

Hardly "big" by the OP's standards at only 4.5 MB (and roughly 790,000
words, if anyone's interested).
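Brian didn't post his code, but a minimal sketch of that kind of one-shot
processing might look like the following (the file name is made up; at
4.5 MB it is perfectly reasonable to slurp the whole file at once):

    # Read the whole file, split it into whitespace-separated words,
    # and count them. "kjv.txt" is a hypothetical path to the text.
    with open("kjv.txt", encoding="utf-8") as f:
        text = f.read()    # the entire 4.5 MB fits comfortably in memory

    words = text.split()
    print(len(words), "words")    # roughly 790,000 for the KJV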
The problem with processing terabytes is frequently the need to handle
the data with techniques that keep virtual memory from being swamped.
The last thing you want to do under such circumstances is create an
in-memory copy of the entire data set, since on most real computers this
will induce swapping; processing the data incrementally avoids that (see
the sketch after my sig).

regards
 Steve
--
Steve Holden         +1 703 861 4237  +1 800 494 3119
Holden Web LLC            http://www.holdenweb.com/
Python Web Programming    http://pydish.holdenweb.com/
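PS: As a minimal sketch of that incremental approach, assuming a
newline-delimited text file (the path is hypothetical): Python's file
objects iterate lazily, one line at a time, so memory use stays flat no
matter how large the file is.

    # Count word frequencies without ever holding the whole file in memory.
    # "huge.txt" is a hypothetical path to an arbitrarily large text file.
    from collections import Counter

    counts = Counter()
    with open("huge.txt", encoding="utf-8") as f:
        for line in f:                # lazy iteration: one line at a time
            counts.update(line.split())

    print(counts.most_common(10))     # the ten most frequent words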