I've got a short script that loops through a number of files and processes them one at a time. I had a bit of time today and figured I'd rewrite the script to process the files 4 at a time by using 4 different instances of Python. My basic loop is:

    for i in range(0, len(filelist), CPU_COUNT):
        for z in range(i, min(i + CPU_COUNT, len(filelist))):
            doit(filelist[z])

with the function doit() calling up the external program that does the heavy lifting. Setting CPU_COUNT to 1 or 5 (I have 6 cores) makes no difference in total speed. I'm processing about 1200 files and the total duration is around 2 minutes; no matter how many cores I use, the total stays within a 5 second range.

This is not a big deal ... but I really thought that throwing more processors at a problem was a wonderful thing :) I figure that the cost of loading the Python libraries, reading my source file, and writing the result out is pretty much I/O bound, but that is just a guess. Maybe I need to set my sights on bigger, slower programs to see a difference :)
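
In case it helps to see what I'm aiming for, here's a rough sketch of the same idea done with concurrent.futures instead of my hand-rolled batching. The names are stand-ins: doit() here just runs a made-up external program ("myprog") on one file and waits for it, and the glob pattern is a placeholder for however the real file list gets built:

    import glob
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    CPU_COUNT = 4

    def doit(path):
        # Stand-in: run the (hypothetical) external program "myprog"
        # on one file and wait for it to finish.
        subprocess.run(["myprog", path], check=True)

    # Stand-in for however the real file list is built.
    filelist = sorted(glob.glob("*.txt"))

    # Run up to CPU_COUNT files at once. Threads are fine here because
    # each worker just sits waiting on an external process.
    with ThreadPoolExecutor(max_workers=CPU_COUNT) as pool:
        list(pool.map(doit, filelist))

Of course, if the per-file work really is I/O bound (or dominated by start-up costs in the external program), this won't be much faster either; it just makes the fan-out explicit.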
for i in range(0, len(filelist), CPU_COUNT): for z in range(i, i+CPU_COUNT): doit( filelist[z]) With the function doit() calling up the program to do the lifting. Setting CPU_COUNT to 1 or 5 (I have 6 cores) makes no difference in total speed. I'm processing about 1200 files and my total duration is around 2 minutes. No matter how many cores I use the total is within a 5 second range. This is not a big deal ... but I really thought that throwing more processors at a problem was a wonderful thing :) I figure that the cost of loading the python libraries and my source file and writing it out are pretty much i/o bound, but that is just a guess. Maybe I need to set my sights on bigger, slower programs to see a difference :) -- **** Listen to my FREE CD at http://www.mellowood.ca/music/cedars **** Bob van der Poel ** Wynndel, British Columbia, CANADA ** EMAIL: b...@mellowood.ca WWW: http://www.mellowood.ca -- https://mail.python.org/mailman/listinfo/python-list