On Sat, 08 Jan 2005 14:22:30 GMT, Lee Harr <[EMAIL PROTECTED]> wrote:
>>> [http://www.gotw.ca/publications/concurrency-ddj.htm]. It argues that the
>>> continuous CPU performance gain we've seen is finally over, and that future
>>> gain will primarily be in the area of software concurrency taking advantage
>>> of hyperthreading and multicore architectures.
>>
>> Well, yes. However, it's not as bad as it looks. I've spent a good part
>> of my professional life with multiprocessors (IBM mainframes) and
>> I have yet to write a multi-threaded program for performance reasons.
>> All of those systems ran multiple programs, not single programs
>> that had to take advantage of the multiprocessor environment.
>> Your typical desktop is no different. My current system has 42
>> processes running, and I'd be willing to bet that the vast majority
>> of them aren't multi-threaded.
>
> Exactly. If every one of your processes had its own 2 GHz processor
> running nothing else, I think you would be pretty happy. Your OS
> had better be well-written to deal with concurrent access to
> memory and disks, but I think for general application development
> there will be huge speed boosts with little need for new
> programming paradigms.
Not likely. How often do you run 4 processes that are all bottlenecked on CPU? It's not a common usage pattern. If you have 16 CPUs, and 15 of them are running mostly idle processes while that *one* process you wish would hurry the heck up and finish has the 16th pegged at 100% usage, you are not a happy camper.

For the case where you do have a lot of competing, unrelated processes, SMP is no doubt a big win automatically. But there will still need to be language innovations to make it easier to develop software that can benefit from the additional hardware in the more common case of an individual CPU-hungry process.

Jp
--
http://mail.python.org/mailman/listinfo/python-list
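To make the point concrete, here is a minimal sketch of what it takes today to get that one CPU-pegged process spread across several CPUs: you have to partition the work yourself and farm the pieces out to worker processes. The workload (a trial-division prime counter) and the chunk sizes are made up for illustration; the point is that the parallelism is entirely manual.

```python
# Sketch: manually splitting a single CPU-bound job across cores with
# multiple worker processes, via the stdlib multiprocessing module.
# count_primes and the chunk boundaries are hypothetical, just to have
# something that pegs a CPU.
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division -- deliberately CPU-bound."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # The programmer, not the language, decides how to partition the work:
    # four chunks of the range, one per worker process.
    chunks = [(i * 25_000, (i + 1) * 25_000) for i in range(4)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(count_primes, chunks))
    print(total)
```

Nothing here happens automatically: if the work can't be carved into independent chunks, or the chunks are unbalanced, the extra CPUs sit idle, which is exactly why better language support matters.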