Besides the technical issues, what is the advantage of parallel agent-based simulations? Can you achieve more with a billion agents than with a few thousand, or is that just an attractive-sounding possibility? An ant colony with a billion ants will not be significantly different from, or more intelligent than, a colony with 10,000 ants. A swarm of 10,000 birds will look similar to a swarm of 100 birds, only a bit more fine-grained.
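To make the "only more fine-grained" point concrete, here is a minimal sketch (my own illustration, not anything from the post) of a flocking-style update in which every agent steers toward the flock's average position and velocity. The rule, the parameters (`cohesion`, `alignment`), and the 1-D setup are all assumptions chosen for brevity; the point is that the exact same rule runs for 100 or 10,000 agents, and scale changes only the cost, not the kind of behavior.

```python
# Hypothetical minimal flocking sketch: each agent steers toward the
# average position and velocity of the whole flock (1-D for brevity).
import random

def step(positions, velocities, cohesion=0.01, alignment=0.05):
    """One synchronous update of all agents."""
    n = len(positions)
    mean_pos = sum(positions) / n
    mean_vel = sum(velocities) / n
    # Every agent applies the identical local rule, regardless of n.
    new_vel = [v + cohesion * (mean_pos - p) + alignment * (mean_vel - v)
               for p, v in zip(positions, velocities)]
    new_pos = [p + v for p, v in zip(positions, new_vel)]
    return new_pos, new_vel

def simulate(n_agents, steps=100, seed=0):
    """Run a flock of n_agents and return its spatial spread at the end."""
    rng = random.Random(seed)
    pos = [rng.uniform(-1.0, 1.0) for _ in range(n_agents)]
    vel = [rng.uniform(-0.1, 0.1) for _ in range(n_agents)]
    for _ in range(steps):
        pos, vel = step(pos, vel)
    return max(pos) - min(pos)

# Same rule, different scale: qualitatively similar cohesive flocks.
small = simulate(100)
large = simulate(10_000)
```

Running both sizes yields cohesive flocks that differ in granularity but not in kind, which is exactly the intuition in the paragraph above.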
Is a simulation with millions or billions of agents somehow qualitatively different from a simulation with only a few thousand agents? Certainly not if they are all alike, if they all do the same thing, or if they all "live" in the same environment. It looks very difficult to construct a billion different agents, or to assign different tasks to billions of agents.

In evolutionary systems, AI, and ALife, scale certainly matters: a typical human brain has billions of neurons, a chromosome encodes roughly a gigabyte of "program" with a billion bytes, and evolution on Earth took a few billion years to get from the earliest life forms to today's computer nerd. If we expect something interesting from an evolutionary ALife system, do we have to let it run for a few billion years with a billion agents in order to get a "genetic code" of a billion bytes? I bet the first true AI will comprise more than a billion bytes of code, too (a few films already take up several gigabytes of data). Somehow the lower bound for interesting behavior seems to be a billion interacting units - why is this so?

-J.

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
lectures, archives, unsubscribe, maps at http://www.friam.org
