Dear Friammers, 

This is going to be one of those longish Thompson emails, but I think even
for those who are interested primarily in computation, it might have value.
I am reading a Pop book on the history of the Institute for Advanced Study.
It is sort of the Institute for Advanced Study equivalent of Waldrop's
"biography" of the Santa Fe Institute, COMPLEXITY, but the author, George
Dyson, is a much stronger historian, and the level of detail about the
people involved in the bomb- and computer-projects of the 40's and 50's is
extraordinary.  The book is called TURING'S CATHEDRAL.  I thought the
following passage might be interesting to you-all, so I keyed it in.  

 

Monte Carlo originated as a form of emergency first-aid, in answer to the
question:  What to do until the mathematician arrives?  "The idea was to try
out thousands of such possibilities and, at each stage, to select by chance,
by means of a 'random number' with suitable probability, the fate or kind of
event, to follow it in a line, so to speak, instead of considering all
branches," Ulam explained.  "After examining the possible histories of only
a few thousand, one will have a good sample and an approximate answer to the
problem."  The new technique propagated widely along with the growing number
of computers on which it could run.  Refinements were made, especially the
so-called Metropolis algorithm (later the Metropolis-Hastings algorithm)
that made Monte Carlo even more effective by favoring more probable
histories from the start.  "The most important property of the algorithm is
that deviations from the canonical distribution die away," explains
Marshall Rosenbluth, who helped invent it.  "Hence the computation converges
on the right answer!  I recall being quite excited when I was able to prove
this."  [Dyson, p 191]

 

I would love to have this explained to me, either at our Friday meeting or
here, on the list.  Ulam's first sentence, above, seems to contradict all
the others.  I thought the whole point was to consider all the branches.
Confused, as usual. 

 

My personal interest in the passage arises from its relation to Charles
Sanders Peirce's account of induction.  Recall from my earlier diatribes
that (1) Peirce is the inventor of much of what we learned in graduate
schools as "statistics" and that (2) Peirce is a brick-wall monist.  Human
experience  (not just individual experience) is all that there is, and
reality is therefore that upon which human cognition will converge in the
very long run.  Peirce's account of induction goes [very] roughly like this:
Experience either will, or will not, converge.  (There either will, or will
not, be a value upon which our measurements of the acceleration of
gravity will converge.)  Matters about which experience converges are
particularly valuable to organisms, and therefore they (we) have developed
cognitive mechanisms (habits) to track them (see Hume?).  If our
measurements do converge, our confidence in the "location" upon which they
will converge gradually grows (or becomes more precise) as experience
continues.  Since any random process produces periods of
convergence, any such induction is always a hypothesis subject to
disconfirmation.  How similar is this to the Monte Carlo idea?  
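
To make the convergence picture concrete, here is a toy example (entirely
my construction, not Peirce's): repeated noisy "measurements" of a made-up
true value of g.  The running mean settles toward one location, and the
standard error, our "confidence," tightens as experience accumulates.

```python
import random
import statistics

# Invented measurement scenario: Gaussian noise of 0.5 around a
# stand-in "true" value of 9.81.  Nothing here is real data.
random.seed(1)
TRUE_G = 9.81

readings = [random.gauss(TRUE_G, 0.5) for _ in range(10_000)]

# The mean settles toward a single value as experience accumulates,
# and the standard error of the mean shrinks as more readings arrive.
mean_100 = statistics.mean(readings[:100])
mean_all = statistics.mean(readings)
se_100 = statistics.stdev(readings[:100]) / 100 ** 0.5
se_all = statistics.stdev(readings) / len(readings) ** 0.5
```

As with Monte Carlo, the convergence is statistical, not guaranteed by any
finite run, which is why the induction stays a hypothesis subject to
disconfirmation.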

 

 

Nicholas S. Thompson

Emeritus Professor of Psychology and Biology

Clark University

http://home.earthlink.net/~nickthompson/naturaldesigns/

 

============================================================
FRIAM Applied Complexity Group listserv
Meets Fridays 9a-11:30 at cafe at St. John's College
to unsubscribe http://redfish.com/mailman/listinfo/friam_redfish.com
