I meant to expound a little on this:

>  (Not to mention that some algorithms are more scalable that others,  I want 
> to talk about that in a minute.)

In humans we often try to measure "intelligence" with tests, and we call them 
IQ tests.   It has been said that IQ tests really only measure your ability to 
take IQ tests, because they fall quite short of actually summing up 
intelligence in a single number.    In fact, no test can; it's probably 
impossible to accurately quantify all the qualities of intelligence.  Many 
people are geniuses at some things and well below average at others.   

Nevertheless, in each human is the machinery required for intelligence, a kind 
of computer we call the brain.   We don't really understand very much about how 
it works,  but it seems to consist of many key features that AI tries to 
emulate: pattern recognition,  memory,  the ability to reason,  and so on.

I would like to get the group's thoughts on this - I'm on shaky ground here 
and don't claim any particular insights - but I have a general theory about 
AI.     It seems to me that you can view "intelligence" as a kind of physics.   
I would compare it to horsepower, or "power" in physics.   You can use 
whatever unit you choose, perhaps the "watt".

Work is not the same as power in physics.  In physical terms, work is the 
product of force and the distance over which it acts.  In automotive terms it 
takes the same amount of "work" to move an equally heavy vehicle 100 miles, 
for instance, whether the trip takes an hour or a day; power is how quickly 
that work gets done.   Of course I'm ignoring other physical factors such as 
air resistance in order to simplify.
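The work/power distinction can be made concrete with a toy calculation. All 
the numbers below are invented for illustration (a constant 500 N driving 
force, air resistance ignored):

```python
def work_joules(force_newtons, distance_meters):
    """Work = force x distance; independent of how long the trip takes."""
    return force_newtons * distance_meters

def power_watts(work, seconds):
    """Power = work per unit time."""
    return work / seconds

distance = 160_000.0   # roughly 100 miles, in meters
force = 500.0          # assumed constant driving force, in newtons

w = work_joules(force, distance)    # identical for both cars below
fast = power_watts(w, 2 * 3600)     # fast car finishes in 2 hours
slow = power_watts(w, 8 * 3600)     # slow car takes 8 hours

print(w, fast, slow)   # same work either way; the fast car needs 4x the power
```

Both cars do exactly the same work; only the power (work per unit time) 
differs, which is the quantity I'm comparing intelligence to.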

A scalable program can do any amount of work if you are willing to wait any 
amount of time.  But some programs have much more horsepower than others.  
MoGo is a high-horsepower engine.   Brown is a seized-up engine with so many 
inefficiencies that it works against itself.  It can't really do any work.  

My basic idea here is that intelligence isn't static.  Even in humans, as an 
approximation, it isn't about whether you can solve a problem or not,  it's 
more about how long it takes you to solve it.     We don't usually think about 
it that way,  but I believe it to be (more or less) true. 

With AI, computer memory is analogous to human memory.  It's a lot like 
memoization in computer science:   once we learn something for the first time, 
we can use it over and over again throughout our lifetimes without having to 
rediscover it.  
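Memoization is easy to show in a few lines. A sketch using Python's standard 
library cache (Fibonacci is just a stand-in for any expensive computation):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Naive recursive Fibonacci, but each value is computed only once;
    the cache remembers ('learns') results so they are never rediscovered."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(60))  # returns instantly; uncached, the same recursion would
                # take on the order of fib(60) additions to finish
```

The first call does the work; every later call for the same input is a 
lookup, which is exactly the "learn once, reuse forever" idea above.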

Of course, it has been said that the human brain is "hardwired" for many 
things.  Much of what we need to survive and be intelligent we don't have to 
learn - it was in us the day we were born.   

Sometimes, people who learn a lot of facts are considered intelligent.  Even 
if they don't really understand much or have much practical wisdom,  we are 
impressed by someone who has so many facts stored in his brain, and this 
probably should be considered one facet of intelligence.  

Computer Go programs have all these elements in them.   They have memory, 
hard-coded knowledge, reasoning ability (a life-and-death analysis can be 
considered a kind of reasoning ability, as can an alpha-beta search), and so 
on.    

Sometimes we consider the ability to "figure something out" to be the main 
component of IQ, as opposed to just "knowing the answer."   And this is 
probably fair, because one is like having a fish and the other is like 
learning how to fish, a more useful skill in the long run.    

A scalable program has the ability to figure things out.  A non-scalable 
program must be considered the type of AI that either "knows the answer" or 
doesn't.   Sometimes we pretend there is no distinction because we are so 
time-conscious:   we say it doesn't matter whether it could figure something 
out, because we are too impatient to wait for the answer.   

We can measure the IQ of a Go program in a (very) rough way from the Elo 
strength of the program and the amount of running time it uses to produce that 
level of play.   It's wrong not to consider time in this formula.   In human 
IQ tests,  the clock is part of the test, and rightly so.  Almost every 
problem on an IQ test is of a nature that you could figure out the answer 
eventually (if you are persistent and focused), but the clock holds you back.   
Time is an important variable in human intelligence too:  the accomplishments 
of the most brilliant scientists are often measured by the total body of 
knowledge they are able to contribute in a lifetime, as well as by the quality 
of that knowledge.   
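One hypothetical way to turn this into a number - entirely my own sketch, 
not a standard metric, and the 100-point-per-doubling discount is an arbitrary 
choice:

```python
import math

def rough_iq(elo, seconds_per_move):
    """Hypothetical 'program IQ': Elo strength discounted by the thinking
    time used to reach it.  A program that reaches the same Elo with half
    the time scores higher; the discount rate of 100 Elo per doubling of
    time is invented purely for illustration."""
    return elo - 100.0 * math.log2(seconds_per_move)

# Two imaginary programs: a fast one and a slower, slightly stronger one.
print(rough_iq(1800, 1))    # fast program, 1 second per move
print(rough_iq(1900, 16))   # stronger on the board, but 16x slower
```

Under this (made-up) discount, the fast program comes out "smarter" even 
though the slow one is rated 100 Elo higher - which is exactly the 
Lazarus/GnuGo situation described below.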

So my theory here is that AI is not a static quantity divorced from time; 
time is a very important consideration.   Every reasonably written chess 
program, for example, is equally strong if you don't time-constrain it.   But 
only the best ones get considered "strong" or "intelligent."   The programs 
that play poorly are called "stupid," but they are only stupid because they 
are not efficient,  not because they can't (eventually) figure out the right 
move.  

The "intelligent" or strong programs are like Ferraris, and the "stupid" ones 
with low IQ are like the old budget cars with four- or six-cylinder engines 
that had no passing power.   But both can make a cross-country trip equally 
well.   

As I mentioned previously, I did the IQ test with an early version of Lazarus 
and compared it to a stable version of GnuGo.  Even though Lazarus had a 
higher Elo rating on CGOS,  I concluded that GnuGo had a higher IQ.    The 
total amount of "work" that GnuGo can do is severely limited by a very tiny 
gas tank.  It has only enough fuel (energy) to do a quarter mile.   So Lazarus 
will beat it in any long race, but if Lazarus is restricted to the same short 
time rate GnuGo uses,  GnuGo will win.    Even though GnuGo is not scalable, 
we can pretend that if it could continue to use its "intellect" effectively at 
longer time controls, it would beat Lazarus at any level.
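The gas-tank metaphor can be sketched numerically. Every number here is 
invented; the point is only the shape of the two curves, not the actual 
strengths of Lazarus or GnuGo:

```python
import math

def elo_scalable(seconds):
    """Imaginary scalable program: keeps gaining a fixed amount of Elo
    for every doubling of thinking time (how MC/UCT programs tend to
    behave over the ranges people have measured)."""
    return 1500.0 + 120.0 * math.log2(seconds)

def elo_plateau(seconds):
    """Imaginary non-scalable program: a tiny 'gas tank' -- extra time
    beyond a few seconds buys almost nothing."""
    return min(1750.0, 1700.0 + 10.0 * seconds)

for t in (1, 4, 16, 64):
    print(t, elo_scalable(t), elo_plateau(t))
```

At short time controls the plateaued program wins; past the crossover the 
scalable one pulls ahead and never looks back, which is why I'd say the 
scalable program can do more total work despite losing the sprint.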

I could give my program to Lukasz Lew for an engine overhaul and rebuild.  He 
could boost the horsepower by making the engine parts more efficient.  Even if 
he changed nothing in the program other than its speed,   it would be an IQ 
upgrade.   It would be snappier, quicker to figure things out, and better.  

I don't think you can say that Lazarus is really stronger than that particular 
version of GnuGo, even though it rated higher on CGOS.   I think it's more 
correct, at least in some sense, to view these as handicap games - even though 
the handicap is not explicitly defined.   I can beat stronger players at chess 
if we make the appropriate time-clock adjustments - that doesn't imply I am 
"smarter" or have more chess IQ.

In this model,  I claim that AI (or IQ) is not the total amount of "work" you 
can do (by some measure) but a function of the amount of work you can do in a 
given amount of time.    In the field of AI research, much is being done today 
that was not possible yesterday.   Neural nets would not have been very 
feasible in the days when the 8086 ruled.   In fact, UCT and Monte Carlo for 
Go would have been dead ends 20 years ago. 

I totally agree with those who say it's important to find "smarter" ways to do 
things, but I also believe this is just another way to increase the amount of 
useful work that can be done in a given amount of time.    A finely tuned 
racing engine isn't just about raw power; it's about efficiency too, because 
efficiency gives you more power.   

The general rule (in my opinion) is that playing strength will require a huge 
amount of "power," because that's what AI is.  This in no way implies that a 
program should not be "efficient" or that it should foolishly squander 
resources (as an internal combustion engine does).   Instead it should be as 
efficient as possible, specifically so that it can do more work.   And you're 
not going to squeeze much water out of a rock.  You're not going to get a free 
ride.  You are not going to produce a strong program that doesn't do an 
enormous amount of work.   Naturally, you want to do that work as efficiently 
as possible.   The reason you want the work to be as efficient as possible is 
so that you can do even more work, not because you are seeking the holy grail 
of a program that plays like a master with a few lines of clever code and a 
constant-time algorithm.    

- Don
_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/