Some people on this group have claimed that computer go is decades
behind computer chess.  In many ways this is not true; the perception
is based in part on the fact that it's much harder to write a go
program that plays very well in human terms.  But in other ways, it
IS true.

One thing computer chess has had for a very long time, and that is
practically absent in go, is a rating list.  It has always been
possible to identify which programs are best and where they stand
relative to one another.  There are agencies that constantly play
hundreds of thousands of games to track progress and build accurate
rating lists of the programs running on various hardware.
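
To be concrete about what such a list involves: at bottom it is
nothing more than feeding a large stream of game results through a
rating update.  Here is a rough Python sketch of plain Elo
bookkeeping - the engine names, the K-factor of 16, and the 1600
starting rating are all invented for illustration:

    # Plain Elo bookkeeping: feed in game results, get back a rating list.
    # K-factor and starting rating are arbitrary illustration values.

    def expected_score(r_a, r_b):
        # Probability that A beats B under the Elo model.
        return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

    def update(ratings, winner, loser, k=16.0):
        # Adjust both ratings after one decisive game (ignoring draws).
        ra, rb = ratings[winner], ratings[loser]
        ea = expected_score(ra, rb)            # expected score of the winner
        ratings[winner] = ra + k * (1.0 - ea)
        ratings[loser] = rb - k * (1.0 - ea)

    ratings = {"EngineA": 1600.0, "EngineB": 1600.0, "EngineC": 1600.0}
    results = [("EngineA", "EngineB"), ("EngineA", "EngineC"),
               ("EngineB", "EngineC"), ("EngineA", "EngineB")]
    for winner, loser in results:
        update(ratings, winner, loser)

    for name, rating in sorted(ratings.items(), key=lambda x: -x[1]):
        print(name, round(rating))

The real agencies use more careful machinery than this (BayesElo-style
maximum-likelihood fits, fixed hardware classes, large engine pools),
but the principle is the same: a huge number of games on one
consistent scale.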

Such a rating list is a tremendous impetus for improvement.  Is there
any such thing in go?  If there is, I stand corrected.

This can also help us chart the best way forward.  In computer chess,
if a program is even a little better than the previous best, that is
clear public knowledge.  Suddenly everyone wants to know "how they did
it" and invariably it is discovered.  It is hard to keep secrets in
computer chess, and that is a good thing unless you are marketing a
program commercially.

Right now we know that Mogo dominates in 9x9.  Without CGOS this would
be speculation based on who won the last tournament.  But CGOS is not
the right way, although it is a useful tool.  There needs to be some
kind of testing agency that is fair, unbiased, and visible, with
everything out in the open.

In computer go the only real instrumentation I am aware of is who won
the last tournament.  If you are commercial and didn't win the last
tournament, you advertise just the ones you did win and control the
perception of your program's strength that way.  But with some kind of
rating agency you cannot run and hide.  If your program stinks you can
simply not submit it to the agency, but then you cannot claim anything
with any reasonable credibility.

The reason I made this post is that I wondered how good Mogo is at
19x19, and I don't have a definitive answer.  Is Many Faces a lot
better than Mogo at 19x19?  Which program is the best, and by how
much?  Do some programs run better on certain hardware?  Does anyone
have a precise answer that is more than a feeling, hunch, or anecdotal
(subjective) evidence?


Anyway, I would propose the following experiment:

  1. Test Mogo at 19x19 against one of the strong commercial programs.
  2. Test at many levels.
  3. Test the scalability of Mogo and other strong commercial programs.
  4. Extrapolate.

There are many ways to do this, but if the question is whether
building some POWERFUL hardware could get us a dan-level program, then
we can extrapolate from these software results to estimate what the
extra hardware would buy us.
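
To make the extrapolation step concrete, here is a rough sketch of
what I have in mind; every number in it is invented purely for
illustration.  Measure the win rate against a fixed reference opponent
at several time levels, convert each win rate to an Elo difference,
fit a line, and project it out to the extra doublings the big hardware
would buy:

    import math

    def winrate_to_elo(p):
        # Convert a win rate against a fixed opponent into an Elo difference.
        return -400.0 * math.log10(1.0 / p - 1.0)

    # (doublings of thinking time, win rate vs. the reference opponent)
    # All figures are made up for the sake of the example.
    levels = [(0, 0.30), (1, 0.38), (2, 0.47), (3, 0.55)]

    elos = [(d, winrate_to_elo(p)) for d, p in levels]

    # Least-squares fit of Elo vs. doublings: gain per doubling plus intercept.
    n = len(elos)
    sx = sum(d for d, _ in elos)
    sy = sum(e for _, e in elos)
    sxx = sum(d * d for d, _ in elos)
    sxy = sum(d * e for d, e in elos)
    gain_per_doubling = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - gain_per_doubling * sx) / n

    # Project to much faster hardware, e.g. 8 more doublings (256x the speed).
    for extra in (4, 6, 8):
        projected = intercept + gain_per_doubling * extra
        print("+%d doublings: projected Elo diff %+.0f" % (extra, projected))

Whether the curve really stays that linear far out is exactly what the
scalability test has to establish; the projection only tells you
whether the hardware route looks worth pursuing.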

We don't have to test against humans if our sole purpose is to see
which way to proceed.  Just test Mogo against programs such as Go++,
Many Faces, and others at very deep levels and see where they stand.

By the way, how programs do in tournaments doesn't cut it.  Unless
your program is significantly dominant, you need thousands of games to
be able to say it's better with reasonably high certainty.
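
To put a rough number on "thousands": treat each game as a coin flip
and ask how many games it takes before the measured win rate is
distinguishably above 50%.  A back-of-the-envelope calculation using
the normal approximation to the binomial (the 95% confidence level is
just a conventional choice):

    import math

    def games_needed(p, z=1.96):
        # Games needed so a true win rate p is distinguishable from 50%
        # at roughly 95% confidence (normal approximation to the binomial).
        margin = p - 0.5
        return math.ceil((z / margin) ** 2 * p * (1.0 - p))

    for p in (0.60, 0.55, 0.52):
        print("true win rate", p, "-> about", games_needed(p), "games")

A 60% winner shows up in under a hundred games, but a 52% winner (a
real improvement of roughly 14 Elo) takes a couple of thousand.  A
single tournament result tells you almost nothing at that resolution.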

- Don