The 9x9 scalability study has been a huge success, with 35 CPUs
participating and several volunteers.  This means we got about a month
of testing done per day and now have the equivalent of a year's worth
of data or more.  We are considering whether to extend the study a bit
further to test some more versions of Mogo.  Here are the results so far:

    http://cgos.boardspace.net/study/

There are over 7000 games played at very high levels.     If there is
enough interest,  I will collect all the games together in one place and
make them available when the study is complete.   (This depends on all
the participants sending me a copy of their database(s).)

It appears that Mogo, despite internal limits, continues to scale
beyond the point where we are currently testing.  It's my understanding
that because of memory limits Mogo must garbage collect tree nodes that
could be useful later, and this would hurt its strength at the higher
levels.  Nevertheless, the rating plot tells the story: Mogo scales
wonderfully, but not linearly, and you can see a nice gradual curve in
the plot.
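
For anyone who wants to put a number on the shape of that curve, here
is a minimal sketch (in Python, with made-up placeholder ratings rather
than the actual study data) of computing the Elo gain per doubling of
search effort.  Roughly constant gains would mean near-linear scaling
in doublings; steadily shrinking gains produce the gradual curve we see.

    # A minimal sketch -- NOT the study data.  The ratings below are
    # made-up placeholders; substitute the real numbers from the study
    # page at http://cgos.boardspace.net/study/ .
    ratings = [1650, 1830, 1985, 2115, 2220, 2300, 2360]  # successive doubling levels

    # Elo gained at each successive doubling of search effort.
    gains = [b - a for a, b in zip(ratings, ratings[1:])]

    for level, gain in enumerate(gains, start=1):
        print("doubling %d: +%d Elo" % (level, gain))

    # Constant gains => roughly linear in doublings; shrinking gains =>
    # the gradual curve visible in the rating plot.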

Now we have something we can argue about for weeks.  Why is it not
mostly linear?  Could it be the memory issue I just mentioned?  Or, as
David Fotland suggests, perhaps we are close enough to the limit that
additional strength increases are hard to come by?  My guess is that it
is a combination of both factors.  At the current level of the
strongest Mogo version, Mogo takes roughly an hour per game, or about
45 minutes on a Core 2 Duo.  Since a 9x9 game lasts only a few dozen
moves, that probably translates to perhaps a couple of minutes per move.

FatMan seems to hit some kind of hard limit rather suddenly.  It could
be an implementation bug or something else; I don't really understand
it yet.  It's very difficult to test a program for scalability on your
own, since you are limited by computer resources, so this study turned
out to be a great opportunity to discover the problem.  Of course this
is something else we can argue about :-)

When this study is complete, we would like to do a 19x19 study.  It
seems pragmatic to have two programs in the study, just as we do in the
9x9 study.  We already have Mogo, but we are looking for someone to
volunteer their program.  Here is what we need:

1.  One of the stronger scalable programs. 

2.  A 32-bit binary that runs on Linux.  A 64-bit binary is also
welcome if you can make one, but it is not required, since it appears
most 64-bit Linux machines can run 32-bit binaries anyway.

3.  No opening book, or at least an extremely limited one (Mogo always
plays e5, but that seems to be the extent of its "hard coded" opening
system).

4.  Ability to have fine control over the number of nodes searched per
move (see the sketch after this list).

5.  Should be able to scale up without hitting a hard memory limit when
thinking for as long as 5 or 10 minutes per move.

6.  Should make reasonable use of memory.  The test machines seem to
have about half a gigabyte of memory and run two programs at once, so
your program should be able to run reasonably well on modest hardware.
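
To make item 4 concrete, here is a rough sketch of the kind of control
we mean, assuming the engine speaks GTP on stdin/stdout.  The engine
path and the "set_playouts" command below are hypothetical -- every
program exposes its node/playout limit differently (a command-line
flag, a custom GTP command, etc.); what matters is that the count per
move can be fixed exactly and repeatably.

    # A minimal sketch, assuming a GTP engine.  "./your_engine" and
    # "set_playouts" are HYPOTHETICAL; substitute your program's actual
    # binary and its own way of fixing nodes per move.
    import subprocess

    def gtp(engine, command):
        """Send one GTP command and collect the response (ends with a blank line)."""
        engine.stdin.write(command + "\n")
        engine.stdin.flush()
        reply = []
        while True:
            line = engine.stdout.readline()
            if line.strip() == "":
                break
            reply.append(line.strip())
        return " ".join(reply)

    engine = subprocess.Popen(["./your_engine", "--gtp"],
                              stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                              text=True)

    gtp(engine, "boardsize 19")
    gtp(engine, "clear_board")
    gtp(engine, "set_playouts 65536")   # hypothetical: fix the search effort
    print(gtp(engine, "genmove b"))     # e.g. "= Q16"
    gtp(engine, "quit")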


We understand that your program probably plays poorly at 19x19.  This
should not stop you from volunteering it if it's one of the stronger
9x9 programs.

We have ironed out most of the wrinkles in making this work, so we
would also like to have more volunteers to RUN the study.  If anyone
wants to help, it works this way:

1.   You need a machine running a modern Linux with at least 512 MB of
memory.
2.   It can be 32-bit or 64-bit Linux.
3.   I send you a tarball with everything you need, including the
programs themselves.
4.   Unpack the tarball somewhere.
5.   Run a script to start the test.
6.   There is a script to stop and restart the test at your convenience.
There is nothing else you have to do.  The results are periodically
uploaded by anonymous FTP to my server for processing.
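
For the curious, that upload step is nothing exotic; conceptually it is
just an anonymous FTP put, roughly like the sketch below.  The host,
directory, and file names are placeholders -- the scripts in the
tarball handle all of this, so volunteers never touch it.

    # A rough sketch of what the periodic result upload amounts to.
    # The host, directory, and file name are placeholders, not the
    # study's real server.
    from ftplib import FTP

    def upload_results(filename):
        with FTP("ftp.example.net") as ftp:   # placeholder host
            ftp.login()                       # anonymous login
            ftp.cwd("incoming")               # placeholder directory
            with open(filename, "rb") as f:
                ftp.storbinary("STOR " + filename, f)

    upload_results("study_results.tar.gz")    # placeholder file name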

- Don