I believe the main problem is that the Elo-rating model is wrong for
bots. The phenomenon with Mogo is probably the same as with Crazy
Stone: if there are enough strong MC bots playing to shield the top MC
programs from playing against GNU, then those top programs get a high
rating, because they are efficient at beating other MC bots. Otherwise,
they are forced to play against GNU and lose points.
For instance:
http://www.lri.fr/~teytaud/cross/CS-9-17-2CPU.html
Opponent         Rating   Wins/Games   Win %
GNU              1946     22/27        81.48
GnuCvs-10        1969     26/31        83.87
AyaMC637_4CPU    2108     18/19        94.74
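To make the inconsistency concrete, here is a small sketch (not from the
original cross-table page; it reads the second column above as each
opponent's rating and uses the standard logistic Elo formula) that
inverts each row to get the rating Crazy Stone "should" have. The three
implied ratings disagree by hundreds of points: Crazy Stone scores
better against the higher-rated MC bot than against the lower-rated GNU,
which is exactly the sense in which the Elo model is wrong here.

    import math

    def implied_rating(opp_rating, wins, games):
        # Rating that makes the Elo formula reproduce the observed score:
        # P(win) = 1 / (1 + 10**((opp - self) / 400))
        p = wins / games
        return opp_rating + 400 * math.log10(p / (1 - p))

    # opponent, opponent rating, wins, games -- taken from the table above
    for name, rating, wins, games in [("GNU", 1946, 22, 27),
                                      ("GnuCvs-10", 1969, 26, 31),
                                      ("AyaMC637_4CPU", 2108, 18, 19)]:
        print("%-15s implies a CS rating of about %.0f"
              % (name, implied_rating(rating, wins, games)))

    # GNU             implies a CS rating of about 2203
    # GnuCvs-10       implies a CS rating of about 2255
    # AyaMC637_4CPU   implies a CS rating of about 2610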
A very easy way to get over-rated on CGOS is to have two versions of
the same program play each other. For instance, if I connect CS-2CPU
and CS-8CPU, they will play most of their games against each other,
and CS-8CPU will get an incredible rating.
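A minimal sketch of that mechanism, with made-up numbers: assume CS-2CPU
scores 60% against GNU, while CS-8CPU plays only CS-2CPU and scores 80%
against it (self-play results typically exaggerate the difference that
would show up against other opponents). A plain maximum-likelihood Elo
fit, with GNU's rating held fixed as the anchor, then gives CS-8CPU a
+241 Elo premium over CS-2CPU that nothing in its games against the rest
of the field supports.

    import math

    def p_win(ra, rb):
        # Standard logistic Elo prediction for "a beats b"
        return 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))

    # (a, b, wins_a, wins_b) -- invented numbers for illustration only
    games = [("GNU",     "CS-2CPU", 40, 60),  # CS-2CPU scores 60% vs GNU
             ("CS-2CPU", "CS-8CPU", 20, 80)]  # CS-8CPU scores 80%, self-play only

    ratings = {"GNU": 1946.0, "CS-2CPU": 1946.0, "CS-8CPU": 1946.0}

    # Crude gradient ascent on the log-likelihood; GNU anchors the scale.
    for _ in range(20000):
        grad = {p: 0.0 for p in ratings}
        for a, b, wa, wb in games:
            pa = p_win(ratings[a], ratings[b])
            grad[a] += wa - (wa + wb) * pa
            grad[b] += wb - (wa + wb) * (1.0 - pa)
        for p in grad:
            if p != "GNU":
                ratings[p] += 0.1 * grad[p]

    for p in ratings:
        print("%-8s %7.1f" % (p, ratings[p]))
    # GNU       1946.0
    # CS-2CPU   ~2016  (+70 Elo over GNU, from the 60% score)
    # CS-8CPU   ~2257  (+241 Elo over CS-2CPU, set purely by self-play)

The 8-CPU rating here is pinned entirely to its sibling; whether it
would actually win that often against GNU is never tested.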
Just incorporate GNU into Don's scalability study, and the rating range
will shrink a lot.
Rémi