Don Dailey wrote:
Hi Rémi,
For a while I have considered overhauling the rating system for CGOS.
My system is ad hoc: it gradually increases the K factor based on
your opponent's K in the standard Elo formula.
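For reference, the standard Elo update that such a K-factor scheme builds on can be sketched as follows (a minimal illustration; the function name and k=32 default are mine, not CGOS's actual values):

```python
def elo_update(r_a, r_b, score_a, k=32.0):
    """Standard Elo: expected score for player A, then one-game update.

    score_a is 1 for a win, 0.5 for a draw, 0 for a loss.
    k controls how fast ratings move; 32 is a common default.
    """
    e_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    return r_a + k * (score_a - e_a)
```

A scheme like Don's would vary k per player instead of keeping it fixed.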
I don't know if your idea here is feasible for a computer server,
because it presumably assumes the players are fixed in strength, but in
practice I think some bots change. Anyway, I'm no expert on this, but I
want to find something better than what I'm doing, and I have considered
some kind of whole-history approach (such as running bayeselo after
every round on every game, which of course is not very scalable :-)
- Don
Hi Don,
Maybe you could consider implementing Glicko. Glicko is described here:
http://math.bu.edu/people/mg/glicko/glicko.doc/glicko.html
It should be better than any intuitive hand-made formula you could come
up with.
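The core of a Glicko rating-period update from that page can be sketched like this (function names are mine; the RD inflation between rating periods from the onset of a new period is omitted for brevity):

```python
import math

Q = math.log(10) / 400.0  # Glicko's q constant

def g(rd):
    """Attenuation factor for an opponent's rating deviation (RD)."""
    return 1.0 / math.sqrt(1.0 + 3.0 * Q**2 * rd**2 / math.pi**2)

def glicko_update(r, rd, opponents):
    """One Glicko rating-period update for a player rated r with deviation rd.

    opponents: list of (r_j, rd_j, score) tuples, score = 1 / 0.5 / 0.
    Returns (new_rating, new_rd).
    """
    d2_inv = 0.0   # 1/d^2: information gained from this period's games
    delta = 0.0    # weighted sum of (actual - expected) scores
    for r_j, rd_j, score in opponents:
        g_j = g(rd_j)
        # Expected score against opponent j, discounted by their RD
        e = 1.0 / (1.0 + 10.0 ** (-g_j * (r - r_j) / 400.0))
        d2_inv += Q**2 * g_j**2 * e * (1.0 - e)
        delta += g_j * (score - e)
    denom = 1.0 / rd**2 + d2_inv
    new_r = r + (Q / denom) * delta
    new_rd = math.sqrt(1.0 / denom)
    return new_r, new_rd
```

On the worked example from Glickman's own description (a 1500-rated player with RD 200 who beats a 1400/RD-30 opponent and loses to 1550/RD-100 and 1700/RD-300 opponents), this yields roughly 1464 with an RD near 151.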
Bayeselo would probably produce better ratings than Glicko. Running
Bayeselo from scratch after every round may be too costly. But it is
possible to make very efficient incremental updates: adding a few games,
and running a couple of iterations of MM should be extremely fast. This
would require keeping bayeselo in memory all the time, with current game
results. Since it cannot be done with the current program you'd have to
use my C++ code and somehow incorporate it into the server software.
This would be complicated, and may use a significant amount of memory on
the server. But computation time would be very short (less than 0.001
second).
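Not bayeselo itself, but a toy illustration of what such an in-memory incremental scheme might look like, using Hunter's MM update for a plain Bradley-Terry model (bayeselo additionally models draws and the first-move advantage; all names here are mine):

```python
import math
from collections import defaultdict

class IncrementalBT:
    """Bradley-Terry ratings kept in memory, refreshed by a few MM
    iterations after each round, instead of refitting from scratch."""

    def __init__(self):
        self.score = defaultdict(float)   # score[i]: total points of player i
        self.games = defaultdict(float)   # games[(i, j)]: games between i and j
        self.gamma = {}                   # gamma[i]: Bradley-Terry strength

    def add_game(self, i, j, score_i):
        """Record one game; score_i = 1 (i wins), 0.5 (draw), 0 (i loses)."""
        for p in (i, j):
            self.gamma.setdefault(p, 1.0)
        self.score[i] += score_i
        self.score[j] += 1.0 - score_i
        self.games[tuple(sorted((i, j)))] += 1.0

    def mm_iterations(self, n=2):
        """Run n minorization-maximization updates over all players."""
        for _ in range(n):
            new = {}
            for i in self.gamma:
                denom = 0.0
                for (a, b), count in self.games.items():
                    if i == a:
                        opp = b
                    elif i == b:
                        opp = a
                    else:
                        continue
                    denom += count / (self.gamma[i] + self.gamma[opp])
                new[i] = self.score[i] / denom if denom > 0 else self.gamma[i]
            # Renormalize so the average strength stays fixed
            mean = sum(new.values()) / len(new)
            self.gamma = {i: v / mean for i, v in new.items()}

    def elo(self, i):
        """Convert a Bradley-Terry strength to the Elo scale."""
        return 400.0 * math.log10(self.gamma[i])
```

With a fixed pool and only a handful of new results per round, a couple of iterations like this really would be nearly instantaneous, which matches the sub-millisecond figure above.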
The algorithm I describe in my paper may be overkill for rating
programs. If you look at table 1, you'll see that even when rating
humans, Bayeselo outperforms Glicko. Since most programs on CGOS are
constant, I believe that Bayeselo would be very difficult to beat.
Rémi
_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/