I changed bayeselo to use the prior command, as Rémi suggested.

It raised the rating of the highest-rated, well-established player by
about 60 Elo!

I set the prior to 0.1.
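
For anyone who wants to reproduce it, the session at the bayeselo prompt
is roughly this ("games.pgn" is just a stand-in for whatever game file
you feed it; the important part is that prior comes before mm, as Rémi
explains below):

  readpgn games.pgn
  elo
  prior 0.1
  mm
  ratings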

  http://cgos.boardspace.net/study/

- Don



Rémi Coulom wrote:
> Don Dailey wrote:
>> They seem under-rated to me also.   Bayeselo pushes the ratings together
>> because that is apparently a valid initial assumption.   With enough
>> games I believe that effect goes away.
>>
>> I could test that theory with some work.    Unless there is a way to
>> turn that off in bayeselo (I don't see it), I could rate them with my
>> own program.
>>
>> Perhaps I will do that test.
>>
>> - Don
> The factor that pushes ratings together is the prior of virtual draws
> between opponents. You can remove or reduce this factor with the
> "prior" command (before the "mm" command, you can run "prior 0" or
> "prior 0.1"). This command indicates the number of virtual draws. If I
> remember correctly, the default is 3. You may get convergence problems
> if you set the prior to 0 and one player has 100% wins.
>
> The effect of the prior should vanish as the number of games grows.
> But if the winning rate is close to 100%, it may take a lot of games
> before the effect of these 3 virtual draws becomes small. It is not
> possible to reasonably measure rating differences when the winning
> rate is close to 100% anyway.
>
> Instead of playing UCT bot vs UCT bot, I am thinking about running a
> scaling experiment against humans on KGS. I'll probably start with 2k,
> 8k, 16k, and 32k playouts.
>
> Rémi
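
To put a rough number on what those virtual draws do: with the default
of 3 that Rémi mentions, a player who beats an opponent 50-0 is
effectively scored as if the result were 51.5 out of 53 (each virtual
draw counting as half a win and half a loss), so about 97% instead of
100%:

  (50 + 1.5) / (50 + 3) = 51.5 / 53 ~ 0.97
  400 * log10(0.97 / 0.03) ~ 600 Elo

That's only a back-of-the-envelope figure (bayeselo's actual model is
more involved), but it shows why a near-100% pairing gets pulled down
hard, and why the effect fades once the real games swamp the 3 virtual
draws.
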
_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
