> It seems I was ambiguous: I was speaking of the simulation player too.
> What I meant is that a random simulation player is not biased, whereas a
> "better" simulation player is biased by its knowledge, and thus can give a
> wrong evaluation of a position.
I think we have to start by defining what the bias is. For me, the bias is
the difference between the expected value of the outcomes of the playouts
of the simulation player and the "real" minimax value. By this definition
the uniform random simulation player is VERY biased and GNU Go much less so.
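
To make that definition concrete, here is a minimal sketch in Python. It is
only an illustration: playout() and minimax_value() are hypothetical helpers
(an exact minimax value is of course only available for tiny or solved
positions), and the expectation is approximated by sampling.

    def estimated_bias(position, policy, n_playouts=10000):
        # Monte Carlo estimate of E[playout outcome] under the given
        # simulation policy, minus the true game value of the position.
        total = 0.0
        for _ in range(n_playouts):
            total += playout(position, policy)  # hypothetical: 1 = win, 0 = loss
        return total / n_playouts - minimax_value(position)  # hypothetical exact value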

> A trivial example is GNU Go: its analysis is "sometimes" wrong.
Of course; if it were not, computer go would be solved :-).

> Even if it is obviously much stronger than a random player, it would give
> wrong results if used as a simulation player.
Hmm, are you sure? I think that GnuGo with randomisation (and made much
faster, of course) would make a very good simulation player (much better
than any existing simulation player). But a player weaker than GnuGo can
make an even better simulation player.
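
By "GnuGo with randomisation" I mean something like the sketch below:
rather than always playing the engine's top-scored move during a playout,
sample a move from its scores with some temperature. The function
engine_move_scores() is a hypothetical interface to the engine's move
evaluation; any strong, fast heuristic would do.

    import math, random

    def randomised_policy(position, temperature=1.0):
        # Sample a move in proportion to exp(score / temperature) instead of
        # greedily playing the top-scored move.
        scores = engine_move_scores(position)  # hypothetical: {move: score}
        moves = list(scores)
        weights = [math.exp(scores[m] / temperature) for m in moves]
        return random.choices(moves, weights=weights)[0]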

> David Doshay's experiments with SlugGo showed that searching very
> deep/wide does not improve the strength of the engine much, which remains
> bound by the underlying weaknesses of GNU Go.
Yes, this is a similar non-trivial result. I think there are more existing
experimental and theoretical analyses of this, though. Perhaps such an
analysis already exists for MC as well; I just don't know of it.

> Or maybe I just understood nothing of what you explained ;)
It was not really an "explanation", just thoughts. I do not have the
solution; I just think that it is an interesting question, and that it is
worth discussing. Maybe new ideas could come from a strong explanation of
this phenomenon.

I understand all these counterexamples; I just think that it is more
complicated than that.

Sylvain
_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/
