Hear, hear! The question is not about abandoning the recognition of
uncertainty. Like Don Dailey, I think it's brilliant that UCT programs
explicitly manage uncertainty and winning probabilities. My concern is that
existing implementations have serious but possibly fixable flaws in those
estimates: there are numerous situations where the game can be analytically
proven to be won by a large margin, yet the UCT/MC algorithms misevaluate the
situation considerably.
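To make that concrete: the usual UCT selection rule combines an observed
winning rate with an uncertainty bonus. A rough sketch in Python (the node
fields and the exploration constant are only illustrative, not taken from any
particular program):

import math

def ucb1_select(children, exploration_c=1.4):
    # Pick the child whose estimated winning rate plus uncertainty bonus
    # is largest; rarely visited moves get a large bonus, so the search
    # keeps revisiting moves it is still unsure about.
    total_visits = sum(child.visits for child in children)
    def score(child):
        if child.visits == 0:
            return float("inf")  # always try unvisited moves first
        win_rate = child.wins / child.visits
        bonus = exploration_c * math.sqrt(math.log(total_visits) / child.visits)
        return win_rate + bonus
    return max(children, key=score)

The trouble I'm pointing at is not in a rule like this, but in the win_rate it
consumes: when the playouts handle a long ladder essentially at random, the
observed winning rate can sit far from the analytically provable result.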
I'd be careful about looking merely at winning rates against mediocre programs
(and even the best Go programs of today are not that great at 19x19 Go).
Whenever a human thinks that beating lots of mid-kyu players makes him Meijin,
a few games against a high-dan player or a pro will dispel such notions. I'm
just asking "what are the next steps?"
It's great that CPU power is getting dramatically cheaper, and great that UCT
algorithms do improve with more CPUs and more playouts, but there's a lot of
room for improvement. Here's hoping that we find lots of interesting avenues
for such improvements!
Heading back to the central idea of tuning the predicted winning rates and
evaluations: it might be useful to examine lost games, look for divergence
between expectations and reality, repair the predictor, and test the new
predictor against a large database of such blunders.
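Something like the following regression loop is what I have in mind; the
database format and the names are made up purely for illustration:

def find_misjudged_positions(blunder_db, predictor, tolerance=0.2):
    # blunder_db: iterable of (position, proven_result) pairs taken from
    # lost games, where proven_result is 1.0 for a provable win and 0.0
    # for a provable loss.  predictor(position) returns the estimated
    # winning probability.  Report positions where the two diverge badly.
    failures = []
    for position, proven_result in blunder_db:
        predicted = predictor(position)
        if abs(predicted - proven_result) > tolerance:
            failures.append((position, predicted, proven_result))
    return failures

After repairing the predictor, the whole database can be rerun to check that
old blunders stay fixed and that the repair did not introduce new ones.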
When I was learning to shoot, we were taught to focus first on accuracy and
second on speed. Under tournament conditions speed is crucial, but tuning the
accuracy of the evaluations is likely to reduce the noise rate and winnow out
a fair number of losing plays.
Terry McIntyre <[EMAIL PROTECTED]>
They mean to govern well; but they mean to govern. They promise to be kind
masters; but they mean to be masters. -- Daniel Webster
----- Original Message ----
From: Raymond Wold <[EMAIL PROTECTED]>
To: computer-go <computer-go@computer-go.org>
Sent: Wednesday, December 12, 2007 12:23:15 AM
Subject: Re: [computer-go] How does MC do with ladders?
On Tue, 2007-12-11 at 21:17 -0500, Don Dailey wrote:
> But what does this have to do with anything? What we are "arguing"
> about is whether it's good to try to estimate probabilities. That's
> what you have been critical of. Adding ladder code will improve any
> evaluation function if done correctly but that's not relevant if you
> believe estimating probability is foolish.
>
> To the contrary, I believe it is brilliant - in my opinion it is a key
> factor in the success of these programs and I would call it a key
> breakthrough.
Sorry, it just sounded like you lauded the failures of MC as virtues.
I'm not opposed to random playouts as an evaluator, just to undue hope and
reliance on them. I think that to make a breakthrough in Go AI, we need
diversity. Both within a program (use what works when it works,
including dropping any randomness at all when pure knowledge or full
search would yield results), and between bots. What we *don't* need is
people giving up on an approach without even trying it, because others
have failed at something similar before.
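A toy sketch of that kind of within-program dispatch (the function names are
placeholders, not any existing bot's API):

def evaluate(position, ladder_reader, playout_winrate):
    # ladder_reader(position) returns True/False when the tactical outcome
    # is provable, or None when it does not apply; playout_winrate(position)
    # is the usual Monte Carlo estimate.  Both are assumed interfaces.
    proven = ladder_reader(position)
    if proven is not None:
        return 1.0 if proven else 0.0  # exact knowledge, no randomness needed
    return playout_winrate(position)   # otherwise fall back to playouts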
_______________________________________________
computer-go mailing list
computer-go@computer-go.org
http://www.computer-go.org/mailman/listinfo/computer-go/