I would also like to congratulate the AlphaGo team on this fantastic
result, now that the dust has settled!

Now that all the games have been played, my feeling is that AlphaGo is
a little stronger than 9-dan pro level, but of course still far from
perfect play.

My suspicion is that the main weakness is that the neural networks sometimes
completely overlook surprising moves that touch many areas with overlapping
aji. In such cases local shapes do not really mean much.

Everyone was wondering whether AlphaGo could handle many complex local
situations simultaneously. My feeling is that this is not a problem, because
AlphaGo's move ordering in straightforward local fights is so good.

So a remaining weakness might be this: when bad aji from many areas overlaps,
move ordering can become difficult if the neural networks cannot handle the
aji by generalizing from the training games. There are holes in the move
ordering, and this becomes a problem when the local branching factor gets
very high.
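To illustrate what I mean by a "hole" in the move ordering, here is a toy
sketch (the move names and prior numbers are made up, and this is of course
not how AlphaGo is actually implemented): a policy prior ranks candidate
moves, the search only widens to the top few, and a key move with a
bad-looking local shape gets pruned exactly when there are many plausible
alternatives.

```python
# Illustrative sketch only, not AlphaGo's actual code: move ordering by a
# policy prior, and how a move the prior dislikes disappears once the
# branching factor is high.

def order_moves(legal_moves, prior, top_k):
    """Keep only the top_k moves, ranked by the policy prior."""
    ranked = sorted(legal_moves, key=lambda m: prior.get(m, 0.0), reverse=True)
    return ranked[:top_k]

# Toy position: the tesuji "T" gets almost no prior mass because its
# local shape looks bad, even though it is the key move.
prior = {"A": 0.40, "B": 0.30, "C": 0.15, "D": 0.10, "T": 0.001}

# With few plausible moves the search still reaches T; with many, T is
# pruned away and the hole in the ordering is never repaired locally.
few_moves = ["A", "B", "T"]
many_moves = ["A", "B", "C", "D", "T"]

print(order_moves(few_moves, prior, top_k=3))   # T survives: ['A', 'B', 'T']
print(order_moves(many_moves, prior, top_k=3))  # T is pruned: ['A', 'B', 'C']
```

In a real search the widening is gradual rather than a hard cutoff, but the
effect is the same: the more candidate moves there are, the more simulations
it costs before a low-prior move is ever tried.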

At some point, global search with the massive amount of hardware will see the
problems. But if the value network, for example, leads AlphaGo to build a moyo
with holes in it without correctly understanding the tactical consequences,
it might often get into trouble. The opponent still needs to play perfectly
when the opportunity comes, so I think pros in the future will have difficulty
provoking these kinds of mistakes. In fact, I think they will have to play
patiently and let AlphaGo trap itself.

This is also my experience from playing correspondence Go with Valkyria on
9x9. The program is very far from perfect play, but with long thinking times
one must play patiently but sharply. Setting traps does not work, but if one
is lucky the program will spontaneously go into some position that it
overvalues, and then the opportunity to win comes.

AlphaGo is a little bit similar, but of course on 19x19 and using 2 minutes per move, which would be close to unbelievable were it not for the advances in deep learning networks.

Best
Magnus Persson



On 2016-03-15 13:10, Petr Baudis wrote:
AlphaGo has won the final game, tenaciously catching up after a tesuji
mistake in the beginning - a great data point that it can also deal with
a somewhat disadvantageous position well.  It has received an honorary 9p
from the KBA.

  I can only quote David Silver: "Wow, what an amazing week this has
been." This is a huge leap for AI in general, maybe the most convincing
application demonstration of deep learning up to now.

(The take-away for me personally, even if obvious in retrospect, would
be not to focus on one field too much.  I got similar ideas not long
after I actually stopped doing Computer Go and took a wide look at other Machine Learning areas - well, three weeks later the Clark&Storkey paper
came out. :) I came to believe that transferring ideas and models from
one field to another has one of the best effort / value tradeoffs, not
just personally but also for the scientific progress as a whole.)

I do hope that Aja will have time and be willing to answer some of our
technical questions now after taking a while to recover from what must
have been an exhausting week.

  But now, onto getting pro Go players on our PCs, and applying this on
new things! :)

                                Petr Baudis
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
