So I read this kind of study with some skepticism. My guess is that the 
"large-scale pattern" systems in use by leading programs are already pretty 
good for their purpose (i.e., progressive bias).

Rampant personal and unverified speculation follows...
------------------------------------------------------
I find the 14% win rate against Fuego potentially impressive, but I didn't get 
a sense of Fuego's effort level in those games (e.g., the Elo rating it plays 
at under that setting). MCTS doesn't actually play particularly well until a 
sufficient computational investment is made.

I am not sure what to think about winning 91% against Gnu Go. Gnu Go makes a 
lot of moves based on rules, so it "replays" games. I found that many of 
Pebbles' games against Gnu Go were move-for-move repeats of previous games, so 
much so that I had to randomize Pebbles if I wanted to use Gnu Go for 
calibrating parameters. My guess is that the 91% rate is substantially 
attributable to the way that Gnu Go's rule set interacts with the positions 
that the NN likes. This could be a measure of strength, but not necessarily.
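
For what it's worth, the randomization I mean is of roughly this kind. This is 
a minimal Python sketch, not Pebbles' actual code; the names and the tolerance 
value are made up:

    import random

    def select_move(scored_moves, tolerance=0.02):
        # scored_moves: list of (move, score) pairs from the engine's search.
        # Picking uniformly among near-best moves is enough to keep games
        # against a deterministic opponent from repeating move-for-move.
        best = max(score for _, score in scored_moves)
        candidates = [move for move, score in scored_moves
                      if score >= best - tolerance]
        return random.choice(candidates)

A small tolerance barely changes strength but breaks the determinism enough to 
get independent calibration games.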

My impression is that the progressive bias systems in MCTS programs should 
prioritize interesting moves to search. A good progressive bias system might 
have a high move prediction rate, but that will be a side-effect of tuning it 
for its intended purpose. E.g., it is important to search a lot of bad moves 
because you need to know for *certain* that they are bad.
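
For concreteness, the standard progressive-bias form (Chaslot et al.) adds a 
decaying prior term to the UCT selection score: a strong prior pulls a move 
forward early, but the term fades as visits accumulate, so even low-prior 
"bad" moves collect the visits needed to refute them. A generic Python sketch; 
the constants, field names, and exact decay are illustrative assumptions, not 
any particular program's code:

    import math
    from dataclasses import dataclass

    @dataclass
    class Node:
        wins: float    # accumulated win count from simulations
        visits: int    # simulation count through this node
        prior: float   # move-prediction score (patterns or NN), in [0, 1]

    def uct_with_progressive_bias(child, parent_visits, c=1.0, w=1.0):
        if child.visits == 0:
            return float("inf")  # every child gets searched at least once
        exploitation = child.wins / child.visits
        exploration = c * math.sqrt(math.log(parent_visits) / child.visits)
        # The prior's influence decays as real results accumulate, so the
        # search, not the prior, has the final word on a move's value.
        bias = w * child.prior / (child.visits + 1)
        return exploitation + exploration + bias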

Similarly, it is my impression that a good progressive bias engine does not 
have to be a strong stand-alone player. Strong play implies a degree of 
tactical pattern matching that is not necessary when the system's 
responsibility is to prioritize moves. Tactical accuracy should be delegated to 
the search engine. The theoretical prediction is that MCTS search will be 
(asymptotically) a better judge of tactical results.

Finally, I am not a fan of NNs in the MCTS architecture. NNs impose a high CPU 
burden (e.g., compared to decision trees), and this study didn't produce such 
a breakthrough in accuracy that I would give away performance for it.


-----Original Message-----
From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Hiroshi Yamashita
Sent: Monday, December 15, 2014 10:27 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play 
Go

I tested Aya's move prediction strength.

The prediction rate is 38.8% (first choice matches the pro's move)

against GNU Go 3.7.10 Level 10

board  winrate  games
19x19   0.059     607
13x13   0.170     545
9x9     0.141    1020

I was a bit surprised there is no big difference from 9x9 to 19x19.
But 6% in 19x19 is still low; the paper's 91% win rate is really high.
It must understand whole-board life and death.
I'd like to see their SGF files vs GNU Go and Fuego.

Aya's prediction includes a local string-capture search,
so this result may include some look-ahead.
Aya uses this move prediction in the UCT tree; the playouts use a different prediction.
Aya gets 50% against GNU Go with 300 playouts in 19x19 and 100 in 9x9.

Regards,
Hiroshi Yamashita

_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
