>Is it really such a burden?

 
Well, I have to place my bets on some things and not on others.

 

It seems to me that the costs of a NN must be higher than those of a system 
based on decision trees. The convolutional NN has a very large parameter space, 
if my reading of the paper is correct. Specifically, it can represent all 
patterns, translated and rotated, and match them against all points in parallel.
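
To make that concrete, here is a minimal sketch of how a single convolutional 
filter applies one pattern at every intersection in parallel. This is my own 
illustration, not code from the paper; the filter size, board encoding, and 
values are assumptions.

import numpy as np

BOARD = 19
K = 5                       # assumed filter size; the paper's layers may differ

# Assumed encoding: -1 white, 0 empty, +1 black.
board = np.random.choice([-1, 0, 1], size=(BOARD, BOARD)).astype(float)
pattern = np.random.randn(K, K)          # one learned pattern (K*K weights)

pad = K // 2
padded = np.pad(board, pad)

# The same K x K weight set is slid over every point: weight sharing keeps
# the parameter count per filter at K*K, but the filter is still evaluated
# at all 361 intersections on every forward pass.
response = np.empty((BOARD, BOARD))
for r in range(BOARD):
    for c in range(BOARD):
        response[r, c] = np.sum(padded[r:r+K, c:c+K] * pattern)

print(response.shape)   # (19, 19): one match score per intersection
print(pattern.size)     # 25 parameters for this single filter

Multiply that by many filters, input planes, and layers, and the per-move 
evaluation cost adds up; that is the cost I am worried about.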

 

To me, that seems like a good way to mimic the visual cortex, but an 
inefficient way to match patterns on a Go board.

 

So my bet is on decision trees. The published research on NNs will help me to 
understand the opportunities much better, and I have every expectation that the 
performance of decision trees should be >= that of NNs in every way: faster, 
more accurate, easier and quicker to tune.
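
As a rough back-of-envelope comparison (the layer sizes here are my 
assumptions, not figures from the paper), this is the kind of arithmetic 
behind that bet:

# Assumed sizes, purely illustrative: one conv layer over one input plane
# versus a single root-to-leaf decision-tree lookup.
BOARD_POINTS = 19 * 19      # intersections
FILTERS = 64                # assumed number of filters in the layer
K = 5                       # assumed filter size

conv_ops = BOARD_POINTS * FILTERS * K * K   # ~578,000 multiply-adds per layer
tree_ops = 20                               # ~20 comparisons for a depth-20 path

print(conv_ops, tree_ops)

Even allowing for the cost of computing the tree's input features, that is the 
sort of gap behind my bet.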

 

I recognize that my approach is full of challenges. E.g., a NN would 
automatically infer "soft" qualities such as "wall" or "influence", which would 
have to be provided to a DT as explicit inputs. No free lunch, but again, this 
is about betting that one technology is (overall) more suitable than another.
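
For illustration, here is a sketch of what feeding such a hand-crafted input to 
a tree might look like. The influence definition, the data, and the labels 
below are placeholders (random boards and random "expert moves"), just to show 
the plumbing, not my actual system.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

BOARD = 19

def influence_feature(board):
    # Crude "influence" map: each stone radiates value that decays with
    # Manhattan distance.  A real feature would be far more careful.
    infl = np.zeros((BOARD, BOARD))
    for r in range(BOARD):
        for c in range(BOARD):
            if board[r, c] != 0:
                for rr in range(BOARD):
                    for cc in range(BOARD):
                        d = abs(rr - r) + abs(cc - c)
                        infl[rr, cc] += board[r, c] / (1 + d)
    return infl

rng = np.random.default_rng(0)
boards = rng.choice([-1, 0, 1], size=(20, BOARD, BOARD))      # placeholder positions
X = np.array([influence_feature(b).ravel() for b in boards])  # one row per position
y = rng.integers(0, BOARD * BOARD, size=20)                   # placeholder move labels

tree = DecisionTreeClassifier(max_depth=8).fit(X, y)
print(tree.predict(X[:1]))   # predicted move index for the first position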

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of 
Stefan Kaitschick
Sent: Monday, December 15, 2014 6:37 PM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Teaching Deep Convolutional Neural Networks to Play 
Go

Finally, I am not a fan of NN in the MCTS architecture. The NN architecture 
imposes a high CPU burden (e.g., compared to decision trees), and this study 
didn't produce such a breakthrough in accuracy that I would give away 
performance.

 

 Is it really such a burden? Supporting the move generator with the NN result 
high up in the decision tree can't be that expensive.
