On Wed, Mar 30, 2016 at 09:58:48AM -0500, Jim O'Flaherty wrote:
> My own study says that we cannot top-down include "English explanations" of
> how the ANNs (Artificial Neural Networks, of which DCNN is just one type)
> arrive at conclusions.

I don't think that's obvious at all.  My current avenue of research
is using neural models for text comprehension (in particular
https://github.com/brmson/dataset-sts), and the intersection with DCNNs
is, for example, the work on automatic image captioning:

        http://cs.stanford.edu/people/karpathy/sfmltalk.pdf
        https://www.captionbot.ai/ (most recent example)

One of my project ideas that I'm quite convinced could provide some
interesting results would be training a neural network to caption
Go positions based on game commentary.  You strip the final "move
selection" layer from the network, use the output of the previous
fully-connected layer as a rich "semantic representation" of the board,
and train another network to turn that into words (+ coordinate
references etc.).
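To make the two-network setup concrete, here is a minimal sketch in
PyTorch; the plane counts, layer sizes and the names PolicyTrunk /
CaptionDecoder are all made up for illustration, not taken from any
existing move-prediction network:

import torch
import torch.nn as nn

class PolicyTrunk(nn.Module):
    # Convolutional trunk of a (hypothetical) move-prediction network.
    # The final move-selection layer is dropped; forward() returns the
    # penultimate fully-connected features, i.e. the "semantic
    # representation" of the position.
    def __init__(self, planes=8, channels=64, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(planes, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        self.fc = nn.Linear(channels * 19 * 19, feat_dim)

    def forward(self, board):            # board: (B, planes, 19, 19)
        x = self.conv(board).flatten(1)
        return torch.relu(self.fc(x))    # (B, feat_dim)

class CaptionDecoder(nn.Module):
    # LSTM decoder that turns the board features into commentary tokens,
    # in the spirit of neural image captioning: the features initialize
    # the LSTM state, and the decoder predicts the next word at each step.
    def __init__(self, feat_dim=256, vocab=1000, emb=128, hidden=256):
        super().__init__()
        self.init_h = nn.Linear(feat_dim, hidden)
        self.init_c = nn.Linear(feat_dim, hidden)
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, feats, tokens):    # tokens: (B, T) commentary prefix
        h0 = torch.tanh(self.init_h(feats)).unsqueeze(0)
        c0 = torch.tanh(self.init_c(feats)).unsqueeze(0)
        y, _ = self.lstm(self.embed(tokens), (h0, c0))
        return self.out(y)               # (B, T, vocab) next-word logits

# Toy forward pass: one random "position" and a 5-token commentary prefix.
trunk, decoder = PolicyTrunk(), CaptionDecoder()
board = torch.randn(1, 8, 19, 19)
prefix = torch.randint(0, 1000, (1, 5))
print(decoder(trunk(board), prefix).shape)   # torch.Size([1, 5, 1000])

Training would then just maximize the likelihood of the human commentary
tokens given the board features, i.e. the usual captioning objective.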

The challenges are getting a large, high-quality dataset of commented
positions, producing negative training samples, and representing
sequences (or just coordinate points).  But I think there's definitely
a possible path forward here to train another neural network that
provides explanations based on what the "move prediction" network sees.
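On the representation side, coordinate references in the commentary
could simply be mapped to dedicated vocabulary tokens.  A toy sketch
(the regex and the <pt_...> token scheme are just one arbitrary choice):

import re

# Go columns use the letters A..T, skipping I; rows run 1..19.
COORD_RE = re.compile(r'\b([A-HJ-T])(1[0-9]|[1-9])\b', re.IGNORECASE)

def tokenize_comment(text):
    # Replace board coordinates like "C3" or "Q16" with dedicated tokens
    # so the caption vocabulary stays small and points stay atomic.
    text = COORD_RE.sub(lambda m: ' <pt_%s> ' % m.group(0).lower(), text)
    return text.lower().split()

print(tokenize_comment("White should answer at C3, not Q16."))
# ['white', 'should', 'answer', 'at', '<pt_c3>', ',', 'not', '<pt_q16>', '.']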

It could make a great undergraduate thesis or similar.

(My original idea was simpler: a "smarter bluetable" chatbot that'd just
generate "position-informed kibitz" - not necessarily *informative*
kibitz.  Plenty of data for that, probably. ;-))

-- 
                                Petr Baudis
        If you have good ideas, good data and fast computers,
        you can do almost anything. -- Geoffrey Hinton
