Petr suggests "caption[ing] Go positions based on game commentary". Without doubt, there has to be a lot of mileage in looking for a way for a machine to learn from expert commentaries.
I see a difference between labelling a cat in a photo and labelling a stone configuration in a picture of a board, but Alpha's and DCNNigo's successes at finding good moves suggest that my intuition must be wrong here. Captioning should be able to pick up static patterns such as bamboo joints, but when it comes to concepts like "honte", I think a different technique will be required: one that could discover patterns of movement rather than patterns of static pixels.

Back in August, when I started thinking about Go imagery, I looked for research on dynamic image recognition by CNNs and didn't come across much, but surely someone somewhere is working on it? While it ought to be possible to capture the essence of certain kinds of physical motion, such as a wheel turning or a wing flapping, Go has the peculiar quality that stones can be captured, which changes everything in a flash, so it is unlikely that a technology for identifying smooth motion would pick up anything useful.

Then again, one thing a CNN should be able to detect is atari, so a combination of tree search and CNN should be able to work out semeais and find two eyes in a picture, both of which Alpha seems very good at doing. So if you could find a way to relate expert commentary to sequences rather than static positions, you would be onto something.

CNNs for text recognition are a whole different ballgame, because the key thing about text is that it has a hidden structure, which can only be detected by first classifying words. I am personally convinced that I have found the true form of English grammar, but needless to say, it is not flavour of the month, as it upsets the applecart that mainstream linguists have been riding on for 200 years. One journal editor wrote to me: "English teachers don't want to be told that they have been doing it wrong"!!
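As an aside on the atari point: atari itself is mechanically trivial to verify, quite apart from whether a CNN learns it. A minimal liberty-counting sketch in plain Python (the board encoding and all names here are my own invention for illustration, not taken from any of the engines discussed):

```python
# Minimal liberty-counting sketch: a group is in atari when it has exactly
# one liberty. The board is a dict mapping (x, y) -> 'B' or 'W'; empty
# points are simply absent. SIZE is the board dimension (toy 9x9 here).
SIZE = 9

def neighbours(p):
    x, y = p
    return [(x + dx, y + dy)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE]

def liberties(board, p):
    """Flood-fill the group containing p and collect its empty neighbours."""
    colour = board[p]
    group, frontier, libs = {p}, [p], set()
    while frontier:
        q = frontier.pop()
        for n in neighbours(q):
            if n not in board:
                libs.add(n)                  # empty point: a liberty
            elif board[n] == colour and n not in group:
                group.add(n)                 # same colour: extend the group
                frontier.append(n)
    return libs

def in_atari(board, p):
    return len(liberties(board, p)) == 1

# A black stone at (0, 0) hemmed in by white at (1, 0), with only (0, 1) open:
board = {(0, 0): 'B', (1, 0): 'W'}
```

The interesting question is not this check, of course, but whether a network discovers an equivalent feature on its own, and how a tree search would then chain such features through a capturing race.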
:) http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2205530

A CNN might be able to see patterns in a string of word classes, where there are just two classes: concepts and relationships.

On 31/03/2016, Jim O'Flaherty <jim.oflaherty...@gmail.com> wrote:
> Petr,
>
> {repeat from a different thread slightly modified for this one}
>
> What I was addressing was more around what Robert Jasiek is describing in
> his joseki books and other materials he's produced. And it is exactly why I
> think the "explanation of the suggested moves" requires a much deeper
> baking into the participating ANNs (bottom-up approach). And given what I
> have read thus far (including your above information), I am still seeing
> the risk extraordinarily high and the payoff exceedingly low, outside an
> academic context.
>
> However, if someone were to do all the dirty work setting up all the
> infrastructure, hunt down the training data and then financially facilitate
> the thousands of hours of human work and the tens to hundreds of thousands
> of hours of automated learning work, I would become substantially more
> interested... and still think a high-quality desired outcome remains a low
> probability.
>
> That said, I wish whomever takes on this project the very best of luck,
> because I will very much enjoy being wrong about this... at someone else's
> expense. :)
>
> Jim
>
> On Thu, Mar 31, 2016 at 6:04 AM, Petr Baudis <pa...@ucw.cz> wrote:
>> On Wed, Mar 30, 2016 at 09:58:48AM -0500, Jim O'Flaherty wrote:
>>> My own study says that we cannot top-down include "English explanations"
>>> of how the ANNs (Artificial Neural Networks, of which DCNN is just one
>>> type) arrive at conclusions.
>>
>> I don't think that's obvious at all.
>> My current avenue of research is using neural models for text
>> comprehension (in particular https://github.com/brmson/dataset-sts),
>> and the intersect with DCNNs is for example the work on automatic
>> image captioning:
>>
>> http://cs.stanford.edu/people/karpathy/sfmltalk.pdf
>> https://www.captionbot.ai/ (most recent example)
>>
>> One of my project ideas that I'm quite convinced could provide some
>> interesting results would be training a neural network to caption
>> Go positions based on game commentary. You strip the final "move
>> selection" layer from the network and use the previous fully-connected
>> layer output as a rich "semantic representation" of the board, and
>> train another network to turn that into words (+ coordinate references
>> etc.).
>>
>> The challenges are getting a large+good dataset of commented positions,
>> producing negative training samples, and representing sequences (or
>> just coordinate points). But I think there's definitely a path forward
>> here: training another neural network that provides explanations based
>> on what the "move prediction" network sees.
>>
>> It could make a great undergraduate thesis or similar.
>>
>> (My original idea was simpler: a "smarter bluetable" chatbot that'd
>> just generate "position-informed kibitz" - not necessarily
>> *informative* kibitz. Plenty of data for that, probably. ;-)
>>
>> --
>> Petr Baudis
>> If you have good ideas, good data and fast computers,
>> you can do almost anything. -- Geoffrey Hinton
>> _______________________________________________
>> Computer-go mailing list
>> Computer-go@computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go

--
patient: "whenever i open my mouth, i get a shooting pain in my foot"
doctor: "fire!"
http://sites.google.com/site/djhbrown2/home
https://www.youtube.com/user/djhbrown
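P.S. Petr's two-network wiring is worth pinning down concretely. A deliberately toy sketch of the shape of it, with no deep-learning machinery at all: every name, number and string below is invented for illustration, the "feature extractor" is a stub standing in for the stripped move network, and the "captioner" is a nearest-neighbour lookup standing in for the second, caption-generating network.

```python
# Toy wiring of the proposal: treat the penultimate-layer activations of a
# move-prediction network as a board "embedding", then map embeddings to
# commentary. All data here is fabricated for illustration only.
from math import dist  # Euclidean distance (Python 3.8+)

def penultimate_features(position):
    """Stand-in for the stripped move network: a fixed toy embedding per
    named position. A real system would run the DCNN up to its last
    fully-connected layer and return those activations."""
    toy_embeddings = {
        "white-group-one-liberty": (0.9, 0.1),
        "settled-corner":          (0.1, 0.8),
    }
    return toy_embeddings[position]

# "Training set": board embeddings paired with commentary snippets.
commented = [
    ((0.85, 0.15), "White is in atari and must connect."),
    ((0.15, 0.75), "The corner group is alive with two eyes."),
]

def caption(position):
    """Nearest-neighbour captioner over the embedding space - the crudest
    possible stand-in for a trained caption-generating network."""
    f = penultimate_features(position)
    return min(commented, key=lambda pair: dist(f, pair[0]))[1]
```

The real work Petr identifies - the dataset of commented positions, negative samples, and sequence representation - is precisely what this toy elides.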