So, I now have a new version of my bot running on CGOS (
http://cgos.boardspace.net/13x13/cross/Imrscl-016-AMAF.html). It's still
considerably weaker than GnuGo so I'm pretty sure it will lose all games
against it. However, it's now much stronger than any other bot running on
CGOS and I guess it w
On Mar 20, 2015, at 5:11 AM, Urban Hafner wrote:
> So, I now have a new version of my bot running on CGOS
> (http://cgos.boardspace.net/13x13/cross/Imrscl-016-AMAF.html). It's still
> considerably weaker than GnuGo so I'm pretty sure it will lose all games
> against it. However, it's now much
Thanks Christoph!
On Fri, Mar 20, 2015 at 4:17 PM, Christoph Birk <
b...@obs.carnegiescience.edu> wrote:
>
> On Mar 20, 2015, at 5:11 AM, Urban Hafner wrote:
> > So, I now have a new version of my bot running on CGOS (
> http://cgos.boardspace.net/13x13/cross/Imrscl-016-AMAF.html). It's still
>
On 3/17/15, David Silver wrote:
> Reinforcement learning is different to unsupervised learning. We used
> reinforcement learning to train on the Atari games. Also, we published a more
> recent paper (www.nature.com/articles/nature14236) that applied the same
> network to 50 different Atari games (achi
On 1/12/15, Álvaro Begué wrote:
> A CNN that starts with a board and returns a single number will typically
> have a few fully-connected layers at the end. You could make the komi an
> extra input in the first one of those layers, or perhaps in each of them.
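For concreteness, a rough sketch of that suggestion (the flattened stand-in for the conv stack, the layer sizes, and the initialisation are all made up here; only the komi-as-extra-FC-input idea comes from the message above):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(board):
    # Stand-in for the convolutional layers of the network:
    # just flatten the 13x13 board into a 169-dim feature vector.
    return board.reshape(-1)

# Hypothetical sizes: 169 conv features + 1 komi input -> 64 hidden -> 1 output.
W1 = rng.standard_normal((64, 169 + 1)) * 0.01
b1 = np.zeros(64)
W2 = rng.standard_normal((1, 64)) * 0.01
b2 = np.zeros(1)

def evaluate(board, komi):
    feats = conv_features(board)
    # Komi enters as one extra input to the first fully-connected layer.
    x = np.concatenate([feats, [komi]])
    h = np.maximum(0.0, W1 @ x + b1)  # ReLU
    return (W2 @ h + b2)[0]           # single scalar board evaluation

board = rng.integers(-1, 2, size=(13, 13)).astype(float)
score = evaluate(board, 7.5)
```

Changing the komi input then shifts the evaluation without touching the convolutional part; the same trick would work if komi were appended to each fully-connected layer rather than only the first, as the message suggests.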
That's an interesting idea. But then, the komi won't really
participate in the hierarchical representation that we are hoping the
network will build, which I suppose is the key to
obtaining human-comparable results?
Well... it seems that Hinton, in his dropout paper
(http://arxiv.org/pdf/1207.0580.pdf), gets kin
> Perhaps what we want is a compromise between convnets and FCs though?
i.e., either take an FC layer and make it a bit more sparse, and/or take an
FC layer and randomly link sets of weights together?
Maybe something like: each filter consists of, e.g., 16 weights, which are
assigned randomly over all input-out
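Something along those lines could be sketched like this (the sizes and the particular random-assignment scheme are my guesses at what's meant, not anything from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 32, 16        # made-up layer sizes
n_filters = 4               # number of shared "filters"
weights_per_filter = 16     # each filter is a set of 16 shared weights

# The actual trainable parameters: a small pool of 4 * 16 = 64 weights.
pool = rng.standard_normal((n_filters, weights_per_filter)) * 0.1

# Randomly tie each of the 16 * 32 = 512 input-output connections to one
# entry of the pool, so many connections share the same parameter.
filter_idx = rng.integers(0, n_filters, size=(n_out, n_in))
weight_idx = rng.integers(0, weights_per_filter, size=(n_out, n_in))

def tied_fc(x):
    # Materialise the full weight matrix from the shared pool and apply it.
    W = pool[filter_idx, weight_idx]
    return W @ x

x = rng.standard_normal(n_in)
y = tied_fc(x)
```

A convnet is then the special case where the tying pattern is a regular spatial one; here the tying is random, which is one way to read "randomly link sets of weights together".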
On Fri, Mar 20, 2015 at 8:24 PM, Hugh Perkins wrote:
> On 1/12/15, Álvaro Begué wrote:
> > A CNN that starts with a board and returns a single number will typically
> > have a few fully-connected layers at the end. You could make the komi an
> > extra input in the first one of those layers, or p
On Sat, Mar 21, 2015 at 11:41 AM, Álvaro Begué wrote:
> I don't see why komi needs to participate in the hierarchical representation
> at all.
Yes, fair point. I guess I was taking 'komi' as an example of any
additional natural number that one might wish to feed into a net. But
you're right, in t