Darren wrote:
> I'm wondering if I've misunderstood, but does this mean it is the same
> as just training your CNN on the 9-dan games, and ignoring all the 8-dan
> and weaker games? (Surely the benefit of seeing more positions outweighs
> the relatively minor difference in pro player strength??)
It's ju
> Is "KGS rank" set 9 dan when it plays against Fuego?
Aja replied:
> Yes.
I'm wondering if I've misunderstood, but does this mean it is the same
as just training your CNN on the 9-dan games, and ignoring all the 8-dan
and weaker games? (Surely the benefit of seeing more positions outweighs
the relatively minor difference in pro player strength??)
2015-01-09 23:04 GMT+00:00 Darren Cook :
> Aja wrote:
> >> I hope you enjoy our work. Comments and questions are welcome.
>
> I've just been catching up on the last few weeks, and the papers. Very
> interesting :-)
>
> I think Hiroshi's questions got missed?
>
I did answer Hiroshi's questions.
h
Yes, it's 0.15 seconds for 128 positions.
A minibatch is a small set of samples that is used to compute an
approximation to the gradient before you take a step of gradient descent. I
think it's not simply called a "batch" because "batch training" refers to
computing the full gradient with all the training data at once.
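Aja's definition above can be made concrete. Here is a minimal NumPy sketch of minibatch gradient descent versus full-batch training; the linear model, synthetic data, and learning rate are illustrative assumptions of mine, not the paper's actual network (only the minibatch size of 128 comes from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data standing in for (position, move) training pairs.
X = rng.normal(size=(10_000, 8))
true_w = rng.normal(size=8)
y = X @ true_w + 0.01 * rng.normal(size=10_000)

w = np.zeros(8)
lr, batch_size = 0.1, 128   # 128 matches the minibatch size discussed above

for step in range(500):
    # A minibatch: a small random subset used to approximate the full gradient.
    idx = rng.integers(0, len(X), size=batch_size)
    Xb, yb = X[idx], y[idx]
    grad = 2 / batch_size * Xb.T @ (Xb @ w - yb)   # gradient of mean squared error
    w -= lr * grad                                 # one step of gradient descent

# "Batch training", by contrast, would compute 2/len(X) * X.T @ (X @ w - y)
# over the entire dataset before every single step.
print(np.allclose(w, true_w, atol=0.05))
```

The point of the minibatch is that each cheap, noisy gradient estimate still moves the weights in roughly the right direction, so you take many more steps per unit of compute than full-batch training allows.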
Is "KGS rank" set 9 dan when it plays against Fuego?
For me, the improvement from just using a subset of the training data
was one of the most surprising results.
As far as I can tell, they use ALL the training data. That's the point.
They filter by dan, and the CNN must then have less confidence
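The distinction being drawn above can be sketched in code. Rather than discarding sub-9-dan games, the rank is fed to the network as an extra input, so every game contributes training signal while the rank input says whose moves are being imitated; at play time the rank is pinned to 9 dan. This is a hypothetical NumPy sketch under my own assumptions about shapes and encoding, not the paper's actual feature layout:

```python
import numpy as np

BOARD = 19
N_FEATURES = 4          # illustrative count of ordinary board feature planes

def make_input(board_planes: np.ndarray, kgs_rank: int) -> np.ndarray:
    """Stack board features with a constant 'rank' plane (hypothetical encoding).

    board_planes: (N_FEATURES, 19, 19) array of board features.
    kgs_rank:     1..9, the dan rank of the player whose move is imitated
                  during training, or 9 when strongest play is wanted.
    """
    rank_plane = np.full((1, BOARD, BOARD), kgs_rank / 9.0)  # normalised rank
    return np.concatenate([board_planes, rank_plane], axis=0)

planes = np.zeros((N_FEATURES, BOARD, BOARD))

# Training: every game is used, whatever the rank...
x_train = make_input(planes, kgs_rank=3)   # a 3-dan game still contributes

# ...but at test time (e.g. against Fuego) the rank input is pinned to 9 dan.
x_play = make_input(planes, kgs_rank=9)

print(x_train.shape)   # board planes plus one rank plane
```

So the network sees all positions, and the rank plane lets it modulate its predictions toward the strongest players on demand, rather than throwing the weaker games away.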
Aja wrote:
>> I hope you enjoy our work. Comments and questions are welcome.
I've just been catching up on the last few weeks, and the papers. Very
interesting :-)
I think Hiroshi's questions got missed?
Hiroshi Yamashita asked on 2014-12-20:
> I have three questions.
>
> I don't understand minibatch.
2014-12-21 3:02 GMT+00:00 Hiroshi Yamashita :
>
> I tried Fuego 1.1 (2011, Windows version) on an Intel Core i3 540,
> 2 cores / 4 threads, 3.07 GHz.
>
Thanks. You remind me we should write Fuego's version as "1.1.SVN" rather
than "1.1". In Clark's paper they tested against Fuego 1.1. So the reason
why ou
...strange.
Regards,
Hiroshi Yamashita
- Original Message -
From: "Aja Huang"
To:
Cc:
Sent: Sunday, December 21, 2014 8:16 AM
Subject: Re: [Computer-go] Move Evaluation in Go Using Deep Convolutional
Neural Networks
Hi Hiroshi,
On Sat, Dec 20, 2014 at 3:31 AM, Hiroshi Yamashita wrote:
Hi Hiroshi,
2014-12-20 3:31 GMT+00:00 Hiroshi Yamashita :
>
> But it looks like the playing strength is similar to Clark's CNN.
>
Against GnuGo our 12-layer CNN is about 300 Elo stronger (97% winning rate
against 86%, based on the same KGS games). Against Fuego, using their time
setting (10 sec per move on
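Aja's "about 300 Elo" figure can be sanity-checked from the two winning rates with the standard logistic Elo model, under which a player scoring p against an opponent is rated 400·log10(p/(1−p)) points above them. The arithmetic below is mine, not from the thread:

```python
import math

def elo_gap(p: float) -> float:
    # Invert the Elo expected-score formula: a player scoring p against an
    # opponent is rated 400*log10(p / (1 - p)) points above that opponent.
    return 400 * math.log10(p / (1 - p))

gap_12_layer = elo_gap(0.97)   # 12-layer CNN vs GnuGo
gap_clark    = elo_gap(0.86)   # Clark & Storkey's CNN vs GnuGo
print(round(gap_12_layer - gap_clark))   # close to 300, as Aja says
```

The difference comes out just under 290 Elo, consistent with the "about 300 Elo stronger" estimate.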
2014-12-20 11:33 GMT+00:00 Hiroshi Yamashita :
>
> I don't understand minibatch.
> Does the CNN need 0.15 sec for a position, or 0.15 sec for 128 positions?
>
0.15 sec for 128 positions.
> [garbled ASCII board diagram: White (O) to move; previous Black move was H5 (X)]
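The batched timing answers the per-position cost implicitly; a quick back-of-the-envelope calculation (my arithmetic, not from the thread):

```python
# 0.15 seconds for one minibatch of 128 positions, as stated above.
batch_time_s, batch_size = 0.15, 128

per_position_ms = batch_time_s / batch_size * 1000   # amortised cost per position
positions_per_s = batch_size / batch_time_s          # evaluation throughput

print(f"{per_position_ms:.2f} ms/position, {positions_per_s:.0f} positions/s")
```

So evaluating positions in batches of 128 amortises to roughly 1.2 ms per position, about 850 positions per second.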
Hi Aja,
> I hope you enjoy our work. Comments and questions are welcome.
I have three questions.
I don't understand minibatch.
Does the CNN need 0.15 sec for a position, or 0.15 sec for 128 positions?
[garbled ASCII board diagram: White (O) to move; previous Black move was H5 (X)]
Hi Aja,
> We've just submitted our paper to ICLR. We made the draft available at
> http://www.cs.toronto.edu/~cmaddis/pubs/deepgo.pdf
97.2% against GNU Go?! Accuracy is 55%?! Incredible!
Thanks for the paper!
But it looks like the playing strength is similar to Clark's CNN.
MCTS with CNN is interesting. B