Hi!
It turns out that, due to a mail server misconfiguration, three of Aja
Huang's emails from Dec 20 were not delivered to most or all subscribers:
http://computer-go.org/pipermail/computer-go/2014-December/007061.html
http://computer-go.org/pipermail/computer-go/2014-December/007
Darren wrote:
> I'm wondering if I've misunderstood, but does this mean it is the same
> as just training your CNN on the 9-dan games, and ignoring all the 8-dan
> and weaker games? (Surely the benefit of seeing more positions outweighs
> the relatively minor difference in pro player strength??)
It's ju
> Why don’t you make a dataset of the raw board positions, along with code to
> convert to Clark and Storkey planes? The data will be smaller, people can
> verify against Clark and Storkey, and they have the data to make their own
> choices about preprocessing for network inputs.
Well, a lot o
On 1/11/15, Detlef Schmicker wrote:
> Today's bot tournament nicego19n (oakfoam) played with a CNN for move
> prediction.
Blimey! You coded that quickly. Impressive! :-)
___
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/ma
> Is "KGS rank" set 9 dan when it plays against Fuego?
Aja replied:
> Yes.
I'm wondering if I've misunderstood, but does this mean it is the same
as just training your CNN on the 9-dan games, and ignoring all the 8-dan
and weaker games? (Surely the benefit of seeing more positions outweighs
the relatively minor difference in pro player strength??)
2015-01-09 23:04 GMT+00:00 Darren Cook:
> Aja wrote:
> >> I hope you enjoy our work. Comments and questions are welcome.
>
> I've just been catching up on the last few weeks, and the recent papers.
> Very interesting :-)
>
> I think Hiroshi's questions got missed?
>
I did answer Hiroshi's questions.
h
Sure,
https://bitbucket.org/dsmic/oakfoam
is my branch, but it is not as clean as the original branch (e.g. the
directory of the CNN file is hard-coded, and the autotools are not
set up for Caffe at the moment :(
But all the tools I use for training should be in script/CNN; I use Caffe.
Am
2015-01-11 15:59 GMT+00:00 Detlef Schmicker:
>
> By the way:
> Today's bot tournament nicego19n (oakfoam) played with a CNN for move
> prediction.
> It was mixed into the original gamma with some quickly optimized parameters,
> leading to a >100 Elo improvement in self-play with 2000 playouts/move. I us
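Detlef's message is truncated before the details, so the exact mixing rule is not given. As one hedged sketch, the CNN's move probabilities could be folded into the playout gammas geometrically; the function name, the `weight` exponent, and the probability floor below are all assumptions, not Detlef's actual code:

```python
import numpy as np

def mix_gammas(gamma, cnn_prob, weight=0.5):
    """Blend hand-tuned gamma values with CNN move probabilities.

    gamma    : per-move gamma values from the original playout policy
    cnn_prob : per-move probabilities from the CNN
    weight   : mixing exponent (hypothetical parameter, to be tuned)

    Geometric mixing keeps both sources multiplicative, which matches
    how gammas are usually combined; the small floor avoids zeroing
    out moves the CNN never suggests.
    """
    gamma = np.asarray(gamma, dtype=float)
    cnn_prob = np.asarray(cnn_prob, dtype=float)
    return gamma ** (1.0 - weight) * np.maximum(cnn_prob, 1e-9) ** weight

# Example: three candidate moves, normalised into selection probabilities
weights = mix_gammas([2.0, 1.0, 0.5], [0.7, 0.2, 0.1], weight=0.5)
probs = weights / weights.sum()
```

With `weight=0` this reduces to the original gammas and with `weight=1` to the pure CNN prior, so the parameter can be optimized by self-play as described above.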
Why don’t you make a dataset of the raw board positions, along with code to
convert to Clark and Storkey planes? The data will be smaller, people can
verify against Clark and Storkey, and they have the data to make their own
choices about preprocessing for network inputs.
David
A CNN that starts with a board and returns a single number will typically
have a few fully-connected layers at the end. You could make the komi an
extra input in the first one of those layers, or perhaps in each of them.
Álvaro.
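A minimal NumPy sketch of Álvaro's suggestion, with the komi appended as one extra scalar input to the first dense layer; all layer shapes and names here are illustrative, not from any actual engine:

```python
import numpy as np

def value_head(conv_features, komi, w1, b1, w2, b2):
    """Tiny fully-connected value head with komi as an extra input.

    conv_features : flattened CNN feature vector for the position
    komi          : scalar komi, e.g. 7.5
    w1, b1, w2, b2: dense-layer parameters (shapes are illustrative)
    """
    x = np.concatenate([conv_features, [komi]])  # komi as one more feature
    h = np.tanh(w1 @ x + b1)                     # hidden layer
    return np.tanh(w2 @ h + b2)                  # scalar value in (-1, 1)

# Shapes: 8 conv features + 1 komi input -> 4 hidden -> 1 output
rng = np.random.default_rng(0)
v = value_head(rng.normal(size=8), 7.5,
               rng.normal(size=(4, 9)), np.zeros(4),
               rng.normal(size=(1, 4)), np.zeros(1))
```

Repeating the komi input at each dense layer, as Álvaro also suggests, would just mean concatenating it onto `h` as well before the second matrix multiply.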
On Sun, Jan 11, 2015 at 10:59 AM, Detlef Schmicker wrote:
> Hi,
Hi,
I am planning to play around a little with a CNN for learning which side is
leading in a board position.
What would you suggest for representing the komi?
I would try an additional input plane with every point set to the value of the komi.
Any better suggestions? :)
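A minimal sketch of that extra-plane idea (shapes assumed; whether the raw komi should be rescaled before feeding it in is an open choice, not something settled in this thread):

```python
import numpy as np

def komi_plane(komi, size=19):
    """Extra input plane with every point set to the komi value."""
    return np.full((size, size), komi, dtype=np.float32)

def add_komi_plane(planes, komi):
    """Append the komi plane to an existing (n_planes, size, size) tensor."""
    return np.concatenate([planes, komi_plane(komi, planes.shape[1])[None]],
                          axis=0)

# e.g. two stone planes (black, white) plus the komi plane
stones = np.zeros((2, 19, 19), dtype=np.float32)
x = add_komi_plane(stones, 7.5)  # shape (3, 19, 19)
```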
By the way:
Today's bot tournament nicego19n (
Made a start here: https://github.com/hughperkins/kgsgo-dataset-preprocessor
- downloads the HTML page with the list of zip download URLs from KGS
- downloads the zip files, based on that HTML page
- unzips the zip files
- loads each SGF file in turn
- uses gomill to parse the SGF file, check it is 19x19, a
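The middle steps above could be sketched roughly as follows. Note the real project uses gomill for full SGF parsing; this hedged sketch only checks the SZ (board size) property with a regex, and the function name is an assumption:

```python
import io
import re
import zipfile

def iter_19x19_sgfs(zip_bytes):
    """Yield (name, text) for each 19x19 SGF file in a KGS zip archive.

    A missing SZ property defaults to 19, per the SGF spec for Go.
    """
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for name in zf.namelist():
            if not name.endswith(".sgf"):
                continue
            text = zf.read(name).decode("utf-8", errors="replace")
            m = re.search(r"SZ\[(\d+)\]", text)
            size = int(m.group(1)) if m else 19
            if size == 19:
                yield name, text

# In-memory example archive: one 19x19 game and one 9x9 game
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.sgf", "(;GM[1]FF[4]SZ[19];B[pd];W[dp])")
    zf.writestr("b.sgf", "(;GM[1]FF[4]SZ[9];B[ee])")
games = list(iter_19x19_sgfs(buf.getvalue()))
```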
Hi,
The 4th World Go Meijin Competition was held 3 days ago, and Chen won
by half a point. It was played under Chinese rules, but if it had been
played under Japanese rules, Chen would have lost by half a point,
because Japanese rules do not count territory in seki.
I wonder whether it might be interesting for the KGS 9x9 tourname
Hi,
I have the feeling that CGOS won't come back, even in the distant future,
so I was wondering if there are any alternatives?
E.g. a server that continuously lets Go engines play against each other
and then computes an Elo rating for them.
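Such a server would need a rating update after each game; the standard Elo formula is simple to implement (the K-factor of 32 below is a conventional choice, not a requirement):

```python
def elo_update(r_a, r_b, score_a, k=32.0):
    """One-game Elo update for engines A and B.

    score_a is 1.0 for an A win, 0.5 for a draw/jigo, 0.0 for a loss.
    Returns the new (r_a, r_b) pair; ratings shift by equal and
    opposite amounts, scaled by how surprising the result was.
    """
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# Equal opponents: the winner gains k/2 = 16 points
ra, rb = elo_update(1500.0, 1500.0, 1.0)  # -> (1516.0, 1484.0)
```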
Folkert van Heusden
Thinking about datasets for CNN training, of which I currently lack one
:-P Hence I've been using MNIST, partly because MNIST results are
widely known: if I train with a couple of layers and get 12% accuracy,
obviously I know I have to fix something :-P
But now, my network consistently gets