[Computer-go] Datasets for CNN training?

2015-01-11 Thread Hugh Perkins
Thinking about datasets for CNN training, of which I currently lack
one :-P  Hence I've been using MNIST, partly because MNIST results are
widely known: if I train with a couple of layers and get 12% accuracy,
I obviously know I have to fix something :-P

But now my network consistently gets up into the 97-98% range on
MNIST, even with just a layer or two, speed is ok-ish, and I probably
want to start training against 19x19 boards instead of 28x28.  The
optimization is different: on my laptop, an OpenCL workgroup can hold
a 19x19 board (361 threads, one per intersection), but 28x28 = 784
threads would exceed the workgroup size.  Unless I loop, or break into
two workgroups, or something else equally buggy, slow, and
high-maintenance :-P
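
For what it's worth, a minimal pyopencl sketch of that limit (assuming
pyopencl is available; the actual maximum is device-dependent):

    import pyopencl as cl

    device = cl.get_platforms()[0].get_devices()[0]
    max_wg = device.max_work_group_size  # e.g. 256 or 512 on many laptop GPUs

    for side in (19, 28):
        threads = side * side  # one thread per intersection/pixel
        print("{0}x{0}: {1} threads, fits in one workgroup: {2}".format(
            side, threads, threads <= max_wg))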

So, I could crop the MNIST boards down to 19x19, but whoever heard of
training on 19x19 MNIST boards?

So, it's possibly time to start hitting actual Go boards.  Many other
datasets are available in a standardized, generic format, ready to feed
into any machine learning algorithm: for example, those provided at the
libsvm website http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
, or MNIST, yann.lecun.com/exdb/mnist/ .  The Go datasets are not
(yet) available in any kind of standard format, so I'm thinking maybe
it would be useful to make one?  But there are three challenges:

1. What data to store?  Clark and Storkey planes? Raw boards? Maddison
et al planes? Something else?  For now, my answer is: something
corresponding to an actual existing paper, and Clark and Storkey's
network has the advantage of costing less than 2000 USD to train, so
that's what I'd store.
2. Copyright.  GoGoD is apparently (a) copyrighted as a collection, and
(b) compiled by hand, by painstakingly going through each game and
entering it into the computer one move at a time.  It's probably not
really likely that one could publish this, even preprocessed, as a
standard dataset?  However, the good news is that the KGS dataset
seems publicly available, and big, so maybe just use that?
3. Size.  This is where I don't have an answer yet.
- 8 million states, where each state is 8 planes * 361 locations, comes
to about 20GB :-P
- the raw sgfs only take 3KB per game, for a total of about 80MB, but
they need a lot of preprocessing, and if one were to feed each game
through, in order, that might not be the best sequence for effective
learning?
- current idea: encode one column through the planes as a single byte
(see the sketch below)?  Clark and Storkey only have 8 planes, so this
should be easy enough :-)
- which would be about 2.6GB instead
- but still kind of large to put on my web hosting :-P
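
A minimal bit-packing sketch of that idea (a hypothetical layout,
assuming numpy): the 'column' of 8 binary plane values above one
intersection packs into one byte, so a position is 361 bytes instead
of 8 * 361:

    import numpy as np

    def pack_planes(planes):
        # planes: (8, 19, 19) array of 0/1 -> (19, 19) array of uint8
        packed = np.zeros(planes.shape[1:], dtype=np.uint8)
        for p in range(8):
            packed |= (planes[p].astype(np.uint8) & 1) << p  # plane p -> bit p
        return packed

    def unpack_planes(packed):
        # inverse: (19, 19) uint8 -> (8, 19, 19) array of 0/1
        return np.array([(packed >> p) & 1 for p in range(8)], dtype=np.uint8)

    planes = np.random.randint(0, 2, size=(8, 19, 19))
    assert np.array_equal(unpack_planes(pack_planes(planes)), planes)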

I suppose a compromise might be needed, which would also partly solve
problem number 1: just provide a tool, e.g. in Python, or C, or
Cython, which takes the KGS downloads, and possibly the GoGoD
download, and transforms them into a 2.6GB dataset, ready for
training, and possibly pre-shuffled?

But this would be quite non-standard, although not unheard of;
e.g. for ImageNet there is a devkit:
http://image-net.org/challenges/LSVRC/2011/index#devkit

Maybe I will create a github project, like 'kgs-dataset-preprocessor'?
It could work something like:

   python kgs-dataset-preprocessor.py [targetdirectory]

Results:
- the datasets are downloaded from http://u-go.net/gamerecords/
- decompressed
- loaded one at a time, and processed into a 2.5GB datafile, in
sequence (clients can handle shuffling themselves, I suppose?)

Thoughts?

Hugh

[Computer-go] alternative for cgos

2015-01-11 Thread folkert
Hi,

I have the feeling that CGOS won't come back even in the distant
future, so I was wondering if there are any alternatives?
E.g. a server that constantly lets Go engines play against each other
and then determines an Elo rating for them.


Folkert van Heusden

-- 
Afraid of irssi? Scared of bitchx? Does xchat give you bad shivers?
In all these cases take a look at http://www.vanheusden.com/fi/ maybe
even try it or use it for all your day-to-day IRC conversations!
---
Phone: +31-6-41278122, PGP-key: 1F28D8AE, www.vanheusden.com

[Computer-go] Different result Chinese and Japanese rule

2015-01-11 Thread Hiroshi Yamashita

Hi,

The 4th World Go Meijin Competition was held 3 days ago, and Chen won
by half a point. It was played under Chinese rules, but under
Japanese rules Chen would have lost by half a point, because
Japanese rules do not count territory in seki.

I wonder if it might be interesting to hold a KGS 9x9 tournament under
Japanese rules once a year.


19.XXO...
18OXOXXO...O..XXO
17X...XO.   Final position B+0.5.
16..OOXXOOXXXXXOO
15...OX.XXXOOXOXX.XXO
14..OOXXOOOXX..OXX..X
13...OOO.O.O.O.OO
12X.O.O.O.XOO
11...OOO.
10XOX..XOXOXO.OXXOOOX   Right side is seki.
9O.O..OXXOXXOOXXXOX.   T-9 is territory under Chinese rules.
8.OO.OOXXOXX
7OXXOXOXXOO.OOOXOOO.
6OXOOXOOOXXXOXOO   Aya cannot understand that T-9 is not
5..OXOX..XXXOXO.XXOO   territory under Japanese rules, because
4OX.XX...XXO   B fills T-9 in playouts.
3OXOXOXX..XO
2OX...XX
1X..
 ABCDEFGHJKLMNOPQRST


(;GM[1]SZ[19]
PB[Chen Yaoye]
PW[Iyama Yuta]
DT[2015-01-08]RE[B+0.5]KM[7.5]RU[Chinese]
PC[China, Xi'an]EV[4th World Meijin]GN[Final]
;B[pd];W[dp];B[qp];W[dd];B[fq];W[cn];B[kq];W[qf];B[pi];W[qc]
;B[qd];W[pc];B[od];W[rd];B[re];W[rc];B[qe];W[nc];B[po];W[rf]
;B[sf];W[rh];B[pf];W[qj];B[qh];W[qi];B[rg];W[ph];B[qg];W[qm]
;B[om];W[pj];B[ko];W[cf];B[fo];W[eq];B[fr];W[dl];B[gl];W[mk]
;B[ok];W[ln];B[ll];W[lo];B[lp];W[kn];B[jo];W[no];B[mp];W[lk]
;B[mo];W[kl];B[mn];W[fk];B[cj];W[ck];B[db];W[fb];B[cc];W[cd]
;B[ec];W[dc];B[eb];W[cb];B[ed];W[gd];B[ef];W[gf];B[eh];W[dg]
;B[gg];W[eg];B[ff];W[fg];B[ge];W[hf];B[he];W[if];B[ie];W[je]
;B[jf];W[jg];B[kf];W[gh];B[jd];W[ke];B[le];W[kd];B[kc];W[ld]
;B[lc];W[md];B[ic];W[lg];B[gk];W[fl];B[gm];W[gj];B[hj];W[hi]
;B[dr];W[pn];B[qn];W[rn];B[qo];W[pm];B[on];W[cq];B[nb];W[oc]
;B[mc];W[nd];B[fj];W[gi];B[cr];W[ro];B[rp];W[ij];B[bq];W[bp]
;B[ar];W[eo];B[fn];W[ob];B[mb];W[im];B[bk];W[bl];B[in];W[og]
;B[ne];W[nf];B[of];W[ng];B[nj];W[fe];B[fd];W[sh];B[oe];W[bc]
;B[em];W[el];B[jl];W[jm];B[kk];W[ml];B[pl];W[ql];B[km];W[lm]
;B[hk];W[kl];B[jk];W[li];B[jj];W[ii];B[hm];W[me];B[ki];W[kh]
;B[mi];W[mj];B[so];W[rm];B[oi];W[mh];B[oj];W[sn];B[sj];W[sp]
;B[rk];W[qk];B[rq];W[ri];B[sl];W[sq];B[sr];W[so];B[rr];W[de]
;B[ee];W[ik];B[il];W[jn];B[en];W[ap];B[dn];W[co];B[dq];W[ep]
;B[cp];W[nm];B[nn];W[cq];B[fp];W[ji];B[cp];W[hn];B[io];W[cq]
;B[cm];W[dm];B[cp];W[jb];B[jc];W[cq];B[ig];W[hg];B[cp];W[nl]
;B[ol];W[cq];B[hh];W[ih];B[cp];W[hd];B[id];W[cq];B[rl];W[rj]
;B[cp];W[fm];B[gn];W[cq];B[bn];W[cp];B[bm];W[cl];B[al];W[bj]
;B[aj];W[ak];B[ca];W[am];B[bb];W[kj];B[hl];W[cc];B[oa];W[pa]
;B[na];W[ab];B[ba];W[er];B[es];W[aq];B[br];W[an];B[ni];W[sd]
;B[pg];W[oh];B[da];W[df];B[aa];W[ac];B[do];W[nh];B[sg];W[se]
;B[nk];W[mm];B[pk])

Regards,
Hiroshi Yamashita


Re: [Computer-go] Datasets for CNN training?

2015-01-11 Thread Hugh Perkins
Made a start here: https://github.com/hughperkins/kgsgo-dataset-preprocessor
- downloads the html page with the list of download zip urls from kgs
- downloads the zip files, based on the html page
- unzips the zip files
- loads each sgf file in turn
- uses gomill to parse the sgf file, and checks that it is 19x19 with no handicap

... and on the other hand, I created some classes to handle the mechanics
of a Go game:
- GoBoard: represents a go board; it can apply moves, handles captures,
detects ko, and contains GoStrings
- a GoString is a string of contiguous stones of the same color; it
also holds a full list of all its liberties
- Bag2d is a double-indexed bag of 2d locations:
   - given any location, it knows whether it is in the bag or not, in O(1)
   - it can iterate the locations, O(1) per location iterated
   - it can erase a location in O(1)
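
A minimal sketch of the Bag2d idea (not the actual implementation,
just the classic swap-with-last trick):

    class Bag2d(object):
        def __init__(self):
            self.items = []   # list of (row, col) locations
            self.pos = {}     # (row, col) -> index into self.items

        def __contains__(self, loc):        # O(1) membership test
            return loc in self.pos

        def add(self, loc):
            if loc not in self.pos:
                self.pos[loc] = len(self.items)
                self.items.append(loc)

        def erase(self, loc):               # O(1): swap with last, then pop
            i = self.pos.pop(loc)
            last = self.items.pop()
            if i < len(self.items):
                self.items[i] = last
                self.pos[last] = i

        def __iter__(self):                 # O(1) per location iterated
            return iter(self.items)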

... so now I just need to link these together and pump out the binary data file.


On 1/11/15, Hugh Perkins  wrote:
> [original message quoted in full; snipped]

[Computer-go] Representing Komi for neural network

2015-01-11 Thread Detlef Schmicker

Hi,

I am planning to play around a little with CNNs for learning who is
leading in a given board position.


What would you suggest to represent the komi?

I would try an additional input layer with every point having the value of the komi.
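
E.g. a quick numpy sketch of that representation (the black/white
planes here are just placeholders for whatever feature set is used):

    import numpy as np

    def build_input(black_plane, white_plane, komi):
        # black_plane, white_plane: (19, 19) arrays of 0/1
        komi_plane = np.full((19, 19), komi, dtype=np.float32)  # constant komi
        return np.array([black_plane, white_plane, komi_plane],
                        dtype=np.float32)   # shape (3, 19, 19)

    x = build_input(np.zeros((19, 19)), np.zeros((19, 19)), komi=7.5)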

Any better suggestions? :)


By the way:
Today's bot tournament nicego19n (oakfoam) played with a CNN for move
prediction. It was mixed into the original gamma with some quickly
optimized parameters, leading to a >100 Elo improvement in selfplay
with 2000 playouts/move. I used the Clark and Storkey network, but with
no additional features (only a black and a white layer). I trained it on
6 kgs games and reached about a 41% prediction rate. I have no delayed
evaluation, as I don't evaluate mini-batches but only one position at a
time, taking about 1.6ms on the GTX-970. A little delay might happen
anyway, as only one evaluation is done at once and other threads may go
on playing while one thread is doing the CNN. We have quite slow
playouts anyway, so I had around 7 playouts/move during the game.


If you want to get an impression of how such a bot plays, have a look
at the games :)


Detlef

Re: [Computer-go] Representing Komi for neural network

2015-01-11 Thread Álvaro Begué
A CNN that starts with a board and returns a single number will
typically have a few fully-connected layers at the end. You could make
the komi an extra input to the first of those layers, or perhaps to
each of them.
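
A minimal numpy sketch of this (shapes and the ReLU are placeholders,
not a specific architecture):

    import numpy as np

    def fc_with_komi(features, komi, W, b):
        # features: flattened conv output, shape (n,); W: (m, n+1); b: (m,)
        x = np.concatenate([features, [komi]])   # komi as one extra input
        return np.maximum(W.dot(x) + b, 0.0)     # fully-connected + ReLU

    n, m = 361, 64
    h = fc_with_komi(np.random.rand(n), 7.5,
                     np.random.randn(m, n + 1), np.zeros(m))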

Álvaro.



On Sun, Jan 11, 2015 at 10:59 AM, Detlef Schmicker wrote:
> [original message quoted in full; snipped]

Re: [Computer-go] Datasets for CNN training?

2015-01-11 Thread David Fotland
Why don't you make a dataset of the raw board positions, along with
code to convert them to Clark and Storkey planes?  The data will be
smaller, people can verify against Clark and Storkey, and they will
have what they need to make their own choices about preprocessing for
network inputs.
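
For example (one possible raw encoding, not a spec: one byte per
point, so a position is 361 bytes, and planes are derived on the
consumer's side):

    import numpy as np

    EMPTY, BLACK, WHITE = 0, 1, 2

    def to_color_planes(raw):
        # raw: (19, 19) uint8 of {0, 1, 2} -> (2, 19, 19) binary planes
        return np.array([raw == BLACK, raw == WHITE], dtype=np.uint8)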

David

> -----Original Message-----
> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On
> Behalf Of Hugh Perkins
> Sent: Sunday, January 11, 2015 12:24 AM
> To: computer-go
> Subject: [Computer-go] Datasets for CNN training?
>
> [original message quoted in full; snipped]


Re: [Computer-go] Representing Komi for neural network

2015-01-11 Thread Aja Huang
2015-01-11 15:59 GMT+00:00 Detlef Schmicker:
> [original message quoted in full; snipped]
>

Congrats on oakfoam's significant improvement with the CNN. The game
where oakfoam beat HiraBot [1d] is very nice:

http://files.gokgs.com/games/2015/1/11/HiraBot-NiceGo19N.sgf

Would you release the newest version of oakfoam and the CNN? I couldn't
find your git or svn repository at

http://oakfoam.com/#downloads

Aja

Re: [Computer-go] Representing Komi for neural network

2015-01-11 Thread Detlef Schmicker

Sure,

https://bitbucket.org/dsmic/oakfoam

is my branch, but it is not as clean as the original branch (e.g. the
directory of the CNN file is hard-coded, and the autotools are not
prepared for caffe at the moment :(

But all the tools I use for training should be in script/CNN; I use caffe.


On 11.01.2015 at 22:41, Aja Huang wrote:
> [quoted exchange snipped]



Re: [Computer-go] Move Evaluation in Go Using Deep Convolutional NeuralNetworks

2015-01-11 Thread Aja Huang
2015-01-09 23:04 GMT+00:00 Darren Cook :

> Aja wrote:
> >> I hope you enjoy our work. Comments and questions are welcome.
>
> I've just been catching up on the last few weeks, and its papers. Very
> interesting :-)
>
> I think Hiroshi's questions got missed?
>

I did answer Hiroshi's questions.

http://computer-go.org/pipermail/computer-go/2014-December/007063.html

Aja

Re: [Computer-go] Move Evaluation in Go Using Deep Convolutional NeuralNetworks

2015-01-11 Thread Darren Cook
> Is "KGS rank" set 9 dan when it plays against Fuego?

Aja replied:
> Yes.

I'm wondering if I've misunderstood, but does this mean it is the same
as just training your CNN on the 9-dan games, and ignoring all the 8-dan
and weaker games? (Surely the benefit of seeing more positions outweighs
the relatively minor difference in pro player strength??)

Darren

P.S.

> I did answer Hiroshi's questions.
> 
> http://computer-go.org/pipermail/computer-go/2014-December/007063.html

Thanks Aja! It seems you wrote three in a row, and I only got the first
one. I did a side-by-side check from Dec 15 to Dec 31, and I got every
other message. So perhaps it was just a problem on my side, for those
two messages.


Re: [Computer-go] Representing Komi for neural network

2015-01-11 Thread Hugh Perkins
On 1/11/15, Detlef Schmicker  wrote:
> Todays bot tournament nicego19n (oakfoam) played with a CNN for move
> prediction.

Blimey!  You coded that quickly.  Impressive! :-)

Re: [Computer-go] Datasets for CNN training?

2015-01-11 Thread Hugh Perkins
> Why don’t you make a dataset of the raw board positions, along with code to 
> convert to Clark and Storkey planes?  The data will be smaller, people can 
> verify against Clark and Storkey, and they have the data to make their own 
> choices about preprocessing for network inputs.

Well, a lot of the data is dynamic, e.g. 'moves since last move', and
cannot be obtained by looking at a single, isolated position.  The
most compact way of representing the required information is, in fact,
the sgf files themselves...

What I'm thinking of doing is making the generated layers options to
the script, like "I want 3 layers for liberties, no matter which side,
and one layer for illegal moves, and ... etc", something like that?
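
E.g. a rough sketch of what such options could look like (option names
invented for illustration):

    import argparse

    parser = argparse.ArgumentParser(description='kgs dataset preprocessor')
    parser.add_argument('targetdirectory')
    parser.add_argument('--liberty-planes', type=int, default=3,
                        help='number of liberty-count planes, either side')
    parser.add_argument('--illegal-plane', action='store_true',
                        help='add one plane marking illegal moves')
    args = parser.parse_args()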

As for downloading the data, all the sgfs: the script already does
that.  Actually, the script is pretty much finished as far as the
Clark and Storkey layers go; it just needs a bit of debugging...

Re: [Computer-go] Move Evaluation in Go Using Deep Convolutional NeuralNetworks

2015-01-11 Thread Hugh Perkins
Darren wrote:
> I'm wondering if I've misunderstood, but does this mean it is the same
> as just training your CNN on the 9-dan games, and ignoring all the 8-dan
> and weaker games? (Surely the benefit of seeing more positions outweighs
> the relatively minor difference in pro player strength??)

It's just additional data fed into the neural net (via 9 full layers
in fact :-O), so the net can decide to what extent the data it saw
from 2-dan or 1-dan games is useful for predicting the next move in
9-dan games.
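
Presumably something like a one-hot encoding over ranks (a sketch
under that assumption; the paper's exact scheme may differ):

    import numpy as np

    def rank_planes(rank_dan):
        # rank_dan in 1..9 -> (9, 19, 19) planes; the rank's plane is all ones
        planes = np.zeros((9, 19, 19), dtype=np.float32)
        planes[rank_dan - 1] = 1.0
        return planes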

[Computer-go] [ADMIN] Three lost emails by Aja Huang on Dec 20

2015-01-11 Thread Petr Baudis
  Hi!

  It turns out that, due to a mail server misconfiguration, three of Aja
Huang's emails on Dec 20 were not delivered to most or all subscribers:

http://computer-go.org/pipermail/computer-go/2014-December/007061.html
http://computer-go.org/pipermail/computer-go/2014-December/007062.html
http://computer-go.org/pipermail/computer-go/2014-December/007063.html

Please read them via the web archive, and my sincere apologies.


  Thanks to Darren Cook + Aja Huang for noticing:

On Sun, Jan 11, 2015 at 10:32:53PM +, Darren Cook wrote:
> P.S.
> 
> > I did answer Hiroshi's questions.
> > 
> > http://computer-go.org/pipermail/computer-go/2014-December/007063.html
> 
> Thanks Aja! It seems you wrote three in a row, and I only got the first
> one. I did a side-by-side check from Dec 15 to Dec 31, and I got every
> other message. So perhaps it was just a problem on my side, for those
> two messages.


  P.S.: What happened? My home server pasky.or.cz was offline on Dec 20
between 13:57 and ~15:30 UTC for some hardware upgrades - related to my
other project https://github.com/brmson/yodaqa ;-).  Unfortunately, the
computer-go.org mail server did not have a proper reverse DNS record
configured for its IP address early on, so to enable reliable delivery
I had to relay all email via my server pasky.or.cz, using the
`relayhost = pasky.or.cz` postfix directive.
  Unfortunately, that directive turns out not to relay via pasky.or.cz
itself, but via pasky.or.cz's MX record - which is typically pasky.or.cz
again, so it would appear to work, except when pasky.or.cz was down, as
it was at that time.  The backup MX, engine.or.cz, didn't know anything
about the relay arrangement, so it naturally refused to relay any of
those mailing list emails, and they were discarded with a permanent
delivery error (except the first one, for at least some people, since
pasky.or.cz was actually in the middle of shutting down while that one
was being relayed).
  I have now fixed the error; the lesson is to use `relayhost =
[pasky.or.cz]` to really relay to a host instead of via its MX records.
No other emails were lost due to this problem, as far as I can grep.
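
For illustration, the two forms side by side (a sketch of the main.cf
lines, not the actual configuration; only one relayhost would be set
at a time):

    # /etc/postfix/main.cf
    # relays via the MX records of pasky.or.cz:
    relayhost = pasky.or.cz
    # brackets suppress the MX lookup and deliver to the host itself:
    relayhost = [pasky.or.cz]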

  P.P.S.: It seems that computer-go.org's reverse DNS record has
actually been fixed by now, so I should be able to remove the relay
hack when time permits.

-- 
Petr Baudis
If you do not work on an important problem, it's unlikely
you'll do important work.  -- R. Hamming
http://www.cs.virginia.edu/~robins/YouAndYourResearch.html