>> [...] nets take about 5 days to train (about 20 epochs on about 30M
>> positions). The last few percent is just trial and error. Sometimes
>> making the net wider or deeper makes it weaker. Perhaps it's just
>> variation from one training run to another. I haven't trained the
>> same net more than once.
>>
>> David

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of
Brian Lee
Sent: Tuesday, August 23, 2016 7:00 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Converging to 57%
I've been working on my own AlphaGo replication (code on github
https://github.com/brilee/MuGo), and I've found it reasonably easy to hit
45% prediction rate with basic features (stone locations, liberty counts,
and turns since last move), and a relatively small network (6 intermediate
layers, 32 filters).
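
For anyone who wants a concrete picture of a network in that size class,
here is a rough sketch in tf.keras (my own, not MuGo's actual code). Only
the "6 intermediate layers, 32 filters" part comes from the message above;
the number of input feature planes, the 3x3 kernels and the 1x1 output
head are assumptions on my part.

import tensorflow as tf

BOARD_SIZE = 19
NUM_PLANES = 12   # assumed: however many planes the basic features need

def build_policy_net():
    inputs = tf.keras.Input(shape=(BOARD_SIZE, BOARD_SIZE, NUM_PLANES))
    x = inputs
    for _ in range(6):                            # "6 intermediate layers"
        x = tf.keras.layers.Conv2D(32, 3,         # "32 filters", 3x3 assumed
                                   padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2D(1, 1)(x)           # collapse to a single plane
    x = tf.keras.layers.Flatten()(x)              # 361 logits, one per point
    return tf.keras.Model(inputs, tf.keras.layers.Softmax()(x))

model = build_policy_net()
model.compile(optimizer="sgd",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])               # accuracy = prediction rate
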
There are situations where carefully crafting the minibatches makes sense.
For instance, if you are training an image classifier it is good to build
the minibatches so the classes are evenly represented. In the case of
predicting the next move in Go, I don't expect this kind of thing will make
much of a difference.
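
For what it's worth, here is a rough sketch of the "evenly represented
classes" trick in the image-classifier case; the names and shapes are
illustrative, not from anyone's actual code. For Go move prediction the
"classes" would be the 361 board points, so a direct analogue is less
obvious.

import numpy as np

def balanced_minibatch(examples_by_class, per_class, rng):
    """examples_by_class: dict mapping class label -> ndarray of examples."""
    xs, ys = [], []
    for label, pool in examples_by_class.items():
        picks = rng.choice(len(pool), size=per_class, replace=False)
        xs.append(pool[picks])
        ys.extend([label] * per_class)
    return np.concatenate(xs), np.array(ys)

# e.g. balanced_minibatch(data, per_class=8, rng=np.random.default_rng(0))
# returns 8 examples of every class per batch, however skewed the dataset is.
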
On 23/08/2016 11:26, Brian Sheppard wrote:
> The learning rate seems much too high. My experience (which is from
> backgammon rather than Go, among other caveats) is that you need tiny
> learning rates. Tiny, as in 1/TrainingSetSize.
I think that's overkill, as in you effectively end up doing batch
gradient descent.
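
To put numbers on the "overkill" objection (my arithmetic, assuming
TrainingSetSize means the roughly 30M positions mentioned earlier in the
thread):

N = 30_000_000          # training positions (figure from earlier in the thread)
lr = 1.0 / N            # the suggested per-example learning rate, about 3.3e-8
# Over one full pass of per-example SGD the weights move by the sum of N
# per-example gradients, each scaled by 1/N, i.e. roughly one step along the
# average gradient. That is about what a single full-batch update would do,
# which is why such a tiny rate effectively turns SGD into batch gradient
# descent.
print(lr)               # about 3.3e-08
print(lr * N)           # 1.0, the total step "budget" per epoch
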
[...] bet that the Google & FB results are just their final runs.

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of
Robert Waite
Sent: Tuesday, August 23, 2016 2:40 AM
To: computer-go@computer-go.org
Subject: [Computer-go] Converging to 57%

I had subscribed to this mailing list back with MoGo... and remember
probably arguing that the game of go wasn't going to be beaten for years
and years. I'm a little late to the game now, but I was curious whether
anyone here has worked with supervised learning networks like in the
AlphaGo paper. I have be[...]

From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf Of
Gian-Carlo Pascutto
Sent: Tuesday, August 23, 2016 12:42 AM
To: computer-go@computer-go.org
Subject: Re: [Computer-go] Converging to 57%

On 23-08-16 08:57, Detlef Schmicker wrote:
> So, if somebody is sure it is measured against GoGod, I think a
> number of other Go programmers have to think again. I heard them
> reaching 51% (e.g. posts by Hiroshi in this list).

I trained a 128 x 14 network for Leela 0.7.0 and this gets 51.1%.
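
As an aside, the prediction rates being compared in this thread are just
top-1 accuracy against the move actually played in the game record. A
minimal sketch of that measurement (the arrays are placeholders, not
Leela's or anyone else's evaluation code):

import numpy as np

def prediction_rate(policy_probs, expert_moves):
    """policy_probs: (n_positions, 361) move probabilities from the net.
    expert_moves: (n_positions,) index of the move played in each position."""
    predicted = np.argmax(policy_probs, axis=1)
    return float(np.mean(predicted == expert_moves))
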
Hi,

Good to start this discussion here. I have had this discussion a few
times, and we (my discussion partner and I) were not sure which test
set the 57% was measured against.

If trained and tested on the KGS 6d+ dataset, it seems reasonable to
reach 57% (I re[...]