>... my value network was trained to tell me the game is balanced at the
>beginning...
:-)
The best training policy is to select positions that correct errors.
I used the policies below to train a backgammon NN. Together, they reduced the
expected loss of the network by 50% (cut the error rate…
Finally found the problem. In the end, it was as stupid as expected:
when I pick a game for batch creation, I randomly select a limited
number of moves inside the game. In the case of the value network I use
something like 8-16 moves so as not to overfit the data (I can't take just 1,
or the I/O operations…
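
Read literally, that sampling scheme could look like the sketch below. The
8-16 range comes from the description above; the Game container with
.positions and .outcome fields is just an assumption for illustration:

import random

def sample_value_batch(games, batch_size=256, moves_per_game=(8, 16)):
    # Draw only a handful of positions per game so a single game's result
    # cannot dominate the batch (the overfitting concern mentioned above).
    batch = []
    while len(batch) < batch_size:
        game = random.choice(games)              # pick a game at random
        k = random.randint(*moves_per_game)      # 8-16 positions from that game
        k = min(k, len(game.positions), batch_size - len(batch))
        for pos in random.sample(game.positions, k):
            batch.append((pos, game.outcome))    # label each position with the final result
    random.shuffle(batch)
    return batch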
On 19/06/2017 21:31, Vincent Richard wrote:
> - The data is then analyzed by a script which extracts all kinds of
> features from games. When I'm training a network, I load the features I
> want from this analysis to build the batch. I have 2 possible methods
> for the batch construction. I can eith…
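
A rough sketch of what that two-stage pipeline could look like: the file name
features.pkl, the parse_sgf/extract_features helpers and the feature keys are
hypothetical, only the extract-once / load-per-batch split comes from the
quoted description:

import pickle
import random

def analyze_games(sgf_paths, parse_sgf, extract_features):
    # Offline analysis step: turn every game into a dict of features once.
    dataset = [extract_features(parse_sgf(p)) for p in sgf_paths]
    with open("features.pkl", "wb") as f:
        pickle.dump(dataset, f)

def load_batch(feature_names, batch_size=256):
    # Training step: reload the stored analysis and keep only the features
    # the current network actually needs.
    with open("features.pkl", "rb") as f:
        dataset = pickle.load(f)
    sample = random.sample(dataset, min(batch_size, len(dataset)))
    return [{name: game[name] for name in feature_names} for game in sample]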
This is what I have been thinking about, yet I have been unable to find an error.
Currently, I'm working with:
- SGF Database: fuseki info Tygem -> http://tygem.fuseki.info/index.php
(until recently I was working with games of all levels from KGS)
- The data is then analyzed by a script which extracts all k…
On 19-06-17 17:38, Vincent Richard wrote:
> During my research, I’ve trained a lot of different networks, first on
> 9x9 then on 19x19, and as far as I remember all the nets I’ve worked
> with learned quickly (especially during the first batches), except the
> value net, which has always been problematic…
Hello everyone,
For my master thesis, I have built an AI that takes a strategic approach
to the game. It doesn't play but simply describes the strategy behind each
possible move in a given position ("enclosing this group", "making life
for this group", "saving these stones", etc.). My main idea…
>> layers { name: "relu10" type: RELU bottom: "conv10" top: "conv10" }
>>
>> layers { name: "conv11_3x3_128" type: CONVOLUTION blobs_lr: 1.
>> blobs_lr: 2. bottom: "conv10" top: "conv11" convolution_param {
>> num_output: 128 ...
----- Original Message -----
From: "Aja Huang"
To:
Sent: Saturday, March 19, 2016 10:25 PM
Subject: Re: [Computer-go] Value Network
Good stuff, Hiroshi. Looks like I don't need to answer the questions
regarding value network. :)
Aja
stant"
}
}
}
layers {
name: "relu11"
type: RELU
bottom: "conv11"
top: "conv11"
}
layers {
name: "conv12_1x1_1"
type: CONVOLUTION
blobs_lr: 1.
blobs_lr: 2.
bottom: "conv11"
top: "conv12"
convolution_param {
num_outpu
>   type: "xavier"
> }
> bias_filler {
>   type: "constant"
> }
> }
> }
> layers {
>   name: "relu10"
>   type: RELU
>   bottom: "conv10"
>   top: "conv10"
> }
>
> layers {
>   name: ...
What are you using for loss?
Something like this:
layers {
  name: "loss4"
  type: EUCLIDEAN_LOSS
  loss_weight: 2.0
  bottom: "vvv"
  bottom: "pool2"
  top: "accloss4"
}
?
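
For what it's worth, Caffe's EUCLIDEAN_LOSS is the summed squared difference
divided by 2N. A plain-Python check of what that layer computes (the array
contents below are made-up examples):

import numpy as np

def euclidean_loss(pred, target, loss_weight=2.0):
    # Caffe EuclideanLoss: sum((pred - target)^2) / (2 * batch_size),
    # scaled by the loss_weight from the snippet above.
    pred = np.asarray(pred, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    return loss_weight * np.sum((pred - target) ** 2) / (2.0 * pred.shape[0])

# e.g. a value output in [-1, 1] against final game results
print(euclidean_loss([0.3, -0.8], [1.0, -1.0]))   # -> 0.265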
On 04.03.2016 at 16:23, Hiroshi Yamashita wrote:
> Hi,
>
> I tried to make a Value network.
Hi,
thanks a lot for sharing! I'm trying a slightly different approach at the
moment:
I use a combined policy / value network (adding 3-5 layers with about
16 filters at the end of the policy network for the value network, to
avoid overfitting) and I use t…
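
A sketch of that kind of combined net: the small 16-filter value tail of a few
layers follows the description above, while the trunk depth, filter counts and
board size are placeholders; PyTorch is used here purely for illustration:

import torch
import torch.nn as nn

class PolicyValueNet(nn.Module):
    # Shared convolutional trunk with a policy head, plus a small value tail
    # bolted onto the end of the trunk to limit overfitting of the value part.
    def __init__(self, board=19, planes=8, trunk_filters=64, trunk_layers=6):
        super().__init__()
        layers, c = [], planes
        for _ in range(trunk_layers):                    # shared policy trunk
            layers += [nn.Conv2d(c, trunk_filters, 3, padding=1), nn.ReLU()]
            c = trunk_filters
        self.trunk = nn.Sequential(*layers)
        self.policy = nn.Conv2d(trunk_filters, 1, 1)     # one logit per board point
        self.value_tail = nn.Sequential(                 # a few small 16-filter layers
            nn.Conv2d(trunk_filters, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )
        self.value_fc = nn.Linear(board * board, 1)

    def forward(self, x):
        h = self.trunk(x)
        policy_logits = self.policy(h).flatten(1)        # (N, board*board)
        v = self.value_tail(h).flatten(1)
        value = torch.tanh(self.value_fc(v))             # predicted outcome in [-1, 1]
        return policy_logits, value

Sharing the trunk means the value tail only adds a handful of parameters on
top of the policy network, which is the overfitting argument made above.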
Hi,
I tried to make a Value network.

"Policy network + Value network" vs "Policy network"

  Winrate   Wins/Games
  70.7%     322 / 455,   1000 playouts/move
  76.6%     141 / 184,   1… playouts/move

It seems that with more playouts, the Value network is more effective. The
number of games is not enough though. Search is simil…