Aja, could you please explain the discrepancies between your loss
values in the text and in the figures?

Detlef

On 19.03.2016 at 14:25, Aja Huang wrote:
> Good stuff, Hiroshi. Looks like I don't need to answer the
> questions regarding the value network. :)
> 
> Aja
> 
> On Sat, Mar 19, 2016 at 9:23 PM, Hiroshi Yamashita
> <y...@bd.mbn.or.jp> wrote:
> 
>>> What are you using for loss?
>>
>> I use this,
>> 
>> layers { name: "loss" type: EUCLIDEAN_LOSS bottom: "fc14" bottom: "label" top: "loss" }
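>> (For reference, Caffe's EUCLIDEAN_LOSS computes (1/2N) * sum of
>> squared differences over a batch of N samples. Below is a minimal
>> NumPy sketch of that loss on a tanh-squashed output against winrate
>> labels in [-1, +1]; the sample numbers are made up for illustration:)

```python
import numpy as np

def euclidean_loss(pred, label):
    """Caffe-style Euclidean loss: (1 / 2N) * sum((pred - label)^2)."""
    n = pred.shape[0]  # batch size N
    return float(np.sum((pred - label) ** 2) / (2.0 * n))

# tanh squashes fc14's raw outputs into [-1, +1], matching the labels
pred = np.tanh(np.array([0.3, -1.2, 0.8, 2.0]))   # illustrative raw outputs
label = np.array([0.5, -1.0, 1.0, 1.0])           # illustrative winrate labels
print(euclidean_loss(pred, label))
```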
>> 
>> --------------------------------------------------------
>> name: "AyaNet"
>> layers {
>>   name: "mnist" type: DATA top: "data"
>>   data_param {
>>     source: "train_i50_v_2k_leveldb"
>>     #    backend: LMDB
>>     batch_size: 256
>>   }
>>   include: { phase: TRAIN }
>> }
>> layers {
>>   name: "mnist" type: HDF5_DATA top: "label"
>>   hdf5_data_param {
>>     source: "/home/yss/test/train_v_2k_i50_11_only_hdf5/aya_data_list.txt"
>>     batch_size: 256
>>   }
>>   include: { phase: TRAIN }
>> }
>> layers {
>>   name: "mnist" type: DATA top: "data"
>>   data_param {
>>     source: "test_i50_v_2k_leveldb"
>>     #    backend: LMDB
>>     batch_size: 256
>>   }
>>   include: { phase: TEST }
>> }
>> layers {
>>   name: "mnist" type: HDF5_DATA top: "label"
>>   hdf5_data_param {
>>     source: "/home/yss/test/test_v_2k_i50_11_only_hdf5/aya_data_list.txt"
>>     batch_size: 256
>>   }
>>   include: { phase: TEST }
>> }
>>
>> # this part should be the same in learning and prediction network
>> layers {
>>   name: "conv1_5x5_128" type: CONVOLUTION blobs_lr: 1. blobs_lr: 2.
>>   bottom: "data" top: "conv1"
>>   convolution_param { num_output: 128 kernel_size: 5 pad: 2
>>     weight_filler { type: "xavier" } bias_filler { type: "constant" } }
>> }
>> layers { name: "relu1" type: RELU bottom: "conv1" top: "conv1" }
>>
>> layers {
>>   name: "conv2_3x3_128" type: CONVOLUTION blobs_lr: 1. blobs_lr: 2.
>>   bottom: "conv1" top: "conv2"
>>   convolution_param { num_output: 128 kernel_size: 3 pad: 1
>>     weight_filler { type: "xavier" } bias_filler { type: "constant" } }
>> }
>> layers { name: "relu2" type: RELU bottom: "conv2" top: "conv2" }
>>
>> layers {
>>   name: "conv3_3x3_128" type: CONVOLUTION blobs_lr: 1. blobs_lr: 2.
>>   bottom: "conv2" top: "conv3"
>>   convolution_param { num_output: 128 kernel_size: 3 pad: 1
>>     weight_filler { type: "xavier" } bias_filler { type: "constant" } }
>> }
>> layers { name: "relu3" type: RELU bottom: "conv3" top: "conv3" }
>>
>> layers {
>>   name: "conv4_3x3_128" type: CONVOLUTION blobs_lr: 1. blobs_lr: 2.
>>   bottom: "conv3" top: "conv4"
>>   convolution_param { num_output: 128 kernel_size: 3 pad: 1
>>     weight_filler { type: "xavier" } bias_filler { type: "constant" } }
>> }
>> layers { name: "relu4" type: RELU bottom: "conv4" top: "conv4" }
>>
>> layers {
>>   name: "conv5_3x3_128" type: CONVOLUTION blobs_lr: 1. blobs_lr: 2.
>>   bottom: "conv4" top: "conv5"
>>   convolution_param { num_output: 128 kernel_size: 3 pad: 1
>>     weight_filler { type: "xavier" } bias_filler { type: "constant" } }
>> }
>> layers { name: "relu5" type: RELU bottom: "conv5" top: "conv5" }
>>
>> layers {
>>   name: "conv6_3x3_128" type: CONVOLUTION blobs_lr: 1. blobs_lr: 2.
>>   bottom: "conv5" top: "conv6"
>>   convolution_param { num_output: 128 kernel_size: 3 pad: 1
>>     weight_filler { type: "xavier" } bias_filler { type: "constant" } }
>> }
>> layers { name: "relu6" type: RELU bottom: "conv6" top: "conv6" }
>>
>> layers {
>>   name: "conv7_3x3_128" type: CONVOLUTION blobs_lr: 1. blobs_lr: 2.
>>   bottom: "conv6" top: "conv7"
>>   convolution_param { num_output: 128 kernel_size: 3 pad: 1
>>     weight_filler { type: "xavier" } bias_filler { type: "constant" } }
>> }
>> layers { name: "relu7" type: RELU bottom: "conv7" top: "conv7" }
>>
>> layers {
>>   name: "conv8_3x3_128" type: CONVOLUTION blobs_lr: 1. blobs_lr: 2.
>>   bottom: "conv7" top: "conv8"
>>   convolution_param { num_output: 128 kernel_size: 3 pad: 1
>>     weight_filler { type: "xavier" } bias_filler { type: "constant" } }
>> }
>> layers { name: "relu8" type: RELU bottom: "conv8" top: "conv8" }
>>
>> layers {
>>   name: "conv9_3x3_128" type: CONVOLUTION blobs_lr: 1. blobs_lr: 2.
>>   bottom: "conv8" top: "conv9"
>>   convolution_param { num_output: 128 kernel_size: 3 pad: 1
>>     weight_filler { type: "xavier" } bias_filler { type: "constant" } }
>> }
>> layers { name: "relu9" type: RELU bottom: "conv9" top: "conv9" }
>>
>> layers {
>>   name: "conv10_3x3_128" type: CONVOLUTION blobs_lr: 1. blobs_lr: 2.
>>   bottom: "conv9" top: "conv10"
>>   convolution_param { num_output: 128 kernel_size: 3 pad: 1
>>     weight_filler { type: "xavier" } bias_filler { type: "constant" } }
>> }
>> layers { name: "relu10" type: RELU bottom: "conv10" top: "conv10" }
>>
>> layers {
>>   name: "conv11_3x3_128" type: CONVOLUTION blobs_lr: 1. blobs_lr: 2.
>>   bottom: "conv10" top: "conv11"
>>   convolution_param { num_output: 128 kernel_size: 3 pad: 1
>>     weight_filler { type: "xavier" } bias_filler { type: "constant" } }
>> }
>> layers { name: "relu11" type: RELU bottom: "conv11" top: "conv11" }
>>
>> layers {
>>   name: "conv12_1x1_1" type: CONVOLUTION blobs_lr: 1. blobs_lr: 2.
>>   bottom: "conv11" top: "conv12"
>>   convolution_param { num_output: 1 kernel_size: 1 pad: 0
>>     weight_filler { type: "xavier" } bias_filler { type: "constant" } }
>> }
>> layers { name: "relu12" type: RELU bottom: "conv12" top: "conv12" }
>>
>> layers {
>>   name: "fc13" type: INNER_PRODUCT bottom: "conv12" top: "fc13"
>>   inner_product_param { num_output: 256
>>     weight_filler { type: "xavier" } bias_filler { type: "constant" } }
>> }
>> layers { name: "relu13" type: RELU bottom: "fc13" top: "fc13" }
>>
>> layers {
>>   name: "fc14" type: INNER_PRODUCT bottom: "fc13" top: "fc14"
>>   inner_product_param { num_output: 1
>>     weight_filler { type: "xavier" } bias_filler { type: "constant" } }
>> }
>> layers { name: "tanh14" type: TANH bottom: "fc14" top: "fc14" }
>>
>> layers { name: "loss" type: EUCLIDEAN_LOSS bottom: "fc14" bottom: "label" top: "loss" }
>> --------------------------------------------------------
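>> (As a size sanity check, a rough parameter count for the net above.
>> The prototxt does not state the number of input planes; 50 below is
>> only a guess from the "i50" in the data-source names, and a 19x19
>> board is assumed:)

```python
def conv_params(k, c_in, c_out):
    """Weights plus biases of a k x k convolution, c_in -> c_out channels."""
    return k * k * c_in * c_out + c_out

def fc_params(n_in, n_out):
    """Weights plus biases of a fully connected layer."""
    return n_in * n_out + n_out

IN_PLANES = 50   # ASSUMPTION: guessed from the "i50" in the data names
BOARD = 19 * 19  # conv12's single output plane has 361 values on a 19x19 board

total = conv_params(5, IN_PLANES, 128)   # conv1_5x5_128
total += 10 * conv_params(3, 128, 128)   # conv2..conv11 (3x3_128)
total += conv_params(1, 128, 1)          # conv12_1x1_1
total += fc_params(BOARD, 256)           # fc13 on the flattened plane
total += fc_params(256, 1)               # fc14 -> tanh -> loss
print(total)  # about 1.7M parameters under these assumptions
```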
>> 
>> Thanks,
>> Hiroshi Yamashita
>> 
>> ----- Original Message -----
>> From: "Detlef Schmicker" <d...@physik.de>
>> To: <computer-go@computer-go.org>
>> Sent: Saturday, March 19, 2016 7:41 PM
>> Subject: Re: [Computer-go] Value Network
>> 
>> 
>> 
>>> What are you using for loss?
>>>
>>> this:
>>> 
>>> layers { name: "loss4" type: EUCLIDEAN_LOSS loss_weight: 2.0 bottom: "vvv" bottom: "pool2" top: "accloss4" }
>>> 
>>> 
>>> ?
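>>> (Side note: in Caffe the overall objective is each loss blob
>>> scaled by its loss_weight, so loss_weight: 2.0 simply doubles this
>>> loss and its gradients; a toy sketch:)

```python
def total_objective(losses):
    """Caffe's objective: sum of each loss value times its loss_weight."""
    return sum(weight * value for value, weight in losses)

# a single Euclidean loss of 0.25 with loss_weight: 2.0
print(total_objective([(0.25, 2.0)]))  # -> 0.5
```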
>>> 
>>> On 04.03.2016 at 16:23, Hiroshi Yamashita wrote:
>>> 
>>>> Hi,
>>>> 
>>>> I tried to make Value network.
>>>> 
>>>> "Policy network + Value network"  vs  "Policy network"
>>>>   Winrate   Wins/Games
>>>>   70.7%     322 / 455     1000 playouts/move
>>>>   76.6%     141 / 184    10000 playouts/move
>>>> 
>>>> It seems the Value network becomes more effective with more
>>>> playouts, though the number of games is not enough yet. The
>>>> search is similar to AlphaGo's: the mixing parameter lambda is
>>>> 0.5, and the search is synchronous, using one GTX 980. At 10000
>>>> playouts/move, the Policy network is called 175 times and the
>>>> Value network 786 times. The node expansion threshold is 33.
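>>>> (The mixing above is the AlphaGo paper's leaf evaluation,
>>>> V(s) = (1 - lambda) * v(s) + lambda * z, combining the value
>>>> network's output v(s) with the rollout result z. A sketch with
>>>> lambda = 0.5; names are my illustration, not Aya's internals:)

```python
def mixed_leaf_value(v_net, z_rollout, lam=0.5):
    """AlphaGo-style leaf evaluation: (1 - lam) * value net + lam * rollout."""
    return (1.0 - lam) * v_net + lam * z_rollout

# value net says +0.30 for this leaf, and the rollout ended in a win (+1)
print(mixed_leaf_value(0.30, 1.0))  # -> 0.65
```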
>>>> 
>>>> 
>>>> The Value network has 13 layers and 128 filters (5x5_128,
>>>> 3x3_128 x10, 1x1_1, fully connected, tanh). The Policy network
>>>> has 12 layers and 256 filters (5x5_256, 3x3_256 x10, 3x3_1);
>>>> its accuracy is 50.1%.
>>>> 
>>>> For the Value network, I collected 15804400 positions from
>>>> 987775 games. The games are from GoGoD; tygem 9d, 22477 games
>>>> (http://baduk.sourceforge.net/TygemAmateur.7z); and KGS 4d and
>>>> over, 1450946 games (http://www.u-go.net/gamerecords-4d/),
>>>> excluding handicap games. I select 16 positions randomly from
>>>> each game: the game is divided into 16 stages, and one position
>>>> is selected from each. The 1st and 9th positions are rotated by
>>>> the same symmetry. Then Aya searches each position with 500
>>>> playouts, using the Policy network, and stores the winrate (-1
>>>> to +1). Komi is 7.5. This 500-playout Aya is around 2730
>>>> BayesElo on CGOS.
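>>>> (The sampling scheme -- divide a game into 16 stages and pick one
>>>> random position from each -- can be sketched as below. This is my
>>>> illustration, not Aya's code, and the 500-playout labeling step is
>>>> not shown:)

```python
import random

def sample_positions(num_moves, num_stages=16):
    """Pick one random position index from each of num_stages equal game stages."""
    picks = []
    for s in range(num_stages):
        lo = s * num_moves // num_stages
        hi = (s + 1) * num_moves // num_stages
        picks.append(random.randrange(lo, max(hi, lo + 1)))  # guard short games
    return picks

random.seed(0)
print(sample_positions(250))  # 16 indices, one per stage of a 250-move game
```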
>>>> 
>>>> I did some of this on Amazon EC2 g2.2xlarge, 11 instances. It
>>>> took 2 days and cost $54; spot instances are reasonably priced.
>>>> However, g2.2xlarge (GRID K520) is 3x slower than a GTX 980: my
>>>> Policy network (12L, 256F) takes 5.37ms on the GTX 980 and
>>>> 15.0ms on g2.2xlarge. Test and Training loss are 0.00923 and
>>>> 0.00778, so I think there is no big overfitting.
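>>>> (A quick back-of-the-envelope on those numbers, assuming all 11
>>>> spot instances ran for the full 2 days:)

```python
# cost per instance-hour: $54 spread over 11 instances * 48 hours
instances = 11
hours = 2 * 24
price_per_instance_hour = 54.0 / (instances * hours)
print(round(price_per_instance_hour, 3))  # roughly $0.10 per instance-hour

# slowdown of the policy-net forward pass on g2.2xlarge vs GTX 980
slowdown = 15.0 / 5.37
print(round(slowdown, 2))  # about 2.8x, i.e. the quoted "3x slower"
```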
>>>> 
>>>> The Value network is effective, but Aya still has a fatal
>>>> weakness in semeai.
>>>> 
>>>> Regards,
>>>> Hiroshi Yamashita
>>>> 
>>> 
>> 
> 
> 
> 
> 
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
