Since Zen's engine is improved solely by Yamato, I don't know the details, but I believe Yamato has used one Mac Pro so far (running Linux and Windows). #He has implemented the DCNN by himself, not using tools.
Hideki

David Fotland: <0a0301d15de7$1180d760$34828620$@smart-games.com>:
>Detlef, Hiroshi, Hideki, and others,
>
>I have caffelib integrated with Many Faces so I can evaluate a DNN. Thank you
>very much, Detlef, for the sample code to set up the input layer. Building caffe
>on Windows is painful. If anyone else is doing it and gets stuck, I might be able
>to help.
>
>What hardware are you using to train networks? I don't have a CUDA-capable GPU
>yet, so I'm going to buy a new box. I'd like some advice. Caffe is not well
>supported on Windows, so I plan to use a Linux box for training, but continue to
>use Windows for testing and development. For competitions I could use either
>Windows or Linux.
>
>Thanks in advance,
>
>David
>
>> -----Original Message-----
>> From: Computer-go [mailto:computer-go-boun...@computer-go.org] On Behalf
>> Of Hiroshi Yamashita
>> Sent: Monday, February 01, 2016 11:26 PM
>> To: computer-go@computer-go.org
>> Subject: Re: [Computer-go] DCNN can solve semeai?
>>
>> Hi Detlef,
>>
>> My study heavily depends on your information. Especially the Oakfoam code,
>> lenet.prototxt and generate_sample_data_leveldb.py were helpful. Thanks!
>>
>> > Quite interesting that you also fall well short of the 57% prediction
>> > rate from the Facebook paper! I have the same experience with the
>>
>> I'm trying 12 layers with 256 filters, but it is around 49.8%.
>> I think the 57% is maybe from KGS games.
>>
>> > Did you strip the games before 1800 AD, as mentioned in the FB paper? I
>> > did not do it and was thinking my training was not ok, but as you have
>> > the same result, probably this is the only difference?!
>>
>> I also did not strip the games from before 1800 AD. And I don't use handicap
>> games. Training positions are 15,693,570 from 76,000 games.
>> Test positions are 445,693 from 2,156 games.
>> All games are shuffled in advance. Each position is randomly rotated.
>> I buffer 24,000 positions, then shuffle them and store them to LevelDB.
>> At first I did not shuffle the games. Then accuracy dropped at every 61,000th
>> iteration (one epoch with 256 mini-batch: 15,693,570 / 256 is about 61,000).
>> http://www.yss-aya.com/20160108.png
>> It means the DCNN easily learns the difference between the 1800 AD games and
>> the 2015 AD games. I was surprised by the DCNN's ability. And maybe the
>> 1800 AD games are also not good for training?
>>
>> Regards,
>> Hiroshi Yamashita
>>
>> ----- Original Message -----
>> From: "Detlef Schmicker" <d...@physik.de>
>> To: <computer-go@computer-go.org>
>> Sent: Tuesday, February 02, 2016 3:15 PM
>> Subject: Re: [Computer-go] DCNN can solve semeai?
>>
>> > Thanks a lot for sharing this.
>> >
>> > Quite interesting that you also fall well short of the 57% prediction
>> > rate from the Facebook paper! I have the same experience with the
>> > GoGoD database. My numbers are nearly the same as yours, 49% :) My net
>> > is quite similar, but I use kernel sizes 7,5,5,3,3,... with 12 layers
>> > in total.
>> >
>> > Did you strip the games before 1800 AD, as mentioned in the FB paper? I
>> > did not do it and was thinking my training was not ok, but as you have
>> > the same result, probably this is the only difference?!
>> >
>> > Best regards,
>> >
>> > Detlef
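David's note above is about evaluating a DNN through the Caffe library inside Many Faces. As a rough illustration of that evaluation step, here is a minimal pycaffe sketch of loading a trained move-prediction net and scoring one position; the file names, the 12 input planes, and the blob name "data" are assumptions, not details of his actual C++ integration.

```python
# Minimal sketch (not the actual Many Faces integration): load a trained policy
# net with pycaffe and score one position. File names, the 12 input planes and
# the blob name "data" are assumptions.
import numpy as np
import caffe

caffe.set_mode_cpu()                                    # or caffe.set_mode_gpu()
net = caffe.Net('policy.prototxt',                      # deploy definition (hypothetical)
                'policy.caffemodel',                    # trained weights (hypothetical)
                caffe.TEST)

planes = np.zeros((1, 12, 19, 19), dtype=np.float32)   # batch of 1, 12 feature planes
# ... fill planes from the current board position ...

net.blobs['data'].reshape(*planes.shape)                # match the input layer to our batch
net.blobs['data'].data[...] = planes
out = net.forward()                                     # dict: output blob name -> ndarray
probs = out[net.outputs[0]].reshape(-1)                 # 361 move probabilities
best = int(np.argmax(probs))                            # flat index 0..360, row-major
```

The same forward pass in the C++ library amounts to filling `net.input_blobs()[0]` and calling `net.Forward()`, which is presumably closer to what the Many Faces code does.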
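Hiroshi's data preparation (shuffling games in advance, one random symmetry per position, buffering 24,000 positions, then shuffling and writing to LevelDB) could look roughly like the sketch below. This is not his generate_sample_data_leveldb.py; the plane count, the key format, and the shape of the position iterator are assumptions.

```python
# Sketch of the preparation steps described above: one random board symmetry per
# position, a 24,000-position shuffle buffer, and batched writes to LevelDB as
# Caffe Datum records. Plane count and key format are assumptions.
import random
import numpy as np
import leveldb
from caffe.proto import caffe_pb2

def random_symmetry(planes, move):
    """Apply one of the 8 board symmetries to (C,19,19) planes and a 0..360 move index."""
    target = np.zeros((19, 19), dtype=np.float32)
    target.flat[move] = 1.0
    k = random.randrange(4)                            # quarter turns
    planes, target = np.rot90(planes, k, axes=(1, 2)), np.rot90(target, k)
    if random.random() < 0.5:                          # optional horizontal flip
        planes, target = planes[:, :, ::-1], target[:, ::-1]
    return np.ascontiguousarray(planes), int(np.argmax(target))

def write_leveldb(positions, db_path, buffer_size=24000):
    """positions yields (planes, move) pairs taken from games already shuffled in advance."""
    db = leveldb.LevelDB(db_path)
    buf, written = [], 0
    for planes, move in positions:
        buf.append(random_symmetry(planes, move))
        if len(buf) >= buffer_size:                    # the shuffle buffer described above
            random.shuffle(buf)
            batch = leveldb.WriteBatch()
            for p, m in buf:
                datum = caffe_pb2.Datum()
                datum.channels, datum.height, datum.width = map(int, p.shape)
                datum.float_data.extend(p.ravel().tolist())
                datum.label = m
                batch.Put(('%010d' % written).encode(), datum.SerializeToString())
                written += 1
            db.Write(batch)
            buf = []
    # (a final partial buffer would be flushed the same way)
```

Writing in shuffled 24,000-position chunks breaks up the within-game correlation; without the pre-shuffle of games, each epoch still walks the games in a fixed order, which matches the periodic accuracy drop Hiroshi observed.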
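For the network shapes being compared (Hiroshi's 12 layers with 256 filters, Detlef's 7,5,5,3,3,... kernels with 12 layers in total), a deploy definition of that general form could be generated with pycaffe's NetSpec as sketched below. The exact kernel list, the 12 input planes, and the final 1x1 layer are assumptions, not either of their actual nets.

```python
# Sketch of a 12-layer, 256-filter move-prediction net of the kind discussed above,
# written with pycaffe's NetSpec. Kernel sizes, plane count and output head are assumptions.
import caffe
from caffe import layers as L

n = caffe.NetSpec()
n.data = L.Input(shape=dict(dim=[256, 12, 19, 19]))     # mini-batch 256, 12 planes (assumed)

kernels = [7, 5, 5, 3, 3, 3, 3, 3, 3, 3, 3]             # 11 conv layers with ReLU
bottom = n.data
for i, k in enumerate(kernels, 1):
    conv = L.Convolution(bottom, kernel_size=k, num_output=256, pad=k // 2,
                         weight_filler=dict(type='xavier'))
    setattr(n, 'conv%d' % i, conv)
    setattr(n, 'relu%d' % i, L.ReLU(conv, in_place=True))
    bottom = conv

# final 1x1 convolution down to one plane, flattened to 361 move scores
n.conv_last = L.Convolution(bottom, kernel_size=1, num_output=1)
n.flat = L.Flatten(n.conv_last)
n.prob = L.Softmax(n.flat)

with open('policy_deploy.prototxt', 'w') as f:
    f.write(str(n.to_proto()))
```

With pad = kernel_size // 2 every layer keeps the 19x19 board resolution, so the final 1x1 convolution produces one score per intersection.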
--
Hideki Kato <mailto:hideki_ka...@ybb.ne.jp>

_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go