On 04-10-16 23:47, Yuandong Tian wrote:
> Hi all, 
> 
> DarkForest training code is open source now. Hopefully it will help the
> community.
> 
> https://github.com/facebookresearch/darkforestGo
> 
> With 4 GPUs, the training procedure gives 56.1% top-1 accuracy in KGS
> dataset in 3.5 days, and 57.1% top-1 in 6.5 days (see the simple log
> below). The parameters used are the following: --epoch_size 256000 --GPU
> 4 --data_augmentation --alpha 0.1 --nthread 4
It's probably due to my unfamiliarity with Torch, but I couldn't find
where the actual network structure is defined.

I think the script runs with alpha=0.05, not alpha=0.1.

I understood from your previous comments that you didn't find momentum
to be beneficial. That surprises me greatly. Is that still the case?
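For context, the usual intuition is that momentum accumulates an exponentially
decaying average of past gradients, which damps oscillations and speeds up
progress along consistent descent directions. A toy sketch of the two update
rules on a 1-D quadratic loss (all hyperparameters here are illustrative and
not taken from the DarkForest code):

```python
# Plain SGD vs. SGD with momentum on f(w) = 0.5 * w^2 (so grad = w).
# Illustrative only -- lr, mu, and step counts are not DarkForest's settings.

def sgd_step(w, grad, lr):
    # Vanilla SGD: move straight down the gradient.
    return w - lr * grad

def momentum_step(w, v, grad, lr, mu=0.9):
    # Momentum: velocity v accumulates a decaying sum of past gradients.
    v = mu * v - lr * grad
    return w + v, v

def train(momentum=False, steps=50, lr=0.1):
    w, v = 5.0, 0.0
    for _ in range(steps):
        grad = w  # gradient of 0.5 * w^2
        if momentum:
            w, v = momentum_step(w, v, grad, lr)
        else:
            w = sgd_step(w, grad, lr)
    return w

print(abs(train(momentum=False)))
print(abs(train(momentum=True)))
```

Both variants converge here; whether momentum actually helps depends on the
loss surface and the other hyperparameters, which is presumably what makes
the empirical finding above interesting.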

-- 
GCP
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
