Yes, it's 0.15 seconds for 128 positions.

A minibatch is a small set of samples used to compute an approximation to
the gradient before each step of gradient descent. I think it's not simply
called a "batch" because "batch training" refers to computing the full
gradient over all the samples before taking a step. "Minibatch" is standard
terminology in the NN community.

Álvaro.

On Fri, Jan 9, 2015 at 6:04 PM, Darren Cook <dar...@dcook.org> wrote:

> Aja wrote:
> >> I hope you enjoy our work. Comments and questions are welcome.
>
> I've just been catching up on the last few weeks, and their papers. Very
> interesting :-)
>
> I think Hiroshi's questions got missed?
>
> Hiroshi Yamashita asked on 2014-12-20:
> > I have three questions.
> >
> > I don't understand minibatch. Does the CNN need 0.15sec for a position, or
> > 0.15sec for 128 positions?
>
> I also wasn't sure what "minibatch" meant. Why not just say "batch"?
>
> > Is "KGS rank" set 9 dan when it plays against Fuego?
>
> For me, the improvement from just using a subset of the training data
> was one of the most surprising results.
>
> Darren
>
>
> --
> Darren Cook, Software Researcher/Developer
> My new book: Data Push Apps with HTML5 SSE
> Published by O'Reilly: (ask me for a discount code!)
>   http://shop.oreilly.com/product/0636920030928.do
> Also on Amazon and at all good booksellers!
>
_______________________________________________
Computer-go mailing list
Computer-go@computer-go.org
http://computer-go.org/mailman/listinfo/computer-go
