I was thinking about bootstrapping possibilities, and wondered whether
something like the following could work: generate positions with a
branching factor wide enough that the actual pro move is typically
among the candidates, then run positional-evaluation playouts from a
certain depth onward using a shallower mimic net. With luck the search
finds moves even stronger than the pro move, and those are then fed
back as training targets for the primary function/net. One could
perhaps even vary the degree of shallowness of the mimic net's
configuration, as well as the depth and branching used for move-tree
generation. A rough sketch of the loop is below.
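
To make the idea concrete, here is a very rough Python sketch of the
loop I have in mind. Every name in it (primary_net, mimic_net,
expand_candidates, playout_value, pos.play) is a placeholder I made up
for illustration, not any existing API:

    # Rough sketch only -- all names here are invented placeholders.
    # expand_candidates: "take the top-k moves by the primary net"
    # playout_value: "cheap playouts scored by the shallow mimic net"

    def bootstrap_pass(positions, primary_net, mimic_net,
                       branching=8, depth=4):
        targets = []
        for pos in positions:
            # Widen the candidate set enough that the actual pro
            # move is usually included.
            candidates = expand_candidates(pos, primary_net, branching)
            # Score each candidate with cheap playouts evaluated by
            # the shallow mimic net from `depth` plies onward.
            best_move = max(
                candidates,
                key=lambda m: playout_value(pos.play(m),
                                            mimic_net, depth))
            # The searched move, hopefully at least as strong as the
            # pro move, becomes a new training target.
            targets.append((pos, best_move))
        primary_net.train(targets)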

I have no idea whether there are depth/branching configurations that
would make sense or seem promising, given the existing hardware
options.
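
For a rough sense of scale (the numbers here are purely illustrative
assumptions, not benchmarks of any actual net):

    # Back-of-envelope cost per root position, purely illustrative:
    branching, depth = 8, 4        # assumed tree parameters
    eval_ms = 1.0                  # assumed mimic-net eval time (ms)
    leaves = branching ** depth    # 8**4 = 4096 leaf positions
    print(leaves * eval_ms / 1000) # ~4.1 seconds per root position

so even modest branching/depth settings get expensive quickly, which
is presumably where the hardware question bites.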

On Sun, Mar 15, 2015 at 2:56 AM, Hugh Perkins <hughperk...@gmail.com> wrote:
> To be honest, what I really want is for it to self-learn, like David
> Silver's TreeStrap did for chess, but on the one hand I guess I should
> start by reproducing existing results, and on the other hand, if we
> need millions of moves to train the net, that's going to make for very
> slow self-play...  Also, David Silver was associated with Aja Huang's
> paper, so I'm guessing it's very non-trivial to do; otherwise David
> Silver would have done it already :-)